-----------------------------------------------------------------------------------
Post ID:14437
Sender:Jørn Wildt <jw@...>
Post Date/Time:2010-01-04 03:41:08
Subject:Re: [rest-discuss] Re: RESTful claims-based authorization
Message:
Hi Bruno,

Are you using the SSL certificate to convey claims like email etc.? As I understand SSL it will be impossible to cache anything like you normally can using REST - is that right? Seems like you lose something that way - especially if it's not encryption you are aiming for, but only trust in claims.

> This being said, discovering the identity in FOAF+SSL is really where
> this system makes use of REST: your ID is a URI (a WebID) that can be
> dereferenced and about which things can be said using RDF/semantic web.

Yes, using a URL as an ID seems like a good idea. As you say, you can state all sorts of things at the end of the URL. As I remember it, this is quite a bit like what OpenID is doing, and, using some crypto stuff I do not understand, the server can be assured that the URL is in fact controlled by the client.

> I sometimes wish there were 'WWW-Authenticate: transport' (or something
> similar, to make handling tokens out of HTTP like SSL client-certificate
> cleaner, and thus avoid some problems related to the TLS renegotiation
> issue) or 'WWW-Authenticate: token' (to have clear
> authentication-dedicated tokens, rather than cookies that are also used
> for sessions), but they just don't exist in browsers.

Okay, so using SSL, OpenID, SAML, whatever: it seems like there is no standard way of transporting non-username/password claims after an initial handshake, except for using cookies. So the de facto standard way is quite simple: use cookies for storing proprietary security claims or session identifiers. Right?

Or, if you ignore browsers, you can use "Authorization: MyVendorAuth XYZ" where MyVendorAuth identifies a proprietary claims-encoding method. Right?
/Jørn

----- Original Message -----
From: "Bruno Harbulot" <Bruno.Harbulot@...>
To: <rest-discuss@yahoogroups.com>
Sent: Wednesday, December 30, 2009 6:12 PM
Subject: [rest-discuss] Re: RESTful claims-based authorization

> Hello Jørn,
>
> You could in principle define your own headers (or try to standardise
> some headers) to propagate SAML assertions (or similar tokens) in a
> RESTful way. Unfortunately, that's unlikely to work in browsers.
>
> Even SAML's HTTP Redirect (GET) Binding is often only a one-off thing
> that can only be used to log in (and thus get a cookie), otherwise you'd
> have to repeat this query for all URIs you want to use (and thus change
> the URI, since the query is part of the URI, strictly speaking).
>
> We've been doing some work on FOAF+SSL whereby you avoid the non-RESTful
> authentication issue by using a TLS/SSL client-certificate for the
> authentication (which is under the HTTP level), but for servers that
> don't support SSL (or even the settings required for FOAF+SSL), we've
> also had to use some SSO-like login mechanism via cookies.
> This being said, discovering the identity in FOAF+SSL is really where
> this system makes use of REST: your ID is a URI (a WebID) that can be
> dereferenced and about which things can be said using RDF/semantic web.
>
> The issue of using cookies for authentication/authorisation comes from
> the lack of browser support (and standardisation) for other headers.
> I sometimes wish there were 'WWW-Authenticate: transport' (or something
> similar, to make handling tokens out of HTTP like SSL client-certificate
> cleaner, and thus avoid some problems related to the TLS renegotiation
> issue) or 'WWW-Authenticate: token' (to have clear
> authentication-dedicated tokens, rather than cookies that are also used
> for sessions), but they just don't exist in browsers.
> Would it be worth suggesting this approach to the HTTP WG?
> Perhaps, but
> there's little point doing so if the major browser vendors are not on
> board. I presume most people consider that cookies are an acceptable
> practical solution, even if it breaks the REST principles.
>
> Best wishes,
>
> Bruno.
>
> Jørn Wildt wrote:
>>
>> Is there any standard RESTful way of doing claims-based authorization à la
>> SAML and CardSpace? The authorization schemes I have seen so far usually
>> encode a user reference and nothing more - there's no secure way to assert
>> claims like email=xxx@... <mailto:email%3Dxxx%40yyy.zz> or
>> employeenumber=12345 or age-below-twenty.
>>
>> I guess you can use SAML "HTTP Redirect (GET) Binding", but that generates
>> such a huge URL that it seems impractical to use (it's a base-64 encoding of
>> a zip-encoding of a SAML XML document).
>>
>> As I understand it a RESTful authorization scheme must be stateless, so you
>> cannot rely on any kind of session use. This means you have to transfer all
>> the claims on each and every request, which again means a potentially big
>> overhead.
>>
>> What is needed is a standard way of encoding multiple claims in a compact,
>> secure, trusted way such that they can be transferred on each request
>> without too much overhead (including whatever crypto stuff is needed).
>>
>> Maybe you could create a temporary resource somewhere with the claims, then
>> at least you only had to transfer the claims URL, not all the claims, and
>> the server could then cache these claims.
>>
>> Any ideas or references?
>>
>> It even occurs to me that claims could be more RESTful than
>> username/password since they don't require any out-of-band setup of user
>> accounts. All that is needed is a standard for claims and then everything
>> should work if the claims are issued by an authority that the web service
>> trusts.
>> No need for any human interaction - the server just sends a
>> challenge "show me your claims (and I accept them from authority X, Y and
>> Z)" whereafter the client sends the claims. These claims can even be
>> obtained without human interaction if the client and the claims server
>> trust each other.
>>
>> Comments?
>>
>> Thanks, Jørn

> ------------------------------------
>
> Yahoo! Groups Links
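One way the "Authorization: MyVendorAuth XYZ" idea above could look in practice is a compact, signed claims blob that the client replays on every request, so the server needs no session state. The sketch below is purely illustrative (Python): the scheme name, the claim keys, and the shared-secret arrangement between the claims authority and the service are all assumptions, not anything standardised.

```python
import base64
import hashlib
import hmac
import json

# Hypothetical shared secret between the claims authority and the
# service; in practice the server would obtain this key out of band.
SECRET = b"shared-secret-between-authority-and-service"

def encode_claims(claims: dict) -> str:
    """Pack claims into a base64 payload plus an HMAC-SHA256 signature."""
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}.{sig}"

def decode_claims(token: str) -> dict:
    """Verify the signature and return the claims; raise on tampering."""
    payload, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("signature mismatch")
    return json.loads(base64.urlsafe_b64decode(payload))

# The token would then be sent on every request, e.g.:
#   Authorization: MyVendorAuth <token>
token = encode_claims({"email": "xxx@example.org", "employeenumber": 12345})
print(decode_claims(token)["employeenumber"])
```

Because the token travels on each request, this stays stateless in the REST sense; the cost is exactly the per-request overhead Jørn worries about, which is why keeping the claims set small (or passing only a claims URL) matters.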
As this thread seems to have come to a stop, I'll reference the conclusion I reached over at the atom-protocol list for those that do not follow both lists. It pretty much answers the question for me. See http://www.imc.org/atom-protocol/mail-archive/msg11490.html

Jan

On Dec 16, 2009, at 4:21 PM, Jan Algermissen wrote:

> I can't help it: I see no possible way to implement a non-human-driven
> client for a service without (in one way or another) classifying the
> resources the service provides.
>
> For example, consider a helpdesk ticket system: When writing a client
> that searches for tickets and then updates the foo:status of the
> individual tickets contained in the result set, I need to make the
> assumption that the result set contains tickets (and not just
> resources). In order to be able to make such assumptions, the
> classification information must be made available by the service. In
> addition, if client developers are to be able to develop clients
> before the services exist, this information is needed as some form of
> service type description. The specification of application/atomsvc+xml
> is a good example of such a service type description.
>
> But however this is approached, it essentially comes down to telling
> the client what kinds of resources (IOW: kinds of application states)
> to expect on the server. I just cannot code to update the resource
> foo:status when I have no clue that this user goal is applicable to
> the resource in the first place.
>
> Does anyone have an idea how to align this (IMHO fact) with the
> constraint that no information about resource types must be made
> available to clients in RESTful systems?
>
> Jan
>
> P.S. In human-driven interactions the situation is different: We still
> have knowledge of the resource type in general (we know a trouble
> ticket when we see one) but we are not dependent on knowing that the
> result of some interaction will be a trouble ticket.
> We can always
> follow some human-targeted links and make a few hops to reach the
> trouble ticket resource we expect should be 'somewhere'. M2M clients
> do not have that luxury (unless we apply some form of AI I guess).

--------------------------------------
Jan Algermissen
Mail: algermissen@...
Blog: http://algermissen.blogspot.com/
Home: http://www.jalgermissen.com
--------------------------------------
I've posted "DOCTR: a better approach to HTTP transactions" to my blog (http://amundsen.com/blog/archives/1024) and welcome any feedback. Some excerpts:

<snip>
A better approach is to expose an HTTP-compliant transaction service interface (TSI) that takes advantage of the protocol's inherent architectural style. Transactions over HTTP should be optional, discoverable, negotiable, based on optimistic commits, and (in the case of failures) use compensating requests as a way to reverse previous work.
</snip>

and

<snip>
What is described here is a transaction service interface (TSI) that clients and servers can choose to implement in ways all parties can agree upon. This is the way media types are currently handled, and TSIs could be defined, documented, and registered in much the same way. This includes the possibility of registering a media type that matches the TSI implementation. True to the architectural style of the HTTP protocol, this approach proposes a "uniform interface" for handling units of work over HTTP.
</snip>

Thanks in advance.

mca
http://amundsen.com/blog/
Watching this was well worth my time:

http://www.infoq.com/presentations/Systems-that-Never-Stop-Joe-Armstrong

I'm integrating Joe's ideas (and those of his source references) into my applied REST architecture work. I don't think in terms of coding, particularly in Erlang. But, some of Joe's laws have direct corollaries in REST -- the layered system and self-descriptive messaging constraints come to mind. Others have direct corollaries in my proposed system architecture, like using ZFS for storage, and Solaris Zones to isolate system layers executing on the same computer.

REST allows for a system which obeys Joe's laws for system reliability. While it specifies the layered system constraint, REST says nothing about isolating those layers for concurrency. To meet the "software development on the scale of decades" goal of REST, I believe applied REST architecture should extend beyond REST per se, and consider other requirements of a long-running system like isolation and concurrency.

Joe's Stable Storage law is highly pragmatic to the goals of a REST system, even though REST itself says nothing of storage. Isn't the notion of persistent storage something that belongs in an architectural model of an overall system? There are certainly plenty of alternatives for implementation. Just because it isn't part of REST seems no reason to exclude all notion of storage from the Model. Having a "law" to follow for the long-term development of a storage system, or achieve isolation and concurrency of layers, serves exactly the same purpose as having a "law" to follow (REST) for messaging between connectors.

Isolation and concurrency are addressed by layered system and Zones. Self-descriptive messaging directly addresses what Joe's saying about failure detection and fault identification -- restricting communication to a uniform REST interface eases error detection and debugging.
As to error correction, I agree with fail-early-and-exit, just like a browser does when the application/xhtml+xml media type is used, but has ill-formed XSLT output. Other media types cause browsers to handle XSLT output as text/html, meaning an error resulting from a syntax error in the HTML output may escape notice for an unacceptable length of time (i.e., until something that should have been caught during development winds up being filed as a bug report).

Live Code Upgrade is important to REST also. Joe's clearly talking about something else entirely, however REST's layered system constraint makes it possible to swap out components in a system without affecting users, provided that messaging is stateless. What Joe's saying about stateless messaging's importance in OOP/COP directly applies to REST. REST also allows client-side code to be changed at any time, due to the hypertext constraint. Changes to a REST system are easy to roll out and roll back. Joe makes an important point that may be restated as: altering your server-side code ought to be possible without rebooting the computer (or even a Zone within a computer), as this affects other running processes.

A REST "system that never stops" is a goal which may be achieved independently of language choice, although I am in favor of functional programming. In fact, I see this as a key reason to use XSLT over Javascript whenever possible -- immutable data as the basis for browser-resident code is less likely to crash some random system.

My point is, the REST architectural style doesn't address key aspects of system architecture, like storage. So how can REST alone be the basis for any applied-software-architecture Implementation? The Model should therefore address issues like isolation, concurrency and storage based on the REST-agnostic laws of long-running systems. These laws clearly apply to the goals of any system that needs REST in the first place.

-Eric
On Tue, Jan 5, 2010 at 9:21 PM, mike amundsen <mamund@...> wrote:
> <snip>
> A better approach is to expose an HTTP-compliant transaction service
> interface (TSI) that takes advantage of the protocol's inherent
> architectural style. Transactions over HTTP should be optional,
> discoverable, negotiable, based on optimistic commits, and (in the
> case of failures) use compensating requests as a way to reverse
> previous work.
> </snip>

Compensation often does not work. Think of an offer to buy. If it's optimistically committed, both buyer and seller may take follow-up actions that could be difficult or impossible to undo (e.g. starting production) if the offer is cancelled. Alternatively, both parties may need to wait for some indeterminate timespan to make sure a compensating request does not arrive.

A better alternative in many situations, which I've described on this list before, is to use a provisional-final pattern, where the initial request is provisional, and then both parties agree that it should be made final or cancelled. All these interactions can be done RESTfully.

An example in regular business practice is a request for quotation (provisional) followed by an order (final). You may think that is not a transaction, but it follows a 2-phase commit pattern, although it does not obey all the ACID rules (which are impossible to follow RESTfully).
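The provisional-final pattern can be made concrete as a tiny state machine on the server side. This is only a sketch with invented names: a POST creating a quotation would yield a resource in the provisional state, and a later request (following a link in its representation) would make it final or cancel it.

```python
from enum import Enum

class State(Enum):
    PROVISIONAL = "provisional"   # the request for quotation
    FINAL = "final"               # the order
    CANCELLED = "cancelled"

class ProvisionalResource:
    """A resource that starts provisional and is then, by agreement of
    both parties, either made final or cancelled. Only the provisional
    state permits a transition, which is the point of the pattern."""

    def __init__(self):
        self.state = State.PROVISIONAL

    def make_final(self):
        if self.state is not State.PROVISIONAL:
            raise ValueError(f"cannot finalise from state {self.state.value}")
        self.state = State.FINAL

    def cancel(self):
        if self.state is not State.PROVISIONAL:
            raise ValueError(f"cannot cancel from state {self.state.value}")
        self.state = State.CANCELLED

quote = ProvisionalResource()      # RFQ accepted: provisional
quote.make_final()                 # order placed: final
print(quote.state.value)
```

Once final, neither cancel nor a second commit is possible without a new application-level negotiation, which matches the business reality described here: after both parties agree, backing out is a penalty-clause matter, not a protocol matter.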
On Jan 6, 2010, at 10:24 AM, Eric J. Bowman wrote:

> My point is, the REST architectural style doesn't address key aspects of
> system architecture, like storage. So how can REST alone be the basis
> for any applied-software-architecture Implementation? The Model should
> therefore address issues like isolation, concurrency and storage based
> on the REST-agnostic laws of long-running systems. These laws clearly
> apply to the goals of any system that needs REST in the first place.

REST's primary goal is not fault tolerance but decentralized evolution, scalability and simplicity. REST's focus is coordination of components owned by independent parties. I think the comparison you try to make is a bit problematic.

However, there is certainly a lot of overlap between the areas of application. The stateless server constraint for example enables live server upgrade, and the hypermedia constraint even enables changing the server-side state machine while the client is stepping through it from hyperlink to hyperlink.

In fact I find it hard to see how you could achieve better fault tolerance regarding the component communication than the fault tolerance you have with TCP/IP, DNS and HTTP.

Jan

--------------------------------------
Jan Algermissen
Mail: algermissen@...
Blog: http://algermissen.blogspot.com/
Home: http://www.jalgermissen.com
--------------------------------------
> An example in regular business practice is a request for quotation (provisional) followed by an order (final).
> You may think that is not a transaction, but it follows a 2-phase commit pattern, although it does not obey all the ACID rules (which are impossible to follow RESTfully).

I agree. My only doubt so far is that in this case the client should be fat enough to know how to deal with situations where it breaks between the first and second step working with two resources.

I.e. the user wants to buy a set of products, which was broken by the client app into two subsets, each one to be bought from a different supplier. Both requests for quotation were sent successfully, one order is also fine, but the second order breaks due to out-of-stock issues (someone just bought it... not ACID). The client is now responsible for trying to cancel that first order.

Any workarounds/suggestions?

Guilherme Silveira
Caelum | Ensino e Inovação
http://www.caelum.com.br/

On Wed, Jan 6, 2010 at 10:45 AM, Bob Haugen <bob.haugen@...> wrote:
> On Tue, Jan 5, 2010 at 9:21 PM, mike amundsen <mamund@...> wrote:
> > <snip>
> > A better approach is to expose an HTTP-compliant transaction service
> > interface (TSI) that takes advantage of the protocol's inherent
> > architectural style. Transactions over HTTP should be optional,
> > discoverable, negotiable, based on optimistic commits, and (in the
> > case of failures) use compensating requests as a way to reverse
> > previous work.
> > </snip>
>
> Compensation often does not work. Think of an offer to buy. If it's
> optimistically committed, both buyer and seller may take follow-up
> actions that could be difficult or impossible to undo (e.g. starting
> production) if the offer is cancelled. Alternatively, both parties
> may need to wait for some indeterminate timespan to make sure a
> compensating request does not arrive.
> A better alternative in many situations, which I've described on this
> list before, is to use a provisional-final pattern, where the initial
> request is provisional, and then both parties agree that it should be
> made final or cancelled. All these interactions can be done
> RESTfully.
>
> An example in regular business practice is a request for quotation
> (provisional) followed by an order (final).
>
> You may think that is not a transaction, but it follows a 2-phase
> commit pattern, although it does not obey all the ACID rules (which are
> impossible to follow RESTfully).
On Jan 6, 2010, at 2:00 PM, Guilherme Silveira wrote:

> > An example in regular business practice is a request for quotation
> > (provisional) followed by an order (final).
> > You may think that is not a transaction, but it follows a 2-phase
> > commit pattern, although it does not obey all the ACID rules (which are
> > impossible to follow RESTfully).
>
> I agree. My only doubt so far is that in this case the client should
> be fat enough to know how to deal with situations where it breaks
> between the first and second step working with two resources.
>
> I.e. the user wants to buy a set of products, which was broken by
> the client app into two subsets, each one to be bought from a
> different supplier. Both requests for quotation were sent
> successfully, one order is also fine, but the second order breaks
> due to out-of-stock issues (someone just bought it... not ACID).
> The client is now responsible for trying to cancel that first order.
>
> Any workarounds/suggestions?

This inevitably leads to 2PC kinds of problems, and the usually best way to deal with that in practice is to simply account for the fact that some errors occur. That way, you lose some customers, for example, but gain the possibility to create manageable software systems. Flight overbooking is the best example.

Jan

> Guilherme Silveira
> Caelum | Ensino e Inovação
> http://www.caelum.com.br/
>
> On Wed, Jan 6, 2010 at 10:45 AM, Bob Haugen <bob.haugen@...> wrote:
> > On Tue, Jan 5, 2010 at 9:21 PM, mike amundsen <mamund@...> wrote:
> > <snip>
> > A better approach is to expose an HTTP-compliant transaction service
> > interface (TSI) that takes advantage of the protocol's inherent
> > architectural style. Transactions over HTTP should be optional,
> > discoverable, negotiable, based on optimistic commits, and (in the
> > case of failures) use compensating requests as a way to reverse
> > previous work.
> > </snip>
>
> Compensation often does not work. Think of an offer to buy.
> If it's
> optimistically committed, both buyer and seller may take follow-up
> actions that could be difficult or impossible to undo (e.g. starting
> production) if the offer is cancelled. Alternatively, both parties
> may need to wait for some indeterminate timespan to make sure a
> compensating request does not arrive.
>
> A better alternative in many situations, which I've described on this
> list before, is to use a provisional-final pattern, where the initial
> request is provisional, and then both parties agree that it should be
> made final or cancelled. All these interactions can be done
> RESTfully.
>
> An example in regular business practice is a request for quotation
> (provisional) followed by an order (final).
>
> You may think that is not a transaction, but it follows a 2-phase
> commit pattern, although it does not obey all the ACID rules (which are
> impossible to follow RESTfully).

--------------------------------------
Jan Algermissen
Mail: algermissen@...
Blog: http://algermissen.blogspot.com/
Home: http://www.jalgermissen.com
--------------------------------------
On Wed, Jan 6, 2010 at 4:24 AM, Eric J. Bowman <eric@...> wrote:
> Watching this was well worth my time:
>
> http://www.infoq.com/presentations/Systems-that-Never-Stop-Joe-Armstrong
>
> I'm integrating Joe's ideas (and those of his source references) into my
> applied REST architecture work. I don't think in terms of coding,
> particularly in Erlang. But, some of Joe's laws have direct
> corollaries in REST -- the layered system and self-descriptive
> messaging constraints come to mind. Others have direct corollaries in
> my proposed system architecture, like using ZFS for storage, and Solaris
> Zones to isolate system layers executing on the same computer.
>
> REST allows for a system which obeys Joe's laws for system reliability.
> While it specifies the layered system constraint, REST says nothing
> about isolating those layers for concurrency. To meet the "software
> development on the scale of decades" goal of REST, I believe applied
> REST architecture should extend beyond REST per se, and consider other

An "applied REST architecture" extends REST by definition, right? On the other hand, if you're talking about starting with REST and applying additional constraints, that would be creating a new hybrid style based on, but not identical to, REST?

> requirements of a long-running system like isolation and concurrency.

Regarding isolation, a RESTful system gets the same property (e.g. reliability) evoked by way of being a hybrid of the client-stateless-server style, through separation of concerns + statelessness, right? I'm only familiar with concurrency within application architecture, and his talk is simply trumpeting message-passing over shared memory - which is important at that level of abstraction. It seems to me that the REST architecture is inherently concurrency-friendly; can you elaborate on what's missing?

> Joe's Stable Storage law is highly pragmatic to the goals of a REST
> system, even though REST itself says nothing of storage.
> Isn't the
> notion of persistent storage something that belongs in an architectural
> model of an overall system?

It seems important to origin servers within a system, but not relevant to the REST style itself. It's not the level of abstraction that REST was intended to address, is it? I'm not clear on how you're using the term "architectural *model*" - if it's a model of the REST architectural style, it seems that discussing storage is out of scope. Suppose you wanted to achieve this by adding a new constraint; wouldn't the constraint at this level of abstraction have to be in terms of the addressability of the resource over time?

> My point is, the REST architectural style doesn't address key aspects of
> system architecture, like storage. So how can REST alone be the basis
> for any applied-software-architecture Implementation? The Model should
> therefore address issues like isolation, concurrency and storage based
> on the REST-agnostic laws of long-running systems.

That seems to run counter to the framework on which REST is based, though. At this level of architectural abstraction, the focus shouldn't be on "how do we address isolation" but, rather, how can we maximally induce isolation. Similarly, we shouldn't worry about concurrency, but rather performance and user-perceived performance. The framework is good because it encourages reasonable decisions in terms of desired properties instead of the technical mechanism.

--tim
oops... +4s/isolation/reliability > That seems to run counter to the framework on which REST is based > though. At this level of architectural abstraction, the focus > shouldn't be on "how do we address isolation" but, rather, how can we > maximally induce *reliability*. Similarly, we shouldn't worry about > concurrency, but rather performance and user-perceived performance. > The framework is good because it encourages reasonable decisions in > terms of desired properties instead of the technical mechanism.
On Wed, Jan 6, 2010 at 7:00 AM, Guilherme Silveira <guilherme.silveira@caelum.com.br> wrote:
> > An example in regular business practice is a request for quotation
> > (provisional) followed by an order (final).
> > You may think that is not a transaction, but it follows a 2-phase
> > commit pattern, although it does not obey all the ACID rules (which are
> > impossible to follow RESTfully).
>
> I agree. My only doubt so far is that in this case the client should be fat enough to know how to deal with situations where it breaks between the first and second step working with two resources.
>
> I.e. the user wants to buy a set of products, which was broken by the client app into two subsets, each one to be bought from a different supplier. Both requests for quotation were sent successfully, one order is also fine, but the second order breaks due to out-of-stock issues (someone just bought it... not ACID). The client is now responsible for trying to cancel that first order.
>
> Any workarounds/suggestions?

If I understand correctly, you are posing a situation where the provisional phase (quotations) has become final (orders, which both buyer and seller have agreed to) and now the second seller has failed to deliver, so the buyer wants to cancel the first order. Correct?

Ain't no transaction protocol that will save you. As Jan Algermissen suggested, stuff happens... If the orders had penalty clauses, the second seller may need to make good. If the first order had a cancellation fee, the buyer may need to pay it. And the first seller may lose some resources of some type.
> Ain't no transaction protocol that will save you. As Jan Algermissen suggested, stuff happens... That's what I thought, but maybe I was wrong. Thanks Bob, Jan... > If the orders had penalty clauses, the second seller may need to make > good. If the first order had a cancellation fee, the buyer may need > to pay it. And the first seller may lose some resources of some type. > >
Bob:

Thanks for the feedback. I agree that my example does not cover all possibilities. In fact, I referenced (as "final commit") the interaction type you mention here as one that I was not focusing upon in this first post.

So far, I've treated provisional-final work as application-level activity. The approach I outlined here attempts to describe a pattern for handling transaction details when they are _not_ the primary application activity, but orthogonal to it (if I'm using that word correctly).

Having said that, I think it is possible to use a "uniform transaction service interface" approach to handle provisional-final interactions. In this case, a Content-Trans:provisional-final header could be advertised by servers and negotiated for by clients. The returned LINK[rel="trans"] element (in a header or in the representation) could point to a URI that allows not only a GET (to view the status of the transaction) and a DELETE (to cancel the open work), but also a POST to execute the "final." Alternately, the representation returned in the GET to the transaction URI could contain one or more link elements needed to progress the unit of work to its ultimate completion.

mca
http://amundsen.com/blog/

On Wed, Jan 6, 2010 at 07:45, Bob Haugen <bob.haugen@...> wrote:
> On Tue, Jan 5, 2010 at 9:21 PM, mike amundsen <mamund@...> wrote:
>> <snip>
>> A better approach is to expose an HTTP-compliant transaction service
>> interface (TSI) that takes advantage of the protocol's inherent
>> architectural style. Transactions over HTTP should be optional,
>> discoverable, negotiable, based on optimistic commits, and (in the
>> case of failures) use compensating requests as a way to reverse
>> previous work.
>> </snip>
>
> Compensation often does not work. Think of an offer to buy. If it's
> optimistically committed, both buyer and seller may take follow-up
> actions that could be difficult or impossible to undo (e.g. starting
> production) if the offer is cancelled.
> Alternatively, both parties
> may need to wait for some indeterminate timespan to make sure a
> compensating request does not arrive.
>
> A better alternative in many situations, which I've described on this
> list before, is to use a provisional-final pattern, where the initial
> request is provisional, and then both parties agree that it should be
> made final or cancelled. All these interactions can be done
> RESTfully.
>
> An example in regular business practice is a request for quotation
> (provisional) followed by an order (final).
>
> You may think that is not a transaction, but it follows a 2-phase
> commit pattern, although it does not obey all the ACID rules (which are
> impossible to follow RESTfully).
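The transaction-URI idea upthread (GET to view status, DELETE to cancel, POST to execute the "final") might look something like the toy handler below. Everything here is invented for illustration - the Content-Trans header and rel="trans" link relation from that post are not registered anywhere, and the URI layout is made up.

```python
# Toy in-memory handler for a hypothetical transaction URI:
#   GET    -> view the status of the transaction
#   POST   -> execute the "final" (commit the open work)
#   DELETE -> cancel the open work
transactions = {}  # transaction URI -> status string

def handle(method: str, txn_uri: str):
    status = transactions.get(txn_uri)
    if status is None:
        return 404, None
    if method == "GET":
        return 200, {"status": status}
    if method == "POST":
        if status != "open":
            return 409, {"status": status}   # already committed/cancelled
        transactions[txn_uri] = "committed"
        return 200, {"status": "committed"}
    if method == "DELETE":
        if status != "open":
            return 409, {"status": status}
        transactions[txn_uri] = "cancelled"
        return 200, {"status": "cancelled"}
    return 405, None

transactions["/trans/1"] = "open"   # URI advertised via LINK[rel="trans"]
print(handle("POST", "/trans/1"))   # commits the unit of work
```

The 409 responses are one way to make the interface safe to retry: a repeated POST or DELETE reports the transaction's settled state rather than silently redoing anything.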
On Wed, Jan 6, 2010 at 8:24 AM, Bob Haugen <bob.haugen@...> wrote: > So a 2PC client-server interaction might have classical ACID database > transactions on the server for each request, but the full client > experience would have 2 phases at a higher level. P.S. compensation is also 2PC, just with an optional 2nd phase.
On Wed, Jan 6, 2010 at 8:14 AM, mike amundsen <mamund@...> wrote:
> So far, I've treated provisional-final work as application-level
> activity.

I think once you are doing anything on HTTP it's all application-level.

> I think it's true that 2PC-style transactions over HTTP are not needed.
> Two-Phase Commit may be important on the server-side behind the HTTP interface
> between the client and server, but there is no reason to expose them to HTTP clients;
> they don't care what goes on at the server.

You may be confusing 2PC with ACID. Peter Furniss once told me that any application-level agreement requires 2 phases, that is, 2PC in some form or other. You just don't get Isolation over HTTP, and you shouldn't hold locks over a long-running process dependent on a response from an unreliable partner.

So a 2PC client-server interaction might have classical ACID database transactions on the server for each request, but the full client experience would have 2 phases at a higher level.
Bob:

<snip>
I think once you are doing anything on HTTP it's all application-level.
</snip>

I've been focusing on the control-data aspect of HTTP lately. It's used to handle authentication, encoding, caching, etc. My attempt here has been to see if there is value in treating some level of transactions as control data instead of first-class application information.

<snip>
You may be confusing 2PC with ACID.
</snip>

I think you're correct. Thanks.

mca
http://amundsen.com/blog/

On Wed, Jan 6, 2010 at 09:24, Bob Haugen <bob.haugen@...> wrote:
> On Wed, Jan 6, 2010 at 8:14 AM, mike amundsen <mamund@...> wrote:
>> So far, I've treated provisional-final work as application-level
>> activity.
>
> I think once you are doing anything on HTTP it's all application-level.
>
>> I think it's true that 2PC-style transactions over HTTP are not needed.
>> Two-Phase Commit may be important on the server-side behind the HTTP interface
>> between the client and server, but there is no reason to expose them to HTTP clients;
>> they don't care what goes on at the server.
>
> You may be confusing 2PC with ACID. Peter Furniss once told me that
> any application-level agreement requires 2 phases, that is, 2PC in
> some form or other. You just don't get Isolation over HTTP, and you
> shouldn't hold locks over a long-running process dependent on a
> response from an unreliable partner.
>
> So a 2PC client-server interaction might have classical ACID database
> transactions on the server for each request, but the full client
> experience would have 2 phases at a higher level.
Jan Algermissen wrote:
>
> On Jan 6, 2010, at 10:24 AM, Eric J. Bowman wrote:
>
> > My point is, the REST architectural style doesn't address key
> > aspects of system architecture, like storage. So how can REST
> > alone be the basis for any applied-software-architecture
> > implementation? The Model should therefore address issues like
> > isolation, concurrency and storage based on the REST-agnostic
> > laws of long-running systems. These laws clearly apply to the
> > goals of any system that needs REST in the first place.
>
> REST's primary goal is not fault tolerance but decentralized
> evolution, scalability and simplicity. REST's focus is coordination
> of components owned by independent parties. I think the comparison
> you try to make is a bit problematic.

I think it depends on what one means by fault-tolerant. I hate it when a website hangs entirely because some ad rotator isn't responding. If my website doesn't hang in such a case, I'd consider it fault-tolerant, as it's the best I can do when dealing with third-party components. REST takes into account the unreliability of the network itself. If my website goes offline only in a particular region, I'd consider it fault-tolerant. So fault tolerance on REST's global scale is a different beast entirely from fault tolerance within a programming language or OS. But you do make a good argument.

> However, there is certainly a lot of overlap between the areas of
> application. The stateless server constraint for example enables
> live server upgrade, and the hypermedia constraint even enables
> changing the server-side state machine while the client is stepping
> through it from hyperlink to hyperlink.
>
> In fact I find it hard to see how you could achieve better fault
> tolerance regarding the component communication than the fault
> tolerance you have with TCP/IP, DNS and HTTP.

Exactly! REST gets you most of the way to these laws for high-availability systems.
If you really need Web scale and REST, you probably want high availability anyway, so I think it may be useful to incorporate these ideas into an abstract model of a system, where the purpose of that model is to guide long-term development towards an ideal that may always lie just over the horizon. -Eric
Tim Williams wrote:
>
> > REST allows for a system which obeys Joe's laws for system
> > reliability. While it specifies the layered system constraint, REST
> > says nothing about isolating those layers for concurrency. To meet
> > the "software development on the scale of decades" goal of REST, I
> > believe applied REST architecture should extend beyond REST per se,
> > and consider other
>
> An "applied REST architecture" extends REST by definition, right? On
> the other hand, if you're talking about starting with REST and
> applying additional constraints, that would be creating a new hybrid
> style based on, but not, REST?

I don't think that applying REST architecture extends REST in any way. "REST... [focuses] on the roles of components, the constraints upon their interaction with other components, and their interpretation of significant data elements." That's a limited scope within an overall system. My choice of storage implementation has nothing to do with the roles of any components, or the constraints upon messaging between components, or the interpretation of data. So I'm not really creating an additional REST constraint, or altering the REST style in any way.

The idea of applied software architecture is to create mappings from the abstraction to the implementation. But the abstraction of an entire REST system, as a model for its developers to follow, needs to address component implementation, which is outside the scope of REST. The overall architecture encompasses REST to provide a uniform interface. Implementation details are hidden behind this uniform interface, so from the user perspective the system is only REST -- users care nothing about storage implementation, only HTTP interaction.

-Eric
Compensation is not the same as undo; that is a big error in this discussion. Undo is a recovery mechanism. Compensation is something every transaction needs to have. Every well-formed transaction has a compensating transaction associated with it. The classic example is for a credit card debit transaction: a corresponding credit card credit compensating transaction is defined in case an erroneous charge was made. Compensation has nothing to do with recovery. It's about compensating for an erroneous transaction.
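[Editor's note: Eric's credit-card example can be made concrete as an append-only ledger, where the compensating credit is a new transaction rather than an undo of the committed debit. The code is an illustrative sketch, not from any cited system.]

```python
# Append-only ledger: a compensating transaction is a *new* entry that
# offsets a committed one; the committed entry itself is never erased.
ledger = []


def debit(account, amount):
    # The original (possibly erroneous) charge; once appended, it is committed.
    ledger.append({"account": account, "amount": -amount, "kind": "debit"})


def compensate_debit(account, amount):
    # Runs *after* the debit has committed; it does not delete the debit,
    # it adds an offsetting credit.
    ledger.append({"account": account, "amount": +amount, "kind": "credit"})


def balance(account):
    return sum(e["amount"] for e in ledger if e["account"] == account)


debit("alice", 100)              # erroneous charge commits
compensate_debit("alice", 100)   # separate transaction restores the balance
print(balance("alice"), len(ledger))  # 0 2
```

Note the design choice: the history keeps both entries, which is exactly what distinguishes compensation from undo-based recovery.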
A similar bit of bad logic is involved in the idea of a "provisional" transaction. I realize this idea has been around for a while, and there are several proponents of it. However a transaction is a transaction, and either it is complete or it is not complete and is rolled back (or undone). If you want to model two transactions, that's fine, but don't call one "provisional" when it has the exact same semantics as the "real" transaction.
The best modeling of a transaction for REST is to model a transaction as a resource, as is described in the "RESTful Web Services" book. Phil and I incorporated that into the second edition of our TP book. We did not include "provisional" transactions because they don't really accomplish anything. If you want to have a more "optimistic" approach to transaction modeling, the classic "pseudo conversational" model is the way to go.
Eric
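[Editor's note: the transaction-as-resource modeling Eric mentions from the "RESTful Web Services" book is roughly: create the transaction with POST, then commit or roll it back by updating the resource that was created. A minimal sketch follows; the URIs, field names, and in-memory dispatch are made up for illustration.]

```python
# Hypothetical transaction-as-resource interface: the transaction itself
# gets a URI, and committing is just updating that resource's state.
transactions = {}
next_id = [1]


def post_transaction():
    # Models: POST /transactions  ->  201 Created, Location: /transactions/<id>
    tid = next_id[0]
    next_id[0] += 1
    transactions[tid] = {"state": "open", "ops": []}
    return "/transactions/%d" % tid


def put_transaction(uri, committed):
    # Models: PUT /transactions/<id> {"committed": true|false}
    # which commits or rolls back the transaction resource.
    tid = int(uri.rsplit("/", 1)[1])
    txn = transactions[tid]
    txn["state"] = "committed" if committed else "rolled_back"
    return txn["state"]


uri = post_transaction()
print(put_transaction(uri, committed=True))  # committed
```

The appeal of this modeling is that the transaction's lifecycle is driven entirely through the uniform interface, with no out-of-band coordination protocol.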
________________________________
From: Bob Haugen <bob.haugen@...>
To: mike amundsen <mamund@...>
Cc: rest-discuss <rest-discuss@yahoogroups.com>
Sent: Wed, January 6, 2010 7:45:15 AM
Subject: Re: [rest-discuss] A different approach to supporting transactions over HTTP
On Tue, Jan 5, 2010 at 9:21 PM, mike amundsen <mamund@yahoo.com> wrote:
> <snip>
> A better approach is to expose an HTTP-compliant transaction service
> interface (TSI) that takes advantage of the protocol's inherent
> architectural style. Transactions over HTTP should be optional,
> discoverable, negotiable, based on optimistic commits, and (in the
> case of failures) use compensating requests as a way to reverse
> previous work.
> </snip>
Compensation often does not work. Think of an offer to buy. If it's
optimistically committed, both buyer and seller may take followup
actions that could be difficult or impossible to undo (e.g. starting
production) if the offer is cancelled. Alternatively, both parties
may need to wait for some indeterminate timespan to make sure a
compensating request does not arrive.
A better alternative in many situations, which I've described on this
list before, is to use a provisional-final pattern, where the initial
request is provisional, and then both parties agree that it should be
made final or cancelled. All these interactions can be done
RESTfully.
An example in regular business practice is a request for quotation
(provisional) followed by an order (final).
You may think that is not a transaction, but it follows a 2-phase
commit pattern, although it does not obey all the ACID rules (which are
impossible to follow RESTfully).
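[Editor's note: Bob's request-for-quotation / order example maps naturally onto a resource whose state moves from provisional to final. A minimal sketch, with state names and fields invented for illustration:]

```python
# Hypothetical quote resource: provisional until both parties agree,
# then made final (an order) or cancelled -- a 2-phase flow without
# any locks held between the phases.
class Quote:
    def __init__(self, terms):
        self.terms = terms
        self.state = "provisional"   # phase 1: the request for quotation

    def make_final(self):
        # phase 2: both parties agree; the quote becomes an order
        if self.state != "provisional":
            raise ValueError("only a provisional quote can be finalized")
        self.state = "final"

    def cancel(self):
        # either party may withdraw before the quote is finalized
        if self.state == "provisional":
            self.state = "cancelled"


q = Quote({"part": "X-100", "qty": 500})
q.make_final()
print(q.state)  # final
```

Each transition here could be a single HTTP request against the quote's URI, which is what makes the pattern RESTful as Bob describes.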
I should probably explain that Eric Newcomer is an expert in transactions and I am not. Nevertheless, I disagree, and will cite some other transaction experts (below).

(And I promised to myself I would stop engaging in this thread, but whatever...)

(And yes this level of transactional detail is probably off-topic for REST-discuss so I will shut up now...)

On Wed, Jan 6, 2010 at 7:49 PM, Eric Newcomer <e_newcomer@...> wrote:
>
> Compensation is not the same as undo, that is a big error in this discussion.

Compensation has been used as undo, as a replacement for cancel or abort in the 2nd phase of 2PC, in several transactional methods, including BTP (below) and Sagas:
www.cs.cornell.edu/andru/cs711/2002fa/reading/sagas.pdf

"To amend partial executions, each saga T, should be provided with a compensating transaction C. The compensating transaction undoes, from a semantic point of view, any of the actions performed by T, but does not necessarily return the database to the state that existed when the execution of T began."

> A similar bit of bad logic is involved in the idea of a "provisional" transaction. I realize this idea has been around for a while, and there are several proponents of it. However a transaction is a transaction, and either it is complete or it is not complete and is rolled back (or undone). If you want to model two transactions, that's fine, but don't call one "provisional" when it has the exact same semantics as the "real" transaction.
>

That is an inaccurate representation of the provisional-final pattern, leaving off the "final" part (the 2nd phase).

Here's one fairly early version. The paper is introduced by its author:
'"The Escrow Transactional Method," Presented at First Annual Workshop on High Performance Transaction Systems, September 1985, later published in ACM Transactions on Database Systems, V. 11, No. 4, December 1986, pp. 405-430. This has been an influential paper.
I know of nearly 30 citations to the Escrow Transactions paper in the literature, including a series of papers by different authors extending the Escrow method to database replication. Authors of these papers include: Theo Haerder, Akhil Kumar and Michael Stonebraker, N. Soparkar and Avi Silberschatz, Daniel Barbará-Millá and Hector Garcia-Molina.'
http://www.cs.umb.edu/~poneil/EscrowTM.pdf

And here's another from OASIS BTP:
http://www.oasis-open.org/committees/tc_home.php?wg_abbrev=business-transaction

<excerpt>
In general, to be able to satisfy such contracts a BTP-enabled service must support in some manner provisional or tentative state changes (the transaction's Provisional Effect) and completion either through confirmation (Final Effect) or cancellation (Counter-effect). The meaning of provisional, final, and Counter-effect are specific to the application and to the implementation of the application.
[...]

Table 1 Some alternatives for Provisional, Final and Counter-Effects
------------------------------
Provisional Effect: Perform the changes, making them visible; store information to undo the changes
Final Effect: Delete undo information
Counter effect: Perform undo action
Comment: One form of compensation approach
-----------------------------------------------------
Provisional Effect: Perform the changes, marked or typed as provisional, making them visible
Final Effect: Mark or transform as final
Counter effect: Delete or mark/transform as cancelled
Comment: E.g. quote-to-order cycle
</excerpt>
Hi Bob,
I'm aware of the research, and familiar with BTP. I just want to point out that while this is an interesting theory, it has not been proven in practice. Also I wanted to correct the terminology.
Another interesting theory that hasn't worked out in practice is three-phase commit. This does actually solve a known problem with two-phase commit (the uncertainty phase) but it isn't practical.
I don't know that the so-called "provisional transactions" solve a real problem, although the idea is interesting. I also know it was a centerpiece of BTP, which has not been adopted.
With respect to "undo" - the problem is that we need to be precise about the terms we're using to describe transactions. I am going to assume we are discussing "technical" transactions rather than "logical" transactions - i.e. the way in which the transaction paradigm is implemented in software. This is a very important point, because any suggestion of a new capability has to take into account existing implementations (something BTP didn't really do very well BTW).
In current practice, a transaction is the execution of a program that operates on shared data, typically on behalf of an online user. The program either runs to completion and the results are permanent, or the program does not complete and the results are discarded. A transaction has no meaning unless it is operating on data, since its purpose is to reliably change the state of the shared data, without leaving any partial results in the case of a crash or other failure. Thus, there is no such thing as a "partial" transaction in today's world.
The term "undo" refers to one mechanism that ensures there are no partial results when a failure occurs: it undoes the "temporary" writes to a database or other transactional resource manager that are typically done in place for better performance (i.e. the assumption is that a commit is more likely to occur than an abort). The undo mechanism relies on writing the "before" state of the resource manager's data to a recovery or undo log, from which it can be retrieved and restored if necessary.
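[Editor's note: the undo-log mechanism Eric describes (record the before-image, write in place, restore on abort) can be sketched as a toy model. This is not any real resource manager's implementation.]

```python
# Toy undo log: before-images are recorded before each in-place write,
# so an abort can restore the pre-transaction state.
data = {"x": 1, "y": 2}
undo_log = []


def write(key, value):
    undo_log.append((key, data[key]))  # record the "before" state first
    data[key] = value                  # then write in place


def abort():
    # Replay before-images in reverse order to erase all partial results.
    while undo_log:
        key, before = undo_log.pop()
        data[key] = before


def commit():
    # Results are now permanent; the before-images can be discarded.
    undo_log.clear()


write("x", 10)
write("y", 20)
abort()
print(data)  # {'x': 1, 'y': 2}
```

The key property illustrated: undo runs before commit, using recorded before-images, which is exactly what distinguishes it from compensation.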
The term "compensation" refers to a separate transaction that is run only after a previous transaction has already run to completion, in fact only after a previous transaction has committed. Because the definition of a commit includes the assumption that the results are permanent, the only way to change the results is to run another transaction (which is called a compensating transaction when it reverses or as you say undoes the results of a prior transaction). Compensation is not undo - the key difference is that undo runs before commit, and compensation runs after commit.
The fallacy in the partial transaction or reservation pattern is the assumption that partial results are visible. No results are visible until after commit - this is the way things work in practice, and to suggest otherwise ignores reality.
Now if you compare the reservation pattern to a saga, that is a reference to an accepted pattern. Compensation is used in this pattern exactly as I describe it, to reverse the results of a prior commit. A saga is just a string of individual transactions that commit or abort separately. They may have a logical relationship, but there is no transaction mechanism that joins them together. A saga is typically done to avoid maintaining locks over a long sequence of operations on data, and this is fine and works. But it also requires the developer to write specific compensation transactions for each step in the saga. No automatic mechanism exists for this.
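[Editor's note: a saga as described here (independently committed steps, each with a developer-written compensation, and no automatic mechanism) might be sketched like this. The step names are invented for illustration.]

```python
# Toy saga runner: each step commits on its own; on a failure, the
# compensations of the already-committed steps run in reverse order.
def run_saga(steps):
    done = []
    try:
        for action, compensation in steps:
            action()                 # an independently committed transaction
            done.append(compensation)
    except Exception:
        for compensation in reversed(done):
            compensation()           # developer-written, per step; nothing automatic
        return "compensated"
    return "committed"


def failing_ship():
    raise RuntimeError("ship failed")


log = []
steps = [
    (lambda: log.append("reserve"), lambda: log.append("unreserve")),
    (lambda: log.append("charge"),  lambda: log.append("refund")),
    (failing_ship,                  lambda: log.append("ship-undo")),
]
result = run_saga(steps)
print(result, log)  # compensated ['reserve', 'charge', 'refund', 'unreserve']
```

Note that the compensations run after their transactions have committed, matching Eric's point that compensation reverses the results of a prior commit.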
The problem is the suggestion that a partial result of a transaction is possible. It is not. A transaction either commits or aborts. If you run multiple transactions in sequence (as in a saga) they are related only logically, not in the transaction paradigm. If you run a "provisional" transaction it has exactly the same behavior as a "final" transaction.
I just don't want anyone thinking this theory represents actual practice.
Eric
----- Original Message ----
From: Bob Haugen <bob.haugen@...>
To: rest-discuss <rest-discuss@yahoogroups.com>
Sent: Thu, January 7, 2010 6:49:37 AM
Subject: Re: [rest-discuss] A different approach to supporting transactions over HTTP
I should probably explain that Eric Newcomer is an expert in
transactions and I am not. Nevertheless, I disagree, and will cite
some other transaction experts (below).
(And I promised to myself I would stop engaging in this thread, but whatever...)
(And yes this level of transactional detail is probably off-topic for
REST-discuss so I will shut up now...)
On Wed, Jan 6, 2010 at 7:49 PM, Eric Newcomer <e_newcomer@...> wrote:
>
> Compensation is not the same as undo, that is a big error in this discussion.
Compensation has been used as undo, as a replacement for cancel or
abort in the 2nd phase of 2PC, in several transactional methods,
including BTP (below) and Sagas:
www.cs.cornell.edu/andru/cs711/2002fa/reading/sagas.pdf
"To amend partial executions, each saga T, should be provided with a
compensating transaction C. The compensating transaction undoes, from
a semantic point of view, any of the actions performed by T, but does
not necessarily return the database to the state that existed when the
execution of T began."
> A similar bit of bad logic is involved in the idea of a "provisional" transaction. I realize this idea has been around for a while, and there are several proponents of it. However a transaction is a transaction, and either it is complete or it is not complete and is rolled back (or undone). If you want to model two transactions, that's fine, but don't call one "provisional" when it has the exact same semantics as the "real" transaction.
>
That is an inaccurate representation of the provisional-final
pattern, leaving off the "final" part (the 2nd phase).
Here's one fairly early version. The paper is introduced by its author:
'"The Escrow Transactional Method," Presented at First Annual Workshop
on High Performance Transaction Systems, September 1985, later
published in ACM Transactions on Database Systems, V. 11, No. 4,
December 1986, pp. 405-430. This has been an influential paper. I know
of nearly 30 citations to the Escrow Transactions paper in the
literature including a series of papers by different authors extending
the Escrow method to database replication. Authors of these papers
include: Theo Haerder, Akhil Kumar and Michael Stonebraker, N.
Soparkar and Avi Silberschatz, Daniel Barbará-Millá and Hector
Garcia-Molina.'
http://www.cs.umb.edu/~poneil/EscrowTM.pdf
And here's another from OASIS BTP:
http://www.oasis-open.org/committees/tc_home.php?wg_abbrev=business-transaction
<excerpt>
In general, to be able to satisfy such contracts a BTP-enabled service
must support in some
manner provisional or tentative state changes (the transaction’s
Provisional Effect) and
completion either through confirmation (Final Effect) or cancellation
(Counter-effect). The
meaning of provisional, final, and Counter-effect are specific to the
application and to the
implementation of the application.
[...]
Table 1 Some alternatives for Provisional, Final and Counter-Effects
------------------------------
Provisional Effect:
Perform the changes, making them visible; store information to undo the changes
Final Effect:
Delete undo information
Counter effect:
Perform undo action
Comment:
One form of compensation approach
-----------------------------------------------------
Provisional Effect:
Perform the changes, marked or typed as provisional, making them visible
Final Effect:
Mark or transform as final
Counter effect:
Delete or mark/transform as cancelled
Comment:
E.g. quote-to-order cycle
</excerpt>
------------------------------------
Yahoo! Groups Links
WS-REST 2010
http://ws-rest.org/
Paper Submission: February 8, 2010
Call for Papers
The First International Workshop on RESTful Design (WS-REST 2010) aims to provide a forum for discussion and dissemination of research on the emerging resource-oriented style of Web service design.
Background
Over the past few years, several discussions between advocates of the two major architectural styles for designing and implementing Web services (the RPC/ESB-oriented approach and the resource-oriented approach) have been mainly held outside of the research and academic community, within dedicated mailing lists, forums and practitioner communities. The RESTful approach to Web services has also received a significant amount of attention from industry as indicated by the numerous technical books being published on the topic.
This first edition of WS-REST, co-located with the WWW2010 conference, aims at providing an academic forum for discussing current emerging research topics centered around the application of REST, as well as advanced application scenarios for building large scale distributed systems.
In addition to presentations on novel applications of RESTful Web services technologies, the workshop program will also include discussions on the limits of the applicability of the REST architectural style, as well as recent advances in research that aim at tackling new problems that may require extending the basic REST architectural style. The organizers are seeking novel and original, high quality paper submissions on research contributions focusing on the following topics:
* Applications of the REST architectural style to novel domains
* Design Patterns and Anti-Patterns for RESTful services
* RESTful service composition
* Inverted REST (REST for push events)
* Integration of Pub/Sub with REST
* Performance and QoS Evaluations of RESTful services
* REST compliant transaction models
* Mashups
* Frameworks and toolkits for RESTful service implementations
* Frameworks and toolkits for RESTful service consumption
* Modeling RESTful services
* Resource Design and Granularity
* Evolution of RESTful services
* Versioning and Extension of REST APIs
* HTTP extensions and replacements
* REST compliant protocols beyond HTTP
* Multi-Protocol REST (REST architectures across protocols)
All workshop papers are peer-reviewed and accepted papers will be published as part of the ACM Digital Library. Two kinds of contributions are sought: short position papers (not to exceed 4 pages in ACM style format) describing particular challenges or experiences relevant to the scope of the workshop, and full research papers (not to exceed 8 pages in the ACM style format) describing novel solutions to relevant problems. Technology demonstrations are particularly welcome, and we encourage authors to focus on "lessons learned" rather than describing an implementation.
Papers must be submitted electronically in PDF format. Submit at the WS-REST 2010 EasyChair installation
http://www.easychair.org/conferences/?conf=WSREST2010
Important Dates
* Submission deadline: February 8, 2010, 23.59 Hawaii time
* Notification of acceptance: March 1, 2010
* Camera-ready versions of accepted papers: March 14, 2010
* WS-REST 2010 Workshop: April 26, 2010
Program Committee Chairs
* Cesare Pautasso, Faculty of Informatics, USI Lugano, Switzerland
* Erik Wilde, School of Information, UC Berkeley, USA
* Alexandros Marinos, Faculty of Engineering & Physical Sciences, University of Surrey, UK
Program Committee
* Rosa Alarcon, Pontificia Universidad Catolica de Chile
* Subbu Allamaraju, Yahoo Inc., USA
* Tim Bray, Sun Microsystems, USA
* Bill Burke, Red Hat, USA
* Benjamin Carlyle, Australia
* Stuart Charlton, Elastra, USA
* Joe Gregorio, Google, USA
* Michael Hausenblas, DERI, Ireland
* Rohit Khare, 4K Associates, USA
* Frank Leymann, University of Stuttgart, Germany
* Mark Nottingham, Yahoo Inc., Australia
* Aristotle Pagaltzis, Germany
* Ian Robinson, Thoughtworks, USA
* Richard Taylor, UC Irvine, USA
* Stefan Tilkov, innoQ, Germany
* Steve Vinoski, Verivue, USA
* Jim Webber, Thoughtworks, USA
* Olaf Zimmermann, IBM Zurich Research Lab, Switzerland
Contact
WS-REST Web site: http://ws-rest.org/
WS-REST Email: chairs@ws-rest.org
Bob:

Thanks for the pointers to Saga and Escrow. I'm familiar with the Saga paper and it influenced my thinking in working up the pattern described in my post, but I was not able to find a freely-available PDF as a reference (I used a more "lightweight" description as a ref in my post). I'll review the Escrow paper this week.

mca
http://amundsen.com/blog/

On Thu, Jan 7, 2010 at 06:49, Bob Haugen <bob.haugen@...> wrote:
> I should probably explain that Eric Newcomer is an expert in
> transactions and I am not. Nevertheless, I disagree, and will cite
> some other transaction experts (below).
>
> (And I promised to myself I would stop engaging in this thread, but whatever...)
>
> (And yes this level of transactional detail is probably off-topic for
> REST-discuss so I will shut up now...)
>
> On Wed, Jan 6, 2010 at 7:49 PM, Eric Newcomer <e_newcomer@...> wrote:
>>
>> Compensation is not the same as undo, that is a big error in this discussion.
>
> Compensation has been used as undo, as a replacement for cancel or
> abort in the 2nd phase of 2PC, in several transactional methods,
> including BTP (below) and Sagas:
> www.cs.cornell.edu/andru/cs711/2002fa/reading/sagas.pdf
>
> "To amend partial executions, each saga T, should be provided with a
> compensating transaction C. The compensating transaction undoes, from
> a semantic point of view, any of the actions performed by T, but does
> not necessarily return the database to the state that existed when the
> execution of T began."
>
>> A similar bit of bad logic is involved in the idea of a "provisional" transaction. I realize this idea has been around for a while, and there are several proponents of it. However a transaction is a transaction, and either it is complete or it is not complete and is rolled back (or undone). If you want to model two transactions, that's fine, but don't call one "provisional" when it has the exact same semantics as the "real" transaction.
>>
>
> That is an inaccurate representation of the provisional-final
> pattern, leaving off the "final" part (the 2nd phase).
>
> Here's one fairly early version. The paper is introduced by its author:
> '"The Escrow Transactional Method," Presented at First Annual Workshop
> on High Performance Transaction Systems, September 1985, later
> published in ACM Transactions on Database Systems, V. 11, No. 4,
> December 1986, pp. 405-430. This has been an influential paper. I know
> of nearly 30 citations to the Escrow Transactions paper in the
> literature including a series of papers by different authors extending
> the Escrow method to database replication. Authors of these papers
> include: Theo Haerder, Akhil Kumar and Michael Stonebraker, N.
> Soparkar and Avi Silberschatz, Daniel Barbará-Millá and Hector
> Garcia-Molina.'
> http://www.cs.umb.edu/~poneil/EscrowTM.pdf
>
> And here's another from OASIS BTP:
> http://www.oasis-open.org/committees/tc_home.php?wg_abbrev=business-transaction
>
> <excerpt>
> In general, to be able to satisfy such contracts a BTP-enabled service
> must support in some manner provisional or tentative state changes
> (the transaction's Provisional Effect) and completion either through
> confirmation (Final Effect) or cancellation (Counter-effect). The
> meaning of provisional, final, and Counter-effect are specific to the
> application and to the implementation of the application.
> [...]
> Table 1 Some alternatives for Provisional, Final and Counter-Effects
> ------------------------------
>
> Provisional Effect:
> Perform the changes, making them visible; store information to undo the changes
>
> Final Effect:
> Delete undo information
>
> Counter effect:
> Perform undo action
>
> Comment:
> One form of compensation approach
>
> -----------------------------------------------------
>
> Provisional Effect:
> Perform the changes, marked or typed as provisional, making them visible
>
> Final Effect:
> Mark or transform as final
>
> Counter effect:
> Delete or mark/transform as cancelled
>
> Comment:
> E.g. quote-to-order cycle
> </excerpt>
Hello,

Have you looked at the pointers Mark Mc Keown sent a few weeks ago on this list?
- http://tech.dir.groups.yahoo.com/group/rest-discuss/message/14035
- http://betathoughts.blogspot.com/2007/06/brief-history-of-consensus-2pc-and.html
- http://www.allhands.org.uk/2006/proceedings/papers/624.pdf

The initial application was the co-allocation of resources. It uses the Paxos-commit algorithm.

Best wishes,

Bruno.

mike amundsen wrote:
>
> I've posted "DOCTR: a better approach to HTTP transactions" to my blog
> (http://amundsen.com/blog/archives/1024) and welcome any feedback.
>
> Some excerpts:
>
> <snip>
> A better approach is to expose an HTTP-compliant transaction service
> interface (TSI) that takes advantage of the protocol's inherent
> architectural style. Transactions over HTTP should be optional,
> discoverable, negotiable, based on optimistic commits, and (in the
> case of failures) use compensating requests as a way to reverse
> previous work.
> </snip>
>
> and
>
> <snip>
> What is described here is a transaction service interface (TSI) that
> clients and servers can choose to implement in ways all parties can
> agree upon. This is the way media-types are currently handled and TSIs
> could be defined, documented, and registered in much the same way.
> This includes the possibility of registering a media-type that matches
> the TSI implementation. True to the architectural style of the HTTP
> protocol, this approach proposes a "uniform interface" for handling
> units of work over HTTP.
> </snip>
>
> Thanks in advance.
In [1] Roy writes:

"A REST API should spend almost all of its descriptive effort in defining the media type(s) used for representing resources and driving application state, or in defining extended relation names and/or hypertext-enabled mark-up for existing standard media types. [...]"

Maybe I am reading too much into this, but then... usually Roy chooses his words quite carefully:

Does anyone know why it is "*almost* all of its descriptive effort" and not simply "all of its descriptive effort"? What else is there to be described than the media types?

Jan

[1] http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven
The media type definition says what methods do. In most cases there's plenty of wiggle-room on response codes, so that's left to the API to describe. A system's security implementation isn't described by its media types, so your API has to describe the use of HTTP Digest, user roles, and such. Your API does this via hypertext -- descriptive error responses with links instead of the default 4xx response message, for example.

-Eric

> In [1] Roy writes
>
> "A REST API should spend almost all of its descriptive effort in
> defining the media type(s) used for representing resources and
> driving application state, or in defining extended relation names
> and/or hypertext-enabled mark-up for existing standard media types.
> [...]"
>
> Maybe I am reading too much into this, but then... usually Roy
> chooses his words quite carefully:
>
> Does anyone know why it is "*almost* all of its descriptive effort"
> and not simply "all of its descriptive effort"? What else is there
> to be described than the media types?
>
> Jan
>
> [1]
> http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven
On 7 Jan 2010 13:03, "Eric Newcomer" <e_newcomer@...> wrote: Hi Bob, I'm aware of the research, and with BTP. I just want to point out that while this is an interesting theory, it has not been proven in practice. Also I wanted to correct the terminology. Another interesting theory that hasn't worked out in practice is three-phase commit. This does actually solve a known problem with two-phase commit (the uncertainty phase) but it isn't practical. I don't know that the so called "provisional transactions" solve a real problem, although the idea is interesting. I also know it was a centerpiece of BTP, which has not been adopted. With respect to "undo" - the problem is that we need to be precise about the terms we're using to describe transactions. I am going to assume we are discussing "technical" transactions rather than "logical" transactions - i.e. the way in which the transaction paradigm is implemented in software. This is a very important point, because any suggestion of a new capability has to take into account existing implementations (something BTP didn't really do very well BTW). In current practice, a transaction is the execution of a program that operates on shared data, typically on behalf of an online user. The program either runs to completion and the results are permanent, or the program does not complete and the results are discarded. A transaction has no meaning unless it is operating on data, since it's purpose is to reliably change the state of the shared data, without leaving any partial results in the case of a crash or other failure. Thus, there is no such thing as a "partial" transaction in today's world. The term "undo" refers to one mechanism that ensures that there are no partial results, by undoing the "temporary" writes to a database or other transactional resource manager that are typically done in place for better performance (i.e the assumption is that a commit is a more likely to occur than an abort) when a failure occurs. 
The undo mechanism relies on writing the "before" state of the resource manager's data to a recovery of undo log, from which it can be retrieved and restored if necessary. The term "compensation" refers to a separate transaction that is run only after a previous transaction has already run to completion, in fact only after a previous transaction has committed. Because the definition of a commit includes the assumption that the results are permanent, the only way to change the results is to run another transaction (which is called a compensating transaction when it reverses or as you say undoes the results of a prior transaction). Compensation is not undo - the key difference is that undo runs before commit, and compensation runs after commit. The fallacy in the partial transaction or reservation pattern is that partial results are visible. No results are visible until after commit - this is the way things work in practice, and to suggest otherwise ignores reality. Now if you compare the reservation pattern to a saga, that's is a reference to an accepted pattern. Compensation is used in this pattern exactly as I describe it, to reverse the results of a prior commit. A saga is just a string of individual transactions that commit or abort separately. They may have a logical relationship, but there is no transaction mechanism that joins them together. A saga is typically done to avoid maintaining locks over a long sequence of operations on data, and this is fine and works. But it also requires the developer to write specific compensation transactions for each step in the saga. No automatic mechanism exists for this. The problem is the suggestion that a partial result of a transaction is possible. It is not. A transaction either commits or aborts. If you run multiple transactions in sequence (as in a saga) they are related only logically, not in the transaction paradigm. If you run a "provisional" transaction it has exactly the same behavior as a "final" transaction. 
I just don't want anyone thinking this theory represents actual practice. Eric ----- Original Message ---- From: Bob Haugen <bob.haugen@...> To: rest-discuss <rest-discuss@yahoogroups.com> Sent: Thu, January 7, 2010 6:49:37 AM Subject: Re: [... I should probably explain that Eric Newcomer is an expert in transactions and I am not. Nevertheless...
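Eric's distinction between a saga (a string of individually committed transactions) and automatic undo can be made concrete with a small sketch. This is a hypothetical illustration, not a reference to any actual transaction-manager or BTP implementation; the booking steps and their compensations are invented:

```python
# Minimal sketch of the saga pattern: each step commits (or fails) on
# its own, and if a later step fails, the hand-written compensations
# for the steps that already committed run in reverse order. There is
# no automatic undo -- once a step commits, only its compensation can
# reverse it.

def run_saga(steps):
    """steps: list of (do, compensate) pairs of callables."""
    completed = []
    for do, compensate in steps:
        try:
            do()  # this step commits, or raises
            completed.append(compensate)
        except Exception:
            # A later step failed: compensate the committed steps,
            # newest first.
            for comp in reversed(completed):
                comp()
            raise

# Hypothetical booking saga for illustration:
log = []

def fail_hotel():
    raise RuntimeError("hotel full")

booking_saga = [
    (lambda: log.append("reserve seat"), lambda: log.append("release seat")),
    (lambda: log.append("charge card"), lambda: log.append("refund card")),
    (fail_hotel, lambda: None),
]

try:
    run_saga(booking_saga)
except RuntimeError:
    pass
# log is now: reserve seat, charge card, refund card, release seat
```

Note that each compensation is ordinary application code the developer must write per step; nothing here is a "partial" transaction - every step either commits or raises, exactly as described above.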
This is a very interesting post - clear, concise, to the point. I think I learned more about transactions and compensation from it than from entire books on the subject. You should consider writing it up as an article or blog entry for future reference. Cheers.
On Jan 7, 2010, at 8:58 PM, Eric J. Bowman wrote: > The media type definition says what methods do. In most cases there's > plenty of wiggle-room on response codes, so that's left to the API to > describe. No, the error codes need no description - they are fully described by RFC 2616. > A system's security implementation isn't described by its > media types, so your API has to describe the use of HTTP Digest, user > roles, and such. Your API does this via hypertext -- descriptive > error > responses with links instead of the default 4xx response message, for > example. > I was referring to design-time documentation, not runtime. What does a service description need besides describing the set of media types (and link rels, etc.) it uses? Jan > -Eric > >> >> In [1] Roy writes >> >> "A REST API should spend almost all of its descriptive effort in >> defining the media type(s) used for representing resources and >> driving application state, or in defining extended relation names >> and/or hypertext-enabled mark-up for existing standard media types. >> [...]" >> >> Maybe I am reading too much into this, but then...usually Roy >> chooses his words quite carefully: >> >> Does anyone know why it is "*almost* all of its descriptive effort" >> and not simply "all of its descriptive effort"? What else is there >> to be described than the media types? >> >> Jan >> >> >> [1] >> http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven >> -------------------------------------- Jan Algermissen Mail: algermissen@... Blog: http://algermissen.blogspot.com/ Home: http://www.jalgermissen.com --------------------------------------
Hi Folks, I am seeking examples of websites that don't have a URL for each webpage. Let me illustrate: I type into my browser the URL to the website. I then see its homepage, which contains links. I click on a link and it gives me a new page, but the URL doesn't change. I continue clicking on links and the URL remains the same throughout my visit to the website. Can you provide an example website? /Roger
Lots of the Google GWT-based or otherwise AJAXified products like Gmail, Google Calendar, and Google Reader have this sort of behavior. Jon ........ Jon Moore On Jan 8, 2010, at 7:59 AM, "Costello, Roger L." <costello@mitre.org> wrote: > Hi Folks, > > I am seeking examples of websites that don't have a URL for each > webpage. > > Let me illustrate: I type into my browser the URL to the website. I > then see its homepage, which contains links. I click on a link and > it gives me a new page, but the URL doesn't change. I continue > clicking on links and the URL remains the same throughout my visit > to the website. > > Can you provide an example website? > > /Roger >
Flex/Flash-based sites as well; although they can play around with anchors, they usually do not create real pages: http://www.tucano.org.br/portal.html Everything you click is an anchor, not another web page, so Google cannot index it. Guilherme Silveira Caelum | Ensino e Inovação http://www.caelum.com.br/ On Fri, Jan 8, 2010 at 11:28 AM, Moore, Jonathan (CIM) <jonathan_moore@...> wrote: > Lots of the Google GWT-based or otherwise AJAXified products like Gmail, > Google Calendar, and Google Reader have this sort of behavior. > > Jon
On Fri, Jan 8, 2010 at 9:24 AM, Guilherme Silveira <guilherme.silveira@...> wrote: > Flex/flash based ones also, although they can play around with anchors, > they usually do not create real pages: > http://www.tucano.org.br/portal.html > Everything you click is an anchor, not another web page, therefore google > can not index it. > There are also web applications where almost every page is a form that POSTs back to the same never-changing URL. Even "links" that you click actually just run javascript to submit the form.
> There are also web application where almost every page is a form that POSTs > back to the same never-changing URL. Even "links" that you click actually > just run javascript to submit the form. Some touch of irony? :) Regards
going all the way round and almost back to the start again :
http://code.quirkey.com/sammy/
... perhaps ?
>
> On Fri, Jan 8, 2010 at 9:24 AM, Guilherme Silveira <guilherme.silveira@...
> > wrote:
>
> Flex/flash based ones also, although they can play around with
> anchors, they usually do not create real pages: http://www.tucano.org.br/portal.html
>
> Everything you click is an anchor, not another web page, therefore
> google can not index it.
>
>
> There are also web application where almost every page is a form
> that POSTs back to the same never-changing URL. Even "links" that
> you click actually just run javascript to submit the form.
>
>
Roger Menday (PhD)
<roger.menday@...>
Senior Researcher, Fujitsu Laboratories of Europe Limited
Hayes Park Central, Hayes End Road, Hayes, Middlesex, UB4 8FE, U.K.
Tel: +44 (0) 208 606 4534
______________________________________________________________________
Fujitsu Laboratories of Europe Limited
Hayes Park Central, Hayes End Road, Hayes, Middlesex, UB4 8FE
Registered No. 4153469
This e-mail and any attachments are for the sole use of addressee(s) and
may contain information which is privileged and confidential. Unauthorised
use or copying for disclosure is strictly prohibited. The fact that this
e-mail has been scanned by Trendmicro Interscan and McAfee Groupshield does
not guarantee that it has not been intercepted or amended nor that it is
virus-free.
On Jan 8, 2010, at 6:30 AM, Guilherme Silveira wrote: >> There are also web application where almost every page is a form that POSTs >> back to the same never-changing URL. Even "links" that you click actually >> just run javascript to submit the form. > Some touch of irony? :) I am told that lots of enterprises continue to pay a hefty ransom to some of the innovative frameworks that thought it was really cool to have a single URI for an entire site. Subbu
Jan Algermissen wrote: > > On Jan 7, 2010, at 8:58 PM, Eric J. Bowman wrote: > > > The media type definition says what methods do. In most cases > > there's plenty of wiggle-room on response codes, so that's left to > > the API to describe. > > No, the error codes need no description - they are fully described > by RFC 2616. > Please see sections 4.4 and 5.5 of RFC 5023. Response codes are indeed described by HTTP, that isn't the issue. The issue is what response is issued to what request, and that isn't defined by media type. My Atom Protocol implementation may respond 202 Accepted instead of 201 Created because I've implemented moderation of new content. Only my API can describe that authoritatively. > > > A system's security implementation isn't described by its > > media types, so your API has to describe the use of HTTP Digest, > > user roles, and such. Your API does this via hypertext -- > > descriptive error > > responses with links instead of the default 4xx response message, > > for example. > > > > I was referring to design time documentation, not runtime. > I know you were. But, a REST system combines self-descriptive messaging with a self-documenting API. You can write out-of-band documentation for it to your heart's content. But that documentation ought to be reflected at runtime, so I can know exactly how your system works by stepping my way through it using curl or a protocol analyzer, without having to constantly refer to the design-time documentation. -Eric
Roger Menday wrote: > > > > going all the way round and almost back to the start again : > > http://code.quirkey.com/sammy/ > > ... perhaps ? > sammy.js is just a nifty way of controlling client state using html and javascript.. what's wrong with that? It's code on demand >> >> On Fri, Jan 8, 2010 at 9:24 AM, Guilherme >> Silveira <guilherme.silveira@... >> <mailto:guilherme.silveira@...>> wrote: >> >> >> >> Flex/flash based ones also, although they can play around with >> anchors, they usually do not create real >> pages: http://www.tucano.org.br/portal.html >> <http://www.tucano.org.br/portal.html> >> >> Everything you click is an anchor, not another web page, >> therefore google can not index it. >> >> >> There are also web application where almost every page is a form that >> POSTs back to the same never-changing URL. Even "links" that you >> click actually just run javascript to submit the form. >> > > > Roger Menday (PhD) > <roger.menday@... <mailto:roger.menday@...>> > > Senior Researcher, Fujitsu Laboratories of Europe Limited > Hayes Park Central, Hayes End Road, Hayes, Middlesex, UB4 8FE, U.K. > Tel: +44 (0) 208 606 4534
On Jan 8, 2010, at 9:46 PM, Eric J. Bowman wrote: > Jan Algermissen wrote: >> >> On Jan 7, 2010, at 8:58 PM, Eric J. Bowman wrote: >> >>> The media type definition says what methods do. In most cases >>> there's plenty of wiggle-room on response codes, so that's left to >>> the API to describe. >> >> No, the error codes need no description - they are fully described >> by RFC 2616. >> > > Please see sections 4.4 and 5.5 of RFC 5023. I know - but RFC 5023 should actually not do that. > Response codes are indeed > described by HTTP, that isn't the issue. The issue is what response > is > issued to what request, and that isn't defined by media type. No, it is completely up to the server. The server may respond in any way, as long as the response is correctly telling the client what happened (e.g. send 201 when a resource has been created). The examples in RFC 5023 do actually tell you nothing about how a given server might choose to respond. > My Atom > Protocol implementation may respond 202 Accepted instead of 201 > Created > because I've implemented moderation of new content. Only my API can > describe that authoritatively. No, RFC2616 describes what has to happen. Return 201 upon creation and 202 if the request has been accepted but not yet been processed. > >> >>> A system's security implementation isn't described by its >>> media types, so your API has to describe the use of HTTP Digest, >>> user roles, and such. Your API does this via hypertext -- >>> descriptive error >>> responses with links instead of the default 4xx response message, >>> for example. >>> >> >> I was referring to design time documentation, not runtime. >> > > I know you were. But, a REST system combines self-descriptive > messaging with a self-documenting API. What is a 'self documenting API'? > You can write out-of-band > documentation for it to your heart's content. 
> But that documentation > ought to be reflected at runtime, so I can know exactly how your system > works by stepping my way through it using curl or a protocol analyzer, You can never know from observing an interaction at a given time how the server might respond the next time. > without having to constantly refer to the design-time documentation. What 'design-time documentation' are you referring to? Jan > -Eric -------------------------------------- Jan Algermissen Mail: algermissen@... Blog: http://algermissen.blogspot.com/ Home: http://www.jalgermissen.com --------------------------------------
On 8 Jan 2010, at 21:05, Mike Kelly wrote:
> Roger Menday wrote:
>>
>>
>>
>> going all the way round and almost back to the start again :
>>
>> http://code.quirkey.com/sammy/
>>
>> ... perhaps ?
>
> sammy.js is just a nifty way of controlling client state using html
> and javascript.. what's wrong with that?
i wasn't saying there was anything wrong with it - i rather like it, in fact.
> It's code on demand
>
>>>
>>> On Fri, Jan 8, 2010 at 9:24 AM, Guilherme Silveira <guilherme.silveira@...
>>> <mailto:guilherme.silveira@...>> wrote:
>>>
>>>
>>> Flex/flash based ones also, although they can play around with
>>> anchors, they usually do not create real
>>> pages: http://www.tucano.org.br/portal.html
>>> <http://www.tucano.org.br/portal.html>
>>>
>>> Everything you click is an anchor, not another web page,
>>> therefore google can not index it.
>>>
>>>
>>> There are also web application where almost every page is a form
>>> that POSTs back to the same never-changing URL. Even "links" that
>>> you click actually just run javascript to submit the form.
>>>
>>
>>
>> Roger Menday (PhD)
>> <roger.menday@... <mailto:roger.menday@...>>
>>
>> Senior Researcher, Fujitsu Laboratories of Europe Limited
>> Hayes Park Central, Hayes End Road, Hayes, Middlesex, UB4 8FE, U.K.
>> Tel: +44 (0) 208 606 4534
Roger Menday (PhD)
<roger.menday@...>
Senior Researcher, Fujitsu Laboratories of Europe Limited
Hayes Park Central, Hayes End Road, Hayes, Middlesex, UB4 8FE, U.K.
Tel: +44 (0) 208 606 4534
Jan Algermissen wrote: > > > Please see sections 4.4 and 5.5 of RFC 5023. > > I know - but RFC 5023 should actually not do that. > Regardless, RFC 5023 is correct that RFC 2616 says exactly what I said, about using descriptive messaging (and linking) in 4xx response bodies. When someone wants to know what REST is, I point them to what it is not -- every major CMS implementation out there (wiki, forum, weblog, etc.) returns a fully-styled page, either to create the resource, or a search interface, when a document isn't found. The problem is they all do this with a 200 OK response, when the correct response is 404. There's nothing wrong with a 404 page that looks like the rest of the site, complete with menu links, offering a variety of choices for error recovery. I explain that changing the response code wouldn't make the system RESTful, but it does illustrate the idea of hypertext driving application state very well. Is the 404 failure mode of the system a search interface (forum), or a create-new-content interface (wiki)? You can document this out-of-band if you'd like, but this aspect of the API documents itself in hypertext, along with all the other choices for transitioning to another steady-state (menu links), to recover from the error. The standard 404 response essentially terminates the application; the user has to back up and try something different, instead of being instructed as to how to move forwards. That isn't a self-documenting API. > > > Response codes are indeed > > described by HTTP, that isn't the issue. The issue is what > > response is > > issued to what request, and that isn't defined by media type. > > No, it is completely up to the server. The server may respond in any > way, as long as the response is correctly telling the client what > happened (e.g. send 201 when a resource has been created). The > examples in RFC 5023 do actually tell you nothing about how a given > server might choose to respond. > Exactly. 
The server's response tells you how the server responded to a hypertext interaction. You asked what you need to document outside the scope of a media type. Whatever out-of-band documentation you have should be reflective of what's actually happening in-band. The media type is only telling you what methods to use and how to use them, not what the responses are going to be... wiggle-room, like I said. > > > My Atom > > Protocol implementation may respond 202 Accepted instead of 201 > > Created > > because I've implemented moderation of new content. Only my API can > > describe that authoritatively. > > No, RFC2616 describes what has to happen. Return 201 upon creation > and 202 if the request has been accepted but not yet been processed. > RFC 2616 tells the client how to interpret the response. Whether the response is a 201 or a 202 is up to your server. Isn't this the sort of thing you were asking about documenting, that's outside the scope of a media type (or the protocol definition)? > > > I know you were. But, a REST system combines self-descriptive > > messaging with a self-documenting API. > > What is a 'self documenting API'? > Any truly RESTful API documents itself in-band. My Xforms Atom Protocol client (minus the PATCH stuff) is RESTful because it is hypertext-driven. The specifics of my implementation may be deduced by driving the hypertext application (and viewing source) in a browser with xforms and protocol analyzer extensions. All out-of-band knowledge relevant to protocol interaction, is encapsulated within the media types used, which are standard. I use the term self-documenting, because people get confused into thinking that payloads must be self-describing. My use of the term self-documenting is w3c approved: http://www.w3.org/TR/gov-data/ Because while following their advice won't get you REST as a result, it's at least hypertext-driven, i.e. has a self-documenting API even if the messaging isn't self-descriptive. 
> > > You can write out-of-band > > documentation for it to your heart's content. But that > > documentation ought to be reflected at runtime, so I can know > > exactly how your system > > works by stepping my way through it using curl or a protocol > > analyzer, > > You can never know from observing an interaction at a given time how > the server might respond the next time. > Of course not. At any given time, the only authoritative information is what the server's response is, at that time. If your implementation isn't bounded by your API documentation, you can change whatever you want whenever you want, even while I'm stepping through it with a protocol analyzer. That's an edge case. > > > without having to constantly refer to the design-time documentation. > > What 'design time documentation' are you refering to? > You're the one who brought it up, I assumed you knew? Assuming a stable implementation with documentation, I shouldn't have to rely on your out-of-band documentation to figure out what's what. I ought to be able to determine everything I need to know that isn't covered by media type or protocol definitions in-band from the hypertext. The only authoritative source for how a server will respond to an Atom Protocol POST request, is the server's response. One day that response may be 201, the next day it may change to 202. Presumably, the user is informed of the change using natural language somewhere on the create- post form, that moderation is now in effect and their posts will no longer appear instantly. So the API documents itself in-band, and sure enough, the change is reflected in that my protocol analyzer is no longer receiving 201 responses to POST requests, the response code is now 202. If I care about why, the explanation is right there in-band. 
If all out-of-band knowledge in REST is relegated to media type definitions (except for domain-specific vocabularies presented within a standard media type), then all other knowledge must be documented in-band (including in-band links to definitions of domain-specific vocabularies), right? So that's the other thing you can document: the meaning of whatever domain-specific ontology you're presenting using RDFa inside XHTML. It's part of your API, though, so you should link to it in-band such that its use is self-documenting. -Eric
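The idea of a 404 that carries hypertext for error recovery, rather than a bare default message, can be sketched as a small WSGI handler. The status code stays 404 per RFC 2616, while the body offers the client links to move forward; the /search and /create URIs here are invented for illustration:

```python
# Sketch: a 404 response whose hypertext body documents the recovery
# options in-band (search for the missing document, or create it),
# instead of terminating the application with a bare error page.
# The /search and /create URIs are hypothetical.

def not_found_app(environ, start_response):
    path = environ.get("PATH_INFO", "/")
    slug = path.strip("/")
    body = (
        "<html><body>"
        "<h1>Not Found</h1>"
        "<p>No document exists at %s. You can:</p>"
        "<ul>"
        '<li><a href="/search?q=%s">search for it</a></li>'
        '<li><a href="/create?title=%s">create it</a></li>'
        "</ul></body></html>" % (path, slug, slug)
    ).encode("utf-8")
    # The status code is still the correct one -- only the body is richer.
    start_response("404 Not Found",
                   [("Content-Type", "text/html; charset=utf-8"),
                    ("Content-Length", str(len(body)))])
    return [body]
```

The same response code a bare server would send, but the application keeps driving state through hypertext instead of forcing the user to back up.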
Many websites, like amazon.com, show personalized pages using cookies. This is not RESTful, as the URIs do not represent the resources; instead there is hidden state maintained on the client side. Roy Fielding's dissertation has the following comment on cookie-based state mechanisms: "A state mechanism that involves preferences can be more efficiently implemented using judicious use of context-setting URI rather than cookies, where judicious means one URI per state rather than an unbounded number of URI due to the embedding of a user-id." Are there any examples of this? In particular, what does one URI per state mean? Thanks, Unmesh
"unmesh_joshi" wrote: > > Many websites like amazon.com, show personalized pages using cookies. > This is not restful as URIs do not represent the resources, but there > is hidden state maintained on client side. Roy Fielding's > dissertation has following comment on cookie based state mechanism > > "A state mechanism that involves preferences can be more efficiently > implemented using judicious use of context-setting URI rather than > cookies, where judicious means one URI per state rather than an > unbounded number of URI due to the embedding of a user-id." > > Are there any examples of this? In particular, what does URI per > state means? > The state of the resource and the state of the application are not always the same in REST. A personalized page is really the same resource state for all users, although each user's application state varies by username. Instead of assigning each application state a URI like http://example.org/joe/welcome_page, use one URI per resource state, i.e. http://example.org/welcome_page, and vary its output based on the username present in HTTP Digest authentication headers (instead of cookies). That's just one example. -Eric
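The one-URI-per-resource-state idea above might look roughly like this minimal WSGI sketch. It assumes the web server has already verified the HTTP Digest credentials and exposed the verified username as REMOTE_USER, as most servers do after authentication; the greeting itself is invented:

```python
# Sketch: /welcome_page is one resource for everyone (one URI per
# resource state), but its representation varies with the identity
# carried in the HTTP authentication headers -- no cookies, and no
# per-user URI like /joe/welcome_page.

def welcome_app(environ, start_response):
    # Assumed: the server verified Digest credentials and set REMOTE_USER.
    user = environ.get("REMOTE_USER")
    if user is None:
        # Challenge the client rather than redirect to a login page.
        start_response("401 Unauthorized",
                       [("WWW-Authenticate", 'Digest realm="example"')])
        return [b"Authentication required"]
    body = ("Welcome back, %s!" % user).encode("utf-8")
    start_response("200 OK",
                   [("Content-Type", "text/plain; charset=utf-8"),
                    ("Content-Length", str(len(body)))])
    return [body]
```

The application state (who is asking) rides in the standard Authorization mechanism, while the URI names only the resource state.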
>>>>> "unmesh" == unmesh joshi <unmeshjoshi@...> writes:
unmesh> Many websites like amazon.com, show personalized pages
unmesh> using cookies. This is not restful as URIs do not
unmesh> represent the resources, but there is hidden state
unmesh> maintained on client side. Roy Fielding's dissertation
unmesh> has following comment on cookie based state mechanism
unmesh> "A state mechanism that involves preferences can be more
unmesh> efficiently implemented using judicious use of
unmesh> context-setting URI rather than cookies, where judicious
unmesh> means one URI per state rather than an unbounded number of
unmesh> URI due to the embedding of a user-id."
unmesh> Are there any examples of this? In particular, what does
unmesh> URI per state means?
In the case of a user who has logged in, personalized pages are really
easy, because you have access to the user name that is logged in
(REMOTE_USER in CGI).
So if that is set, you just do your stuff. No need for cookies, nor
need for context-setting URI.
The latter is usually implemented as a prefix/suffix in the URL.
--
Cheers,
Berend de Boer
Another way to handle personalization over HTTP in the Web browser world is to return a shared representation that includes code-on-demand that uses the current user-id to make additional requests for resources directly associated with that user-id. In this way, the application can still take advantage of shared caches for the initial representations and private caches for the additional, user-id-specific representations. mca http://amundsen.com/blog/
Similarly, some non-browser apps I've worked on leverage shared XML representations containing XInclude elements that make the secondary private requests. mca http://amundsen.com/blog/ On Fri, Jan 8, 2010 at 21:19, Peter <pkeane@...> wrote: > We do that exact thing. The page itself is served identically to everyone, and a subsequent (ajax) request grabs some data that javascript code uses to "decorate" the page with personalized info. The user id is actually stored in a cookie when the user first logs in, and on each request the js uses that user id to construct the URI used for the ajax request. It is (I think) one of the few (basically) RESTful uses of cookies - the resource itself (unique user data) is completely visible. > http://tech.groups.yahoo.com/group/rest-discuss/message/10027 > Of course, you can also do http basic auth on that second request as well. > --peter
A cookie is client-side temporary memory ... I don't see the relationship with the server side. Cookies seem more like data annotation than "state" in terms of REST. You don't need to impose REST on both sides: after defining your service in a way that all states are reachable from an entry point, your application is REST, and if a client application uses tricks to mimic state, that has nothing to do with your application being REST or not. IMHO :)

On Sat, Jan 9, 2010 at 3:41 AM, mike amundsen <mamund@...> wrote:
>
> Similarly, some non-browser apps I've worked on leverage shared XML representations containing XInclude elements that make the secondary private requests.
>
> mca
> http://amundsen.com/blog/
>
> [snip - full quote of the earlier thread]

--
------------------------------------------
Felipe Gacho
10+ Java Programmer
CEJUG Senior Advisor
On Jan 9, 2010, at 1:01 AM, Eric J. Bowman wrote: > Assuming a > stable implementation with documentation, I shouldn't have to rely on > your out-of-band documentation to figure out what's what. Well, you surely need media type specifications as out-of-band documentation. I have just been asking which documentation you need besides that - because Roy wrote that the API should spend *almost* all of its descriptive effort in media type and link relation specifications. Since he wrote *almost* I was wondering if there was anything besides that. > I ought to > be able to determine everything I need to know that isn't covered by > media type or protocol definitions in-band from the hypertext. One question: how do you *choose* the service to interact with? Jan
Jan Algermissen wrote:
>
> Since he wrote *almost* I was wondering if there was anything besides that.
>

Yeah, I understood your question, I thought it was a good one. But I also thought I gave a good answer -- response codes and security implementation. I was simply pointing out that anything beyond "almost" probably belongs in-line.

> > I ought to be able to determine everything I need to know that isn't covered by media type or protocol definitions in-band from the hypertext.
>
> One question: how do you *choose* the service to interact with?
>

By following a published link to some resource. Any entry point on your system ought to allow me to navigate to whatever resource of interest I wish to bookmark as a future entry point.

Consider the website I'm developing now, for a local law firm. Each attorney has an entry on cobar.org whether they want it or not. I can't link from the attorneys' entries on their own domain, to their entries on cobar.org, because cobar.org violates the identification of resources constraint. The only bookmarkable entry point to their attorney-info retrieval service is at its root level.

I ought to be able to publish a link from each attorney to his or her entry on cobar.org, and anyone following that link ought to be able to determine the URI allocation scheme from introspecting the link, and create their own link to some other attorney without ever consulting documentation. Instead, I would have to disguise a POST form as a link, and that form could be introspected to determine the API, but you can't bookmark a POST request.

But I digress. It's just that this pragmatic aspect of REST was thrown in my face yesterday, when I had to explain to my client why I can't link his page to his cobar.org profile. He did not understand why I could not just link to something he could navigate to, initially. Web APIs like cobar's put Web Developers between a rock and a hard place with their clients, one we shouldn't have to deal with at all.
Go REST. -Eric
On Jan 9, 2010, at 2:27 PM, Eric J. Bowman wrote:

> Jan Algermissen wrote:
>> Since he wrote *almost* I was wondering if there was anything besides that.
>
> Yeah, I understood your question, I thought it was a good one. But I also thought I gave a good answer -- response codes and security implementation. I was simply pointing out that anything beyond "almost" probably belongs in-line.

Ah, ok. Still (as said) I think response codes are uniform (their meaning and when they are applicable) and security is orthogonal.

>>> I ought to be able to determine everything I need to know that isn't covered by media type or protocol definitions in-band from the hypertext.
>>
>> One question: how do you *choose* the service to interact with?
>
> By following a published link to some resource. Any entry point on your system ought to allow me to navigate to whatever resource of interest I wish to bookmark as a future entry point.

So, there would then be a link relation that identifies the service type? Would it not be simpler to use a generic service relation and let the client (which might as well be a crawler that fills a registry) figure out the service type based on what the service says about itself?

Anyhow, neither way are you getting rid of the question of what makes a service type and how one would code a client for that service type *before* seeing any service instance. (Here is what I think: <http://algermissen.blogspot.com/2010/01/service-type-specifications-ii.html>)

> Consider the website I'm developing now, for a local law firm. Each attorney has an entry on cobar.org whether they want it or not. I can't link from the attorneys' entries on their own domain, to their entries on cobar.org, because cobar.org violates the identification of resources constraint. The only bookmarkable entry point to their attorney-info retrieval service is at its root level.
Sounds like bad design; OTH, maybe cobar.org just does not want those resources to be bookmarkable.

> I ought to be able to publish a link from each attorney to his or her entry on cobar.org, and anyone following that link ought to be able to determine the URI allocation scheme from introspecting the link, and create their own link to some other attorney without ever consulting documentation.

If cobar.org's intention was to enable that, then yes. The link relation specs should define the involved template parameters to enable URI construction from lawyer identity 'elements' (e.g. firstname, surname).

> Instead, I would have to disguise a POST form as a link, and that form could be introspected to determine the API, but you can't bookmark a POST request.

You could send your clients JavaScript code that executes the cobar.org application in the background to get to the desired information (ignoring cross-domain issues for the moment). For that to be possible there would need to be machine-processable hypermedia at cobar.org.

> But I digress. It's just this pragmatic aspect of REST was thrown in my face yesterday, when I had to explain to my client why I can't link his page to his cobar.org profile. He did not understand why I could not just link to something he could navigate to, initially. Web APIs like cobar's put Web Developers between a rock and a hard place with their clients, that we shouldn't have to deal with at all. Go REST.

Jan

--------------------------------------
Jan Algermissen

Mail: algermissen@...
Blog: http://algermissen.blogspot.com/
Home: http://www.jalgermissen.com
--------------------------------------
Hi Eric,

Particularly for amazon.com, you don't have to be logged in all the time. Even for anonymous users, the content changes based on the user's previous traversal through the web site.

Some web sites have content that is region based. For example, for www.xyz.com, different content will be shown for users from region1 and region2. It might look bad to have explicit URLs like www.xyz.com/region1, www.xyz.com/region2.

What will be the 'judicious use of context-setting URI' in this case?

Thanks,
Unmesh

On Sat, Jan 9, 2010 at 7:18 AM, Eric J. Bowman <eric@...> wrote:
> "unmesh_joshi" wrote:
>>
>> Many websites like amazon.com, show personalized pages using cookies. This is not restful as URIs do not represent the resources, but there is hidden state maintained on client side. Roy Fielding's dissertation has following comment on cookie based state mechanism
>>
>> "A state mechanism that involves preferences can be more efficiently implemented using judicious use of context-setting URI rather than cookies, where judicious means one URI per state rather than an unbounded number of URI due to the embedding of a user-id."
>>
>> Are there any examples of this? In particular, what does URI per state means?
>>
>
> The state of the resource and the state of the application are not always the same in REST. A personalized page is really the same resource state for all users, although each user's application state varies by username. Instead of assigning each application state a URI like http://example.org/joe/welcome_page, use one URI per resource state, i.e. http://example.org/welcome_page, and vary its output based on the username present in HTTP Digest authentication headers (instead of cookies). That's just one example.
>
> -Eric
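Eric's "one URI per resource state" suggestion can be sketched as follows. This is a hypothetical Python sketch, not anyone's actual server code; the toy credential parser stands in for real HTTP Digest verification, which a production server must of course perform.

```python
# One URI (/welcome_page) for the resource state; the representation is
# varied by the authenticated username from the Authorization header,
# not by a cookie.  Names and formats are illustrative.

def parse_user(auth):
    """Toy extraction of username="..." from an Authorization header.
    A real server would verify the whole Digest/Basic credential."""
    marker = 'username="'
    start = auth.find(marker)
    if start == -1:
        return None
    start += len(marker)
    end = auth.find('"', start)
    return auth[start:end] if end != -1 else None

def welcome_page(headers):
    user = parse_user(headers.get("Authorization", ""))
    if user is None:
        return "Welcome, guest!"          # anonymous representation
    return "Welcome back, %s!" % user     # personalized representation
```

The URI space stays bounded (one URI per state), while personalization rides on the standard authentication headers.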
What would one do to black box test a RESTful service? Are there any assumptions one can make about the observable behaviour of a RESTful service that could be tested? Or, rephrased, under what conditions would one claim that a RESTful service behaves wrongly? Jan
Hi Jan,

For our book, Ian, Savas and I have been doing black box testing of hypermedia services. For us this means creating some tests which exercise particular workflows that a service supports (decision points are advertised through different rel values in links).

By programming a client with some goal and rules towards reaching that goal, we can then let it loose against a service and watch as it drives towards a conclusion.

What's been nice for us (and has demonstrated the value of hypermedia for describing business protocols) has been that we wrote one set of tests per service (typically in Java) whereas for illustrative purposes the book has multiple implementations of each service (Java and .NET typically).

We've used both custom media types and common media types (especially Atom) for our services, and in either case our black box tests have worked, providing both positive assertions that services work, and have uncovered bugs in our differing implementations.

Jim
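The goal-driven test client Jim describes can be sketched roughly like this. This is a hypothetical Python sketch under stated assumptions: `RULES`, the rel names, and the in-memory "service" are all illustrative stand-ins for real HTTP interactions and the book's actual tests.

```python
# A client programmed with a goal and rules, let loose against a service:
# it follows rel values the service advertises until it reaches the goal
# state or gets stuck.  Everything here is illustrative.

RULES = {               # current rel -> rel the client wants to follow next
    "start":   "payment",
    "payment": "receipt",
}

def drive(service, entry_rel, goal_rel, max_steps=10):
    """Follow advertised rels from entry_rel; True iff goal_rel is reached."""
    rel = entry_rel
    for _ in range(max_steps):
        if rel == goal_rel:
            return True
        links = service(rel)              # stands in for GET + parsing links
        nxt = RULES.get(rel)
        if nxt is None or nxt not in links:
            return False                  # service stopped advertising a way forward
        rel = nxt
    return False

def toy_service(rel):
    """In-memory stand-in: state -> rels it advertises."""
    return {"start": ["payment"], "payment": ["receipt"], "receipt": []}[rel]
```

A failing implementation (one that stops advertising an expected link) makes `drive` return False, which is exactly the kind of bug this style of black-box test uncovers.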
Jim, On Jan 11, 2010, at 2:01 PM, Jim Webber wrote: > Hi Jan, > > For our book, Ian, Savas and I have been doing black box testing of > hypermedia services. For us this means creating some tests which > exercise particular workflows that a service supports (decision > points are advertised through different rel values ins links). > > By programming a client with some goal and rules towards reaching > that goal, we can then let it loose against a service and watch as > it drives towards a conclusion. Are you testing whether the client can actually reach the intended goal? > > What's been nice for us (and has demonstrated the value of > hypermedia for describing business protocols) has been that we wrote > one set of tests per service (typically in Java) whereas for > illustrative purposes the book has multiple implementations of each > service (Java and .NET typically). > > We've used both custom media types and common media types > (especially Atom) for our services and in either case our black box > tests have worked, providing both positive assertions that services > work, and have uncovered bugs in our differing implementations. Hmm, sorry, I think I made a mistake by saying "black box testing", which you (rightly) took as implementation testing. What I intended to ask was whether a RESTful service could ever produce a response that could be considered 'wrong'. Excluding server crashes and bugs like sending ill formed XML etc. Jan P.S. The implicit theme being 'a RESTful service can never send a wrong response because the client must expect to be sent anything'. > > Jim > > ------------------------------------ > > Yahoo! Groups Links > > > -------------------------------------- Jan Algermissen Mail: algermissen@... Blog: http://algermissen.blogspot.com/ Home: http://www.jalgermissen.com --------------------------------------
Hi Jan, > Are you testing whether the client can actually reach the intended goal? Yes, which means the business protocol is tested and the implementation is tested. Just like normal functional testing really. [snip] > What I intended to ask was whether a RESTful service could ever produce a response that could be considered 'wrong'. Assuming the service implementation is correct (big assumption), all the media types are properly declared (smaller assumption) and that the service won't fail for other reasons (implausible assumption!), then I agree: hypermedia corrals you towards an appropriate outcome. > P.S. The implicit theme being 'a RESTful service can never send a wrong response because the client must expect to be sent anything'. "anything" can only be taken in context. It will be constrained by the media types your service uses, any additional link relations you've added in, and the domain application protocol (DAP, or business protocol) that your service supports. Jim
Hi Jim, On Jan 11, 2010, at 3:36 PM, Jim Webber wrote: > Hi Jan, > >> Are you testing whether the client can actually reach the intended >> goal? > > Yes, which means the business protocol is tested and the > implementation is tested. Just like normal functional testing really. > What do you mean by 'business protocol'? And how do you document that 'business protocol' (if you *test* it, I assume it is 'written down' somehow)? (Sorry, please bear with me - trying to make a point) > [snip] > >> What I intended to ask was whether a RESTful service could ever >> produce a response that could be considered 'wrong'. > > Assuming the service implementation is correct (big assumption), all > the media types are properly declared (smaller assumption) and that > the service won't fail for other reasons (implausible assumption!), > then I agree: hypermedia corrals you towards an appropriate outcome. +1 to the assumptions and I also like the wording 'corrals you towards an appropriate outcome'. > >> P.S. The implicit theme being 'a RESTful service can never send a >> wrong response because the client must expect to be sent anything'. > > "anything" can only be taken in context. It will be constrained by > the media types your service uses, any additional link relations > you've added in, and the domain application protocol (DAP, or > business protocol) that your service supports. I agree (assuming I understand you correctly) that the perimeter prescribed by the hypermedia semantics (media types, rels, etc.) the server uses provides a context that can be used to differentiate between a 'valid' response and a 'nonsense' response. A 'nonsense' response being one that the client could not possibly have expected. I am having trouble with the 'domain application protocol' - I think REST deliberately aims to avoid it to reduce coupling. OTH, I am not sure what you mean by 'DAP or business protocol'. Can you explain? 
Most importantly, I'd be interested in whether and how you document such a thing.

What I think though (and maybe that is what you mean anyway) is that a server cannot send an arbitrary response to any request but is bound to the semantics with which it linked to a given resource. For example, a Web server that sends HTML containing <img href="/img/foo"/> to a client must respond with an image representation when the client GETs /img/foo. Otherwise the server would be bogus (this is stuff I think is a good candidate for tests).

Web browsers (at least the ones I checked) populate the Accept header with image/xxx values when following the <img href=""/>. This is appropriate behavior because the HTML spec establishes the contract that <img> elements reference images. Due to the commonly known nature of 'an image' the spec does not really go into details what an image is.

If this line of thought is applied to resource kinds (kind as in "The target of <img> is an image resource") that need more detailed specification or carry some sort of semantic of containing references to other resources, the situation is not that simple anymore. In my view, an AtomPub 'collection' is such a resource kind and there are certain expectations implied about the nature of the resource a <collection href=""> element points to. The developer of an AtomPub client would not populate the Accept header with image/* but rather application/atom+xml. However, I think that application/rss+xml or even text/uri-list are also possible choices because they represent collections.

What if this is applied to resources like orders or bug tracking tickets? Is there a need for media types to be 'mapped' or 'linked' with such resource kinds? I think to some extent: yes. And I think that these kinds of 'contracts' constitute a sufficient 'domain protocol' when combined with the knowledge about what set of hypermedia semantics a service uses.

Jan

--------------------------------------
Jan Algermissen

Mail: algermissen@...
Blog: http://algermissen.blogspot.com/
Home: http://www.jalgermissen.com
--------------------------------------
Hey Jan,

> What do you mean by 'business protocol'? And how do you document that 'business protocol' (if you *test* it, I assume it is 'written down' somehow)?

A business protocol is a sequence of steps taken to achieve some outcome - namely the changing of state on a computer system for some useful purpose. Protocols are written down, but in terms of media types and their associated processing models. For instance if you know application/atom+xml and the AtomPub links then you know how to process links to cause state changes on the server. Ditto for custom media types like application/vnd.restbucks+xml (a media type that covers the business protocol of our Restbucks example).

> +1 to the assumptions and I also like the wording 'corrals you towards an appropriate outcome'.

Good, that wording's in the book. Unfortunately for me I think Ian wrote it, so he can claim the credit :-)

> I agree (assuming I understand you correctly) that the perimeter prescribed by the hypermedia semantics (media types, rels, etc.) the server uses provides a context that can be used to differentiate between a 'valid' response and a 'nonsense' response. A 'nonsense' response being one that the client could not possibly have expected.

Yup.

> I am having trouble with the 'domain application protocol' - I think REST deliberately aims to avoid it to reduce coupling. OTH, I am not sure what you mean by 'DAP or business protocol'. Can you explain? Most importantly, I'd be interested in whether and how you document such a thing.

The domain application protocol is a business protocol that sits atop other (typically RESTful) application protocols like HTTP. It narrows the underlying protocol for a specific business case. For example, HTTP as an application protocol is very broad, but I might just want to use it to order a coffee - this narrower, more specific protocol is a DAP.
Ditto if I take Atom and AtomPub and turn it into a competing consumers implementation for event-driven communication between systems narrows those formats and protocols to something more specific - a DAP in our nomenclature. > What I think though (and maybe that is what you mean anyway) is that a server cannot send an arbitrary response to any request but is bound to the semantics with which it linked to a given resource. For example, a Web server that sends HTML containing <img href="/img/foo"/> to a client must respond with an image representation when the client GETs /img/foo. Otherwise the server would be bogous (this is stuff I think is a good candidate for tests). Agreed. If a server returns something illegal from an interaction, then *it* has broken the contract. And contracts are binding on *both* parties. > Web browsers (at least the ones I checked) populate the Accept header with image/xxx values when following the <img href=""/>. This is appropriate behavior because the HTML spec establishes the contract that <img> elements reference images. Due to the commonly known nature of 'an image' the spec does not really go into details what an image is. Yes, although browsers are dead to me :-) > If this line of thought is applied to resource kinds (kind as in "The target of <img> is an image resource") that need more detailed specification or carry some sort of semantic of containing references to other resources the situation is not that simple anymore. In my view, an AtomPub 'collection' is such a resource kind and there are certain expectations implied about the nature of the resource a <collection href=""> element points to. The developer of an AtomPub client would not populate the Accept header with image/* but rather application/atom+xml. However, I think that application/rss+xml or even text/uri-list are also possible choices because they represent collections. > > What if this is applied to resources like orders or bug tracking tickets? 
Is there a need for media types to be 'mapped' or 'linked' with such resource kinds? Seems a good use of Atom to me - for (time ordered) lists of stuff. Ian has written up a good example where Atom is used for a competing consumers implementation in an ecommerce scenario in our book. What's delightful about it, is that *all* the DAP stuff is done with HTTP and Atom. The code which understands what's in the Atom element is totally separate. > I think to some extend: yes. And I think that these kinds of 'contracts' constitute a sufficient 'domain protocol' when combined with the knowledge about what set hypermedia semantics a service uses. Great. But now I think I've taken us off track from our original testing question, sorry! Jim
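A DAP in the sense Jim describes (HTTP narrowed to one business case, like ordering a coffee) can be written down as a small state machine over HTTP interactions. The following is a hypothetical Python sketch; the states and transitions are invented for illustration and are not taken from the Restbucks book.

```python
# A toy domain application protocol: which HTTP interactions an order
# resource permits in each state.  States/transitions are illustrative.

DAP = {
    # state      -> {client interaction: next state}
    "unpaid":    {"PUT /payment": "paid", "DELETE /order": "cancelled"},
    "paid":      {"GET /receipt": "paid"},
    "cancelled": {},
}

def allowed(state):
    """The interactions the service would advertise as links in this state."""
    return sorted(DAP[state])

def step(state, interaction):
    """Apply a client interaction, or fail if the DAP forbids it."""
    if interaction not in DAP[state]:
        raise ValueError("protocol violation: %r in state %r" % (interaction, state))
    return DAP[state][interaction]
```

In a hypermedia system the `allowed` set is not documented out-of-band but advertised in each representation as links; the table above is simply the tester's model of what the service should advertise.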
Jim, On Jan 11, 2010, at 9:28 PM, Jim Webber wrote: > Great. But now I think I've taken us off track from our original > testing question, sorry! Not really. My intention was to approach the 'coupling theme' from a different angle by focussing on what is testable. Something that makes sense to be tested contributes to the contract between client and server and constitutes coupling. Jan
> Not really. My intention was to approach the 'coupling theme' from a > different angle by focussing on what is testable. Something that makes > sense to be tested contributes to the contract between client and > server and constitutes coupling. Ah, fair point. I get it now. Jim
You should instead focus on why defining media type(s) used for representing resources is the most important design activity. If you have some steps left over, then it will be obvious what it doesn't cover. For what it's worth, this is exactly why the Amazon programming language APIs, such as the C# S3 API, are very non-RESTful. Even the third party APIs incur epic failure when subject to this basic eyeball test. Actually, Amazon's API documentation for programming languages is also inconsistent in how it wraps HTTP, from language to language. The sample C# code is simply wrong and broken, stupid and ugly. It doesn't even pass through the FxCop/StyleCop rules gauntlet. Whoever wrote it was hopefully an intern, because I'd hate to think Amazon hires developers to write "REST APIs" without having any clue how to even program correctly, much less understand REST... --- In rest-discuss@yahoogroups.com, Jan Algermissen <algermissen1971@...> wrote: > > > In [1] Roy writes > > "A REST API should spend almost all of its descriptive effort in > defining the media type(s) used for representing resources and driving > application state, or in defining extended relation names and/or > hypertext-enabled mark-up for existing standard media types. [...]" > > Maybe I am reading too much into this, but then...usually Roy chooses > his words quite carefully: > > Does anyone know why it is "*almost* all of its descriptive effort" > and not simply "all of its descriptive effort"? What else is there to > be described than the media types? > > Jan > > > [1] http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven >
> > Particularly for amazon.com, you don't have to be logged in all the > time. Even for anonymous users, the content changes based on the > user's previous traversal thought the web site. > If you aren't logged in, the server can just send the anonymous representation. The server isn't required to initiate challenge- response. We just discussed this here: http://tech.groups.yahoo.com/group/rest-discuss/message/14399 As to tracking anonymous users who aren't logged in, well, that is a job for cookies AFAIK. > > Some web sites have content that is region based. For example, for > www.xyz.com, different content will be shown for users from region1 > and region2. > It might look bad to have explicit URLs like www.xyz.com/region1, > www.xyz.com/region2. > > What will be the ' judicious use of context-setting URI' in this case? > Good question. I'm pretty busy today, so I'm going to punt... -Eric
>>>>> "Eric" == Eric J Bowman <eric@...> writes:
Eric> As to tracking anonymous users who aren't logged in, well,
Eric> that is a job for cookies AFAIK.
The techniques that have been proposed here, and are in use by people,
are:
1. Always log someone in: create a dummy account as soon as a new
user arrives. You can accept any password for the dummy account in
that case, and with JavaScript you can log them in automatically.
2. And of course a URL with a unique id in it.
Of course, both techniques have issues with session restart.
But with HTML5 we have local storage, which will work as well as
cookies, so we can do away with them entirely. I.e. I would use
technique 1 with local storage to keep track with the dummy user id.
--
Cheers,
Berend de Boer
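The server-side half of Berend's technique 1 can be sketched briefly. This is a hypothetical Python sketch; the id format and storage are illustrative. The client-side half (keeping the id in HTML5 local storage and presenting it on each request) is assumed, not shown.

```python
# Technique 1: mint a throwaway (dummy) account for any visitor who
# presents no known identity; the client keeps the returned id (e.g. in
# local storage) and presents it on later requests.  Illustrative only.

import uuid

ACCOUNTS = {}   # dummy-user-id -> per-user data (preferences, etc.)

def identify(presented_id=None):
    """Return an existing dummy account id, or mint one for a new visitor."""
    if presented_id in ACCOUNTS:
        return presented_id
    new_id = uuid.uuid4().hex        # unguessable dummy user id
    ACCOUNTS[new_id] = {"prefs": {}}
    return new_id
```

As Berend notes, the session-restart problem remains: if the client loses the stored id, the server simply mints a fresh dummy account.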
ok,
I can use link "rel" to offer the client a list of possible next states ....
question: doing that I expect the client to know what is the semantic
of each link, isn't it?
If yes, hateoas seems not feasible since the client needs a previous
knowledge about what to look for in the list of links... (unless there
is only one link, of course)
the question is: can I create my own relation semantics ? or should I
grasp in the available ones for Atom and Xhtml in case I want to
produce HATEOAS systems?
* I am trying to implement HATEOAS using Jersey.. I know it has no
support out of the box, so I am designing some tricks here :)
regards,
Felipe Gacho
Hi Felipe,

[snip]

> question: doing that I expect the client to know what is the semantic of each link, isn't it?

Yes, that's right. The client should understand the rel value within the context of the representation it's found in. This may come from a media type, maybe a protocol, or a registry of link relations. But the client does need to know about them.

> If yes, hateoas seems not feasible since the client needs a previous knowledge about what to look for in the list of links... (unless there is only one link, of course)

I'd disagree here. HATEOAS is still quite feasible, but the client needs to know the processing model for the media types it's dealing with - they're a key part of a service's contract.

> the question is: can I create my own relation semantics? or should I grasp in the available ones for Atom and Xhtml in case I want to produce HATEOAS systems?

You can create your own semantics; you can create your own media types too. The trade-off is one of reach (that is, reusing existing software out there on the Web) versus applicability (you can craft something that matches your domain exactly). FWIW I generally prefer to try to solve problems with existing media types before resorting to custom media types.

> * I am trying to implement HATEOAS using Jersey.. I know it has no support out of the box, so I am designing some tricks here :)

Shameless plug: Chapter 5 of the book Ian, Savas, and I are writing actually has examples of a hypermedia system written with Jersey.

Jim
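What "the client knows the rel semantics" means in practice can be sketched as a dispatch table from rel values to client behaviour. This is a hypothetical Python sketch; the rel names and handler actions are illustrative, and where the semantics come from (a media type spec, a protocol, or a link-relation registry) is exactly Jim's point above.

```python
# Client-side dispatch on rel values.  The table encodes the "previous
# knowledge" Felipe asks about; unknown rels are ignored, which is the
# safe default most specs prescribe.  Names are illustrative.

HANDLERS = {
    "self":    lambda href: "refresh from %s" % href,
    "next":    lambda href: "page forward to %s" % href,
    "payment": lambda href: "submit payment to %s" % href,   # custom rel
}

def follow(link):
    """link is a (rel, href) pair as found in a representation."""
    rel, href = link
    handler = HANDLERS.get(rel)
    if handler is None:
        return None          # unknown rel: ignore rather than fail
    return handler(href)
```

Adding a custom rel means adding one table entry on the client and documenting the rel's meaning; the server can then advertise it freely.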
Felipe,
On Jan 13, 2010, at 3:49 PM, Felipe Gacho wrote:
> ok,
>
> I can use link "rel" to offer the client a list of possible next
> states ....
>
> question: doing that I expect the client to know what is the semantic
> of each link, isn't it ?
Yes. HTML browsers do this everyday (images, stylesheets,..).
>
> If yes, hateoas seems not feasible since the client needs a previous
> knowledge about what to look for in the list of links... (unless there
> is only one link, of course)
When the client receives text/html and sees an <img href=""/> it does
make an assumption about the nature of the target resource. In this
case, that it is an image. The set of formats the client puts in the
Accept header when requesting the image representation (GET) is driven
by built-in knowledge which media types the client understands are
media types for images.
This gets a bit more interesting when the nature of the link target
implies that there are other links. When an AtomPub service document
contains <collection href="/cols/1"> the client makes the assumption
(based on RFC5023) that /cols/1 is a collection. The semantic of
'collection' includes the notion of having a (possibly empty) set of
members. While the client should not make any assumptions about the
exact media type the server will send for a GET to the collection, the
client does make the assumption that whatever representation it
receives 'matches' the collection nature.
The essential question (for me anyway) is on what basis the client
populates the Accept header when GETting /cols/1. RFC5023 sort of
mandates that the server return an Atom feed, but from a REST POV (as
Roy rightly pointed out[1]) this is overly constraining. Have a look
at OpenSearch: nothing in the OpenSearch set of specs tells the client
developer what media types will be used for search results (which are
essentially sets of links with metadata), but from reading the
documentation a client developer learns that Atom feeds and RSS feeds
are commonly used.
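Jan's question about how the client populates the Accept header can be illustrated with a small sketch: the client's built-in knowledge maps the *kind* of linked resource (implied by hypermedia semantics) to the media types it can process for that kind. The table below is purely illustrative, not something any spec mandates.

```python
# Hypothetical client-side table: kind of resource -> processable types.

KNOWN_TYPES = {
    "image":      ["image/png", "image/jpeg"],
    "collection": ["application/atom+xml", "application/rss+xml"],
}

def accept_header(kind):
    """Build an Accept header for a GET on a resource of the given kind."""
    return ", ".join(KNOWN_TYPES[kind])

# A client that followed an AtomPub <collection> link would GET with:
hdr = accept_header("collection")
```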
Client and server not only need to understand the media types used for
communication, but they must share an understanding of what media
types are used or can be expected to be used for which kinds of
resources ('kind of resource' being implied by hypermedia semantics).
(I have agonized this list with this subject over the last few weeks,
so you might want to check those archives. The same goes for
atom-protocol: [2])
>
> the question is: can I create my own relation semantics ? or should I
> grasp in the available ones for Atom and Xhtml in case I want to
> produce HATEOAS systems?
Oops - seems like you did not ask what I thought you were asking. Sorry.
Yes, you create your own hypermedia semantics by defining media types
(or defining extensions for existing ones) or 'stand-alone' link
relations. The more you can re-use existing ones the better, but often
these are just too unspecific. Rather than risk accidentally baking
out-of-band knowledge into your clients and servers, I'd roll my own
hypermedia.
>
> * I am trying to implement HATEOAS using Jersey.. I know it has no
> support out of the box, so I am designing some tricks here :)
Are you in a pure machine to machine scenario or is the client
application controlled by a human user?
Jan
[1] http://www.imc.org/atom-protocol/mail-archive/msg11487.html
[2] http://www.imc.org/atom-protocol/mail-archive/msg11463.html
--------------------------------------
Jan Algermissen
Mail: algermissen@...
Blog: http://algermissen.blogspot.com/
Home: http://www.jalgermissen.com
--------------------------------------
Hello Felipe,

> > If yes, hateoas seems not feasible since the client needs a previous
> > knowledge about what to look for in the list of links... (unless there
> > is only one link, of course)

If content negotiation is taking place, all responses are understood by your client application; otherwise, a response code indicating that there is no media type available for answering the request (406 Not Acceptable) would be the proper response.

If your server sticks with well-known media types, most clients will have no problem understanding responses. If it goes for its own custom one, the intimacy between both parties might be higher than it should be. As Jim pointed out, try to use a well-known one. If you cannot, use your own.

Regards
My current approach is to use existing media types (to take advantage of their existing semantics) and add "rel" values as needed.

I start by using the values already registered and available [1,2] and supplement those with ones I create to meet the domain-specific need. When I create my own, I use a rel value that is a resolvable URI and place helpful documentation at that URI. I've used the following two formats in the past:

rel="http://www.example.org/rels/cancel-order"
rel="http://www.example.org/rels/purchasing#cancel-order"

In the second example, a single document is available that lists several related rel values. It's a documentation optimization only.

[1] http://www.iana.org/assignments/link-relations/link-relations.xhtml
[2] http://dublincore.org/documents/dces/

mca
http://amundsen.com/blog/
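A rough sketch of how a client might consume Mike's extension rel values: the full URI is treated as an opaque lookup key, exactly like a registered short name. The URIs are the example ones from his post; the helper function is hypothetical.

```python
# Extension link relations are full URIs, so the client keys its
# behaviour on those URIs. Illustrative sketch only.

CANCEL_REL = "http://www.example.org/rels/cancel-order"

def find_link(links, rel):
    """Return the href of the first link with the given rel, or None."""
    for link in links:
        if link["rel"] == rel:
            return link["href"]
    return None

links = [
    {"rel": "self",     "href": "/orders/6"},
    {"rel": CANCEL_REL, "href": "/orders/6"},
]

cancel_uri = find_link(links, CANCEL_REL)
```

Note that a human developer, not the client, dereferences the rel URI to read the documentation hosted there.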
Hi amundsen,

you are using the relation attribute for the navigation.. strange..

relation should just give the client a hint about the relation between a resource and the URL in the link element.. not a URI itself.. imho..

I got the point about the original question, thanks a lot for all the responses.

Felipe Gacho

--
------------------------------------------
Felipe Gacho
10+ Java Programmer
CEJUG Senior Advisor
On Jan 13, 2010, at 4:52 PM, Felipe Gacho wrote:

> Hi amundsen,
>
> you are using the relation attribute for the navigation.. strange..
>
> relation should just give the client a hint about the relation between
> a resource and the url in the link element.. not a URI itself..

No, the URI identifies the extension relation. Google for the Link header draft.

Jan
Mike,
On Jan 13, 2010, at 4:49 PM, mike amundsen wrote:
> My current approach is to use existing media types (to take advantage
> of their existing semantics) and add "rel" values as needed.
>
> I start by using the values already registered and available [1,2] and
> supplement those with ones I create to meet the domain-specific need.
> When I create my own, I use a rel value that is a resolve-able URI and
> place helpful documentation at that URI. I'ved used the following two
> formats in the past:
>
> rel="http://www.example.org/rels/cancel-order"
> rel="http://www.example.org/rels/purchasing#cancel-order"
Just a comment:
For such cases I'd actually avoid the introduction of a link semantic
and use DELETE because the client would presumably already know it is
dealing with an order resource:
DELETE /orders/6
As a rule of thumb I do this: when the domain operation in question
('cancel an order') maps to the base semantics of an existing HTTP
method, use that method to gain the visibility advantages over using
the generic POST. In the example, caching would (in theory) benefit
from the visibility of DELETE (as opposed to POST) by knowing that all
caches of that order resource could be flushed.
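Jan's rule of thumb could be sketched as a simple operation-to-method mapping. The operation names and the POST fallback below are assumptions made for illustration, not part of his post.

```python
# Map a domain operation to the HTTP method whose base semantics it
# matches, falling back to the generic POST only when none fits.

METHOD_FOR = {
    "cancel-order":  "DELETE",  # cancelling ~ removing the order resource
    "read-order":    "GET",
    "replace-order": "PUT",
}

def http_request(operation, uri):
    """Pick the most visible HTTP method for a domain operation."""
    method = METHOD_FOR.get(operation, "POST")  # generic fallback
    return f"{method} {uri}"

req = http_request("cancel-order", "/orders/6")
```

The payoff of the more specific method is visibility: an intermediary seeing DELETE knows it can invalidate cached representations of that resource, which it cannot infer from an opaque POST.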
Jan
> FWIW I generally prefer to try to solve problems with existing media types
> before resorting to custom media types.

Why would this make a difference? We keep talking about reach, but most (if not all?) media types were designed for a specific kind of client. If you are developing an application that is not of the 'type' of that original client (say a feed aggregator), I can't see how the existing media types (outside of possibly HTML) would not require some extensions (as alluded to by Mike in his examples). Why is such an extension better than rolling out a new media type if, in both cases, "additional" information needs to be documented? What are the practical reasons? When would a custom media type make sense?

Thanks.

Eb
>
>
> For such cases I'd actually avoid the introduction of a link semantic
> and use DELETE because the client would presumably already know it is
> dealing with an order resource:
>
+1, but I think Mike just wrote the example to show how one could use it.
Jan, about the Link header: I wanted to ask earlier in another discussion
but found it too off-topic. I can't see how I can use it (instead of Atom
links) to represent multiple links within a collection. For example, if I
have a collection of cities and each city has its own URI, how could the
Link header, in a non-nasty way, represent a "self" relation to each city?
The city representations are in the content body, but the resource being
presented is the set of cities. The header is related to the set, but how
do I relate links to each item?
I am probably missing something.
Regards
Jan:
<snip>
For such cases I'd actually avoid the introduction of a link semantic and
use DELETE because the client would presumably already know it is dealing
with an order resource:
</snip>
In the case I gave, the rel value documentation instructs developers
to use the DELETE method when activating the associated URI, unless
they are working with an agent that does not support the DELETE method
(Web browsers), in which case the POST method is an acceptable
alternative.
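One way the POST fallback Mike describes might look on the server side. The `_method` override form field shown here is an assumed convention (not something his post specifies); it just illustrates how a server could treat a browser's POST as the cancel operation.

```python
# Resolve the intended method for agents that cannot send DELETE.
# The '_method' override field is a hypothetical convention.

def effective_method(method, form=None):
    """Return the method the client actually intended."""
    if method == "POST" and form and form.get("_method") == "DELETE":
        return "DELETE"
    return method

m1 = effective_method("DELETE")                      # capable client
m2 = effective_method("POST", {"_method": "DELETE"}) # browser fallback
```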
mca
http://amundsen.com/blog/
On Jan 13, 2010, at 5:06 PM, Eb wrote:

> Why is this extension better than rolling out a new media type if in
> both cases, "additional" information needs to be documented? What
> are the practical reasons? When would a custom media type make sense?

It is a trade-off. You are right that, either way, you need additional semantics. Using existing types with extensions makes it possible to use existing tools - for example, for debugging. It is a huge benefit if you can view your HTML+microformats orders in a browser to see what is going on, or subscribe to the collection of latest orders with a feed reader. However, managing extensions is somewhat difficult because you are constantly dealing with a set of independent pieces that constitutes some contract but has no name.

I like using a profile notion for this kind of 'media type subclassing'. The profile URI provides a nice 'handle' for the semantics established by the set of extensions. You can use a profile for conneg like this:

Accept: text/html;profile="http://foo-company.org/profiles/simple-order-uf-profile"

to tell the server that the client expects a certain 'kind of HTML'.

Jan
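A quick sketch of a server pulling Jan's profile parameter out of such an Accept header. A real implementation would use a proper media-range parser; this only illustrates the idea, and the profile URI is the example one from the thread.

```python
# Extract the quoted 'profile' parameter from a single media-range.

def profile_of(accept):
    """Return the profile parameter value, or None if absent."""
    for part in accept.split(";"):
        name, _, value = part.strip().partition("=")
        if name == "profile":
            return value.strip('"')
    return None

accept = ('text/html;profile='
          '"http://foo-company.org/profiles/simple-order-uf-profile"')
p = profile_of(accept)
```

The server can then vary its response (or refuse) based on whether it can produce that 'kind of HTML'.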
On Jan 13, 2010, at 5:06 PM, Guilherme Silveira wrote:
>
> For such cases I'd actually avoid the introduction of a link semantic
> and use DELETE because the client would presumably already know it is
> dealing with an order resource:
> +1, but I think Mike just wrote the example to show how one could
> use it.
yes, just wanted to add that.
>
> Jan, about the Link header, I wanted to ask earlier in another
> discussion but found it too offtopic. I can't see how I can use it
> (instead of atom links) to represent multiple links within a
> collection. For example, if I have a collection of cities and each
> city has its own URI, how could the Link header in a no-nasty way
> represent a "self" relation to each city?
It can't. The Link header applies to the requested resource.
>
> The city representations are in the content body, but the resource
> being presented is the set of cities. The header is related to the
> set, but how to relate them to each item?
Use <atom:link>s instead.
>
> I am probably missing something.
Jan
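The contrast Jan draws might be sketched like this: a single Link header describes the requested collection resource itself, while per-item 'self' links go inside the body, e.g. as <atom:link> elements. The city data is hypothetical and the entry markup is simplified Atom.

```python
# One Link header for the collection; per-entry self links in the body.

cities = [("Berlin", "/cities/berlin"), ("Lisbon", "/cities/lisbon")]

# The Link header can only talk about the requested resource (the set):
link_header = '</cities>; rel="self"'

# Per-item self links belong in the representation:
entries = [f'<entry><title>{name}</title>'
           f'<link rel="self" href="{uri}"/></entry>'
           for name, uri in cities]
```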
On Jan 13, 2010, at 5:11 PM, mike amundsen wrote:
> Jan:
>
> <snip>
> For such cases I'd actually avoid the introduction of a link
> semantic and
> use DELETE because the client would presumably already know it is
> dealing
> with an order resource:
> </snip>
>
> In the case i gave, the rel value documentation instructs developers
> to use the DELETE method when activating the associated URI unless
> they are working with an agent that does not support the DELETE method
> (Web browsers) in which case the POST method is an acceptable
> alternative.
Is that a production use case? Can you share the complete hypermedia?
Jan
Jan:

<snip>
Accept: text/html;profile="http://foo-company.org/profiles/simple-order-uf-profile"
</snip>

Interesting... I've been working on a case right now that uses XHTML and was considering using HTML Profiles [1] to mark the HEAD of the document with a set of "pre-conditions" that the client would scan to make sure it can "understand" the contents of the document. Still playing with this idea, but it sounds like it might be similar.

[1] http://www.w3.org/TR/html401/struct/global.html#profiles

mca
http://amundsen.com/blog/
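A rough sketch of the head-scanning idea Mike describes: the client looks for a profile URI in the document head and checks it against the profiles it understands. The regex-based scan and the profile URI are illustrative assumptions only (HTML 4.01 defines the `profile` attribute on HEAD, but the checking logic here is invented).

```python
import re

# Profiles this hypothetical client knows how to process.
UNDERSTOOD = {"http://foo-company.org/profiles/simple-order-uf-profile"}

def can_process(html):
    """True if the head declares a profile this client understands."""
    m = re.search(r'<head[^>]*\bprofile="([^"]+)"', html)
    return bool(m) and m.group(1) in UNDERSTOOD

doc = ('<html><head profile='
       '"http://foo-company.org/profiles/simple-order-uf-profile">'
       '</head><body/></html>')
ok = can_process(doc)
```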
Jan:
<snip>
Is that a production use case? Can you share the complete hypermedia?
</snip>
the example I have in mind is behind a firewall. If you're interested,
I can probably work up a quick sample for public view.
mca
http://amundsen.com/blog/
On Wed, Jan 13, 2010 at 11:23, Jan Algermissen <algermissen1971@...> wrote:
>
> On Jan 13, 2010, at 5:11 PM, mike amundsen wrote:
>
>> Jan:
>>
>> <snip>
>> For such cases I'd actually avoid the introduction of a link semantic and
>> use DELETE because the client would presumably already know it is dealing
>> with an order resource:
>> </snip>
>>
>> In the case i gave, the rel value documentation instructs developers
>> to use the DELETE method when activating the associated URI unless
>> they are working with an agent that does not support the DELETE method
>> (Web browsers) in which case the POST method is an acceptable
>> alternative.
>
> Is that a production use case? Can you share the complete hypermedia?
>
> Jan
>
>
>>
>> mca
>> http://amundsen.com/blog/
>>
>>
>>
>>
>> On Wed, Jan 13, 2010 at 11:02, Jan Algermissen <algermissen1971@...>
>> wrote:
>>>
>>> Mike,
>>>
>>> On Jan 13, 2010, at 4:49 PM, mike amundsen wrote:
>>>
>>>> My current approach is to use existing media types (to take advantage
>>>> of their existing semantics) and add "rel" values as needed.
>>>>
>>>> I start by using the values already registered and available [1,2] and
>>>> supplement those with ones I create to meet the domain-specific need.
>>>> When I create my own, I use a rel value that is a resolve-able URI and
>>>> place helpful documentation at that URI. I've used the following two
>>>> formats in the past:
>>>>
>>>> rel="http://www.example.org/rels/cancel-order"
>>>> rel="http://www.example.org/rels/purchasing#cancel-order"
>>>
>>> Just a comment:
>>>
>>> For such cases I'd actually avoid the introduction of a link semantic and
>>> use DELETE because the client would presumably already know it is dealing
>>> with an order resource:
>>>
>>> DELETE /orders/6
>>>
>>> As a rule of thumb I do this:
>>>
>>> When the domain operation in question ('cancel an order') maps to the base
>>> semantics of an existing HTTP method use that method to gain the
>>> visibility
>>> advantages over using the generic POST. In the example, caching would (in
>>> theory) benefit from the visibility of DELETE (as opposed to POST) by
>>> knowing that all caches of that order resource could be flushed.
>>>
>>> Jan
>>>
>>>
>>>>
>>>> In the second example, a single document is available that lists
>>>> several related rel values. It's a documentation optimization only.
>>>>
>>>> [1] http://www.iana.org/assignments/link-relations/link-relations.xhtml
>>>> [2] http://dublincore.org/documents/dces/
>>>>
>>>> mca
>>>> http://amundsen.com/blog/
>>>>
>>>>
>>>>
>>>>
>>>> 2010/1/13 Felipe Gacho <fgaucho@...>:
>>>>>
>>>>> ok,
>>>>>
>>>>> I can use link "rel" to offer the client a list of possible next states
>>>>> ....
>>>>>
>>>>> question: doing that I expect the client to know what is the semantic
>>>>> of each link, isn't it ?
>>>>>
>>>>> If yes, hateoas seems not feasible since the client needs a previous
>>>>> knowledge about what to look for in the list of links... (unless there
>>>>> is only one link, of course)
>>>>>
>>>>> the question is: can I create my own relation semantics ? or should I
>>>>> grasp in the available ones for Atom and Xhtml in case I want to
>>>>> produce HATEOAS systems?
>>>>>
>>>>> * I am trying to implement HATEOAS using Jersey.. I know it has no
>>>>> support out of the box, so I am designing some tricks here :)
>>>>>
>>>>> regards,
>>>>>
>>>>>
>>>>> Felipe Gacho
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>
>>> --------------------------------------
>>> Jan Algermissen
>>>
>>> Mail: algermissen@...
>>> Blog: http://algermissen.blogspot.com/
>>> Home: http://www.jalgermissen.com
>>> --------------------------------------
>>>
>>>
>>>
>>>
>
> --------------------------------------
> Jan Algermissen
>
> Mail: algermissen@...
> Blog: http://algermissen.blogspot.com/
> Home: http://www.jalgermissen.com
> --------------------------------------
>
>
>
>
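[Editor's note: a minimal sketch of the DELETE-with-POST-fallback convention Mike describes above: the documentation behind the rel URI tells a client to use DELETE when activating the link, with POST as the sanctioned fallback for agents (e.g. web browsers) that cannot send DELETE. The rel URI follows Mike's earlier example; the helper is our own.]

```python
# Sketch of the rel-value convention discussed above. The documentation
# at CANCEL_REL instructs developers to DELETE the linked resource,
# falling back to POST for agents that do not support DELETE.
CANCEL_REL = "http://www.example.org/rels/cancel-order"

def method_for_cancel(agent_supports_delete):
    """Pick the HTTP method for activating a cancel-order link."""
    return "DELETE" if agent_supports_delete else "POST"
```

A browser-hosted client would call `method_for_cancel(False)` and POST; everything else gets the visibility benefits of DELETE (e.g. intermediaries knowing cached copies of the order can be flushed).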
Jan Algermissen wrote:
> On Jan 13, 2010, at 5:06 PM, Guilherme Silveira wrote:
>
>> Jan, about the Link header, I wanted to ask earlier in another
>> discussion but found it too offtopic. I can't see how I can use it
>> (instead of atom links) to represent multiple links within a
>> collection. For example, if I have a collection of cities and each
>> city has its own URI, how could the Link header in a no-nasty way
>> represent a "self" relation to each city?
>
> It can't. The Link header applies to the requested resource.

Which is, in some (most?) situations, a good thing - because it
discourages designs where resources are 'overlapped' (i.e. composites),
and therefore promotes visibility.

- Mike
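[Editor's note: a small sketch of the constraint discussed above: the Link header can only describe the requested resource (the collection), so per-item "self" links must live in the representation body. All URIs here are illustrative.]

```python
# The Link header describes the *requested* resource only - fine for a
# collection-level relation like 'next', but per-city 'self' links have
# to be expressed inside the entity body (Atom-style links).
cities = ["lisbon", "oslo"]

# Collection-level relation: acceptable as a response Link header.
link_header = '<http://server/cities?page=2>; rel="next"'

# Item-level relations: carried in the representation itself.
body = [{"name": c,
         "links": [{"rel": "self", "href": "http://server/cities/" + c}]}
        for c in cities]
```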
On Jan 13, 2010, at 5:30 PM, mike amundsen wrote:
> Jan:
>
> <snip>
> Is that a production use case? Can you share the complete hypermedia?
> </snip>
>
> the example i have in mind is behind a firewall. if you're interested,
> i can proly work up a quick sample to public view.
Sure.
But then - do not put additional work on your desk!!
Jan
>
> mca
> http://amundsen.com/blog/
>
>
>
>
> On Wed, Jan 13, 2010 at 11:23, Jan Algermissen <algermissen1971@...
> > wrote:
>>
>> On Jan 13, 2010, at 5:11 PM, mike amundsen wrote:
>>
>>> Jan:
>>>
>>> <snip>
>>> For such cases I'd actually avoid the introduction of a link
>>> semantic and
>>> use DELETE because the client would presumably already know it is
>>> dealing
>>> with an order resource:
>>> </snip>
>>>
>>> In the case i gave, the rel value documentation instructs developers
>>> to use the DELETE method when activating the associated URI unless
>>> they are working with an agent that does not support the DELETE
>>> method
>>> (Web browsers) in which case the POST method is an acceptable
>>> alternative.
>>
>> Is that a production use case? Can you share the complete hypermedia?
>>
>> Jan
>>
>>
>>>
>>> mca
>>> http://amundsen.com/blog/
>>>
>>>
>>>
>>>
>>> On Wed, Jan 13, 2010 at 11:02, Jan Algermissen <algermissen1971@...
>>> >
>>> wrote:
>>>>
>>>> Mike,
>>>>
>>>> On Jan 13, 2010, at 4:49 PM, mike amundsen wrote:
>>>>
>>>>> My current approach is to use existing media types (to take
>>>>> advantage
>>>>> of their existing semantics) and add "rel" values as needed.
>>>>>
>>>>> I start by using the values already registered and available
>>>>> [1,2] and
>>>>> supplement those with ones I create to meet the domain-specific
>>>>> need.
>>>>> When I create my own, I use a rel value that is a resolve-able
>>>>> URI and
>>>>> place helpful documentation at that URI. I've used the
>>>>> following two
>>>>> formats in the past:
>>>>>
>>>>> rel="http://www.example.org/rels/cancel-order"
>>>>> rel="http://www.example.org/rels/purchasing#cancel-order"
>>>>
>>>> Just a comment:
>>>>
>>>> For such cases I'd actually avoid the introduction of a link
>>>> semantic and
>>>> use DELETE because the client would presumably already know it is
>>>> dealing
>>>> with an order resource:
>>>>
>>>> DELETE /orders/6
>>>>
>>>> As a rule of thumb I do this:
>>>>
>>>> When the domain operation in question ('cancel an order') maps to
>>>> the base
>>>> semantics of an existing HTTP method use that method to gain the
>>>> visibility
>>>> advantages over using the generic POST. In the example, caching
>>>> would (in
>>>> theory) benefit from the visibility of DELETE (as opposed to
>>>> POST) by
>>>> knowing that all caches of that order resource could be flushed.
>>>>
>>>> Jan
>>>>
>>>>
>>>>>
>>>>> In the second example, a single document is available that lists
>>>>> several related rel values. It's a documentation optimization
>>>>> only.
>>>>>
>>>>> [1] http://www.iana.org/assignments/link-relations/link-relations.xhtml
>>>>> [2] http://dublincore.org/documents/dces/
>>>>>
>>>>> mca
>>>>> http://amundsen.com/blog/
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> 2010/1/13 Felipe Gacho <fgaucho@...>:
>>>>>>
>>>>>> ok,
>>>>>>
>>>>>> I can use link "rel" to offer the client a list of possible
>>>>>> next states
>>>>>> ....
>>>>>>
>>>>>> question: doing that I expect the client to know what is the
>>>>>> semantic
>>>>>> of each link, isn't it ?
>>>>>>
>>>>>> If yes, hateoas seems not feasible since the client needs a
>>>>>> previous
>>>>>> knowledge about what to look for in the list of links...
>>>>>> (unless there
>>>>>> is only one link, of course)
>>>>>>
>>>>>> the question is: can I create my own relation semantics ? or
>>>>>> should I
>>>>>> grasp in the available ones for Atom and Xhtml in case I want to
>>>>>> produce HATEOAS systems?
>>>>>>
>>>>>> * I am trying to implement HATEOAS using Jersey.. I know it has
>>>>>> no
>>>>>> support out of the box, so I am designing some tricks here :)
>>>>>>
>>>>>> regards,
>>>>>>
>>>>>>
>>>>>> Felipe Gacho
>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>
>>>> --------------------------------------
>>>> Jan Algermissen
>>>>
>>>> Mail: algermissen@...
>>>> Blog: http://algermissen.blogspot.com/
>>>> Home: http://www.jalgermissen.com
>>>> --------------------------------------
>>>>
>>>>
>>>>
>>>>
>>
>> --------------------------------------
>> Jan Algermissen
>>
>> Mail: algermissen@...
>> Blog: http://algermissen.blogspot.com/
>> Home: http://www.jalgermissen.com
>> --------------------------------------
>>
>>
>>
>>
>
>
--------------------------------------
Jan Algermissen
Mail: algermissen@acm.org
Blog: http://algermissen.blogspot.com/
Home: http://www.jalgermissen.com
--------------------------------------
For extensibility reasons, link relation type values can be URIs. In
fact, if the link relation type is not a registered type, URIs are
better. Such types are called "extension relation types".

begin shameless-plugin-motivated-by-jim-webber-:)
See recipe 5.4 in http://my.safaribooksonline.com/9780596809140
end

Also see
http://tools.ietf.org/html/draft-nottingham-http-link-header-06#section-4.2.

Subbu

On Jan 13, 2010, at 7:52 AM, Felipe Gacho wrote:

> Hi amundsen,
>
> you are using the relation attribute for the navigation.. strange..
>
> relation should just give the client a hint about the relation between
> a resource and the url in the link element.. not a URI itself..
>
> imho..
>
> I got the point about the original question, thanks a lot for all
> responses.
>
> Felipe Gacho.
>
> 2010/1/13 mike amundsen <mamund@...>:
>> My current approach is to use existing media types (to take advantage
>> of their existing semantics) and add "rel" values as needed.
>>
>> I start by using the values already registered and available [1,2] and
>> supplement those with ones I create to meet the domain-specific need.
>> When I create my own, I use a rel value that is a resolve-able URI and
>> place helpful documentation at that URI. I've used the following two
>> formats in the past:
>>
>> rel="http://www.example.org/rels/cancel-order"
>> rel="http://www.example.org/rels/purchasing#cancel-order"
>>
>> In the second example, a single document is available that lists
>> several related rel values. It's a documentation optimization only.
>>
>> [1] http://www.iana.org/assignments/link-relations/link-relations.xhtml
>> [2] http://dublincore.org/documents/dces/
>>
>> mca
>> http://amundsen.com/blog/
>>
>> 2010/1/13 Felipe Gacho <fgaucho@...>:
>>> ok,
>>>
>>> I can use link "rel" to offer the client a list of possible next
>>> states ....
>>>
>>> question: doing that I expect the client to know what is the semantic
>>> of each link, isn't it ?
>>>
>>> If yes, hateoas seems not feasible since the client needs a previous
>>> knowledge about what to look for in the list of links... (unless there
>>> is only one link, of course)
>>>
>>> the question is: can I create my own relation semantics ? or should I
>>> grasp in the available ones for Atom and Xhtml in case I want to
>>> produce HATEOAS systems?
>>>
>>> * I am trying to implement HATEOAS using Jersey.. I know it has no
>>> support out of the box, so I am designing some tricks here :)
>>>
>>> regards,
>>>
>>> Felipe Gacho
>
> --
> ------------------------------------------
> Felipe Gacho
> 10+ Java Programmer
> CEJUG Senior Advisor
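[Editor's note: a minimal sketch of serializing an extension relation type as a Link header value, following the link-header draft Subbu cites. The URIs are illustrative, reusing Mike's example rels.]

```python
# Serialize one link-value for an HTTP Link header, with an extension
# relation type (a URI) as the rel. Syntax follows the general
# '<URI>; rel="..."' shape of the Web Linking draft.

def link_header(href, rel):
    """Build a single Link header value."""
    return '<%s>; rel="%s"' % (href, rel)

print(link_header("http://www.example.org/orders/6",
                  "http://www.example.org/rels/cancel-order"))
```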
Hi Eb,

> Why would this make a difference? We keep talking about reach, but
> most (if not all?) media types were designed for a specific client.

Or clients. Atom has reach because lots of systems out there already
understand how to process it.

application/vnd.restbucks+xml does not have reach because there aren't
many systems out there that understand it. Nor are there lots of
libraries to choose from on lots of platforms that implement the
processing model for that type. But it does have the advantage that it
works really well for coffee ordering within the Restbucks domain.

Jim
> Which is, in some (most?) situations, a good thing - because it
> discourages designs where resources are 'overlapped' (i.e. composites),
> and therefore promotes visibility.

Hello Mike,

If I avoid using a nested element that overlaps another resource
representation, as mentioned, how can a client understand any
difference between an order's products and its other related resources
and act accordingly? Or should the client not know how to add products
to this order?

<order>
  <link rel="related" href="http://server/products" /> (or similar header)
  <link rel="related" href="http://server/similar-orders" /> (or similar header)
</order>

In the human web, the client reads "products" instead of "related"
prior to acting upon a representation; in a M2M system, it should not?
If there was a link describing a products relationship (with its
namespace), the client would know how to deal with it.

Regards

> - Mike
Hey Jim -

> Or clients. Atom has reach because lots of systems out there already
> understand how to process it.
>
> application/vnd.restbucks+xml does not have reach because there aren't
> many systems out there that understand it. Nor are there lots of
> libraries to choose from on lots of platforms that implement the
> processing model for that type. But it does have the advantage that it
> works really well for coffee ordering within the Restbucks domain.

But what type(s) of systems are we really talking about here? Could we
classify/aggregate them? I would love to see an example of business
systems that just use (for example) Atom as is with no extensions. It
would seem to me that the semantics (from a registered link relations
perspective) available in Atom (as is) are limited in reach, whereas
most business applications have a much richer vocabulary.

I would love to be completely off base here!! :)

Eb
> It would seem to me that the semantics (from a registered link
> relations perspective) available in Atom (as is) is limited in reach
> whereas most business applications have a much richer vocabulary.

As per my example in the previous message:

<order>
  <link rel="related" href="http://server/products" /> (or similar header)
  <link rel="related" href="http://server/similar-orders" /> (or similar header)
</order>

Would Atom (and link headers), following only registered link
relations, only allow representations like the above?

Regards

> I would love to be completely off base here!! :)
>
> Eb
On Jan 13, 2010, at 8:45 AM, Jim Webber wrote:

> application/vnd.restbucks+xml does not have reach because there aren't
> many systems out there that understand it. Nor are there lots of
> libraries to choose from on lots of platforms that implement the
> processing model for that type. But it does have the advantage that it
> works really well for coffee ordering within the Restbucks domain.

I started with the same premise about 15 months ago, but later on found
some bumps that made me change my mind. As long as
"application/vnd.restbucks+xml" is the only variant, I think you are
right. No other party except the sender and the receiver needs to deal
with the name and semantics of this media type.

However, once the application requires any other party (proxies, CDNs,
monitoring tools, log file analyzers etc.) to understand such media
types, this model starts to fall apart. All such tools will be more
than happy to oblige URI patterns and not media types. This is not a
media type problem, but a potential reality that Restbucks Inc. may
need to account for.

Subbu
Hello Subbu -

> I started with the same premise about 15 months ago, but later on
> found some bumps that made me change my mind. As long as
> "application/vnd.restbucks+xml" is the only variant, I think you are
> right. No other party except the sender and the receiver need to deal
> with the name and semantics of this media type.
>
> However, once the application requires any other party (proxies, CDNs,
> monitoring tools, log file analyzers etc.) understand such media
> types, this model starts to fall apart. All such tools will be more
> than happy to oblige URI patterns and not media types. This is not a
> media type problem, but a potential reality that Restbucks Inc. may
> need to account for.

I like this point. From a reach perspective, we should think more of
the other agents/intermediaries that would have no clue on how to
handle this media type (if they needed to). Regardless, the receiver
(in a lot of cases) will need to understand the extensions to an
existing media type.
> <order>
>   <link rel="related" href="http://server/products" /> (or similar header)
>   <link rel="related" href="http://server/similar-orders" /> (or similar header)
> </order>
>
> Atom (and link headers) following only registered link relations would
> only allow representations as the above?

I'm not sure I understand the question in its entirety, but your
relation (rel) would/could be a URI (and not "related"), allowing a
client to distinguish between the two link relations.
Hey Eb,

> I like this point. From a reach perspective, we should think more of
> the other agents/intermediaries that would have no clue on how to
> handle this media type (if they needed to). Regardless, the receiver
> (in a lot of cases) will need to understand the extensions to an
> existing media type.

If a client or intermediary doesn't understand a media type then all
bets are off - the client or intermediary doesn't understand the
service's contract.

Some clients know about lots of media types (e.g. browsers); some
clients know about few media types (e.g. Restbucks systems). That's the
essence of reach.

Jim
Guilherme Silveira wrote:
>> Which is, in some (most?) situations, a good thing - because it
>> discourages designs where resources are 'overlapped' (i.e.
>> composites), and therefore promotes visibility.
>
> Hello Mike,
>
> If I avoid using a nested element that overlaps another resource
> representation, as mentioned, how can a client understand any
> difference between an order's products and its other related resources
> and act accordingly?

Using distinctive rel values for each type of link relation - your
example doesn't do this:

> Or the client should not know how to add products to this order?
>
> <order>
>   <link rel="related" href="http://server/products" /> (or similar header)
>   <link rel="related" href="http://server/similar-orders" /> (or similar header)
> </order>
>
> In the human web, the client reads "products" instead of "related"
> prior to acting upon a representation, in a M2M system, it should not?

Sorry, I don't understand the question.

I think those rel values are wrong; a 'related' link relation doesn't
really say much. Maybe something like this instead:

<order>
  <link rel="products" href="http://server/products" /> (or similar header)
  <link rel="similar" href="http://server/similar-orders" /> (or similar header)
</order>

?

- Mike
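[Editor's note: a small sketch of why distinctive rel values matter for a machine client. With rel="related" on both links the client below cannot tell them apart; with Mike's distinct rels it can pick the transition it wants. Rel names and URIs follow the example above.]

```python
# A client dispatching on rel values: only distinctive rels make the two
# transitions distinguishable to a machine.
links = [
    {"rel": "products", "href": "http://server/products"},
    {"rel": "similar", "href": "http://server/similar-orders"},
]

def find_link(links, rel):
    """Return the href of the first link with the given rel, or None."""
    for link in links:
        if link["rel"] == rel:
            return link["href"]
    return None
```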
Hey Jim -

> If a client or intermediary doesn't understand a media type then all
> bets are off - the client or intermediary doesn't understand the
> service's contract.
>
> Some clients know about lots of media types (e.g. browsers) some
> clients know about few media types (e.g. Restbucks systems). That's
> the essence of reach.

I concur; however, it's sorta assumed that my client will understand
the media because I had to take the media type into consideration from
the get go. I probably won't think (or care) about intermediaries
depending on the "reach" of my solution, intranet versus internet for
example. So when we suggest using "standard" media types for purposes
of reach, we just need to be clear as to what concerns we're focusing
on.

Eb
> Sorry, I don't understand the question

Sorry, I couldn't come up with the question so clearly.

> I think those rel values are wrong, a 'related' link relation doesn't
> really say much.. Maybe something like this instead:

So do I, but does creating such custom relations while sticking to a
well known media type (an atom feed containing this order, for example)
break the reach issue that Subbu mentioned? Proxies, log tools and so
on can understand atom based resources but not "http://server/products"
rels.

> <order>
>   <link rel="http://server/products" href="http://server/products" /> (or similar header)
>   <link rel="http://server/similar" href="http://server/similar-orders" /> (or similar header)
> </order>

Regards

> ?
>
> - Mike
Subbu,

On Jan 13, 2010, at 7:29 PM, Subbu Allamaraju wrote:

> However, once the application requires any other party (proxies,
> CDNs, monitoring tools, log file analyzers etc.) understand such
> media types, this model starts to fall apart. All such tools will be
> more than happy to oblige URI patterns and not media types. This is
> not a media type problem, but a potential reality that Restbucks
> Inc. may need to account for.

Can you provide an example of the situation that made you change your
mind?

Jan

--------------------------------------
Jan Algermissen

Mail: algermissen@...
Blog: http://algermissen.blogspot.com/
Home: http://www.jalgermissen.com
--------------------------------------
Reach can be double-edged. If you assume no one outside of your sphere
of expertise will care but your service is widely popular in the wild,
you're going to be somewhat stuck with the assumption. This wouldn't be
the first time[1] this has happened.

-Noah

[1] http://www.theaustralian.com.au/news/web-creator-apologises-for-his-strokes/story-e6frgal6-1225786657345

On Wed, Jan 13, 2010 at 12:15 PM, Eb <amaeze@...> wrote:
> Hey Jim -
>
>> If a client or intermediary doesn't understand a media type then all
>> bets are off - the client or intermediary doesn't understand the
>> service's contract.
>>
>> Some clients know about lots of media types (e.g. browsers) some
>> clients know about few media types (e.g. Restbucks systems). That's
>> the essence of reach.
>
> I concur; however, it's sorta assumed that my client will understand
> the media because I had to take the media type into consideration
> from the get go. I probably won't think (or care) about
> intermediaries depending on the "reach" of my solution, intranet
> versus internet for example. So when we suggest using "standard"
> media types for purposes of reach, we just need to be clear as to
> what concerns we're focusing on.
>
> Eb
--- In rest-discuss@yahoogroups.com, Jim Webber <jim@...> wrote:
>
> Hey Eb,
>
>> I like this point. From a reach perspective, we should think more of
>> the other agents/intermediaries that would have no clue on how to
>> handle this media type (if they needed to). Regardless, the receiver
>> (in a lot of cases) will need to understand the extensions to an
>> existing media type.
>
> If a client or intermediary doesn't understand a media type then all
> bets are off - the client or intermediary doesn't understand the
> service's contract.
>
> Some clients know about lots of media types (e.g. browsers) some
> clients know about few media types (e.g. Restbucks systems). That's
> the essence of reach.
>
> Jim

I keep coming back to this on this list from different angles -- so
apologies if I sound redundant, but I always get the feeling that folks
don't quite get where I'm coming from -- I keep trying because I think
this is an important point for understanding REST and this list
contains the set of key individuals educating the broader development
community.

Is the media type a part of the service's contract or the client's?

It seems to me that a key distinction between REST and RPC is that in
RPC the service provides the contract while in REST, the client
provides the contract (via the media type).

A client, in the Accept header, constrains the set of acceptable media
types -- isn't this essentially run-time contract negotiation? The
server agrees to the contract at run time by returning an appropriate
representation of the requested resource (or rejects the contract by
returning "Not Acceptable").

*Typically*, services can easily extend their "reach" by supporting as
many media types as they like while clients support a fixed set of
media types. So in order to give a client reach, it is best to support
media types that are able to be used by a broad range of services. For
example, HTML can obviously be used to express an incredible range of
services. VoiceXML (used by automated phone systems) can also be used
to express a broad range of services. Supporting one of these media
types would give a client a broad reach as it could interact with many
services. But a service could address both HTML and VoiceXML clients
via conneg (or simply two disjoint sets of URIs). Isn't this the root
of the client-server decoupling provided by REST?

If so -- then the question I keep coming back to is if a service that
uses a "service-specific" media type is really an instance of REST. By
service-specific, I don't mean "not standardized" or vendor-specific --
this has nothing to do with the nature of the media type itself, just
whether or not it's been approved by a standards body. I mean that the
media type represents a contract set by the service because the media
type is not designed to represent a set of services. This is because
the semantics of the media type map exactly to the semantics of the
service. You see this in most "REST APIs" that are simply serializing
service data structures as JSON or XML.

To me a RESTful service "translates" its own internal semantics into
the media type(s) of the client(s) it is trying to address -- the
specific translation used being negotiated at runtime. This, to me, is
the point of having a distinction between resources and representations
in REST. The translation doesn't just allow the service to "reach" a
broader set of clients, but it also allows the client to "reach" a
broader set of services. This is because the representation format
captures information using semantics that are specific to the client.
By designing the client's format around the information processing
capabilities of the client, the client can interact with as many
services as possible. For example, HTML represents information in terms
of common structures of visually displayed, interactive text -- it's
designed around visual browsers. VoiceXML is designed around voice
browsers. Yes, you can write a spider to consume HTML (and VoiceXML).
And yes, you can use a screen reader to process HTML, but VoiceXML is a
much more natural way to represent information for speech-based
consumption and interaction (it won a standards war with SALT -- a set
of extensions to HTML for speech). So there are other ways to consume
the information (something afforded by the Principle of Least Power),
but that doesn't diminish the fact that the media type is designed to
cater to a specific flavor of client.

So when a media type is designed around a service rather than a type of
client, I question if the result can be called REST. For example, a
banking service that spits out a JSON format that simply serializes the
account and transaction data structures used internally to represent
the service's resources. i.e. if you aren't targeting a specific "type"
of client by translating to that client's media type, are you violating
the constraints of REST?

What specific constraints are being violated is a hard question, and
the reason I have a hard time explaining this -- I would point to
"Self-descriptive messages" and its requirement for standard media
types, but the meaning of "standard" in the context of REST is so hard
to pin down. Or maybe this is just inherent in the distinction between
representations and resources. Or maybe this isn't part of REST's
constraints at all... I don't know -- that's why I keep asking (but
based on the answers I get, I don't think I ever get the question
across properly).

Anyways, I'm interested in all of your thoughts.

Regards,
Andrew
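[Editor's note: a minimal sketch of the run-time contract negotiation described above: the server serves a representation in an acceptable type or rejects the contract with "Not Acceptable". The supported types and the helper are illustrative.]

```python
# Run-time conneg as contract negotiation: the client's Accept header
# constrains acceptable types; the server agrees by picking one, or
# rejects (406 Not Acceptable) by returning None here.
SUPPORTED = ["text/html", "application/xhtml+xml"]

def negotiate(accept_header):
    """Return the media type to serve, or None for 406 Not Acceptable."""
    wanted_types = [t.strip().split(";")[0] for t in accept_header.split(",")]
    for wanted in wanted_types:
        if wanted == "*/*":
            return SUPPORTED[0]
        if wanted in SUPPORTED:
            return wanted
    return None
```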
I think Atom provides a great primitive for collections in an
organization and the syndication around those primitives. It's not a
complete solution, but if you take HTML/microformats and Atom as a base
you can probably build quite a bit.

On Wed, Jan 13, 2010 at 10:23 AM, Eb <amaeze@...> wrote:
> Hey Jim -
>
>> Or clients. Atom has reach because lots of systems out there already
>> understand how to process it.
>>
>> application/vnd.restbucks+xml does not have reach because there
>> aren't many systems out there that understand it. Nor are there lots
>> of libraries to choose from on lots of platforms that implement the
>> processing model for that type. But it does have the advantage that
>> it works really well for coffee ordering within the Restbucks domain.
>
> But what type(s) of systems are we really talking about here? Could we
> classify/aggregate them? I would love to see an example of business
> systems that just use (for example) Atom as is with no extensions. It
> would seem to me that the semantics (from a registered link relations
> perspective) available in Atom (as is) are limited in reach, whereas
> most business applications have a much richer vocabulary.
>
> I would love to be completely off base here!! :)
>
> Eb
> However, once the application requires any other party (proxies, CDNs,
> monitoring tools, log file analyzers etc.) understand such media
> types, this model starts to fall apart.

I read your blog post about that and it certainly also made me re-think
the use of media types like application/vnd.mystuff+xml (custom media
types) and application/vnd.mystuff.v2+xml (versioning). An XML purchase
order or movie representation is, after all, still XML. On the other
hand, one still needs some header to switch on when dispatching the
returned representation. Aren't there other HTTP headers which can be
used for indicating sub-media-types and versions?

Some things I dislike about putting versioning and formatting in the
URL are 1) you no longer have a single resource, and 2) server logic
needs to template embedded links based on the current URL.

Example:

URL 1: http://example.com/movies/1.xml
URL 2: http://example.com/movies/1.json

Each of these movie representations has relations to other resources,
but the resource at URL 1 should link to the .xml versions, and the
resource at URL 2 should link to the .json versions, and so on. It's
much easier if all URLs are without format-extensions (as well as
version-extensions).

/Jørn
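[Editor's note: a minimal sketch of the alternative Jørn prefers: one format-free resource URI whose representation is chosen from the Accept header, so embedded links never carry .xml/.json extensions. URIs and media type names are illustrative.]

```python
# One resource, two representations: the serialization is picked from
# the Accept header, and embedded links stay extension-free, so the
# same link set works for every representation.

def choose_format(accept_header):
    """Pick a serialization from the Accept header; default to XML."""
    if "application/json" in accept_header:
        return "json"
    return "xml"

def movie_links(movie_id):
    # Format-free URIs: no per-format templating of embedded links.
    return {"self": "http://example.com/movies/%d" % movie_id,
            "related": "http://example.com/movies/%d/reviews" % movie_id}
```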
> So when a media type is designed around a service rather than a type
> of client, I question if the result can be called REST. For example, a
> banking service that spits out a JSON format that simply serializes
> the account and transaction data structures used internally to
> represent the service's resources. i.e. if you aren't targeting a
> specific "type" of client by translating to that client's media type
> are you violating the constraints of REST?

I like this formulation of the problem. Yes, the service is sort-of
targeting itself instead of its users. But are there any alternatives?
It is as if the smaller your audience is, the more you are violating
REST. Example: the detailed representation of an Egyptian scarab
collection - there are probably not many who are going to consume that.
Is it not RESTful to have a special XML (maybe even binary)
representation of that?

There are people on this list who argue that representations must be
accepted as standards in order to call the service RESTful. Considering
the above thoughts, they have a good point: it would not matter if the
audience for a representation was big or small, as long as it has a
standard representation. Then banking clients work for all banks and
scarab collectors can browse any scarab collection on the net.

Maybe we could simplify the problem a bit: 1) we cannot expect each and
every client to know all standard media-types, and 2) some media-types,
like the scarab collection, are just too specialized to get accepted as
standard media-types.

So what if a service could be categorized as RESTful *with respect to a
certain domain*? Now we could say that our scarab service certainly is
RESTful with respect to the scarab domain - and similarly with the
banking example. Then we could use URLs for domain identifiers and have
those URLs return a (standard) representation of the media types
included in the domain.

But maybe I'm just dreaming and should go to bed instead ...

/Jørn
On Jan 13, 2010, at 10:09 PM, wahbedahbe wrote:
>
> Is the media type a part of the service's contract or the client's?
>
> It seems to me that a key distinction between REST and RPC is that
> in RPC the service provides the contract while in REST, the client
> provides the contract (via the media type).
>
> A client, in the Accept header, constrains the set of acceptable
> media types -- isn't this essentially run-time contract negotiation?
> The server agrees to the contract at run time by returning an
> appropriate representation of the requested resource (or rejects the
> contract by returning "Not Acceptable").
When you build a client that understands media type A, you need to
hard wire (or configure) two things into your client code:
1. knowledge about which hypermedia elements are traversal options
(links, forms)
2. knowledge about which media types to put into the Accept header
when the user (human or machine) of the client chooses to follow
a certain transition. (You do *not* code the client to simply list
all the media types it understands)
During request handling there is runtime negotiation of the
content, but there is a piece of contract that is a design-time
artifact (2. above). The question really is: on the basis of what
information does the client choose which types to put in the Accept
header? It is not an arbitrary decision but a decision that
essentially reflects the client's design-time knowledge of the domain
protocol supported by the service.
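Jan's two points above could be sketched roughly as follows. This is a hypothetical client-side table, not any real library's API: the Accept header is looked up per transition (per link relation), rather than listing every type the client understands. The rel names and media types are invented for illustration.

```python
# Design-time knowledge baked into a hypothetical client: which Accept
# header to send when following a given transition (link relation).
ACCEPT_BY_REL = {
    "edit": "application/atom+xml;type=entry",
    "payment": "application/vnd.example.payment+xml",  # hypothetical vnd type
    "alternate": "text/html",
}

def accept_for(rel: str) -> str:
    """Return the Accept header for a transition the client was designed for."""
    try:
        return ACCEPT_BY_REL[rel]
    except KeyError:
        # The client has no design-time knowledge of this transition,
        # so it cannot meaningfully negotiate for it.
        raise ValueError(f"no design-time knowledge of rel {rel!r}")
```

Note that the table reflects the domain protocol the client was built against: the runtime conneg happens per request, but the contents of the table are fixed at design time, which is Jan's point 2.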
>
> *Typically*, services can easily extend their "reach" by supporting
> as many media types as they like while clients support a fixed set
> of media types. So in order to give a client reach, it is best to
> support media types that are able to be used by a broad range of
> services.
I think it is the other way round: Services expect the clients to
understand a set of media types. This set constitutes the service's
type.
>
> For example, HTML can obviously be used to express an incredible
> range of services.
Hmm - I'd argue that HTML only expresses the semantics needed by a
browser to turn human-targeted hypermedia into an interactive GUI.
The 'incredible range' is a by-product of humans controlling the
browser.
> VoiceXML (used by automated phone systems) can also be used to
> express a broad range of services. Supporting one of these media
> types would give a client a broad reach as it could interact with
> many services. But a service could address both HTML and VoiceXML
> clients via conneg (or simply two disjoint sets of URIs).
Hmm, not sure I understand that. Can you illustrate?
>
> Isn't this the root of the client-server decoupling provided by REST?
The decoupling is achieved by removing *any* assumption on the client
side about what the server may do next. (Except for, for example,
returning images for requests to <img href=""> target URIs.) The
server must not contradict itself.
In my posting regarding testing a couple of days ago I tried to
'investigate' the point by saying: "A server can never send a wrong
response"; clients must expect anything. (See Jim's excellent point
about 'anything' being constrained by the used media types.)
>
> If so -- then the question I keep coming back to is if a service
> that uses a "service-specific" media type is really an instance of
> REST. By service-specific, I don't mean "not standardized" or vendor-
> specific -- this has nothing to do with the nature of the media type
> itself, just whether or not it's been approved by a standards body.
> I mean that the media type represents a contract set by the service
> because the media type is not designed to represent a set of
> services. This is because the semantics of the media type map
> exactly to the semantics of the service. You see this in most "REST
> APIs" that are simply serializing service data structures as JSON or
> XML.
JSON or XML media types can never 'transport' the semantics of a
certain service or domain. They are so generic that they are useless
from a media type discussion POV. Maybe you are criticising the use of
such generic types and not really the issue of media types designed
for a certain application?
Also, I think it is very important to differentiate between service
types and service instances. This is sometimes hard to do when you
look at the Web because there are mostly services that are unique (are
instances of their own type). But services that implement AtomPub are
*instances* of the kind of service defined by RFC5023. This is why you
can implement AtomPub clients without looking at a service instance.
I do think that certain problem domains (or service types) need their
own media types (maybe mixed with existing types). But, yes, I agree
that a media type should be designed for a set of services (aka type?)
and not for a single one.
OTOH, when Google provides a set of quasi-standardized extensions when
publishing a service - that is fine. How's that different from Google
minting a few types for the job?
>
> To me a RESTful service "translates" it's own internal semantics
> into the media type(s) of the client(s) it is trying to address --
I would rather say: A service expects clients to understand certain
types. If known-to-be-supported types do not do the job, then mint new
types or extensions and publish them and hope clients implement them.
> the specific translation used being negotiated at runtime. This, to
> me is the point of having a distinction between resources and
> representations in REST. The translation doesn't just allow the
> service to "reach" a broader set of clients, but it also allows the
> client to "reach" a broader set of services. This is because the
> representation format captures information using semantics that are
> specific to the client. By designing the client's format around the
> information processing capabilities of the client, the client can
> interact with as many services as possible.
But you cannot magically make a client understand a semantic needed to
express your (the server's) state machine.
>
> For example, HTML represents information in terms of common
> structures of visually displayed, interactive text -- it's designed
> around visual browsers. VoiceXML is designed around voice browsers.
> Yes, you can write a spider to consume HTML (and VoiceXML). And yes,
> you can use a screen reader to process HTML, but VoiceXML is a much
> more natural way to represent information for speech-based
> consumption and interaction (it won a standards war with SALT -- a
> set of extensions to HTML for speech). So there are other ways to
> consume the information (something afforded by the Principle of
> Least Power), but that doesn't diminish the fact that the media type
> is designed to cater to a specific flavor of client.
>
> So when a media type is designed around a service rather than a type
> of client, I question if the result can be called REST. For example,
> a banking service that spits out a JSON format that simply
> serializes the account and transaction data structures used
> internally to represent the service's resources. i.e. if you aren't
> targeting a specific "type" of client by translating to that
> client's media type are you violating the constraints of REST?
Hmm - are you trying to say that media types should be designed for a
kind of application (online purchasing, online bank account management
etc.)? If so - yes, of course!
>
> What specific constraints are being violated is a hard question and
> the reason I have a hard time explaining this
Sounds like you are talking about visibility in a sense. At least
putting application-specific stuff into generic formats and relying on
out-of-band contracts to fill the void violates the visibility
constraint.
> -- I would point to "Self-descriptive messages" and its requirement
> for standard media types but the meaning of "standard" in the
> context of REST is so hard to pin down. Or maybe this is just
> inherent in the distinction between representations and resources.
"Self-descriptive messages" are another form of saying 'visibility'.
>
> Or maybe this isn't part of REST's constraints at all... I don't
> know -- that's why I keep asking (but based on the answers I get, I
> don't think I ever get the question across properly). Anyways, I'm
> interested in all of your thoughts.
Hope they help.
Jan
> Regards,
>
> Andrew
>
>
>
>
> ------------------------------------
>
> Yahoo! Groups Links
>
>
>
--------------------------------------
Jan Algermissen
Mail: algermissen@...
Blog: http://algermissen.blogspot.com/
Home: http://www.jalgermissen.com
--------------------------------------
On Jan 13, 2010, at 10:46 PM, Jørn Wildt wrote:

> It is as if the smaller your audience is, the more are you violating
> REST.

Put it this way: the smaller your audience is, the more you have to get the job done with what they already have, in order to grow until the audience is large enough to roll your own and force it into their implementations.

Jan
On Wed, Jan 13, 2010 at 5:20 PM, Jan Algermissen <algermissen1971@...m> wrote:

> On Jan 13, 2010, at 10:46 PM, Jørn Wildt wrote:
>
>> It is as if the smaller your audience is, the more are you violating
>> REST.
>
> Put it this way: The smaller your audience is the more you have to get
> the job done with what they already have to grow until the audience is
> large enough to roll your own and force it into their implementations.

Jan, I'm not so sure I get your point here. Could you elaborate?
On Jan 13, 2010, at 12:55 PM, Jan Algermissen wrote:

> Subbu,
>
> On Jan 13, 2010, at 7:29 PM, Subbu Allamaraju wrote:
>
>> However, once the application requires any other party (proxies, CDNs, monitoring tools, log file analyzers etc.) understand such media types, this model starts to fall apart. All such tools will be more than happy to oblige URI patterns and not media types. This is not a media type problem, but a potential reality that Restbucks Inc. may need to account for.
>
> Can you provide an example of the situation that made you change your mind?

Say the server offers books and CDs with media types application/vnd.book.myformat and application/vnd.cd.myformat. The task of an ops engineer is to quickly come up with daily charts showing requests/day for books and CDs by sifting through access logs. It is easy to implement this when the URIs used for these types of resources have stable patterns.

Let's say you now want to run all CD sales (POST requests) through a bigger box. URI patterns rule again.

Both of these are solvable based on media types, but when I design my application such that some key factors are reflected in URIs (and a few known media types), I get much more mileage out of the HTTP toolstack. If I start folding these into an elaborate media type scheme, I will be fighting with the toolstack to get it to work for me. In the long run, it may even cost more to maintain such a system.

In other words, media types (along with URIs, content encodings, charset params etc.) keep representations visible to the protocol and the toolstack. But it does not follow that media types must be used to determine whether an incoming XML document is a book or a CD. There are other (simpler) ways to determine such things.

Subbu
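Subbu's ops task can be sketched in a few lines. The log format and URI layout below are assumptions for illustration only; the point is that a stable URI pattern makes the resource kind visible to a trivial log-sifting script, with no media-type knowledge at all.

```python
# Count requests per resource kind from an access log, relying only on
# stable URI path prefixes (/books/..., /cds/...), not on media types.
import re
from collections import Counter

# Hypothetical access-log lines in a simplified common-log-like format.
LOG_LINES = [
    '10.0.0.1 - [14/Jan/2010] "GET /books/42 HTTP/1.1" 200',
    '10.0.0.2 - [14/Jan/2010] "POST /cds/7/order HTTP/1.1" 201',
    '10.0.0.3 - [14/Jan/2010] "GET /cds/7 HTTP/1.1" 200',
]

PATTERN = re.compile(r'"(?:GET|POST|PUT|DELETE) /(books|cds)\b')

def count_by_kind(lines):
    """Tally requests per resource kind using the URI path prefix."""
    counts = Counter()
    for line in lines:
        m = PATTERN.search(line)
        if m:
            counts[m.group(1)] += 1
    return counts
```

Had the book/CD distinction lived only inside vnd.* media types, this script would instead need the Content-Type of each response logged and parsed, which is exactly the extra friction Subbu describes.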
FWIW, I find this always a "good read" when I start contemplating media-type related issues:

http://www.ics.uci.edu/~fielding/pubs/dissertation/evaluation.htm#sec_6_5_4

mca
http://amundsen.com/blog/

On Wed, Jan 13, 2010 at 18:09, Subbu Allamaraju <subbu@...> wrote:

> On Jan 13, 2010, at 12:55 PM, Jan Algermissen wrote:
>
>> Subbu,
>>
>> On Jan 13, 2010, at 7:29 PM, Subbu Allamaraju wrote:
>>
>>> However, once the application requires any other party (proxies, CDNs, monitoring tools, log file analyzers etc.) understand such media types, this model starts to fall apart. All such tools will be more than happy to oblige URI patterns and not media types. This is not a media type problem, but a potential reality that Restbucks Inc. may need to account for.
>>
>> Can you provide an example of the situation that made you change your mind?
>
> Say, the server offers books and CDs with media types application/vnd.book.myformat and application/vnd.cd.myformat. The task of an ops engineer is to quickly come up with daily charts showing requests/day for books and CDs, by sifting through access logs. It is easy to implement this when the URIs used for these types of resources have stable patterns.
>
> Let's say, you now want to run all CD sales (POST requests) through a bigger box. URI patterns rule again.
>
> Both these are solvable based on media types, but when I design my application such that some key factors are reflected in URIs (and a few known media types), I get much more mileage out of the HTTP toolstack. If I start folding these into an elaborate media type scheme, I will be fighting with the toolstack to get it to work for me. In the long run, it may even cost more to maintain such a system.
>
> In other words, media types (along with URIs, content encodings, charset params etc.) keep representation visible to the protocol and the toolstack. But that does not necessarily follow that media types must be used to determine whether an incoming XML document is a book or a CD. There are other (simpler) ways to determine such things.
>
> Subbu
On Jan 14, 2010, at 12:09 AM, Subbu Allamaraju wrote:

> Let's say, you now want to run all CD sales (POST requests) through
> a bigger box. URI patterns rule again.

I see your point. Thinking about it, I actually do not see any reason why an intermediary should not be able to learn at runtime which URIs are used for the individual kinds of sales. So knowing the pattern out of band is (in theory) not 100% necessary.

Gotta go, rest of the reply tomorrow.

Jan

BTW, why are the sales distinguishable by the URI of the POST target resource (the order processing resource, I assume)?
On Wed, Jan 13, 2010 at 6:09 PM, Subbu Allamaraju <subbu@...> wrote:

> Say, the server offers books and CDs with media types
> application/vnd.book.myformat and application/vnd.cd.myformat. The task of
> an ops engineer is to quickly come up with daily charts showing requests/day
> for books and CDs, by sifting through access logs. It is easy to implement
> this when the URIs used for these types of resources have stable patterns.

But your custom media type doesn't preclude you from using URIs in this particular case, does it? So the result would be the same if the media type were application/xml, would it not? I don't see how this is a reason not to use a custom media type. Maybe I'm missing something here.
> But your custom media type doesn't preclude you from using URIs in this
> particular case, does it? So the result would be the same if the media type
> were application/xml, would it not? I don't see how this is a reason not to
> use a custom media type. Maybe I'm missing something here.

You're not missing anything. For this example it _does not matter_ whether your media type is application/xml or foo/bar.

Subbu
> You're not missing anything. For this example it _does not matter_ whether
> your media type is application/xml or foo/bar.

So sticking with Atom (or header) links + custom rels + descriptive resource URIs, it seems, would not interfere with visibility for intermediary layers?

Regards
On Wed, Jan 13, 2010 at 5:16 PM, Jan Algermissen <algermissen1971@...> wrote:

> On Jan 13, 2010, at 10:09 PM, wahbedahbe wrote:
>
>> Is the media type a part of the service's contract or the client's?
>>
>> It seems to me that a key distinction between REST and RPC is that in RPC
>> the service provides the contract while in REST, the client provides the
>> contract (via the media type).
>>
>> A client, in the Accept header, constrains the set of acceptable media
>> types -- isn't this essentially run-time contract negotiation?
>> The server agrees to the contract at run time by returning an appropriate
>> representation of the requested resource (or rejects the contract by
>> returning "Not Acceptable").
>
> When you build a client that understands media type A, you need to hard
> wire (or configure) two things into your client code:
>
> 1. knowledge about which hypermedia elements are traversal options
> (links, forms)
> 2. knowledge about which media types to put into the Accept header
> when the user (human or machine) of the client chooses to follow
> a certain transition. (You do *not* code the client to simply list
> all the media types it understands)
>
> During the request handling, there happens runtime negotiation of the
> content but there is a piece of contract that is a design time artifact (2.
> above). The question really is: On the basis of what information does the
> client choose what types to put in the Accept header. It is not an arbitrary
> decision but a decision that essentially reflects the client's design-time
> knowledge of the domain protocol supported by the service.

Agreed (I followed your recent thread on the matter closely! ;-). I would argue that the "domain protocol supported by the service" is the media type and rel definitions. Is that your view?

>> *Typically*, services can easily extend their "reach" by supporting as
>> many media types as they like while clients support a fixed set of media
>> types.
>> So in order to give a client reach, it is best to support media types
>> that are able to be used by a broad range of services.
>
> I think it is the other way round: Services expect the clients to
> understand a set of media types. This set constitutes the service's type.

That doesn't seem to match the way it works on the web though. What are the "types of services" on the web? Social media applications, customer service applications, auction applications, banking applications, etc. Each service type doesn't have its own set of media types. They all use HTML because, well, that's what browsers support. If you want to expand your service's reach beyond browsers, you'd have to support the media type of the client you are trying to reach.

>> For example, HTML can obviously be used to express an incredible range of
>> services.
>
> Hmm - I'd argue that HTML only expresses the semantics needed by a browser
> to turn human-targeted hypermedia into an interactive GUI. The 'incredible
> range' is a by-product of humans controlling the browser.

Ya, I'm familiar with the "human driven" vs. "machine driven" argument. I don't agree with this line of thinking at all -- the problem is that the "machine driven" media types in use are just plain bad -- one of their big problems is the fact that they are service-specific!

>> VoiceXML (used by automated phone systems) can also be used to express a
>> broad range of services. Supporting one of these media types would give a
>> client a broad reach as it could interact with many services. But a service
>> could address both HTML and VoiceXML clients via conneg (or simply two
>> disjoint sets of URIs).
>
> Hmm, not sure I understand that. Can you illustrate?

Voice browsers are used to deliver voice applications over the phone that are authored using the VoiceXML markup language.
Rather than explain VoiceXML here, I'll point you to a short tutorial by Dave Raggett: http://www.w3.org/Voice/Guide/

Now, say you have a customer service application and you want to let customers interact with it over the web via an HTML browser and over the phone using an automated voice response system. The core application resources would likely be the same (user account resources, product resources, support ticket resources, etc.) for both the HTML and VoiceXML applications. You could use the same set of URIs for both and use conneg to serve HTML to web browsers and VoiceXML to voice browsers, or use separate URIs to serve HTML and VoiceXML.

>> Isn't this the root of the client-server decoupling provided by REST?
>
> The decoupling is achieved by removing *any* assumption on the client side
> about what the server may do next. (Except for, for example, returning
> images for requests to <img href=""> target URIs.) The server must not
> contradict itself.
>
> In my posting regarding testing a couple of days ago I tried to
> 'investigate' the point by saying: "A server can never send a wrong
> response"; clients must expect anything. (See Jim's excellent point about
> 'anything' being constrained by the used media types.)

Sure, but you are basically saying that the media type forms the contract, right? And what I'm trying to get at here is that the media type is the client's contract, not the service's. In the customer support example above, both VoiceXML and HTML are mandated by the respective clients. The service is conforming to those media types in order to be able to interact with those clients. The client is not coupled to the service at all -- just the media type, URI and HTTP. The service has the freedom to support whatever representations it chooses -- it is not coupled to any one client's media type.

>> If so -- then the question I keep coming back to is if a service that uses
>> a "service-specific" media type is really an instance of REST.
>> By service-specific, I don't mean "not standardized" or vendor-specific -- this
>> has nothing to do with the nature of the media type itself, just whether or
>> not it's been approved by a standards body. I mean that the media type
>> represents a contract set by the service because the media type is not
>> designed to represent a set of services. This is because the semantics of
>> the media type map exactly to the semantics of the service. You see this in
>> most "REST APIs" that are simply serializing service data structures as JSON
>> or XML.
>
> JSON or XML media types can never 'transport' the semantics of a certain
> service or domain. They are so generic that they are useless from a media
> type discussion POV. Maybe you are criticising the use of such generic types
> and not really the issue of media types designed for a certain application?

Well, even if you properly "named" the XML format (e.g. used application/vnd.whatever+xml instead of application/xml), I think you still have the same problem. The issue goes beyond the media type name -- I'm talking about the format itself.

> Also, I think it is very important to differentiate between service types
> and service instances. This is sometimes hard to do when you look at the Web
> because there are mostly services that are unique (are instances of their
> own type). But services that implement AtomPub are *instances* of the kind
> of service defined by RFC5023. This is why you can implement AtomPub clients
> without looking at a service instance.

Right... the clients conform to the spec. If my customer service example above wanted to make a subset of its resources accessible to Atom+AtomPub clients, say the customer support tickets, it could do so by supporting those media types. Well, almost -- there's the whole can of worms of how to represent the "content", or in this case, the support ticket data.
Do you use a foreign XML namespace in the <entry> or do you put it in <content> (the extension vs. envelope question)? I'll avoid getting into that here, so let's just say that the formats that compose the contract consist of AtomPub plus the content format. But again, I see the format as the client's contract. The service in my example is choosing to support the Atom client type by adding Atom to its supported media types. It's hard to see it that way with Atom because of the whole mess caused by the embedded content format. The problem is that most Atom services use a service-specific content format, which makes the whole thing service-specific in the end despite the use of Atom.

> I do think that certain problem domains (or service types) need their own
> media types (maybe mixed with existing types). But, yes, I agree that a
> media type should be designed for a set of services (aka type?) and not for
> a single one.
>
> OTOH, when Google provides a set of quasi-standardized extensions when
> publishing a service - that is fine. How's that different from Google
> minting a few types for the job?

I think some of Google's services would be way more RESTful if they'd used established media types for the content. i.e. Atom feeds of vCards would have been way better than what they implemented, IMO.

>> To me a RESTful service "translates" its own internal semantics into the
>> media type(s) of the client(s) it is trying to address --
>
> I would rather say: A service expects clients to understand certain types.
> If known-to-be-supported types do not do the job, then mint new types or
> extensions and publish them and hope clients implement them.

Nope. Can't agree here. As already discussed, I just don't see REST that way. For me, media types start with the client.

>> the specific translation used being negotiated at runtime. This, to me, is
>> the point of having a distinction between resources and representations in
>> REST.
>> The translation doesn't just allow the service to "reach" a broader
>> set of clients, but it also allows the client to "reach" a broader set of
>> services. This is because the representation format captures information
>> using semantics that are specific to the client. By designing the client's
>> format around the information processing capabilities of the client, the
>> client can interact with as many services as possible.
>
> But you cannot magically make a client understand a semantic needed to
> express your (the server's) state machine.

Ok.... challenge accepted -- give me an example problem and I'll try to show you how to solve it. One rule: you'll need to be able to answer detailed questions about the example client as well as the service.

>> For example, HTML represents information in terms of common structures of
>> visually displayed, interactive text -- it's designed around visual
>> browsers. VoiceXML is designed around voice browsers. Yes, you can write a
>> spider to consume HTML (and VoiceXML). And yes, you can use a screen reader
>> to process HTML, but VoiceXML is a much more natural way to represent
>> information for speech-based consumption and interaction (it won a standards
>> war with SALT -- a set of extensions to HTML for speech). So there are other
>> ways to consume the information (something afforded by the Principle of
>> Least Power), but that doesn't diminish the fact that the media type is
>> designed to cater to a specific flavor of client.
>>
>> So when a media type is designed around a service rather than a type of
>> client, I question if the result can be called REST. For example, a banking
>> service that spits out a JSON format that simply serializes the account and
>> transaction data structures used internally to represent the service's
>> resources. i.e. if you aren't targeting a specific "type" of client by
>> translating to that client's media type are you violating the constraints of
>> REST?
> Hmm - are you trying to say that media types should be designed for a kind of
> application (online purchasing, online bank account management etc.)? If so
> - yes, of course!

Nope. I am saying that media types should be designed for kinds of clients. This is the precedent set by HTML, VoiceXML, etc.

>> What specific constraints are being violated is a hard question and the
>> reason I have a hard time explaining this
>
> Sounds like you are talking about visibility in a sense. At least putting
> application-specific stuff into generic formats and relying on out-of-band
> contracts to fill the void violates the visibility constraint.

>> -- I would point to "Self-descriptive messages" and its requirement for
>> standard media types but the meaning of "standard" in the context of REST is
>> so hard to pin down. Or maybe this is just inherent in the distinction
>> between representations and resources.
>
> "Self-descriptive messages" are another form of saying 'visibility'.

Maybe... visibility is a part of it, but I think this goes beyond visibility. This also affects (and is perhaps more central to) substitutability, modifiability, evolvability, reusability, etc.

>> Or maybe this isn't part of REST's constraints at all... I don't know --
>> that's why I keep asking (but based on the answers I get, I don't think I
>> ever get the question across properly). Anyways, I'm interested in all of
>> your thoughts.
>
> Hope they help.
>
> Jan

Thanks... this is a good conversation, even if we don't see things the same way (yet!) ;-)

Regards,
Andrew
> Put it this way: The smaller your audience is the more you have to get
> the job done with what they already have

Interesting point: the smaller the audience, the lower the likelihood of them having existing code for your service, and the bigger the need to use existing standards? Makes sense. Start with a not-so-suitable-but-standard media type to get people going quickly, and then introduce newer and better formats.

/Jørn
Hi Eb,

> I concur however, it's sorta assumed that my client will understand the media
> because I had to take the media type into consideration from the get go.

I agree. You have to bake knowledge of media type(s) and link relations into clients.

> I probably won't think (or care) about intermediaries depending on the "reach"
> of my solution, intranet versus internet for example. So when we suggest
> using "standard" media types for purposes of reach, we just need to be clear
> as to what concerns we're focusing on.

1. Re-use of standard libraries for building consumers/services
2. Serendipitous re-use of services

Jim
Hi Jim -

> 1. Re-use of standard libraries for building consumers/services
> 2. Serendipitous re-use of services

Good points, although I will observe that most of the "new" media types we are discussing here are derivatives of existing media types. So from a tooling standpoint (standard libraries), I think in many cases it is a wash.

Serendipitous re-use of services. Interesting.

Eb
Hello Andrew - On Wed, Jan 13, 2010 at 11:05 PM, Andrew Wahbe <andrew.wahbe@...>wrote: > >> Sure, but you are basically saying that the media type forms the contract > right? And I what I'm trying to get at here is that the media type is the > client's contract not the service's. In the customer support example above, > both VoiceXML and HTML are mandated by the respective clients. The service > is conforming to those media types in order to be able to interact with > those clients. The client is not coupled to the service at all -- just the > media type, URI and HTTP. The service has the freedom to support whatever > representations it chooses -- it is not coupled to any one client's media > type. > While I think I agree with you (to an extent), this does suggest that clients are built in isolation which I do not believe is the case in practice. Maybe the mistake is that we should look at what capabilities the client needs to have and then use (or create a new) a media type that sufficiently expresses those capabilities. I see media type as the "language spoken" (excuse my poor expression) between a client and server and for anything meaningful to happen, they must both speak the same language. From that perspective, a client is indirectly coupled (only) to services that speak its media type(s) and vice versa. Where this gets interesting is that it seems to me that for every standard media type that exists, clients (and servers) have been hard wired to understand the media type in the context that the media type was created for. Browsers were written to understand HTML for a purpose. Sure, I can use HTML (or XML or Atom) to represent anything I want but does that mean that just because my service and browser and client agree on text/html that my browser will be able to do anything meaningful with the representation if the semantics have nothing to do with display (for example)? I don't think so. 
This is why I would still have to document (out of band) the semantics contained in my representation even if I used a standard media type. Now there are positives, such as the browser possibly being able to render aspects of the representation, etc., but that's not the major point at the moment. As always, I would love to be corrected. :) Eb
On Thu, Jan 14, 2010 at 5:28 AM, Eb <amaeze@...> wrote: > Hello Andrew - > > > On Wed, Jan 13, 2010 at 11:05 PM, Andrew Wahbe <andrew.wahbe@...>wrote: > >> >>> Sure, but you are basically saying that the media type forms the contract >> right? And I what I'm trying to get at here is that the media type is the >> client's contract not the service's. In the customer support example above, >> both VoiceXML and HTML are mandated by the respective clients. The service >> is conforming to those media types in order to be able to interact with >> those clients. The client is not coupled to the service at all -- just the >> media type, URI and HTTP. The service has the freedom to support whatever >> representations it chooses -- it is not coupled to any one client's media >> type. >> > > > While I think I agree with you (to an extent), this does suggest that > clients are built in isolation which I do not believe is the case in > practice. Well, let's limit our set of examples to systems widely agreed to be RESTful. It's certainly that way for HTML browsers. It is that way for "pure" AtomPub clients *designed for content publishing* (where the content is HTML or another standard format and there are no service-specific extensions). It is also that way for VoiceXML browsers (though I'm not sure there is wide agreement that they are RESTful; I tend to think that is because not enough people know about them). This isn't the case with many M2M clients and services -- there the service usually comes first, or the client and service are built at the same time. But here we also have folks like Roy and others routinely poking holes in the services and declaring they aren't RESTful. And we also have folks reaching the conclusion that there must be some fundamental limitations of M2M services because they can't achieve the same client-server decoupling as we see in the Web. 
I'm proposing that there is no fundamental limitation in M2M services -- system designers are just going about it the wrong way because they are designing their media types around the services instead of the clients. If they instead started with a specific client in mind, and designed the media type around the client, they can achieve the same level of client-service decoupling as in the HTML web. This isn't just my opinion -- I can point to an (in-progress) W3C XML format for M2M RESTful systems that is designed in this way, and has many implementations shipping today from multiple vendors that don't depend on any specific service. Anyone can design a service in that format that can do anything expressible in the language. This is CCXML: http://www.w3.org/TR/ccxml/ As I've said many times on this list -- you really ought to have a look at CCXML. It's not the best example of a RESTful markup language possible (e.g. like HTML it only uses GET and POST, etc.) but there's a lot to be learned from it. And most importantly, it stands as a counterexample to the assertion that M2M clients are fundamentally different from "human driven" clients when it comes to client-service decoupling. Or maybe I'm way off base and just confused about this... Don't know -- I haven't heard an argument to make me change my mind yet, but I'd genuinely like to hear it if it's out there. > Maybe the mistake is that we should look at what capabilities the client > needs to have and then use (or create a new) a media type that sufficiently > expresses those capabilities. I see media type as the "language spoken" > (excuse my poor expression) between a client and server and for anything > meaningful to happen, they must both speak the same language. From that > perspective, a client is indirectly coupled (only) to services that speak > its media type(s) and vice versa. > Absolutely, HTML (and VoiceXML) are designed around the client's needs and capabilities -- not a service's. 
The web browser is "coupled" to the entire set of services that are expressible in HTML -- that is a big set. It's not coupled to any one service. > Where this gets interesting is that it seems to me that for every standard > media type that exists, clients (and servers) have been hard wired to > understand the media type in the context that the media type was created > for. Browsers were written to understand HTML for a purpose. Sure, I can > use HTML (or XML or Atom) to represent anything I want but does that mean > that just because my service and browser and client agree on text/html that > my browser will be able to do anything meaningful with the representation if > the semantics have nothing to do with display (for example)? I don't think > so. This is why I would still have to document (out of band) the semantics > contained in my representation even if I used a standard media type. Now > there are positives, such that the browser could possibly render aspects of > the representation etc etc, but that's not the major point at the moment. > > Yup... if you are using a "standard" media type in a "non-standard" way then you will forfeit interoperability. And this is where lots of folks are going wrong with Atom -- using it to do something other than publish and syndicate content. Sure, you can introduce extensions to extend what a media type can do, but if those extensions are "service-specific" then you won't get general interoperability or client-service decoupling. > As always, I would love to be corrected. :) > > Me too! ;-) > > Eb > Regards, Andrew
On Wed, Jan 13, 2010 at 4:46 PM, Jørn Wildt <jw@...> wrote: > So when a media type is designed around a service rather than a type of >> client, I question if the result can be called REST. For example, a banking >> service that spits out a JSON format that simply serializes the account and >> transaction data structures used internally to represent the service's >> resources. i.e. if you aren't targeting a specific "type" of client by >> translating to that client's media type are you violating the constraints of >> REST? >> > > I like this formulation of the problem. Yes, the service is sort-of > targeting itself instead of it's users. But are there any alternatives? > > It is as if the smaller your audience is, the more are you violating REST. > Example: the detailed representation of a egyptic scarabae collection - > there's probably not many who are going to consume that. Is it not RESTful > to have a special XML (maybe even binary) representation of that? > Well, is the concept of an egyptic scarabae collection native to the client, the service, or both? I'm arguing to only put concepts truly important to the client in your media type. Represent things from the client's viewpoint, not the service's. Don't worry about over-customizing things for a certain type of client, because a service has the freedom to support multiple representation formats and thereby increase its reach to more types of clients. > There are people on this list who argue that representaions must be > accepted as standards in order to call the service RESTful. Considering the > above thoughts they have a good point: it would not matter if the audience > for a representation was big or small, as long as it has a standard > representation. Then banking clients works for all banks and scarabae > collectors can browse any scarabae collection on the net. > I'm not sure if standardization is exactly the same thing or not... it's a tough one. 
For example, you could standardize an RPC interface couldn't you? I think that the issues here are modifiability, evolvability, reusability etc. and they are a function of what you are standardizing, not whether the format/interface is standardized or not. > > Maybe we could simplify the problem a bit: 1) we cannot expect each and > every client to know all standard media-types, and 2) some media-types, like > the scarabae collection, are just too specialized to get accepted as a > standard media-type. So what if a service could be categorized as RESTful > *with respect to a certain domain*? > Well, it seems to me a RESTful "domain" is always associated with the media type and client. For the HTML web, the domain is (primarily visual) interactive information presentation. For the VoiceXML web it is (primarily speech-driven) interactive information presentation. For the Atom(Pub) web it is content syndication and publishing. A single service can be targeted to multiple domains by supporting multiple media types. Regards, Andrew
On Thu, Jan 14, 2010 at 10:40 AM, Andrew Wahbe <andrew.wahbe@...>wrote: > > Yup... if you are using a "standard" media type in a "non-standard" way > then you will forfeit interoperability. And this is where lots of folks are > going wrong with Atom -- using it to do something other than publish and > syndicate content. Sure you can introduce extensions to extend what a media > type can do, but if those extensions are "service-specific" then you won't > get general interoperability or client-service decoupling. > > Yeah, I agree with this, which is why I'm never satisfied with the generic recommendation to use current media types (apart from existing tooling benefits, which is not really the point here) during design.
On Wed, Jan 13, 2010 at 4:09 PM, Subbu Allamaraju <subbu@...> wrote: > > On Jan 13, 2010, at 12:55 PM, Jan Algermissen wrote: > > Say, the server offers books and CDs with media types > application/vnd.book.myformat and application/vnd.cd.myformat. The > task of an ops engineer is to quickly come up with daily charts > showing requests/day for books and CDs, by sifting through access > logs. It is easy to implement this when the URIs used for these > types of resources have stable patterns. > > Let's say, you now want to run all CD sales (POST requests) through > a bigger box. URI patterns rule again. > > Both these are solvable based on media types, but when I design my > application such that some key factors are reflected in URIs (and a > few known media types), I get much more mileage out of the HTTP > toolstack. If I start folding these into an elaborate media type > scheme, I will be fighting with the toolstack to get it to work for > me. In the long run, it may even cost more to maintain such a > system. I am a fan of domain-specific media types. However, I have never implemented a system that had just one URL and figured everything out from the MIME type, which seems like what you are arguing against. Every resource (each CD or book in your example) would have its own URI, so routing and reporting on a per-resource basis is still straightforward. If the service implementer chooses to implement the service in such a way that *it* can infer information about the requests from the URI, that is great. (I do it all the time.) But clients and intermediaries outside of the service implementer's control should never be expected to understand such implications. > In other words, media types (along with URIs, content encodings, > charset params etc.) keep representation visible to the protocol and > the toolstack. But that does not necessarily follow that media types > must be used to determine whether an incoming XML document is a book > or a CD. 
> There are other (simpler) ways to determine such things. Media types don't just tell you the syntax (XML flavor, etc.) of the representation, they also indicate the semantics. Even if a service chooses to have a separate URI for every representation, media types would still be useful. An exclusively-Atom client that makes a request, with 'Accept: application/atom+xml', against an HTML-only blog resource should get redirected to the Atom representation resource. (Or perhaps a '406 Not Acceptable' response with an 'alternate' link header pointing to the Atom representation resource.) Using domain-specific media types allows such behavior, while a generic media type approach precludes it. Using both URIs and media types in concert allows clients to continuously operate on constantly evolving services, and services to evolve quite aggressively without breaking existing clients. Peter
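[Editorial sketch.] The 406-with-alternate behaviour described above can be illustrated roughly as follows. This is not any particular framework's API; the handler shape, the URIs, and the Link-header formatting are all assumptions for illustration:

```python
def handle_get(accept, served_type, body, alternates):
    """Serve `body` if the client accepts `served_type`; otherwise answer
    406 with 'alternate' Link headers pointing at sibling resources.

    alternates: media type -> URI of a separate resource for that type
    (e.g. the Atom representation of an otherwise HTML-only blog).
    """
    # Very naive Accept parsing: strip parameters, ignore q-values.
    acceptable = {part.split(";")[0].strip() for part in accept.split(",")}
    if served_type in acceptable or "*/*" in acceptable:
        return 200, {"Content-Type": served_type}, body
    # Nothing acceptable here: hand back machine-followable pointers
    # to the sibling resources instead of a dead end.
    links = ", ".join('<%s>; rel="alternate"; type="%s"' % (uri, mtype)
                      for mtype, uri in sorted(alternates.items()))
    return 406, {"Link": links}, None
```

An exclusively-Atom client hitting the HTML resource then receives a link it can follow to the Atom resource, which is what makes the domain-specific media type pull its weight.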
On Jan 14, 2010, at 12:09 AM, Subbu Allamaraju wrote: > > On Jan 13, 2010, at 12:55 PM, Jan Algermissen wrote: > >> Subbu, >> >> On Jan 13, 2010, at 7:29 PM, Subbu Allamaraju wrote: >> >>> However, once the application requires any other party (proxies, >>> CDNs, monitoring tools, log file analyzers etc.) understand such >>> media types, this model starts to fall apart. All such tools will >>> be more than happy to oblige URI patterns and not media types. >>> This is not a media type problem, but a potential reality that >>> Restbucks Inc. may need to account for. >> >> >> Can you provide an example of the situation that made you change >> your mind? > > Say, the server offers books and CDs with media types application/ > vnd.book.myformat and application/vnd.cd.myformat. The task of an > ops engineer is to quickly come up with daily charts showing > requests/day for books and CDs, by sifting through access logs. It > is easy to implement this when the URIs used for these types of > resources have stable patterns. > > Let's say, you now want to run all CD sales (POST requests) through > a bigger box. URI patterns rule again. > > Both these are solvable based on media types, but when I design my > application such that some key factors are reflected in URIs (and a > few known media types), I get much more mileage out of the HTTP > toolstack. If I start folding these into an elaborate media type > scheme, I will be fighting with the toolstack to get it to work for > me. In the long run, it may even cost more to maintain such a system. Hmm, without an intent to argue about the specific example, this sounds a bit far fetched to me. The routing issue could be solved by the server itself by simply providing the clients with different target URLs for CDs and books in the ordering form. 
This would also address the statistics issue (which I think is likely to be better solved by using the more sophisticated analytics engine of the backend system anyway). > > In other words, media types (along with URIs, content encodings, > charset params etc.) keep representation visible to the protocol and > the toolstack. But that does not necessarily follow that media types > must be used to determine whether an incoming XML document is a book > or a CD. > There are other (simpler) ways to determine such things. I think I do not understand why you argue that you cannot have both; why you think you have to trade REST benefits for achieving the goals you describe. The coupling created by making particular URI patterns known to intermediaries for analytics and routing seems to be a heavy trade off. Especially since once it is established and figured out by the developers it will just spread all over the place. I'd go a long way to maintain URI opaqueness. Maybe I am not understanding you correctly, though. Jan > > Subbu -------------------------------------- Jan Algermissen Mail: algermissen@... Blog: http://algermissen.blogspot.com/ Home: http://www.jalgermissen.com --------------------------------------
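[Editorial sketch.] Jan's alternative, letting the server hand out distinct target URIs in the ordering form so that routing works without intermediaries needing shared knowledge of URI patterns, might look like this. The endpoint URIs and function names are invented for illustration:

```python
# The server alone decides which URI each order targets; the client just
# submits to whatever 'action' the form representation carries.  URIs
# stay opaque to the client, yet the ops team can still route on them.
ORDER_ENDPOINTS = {
    "cd": "/orders/cd",      # e.g. routed through the bigger box
    "book": "/orders/book",
}

def order_form(item_id, category):
    """Build an ordering-form representation with the target URI baked in."""
    return {
        "item": item_id,
        "method": "POST",
        "action": ORDER_ENDPOINTS[category],
    }
```

The client never computes a URI; it only follows the hypermedia it was given, so the server remains free to rearrange its URI space at any time.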
On Jan 14, 2010, at 1:54 PM, Jan Algermissen wrote: > I think I do not understand why you argue that you cannot have both; why you think you have to trade REST benefits for achieving the goals you describe. The coupling created by making particular URI patterns known to intermediaries for analytics and routing seems to be a heavy trade off. Especially since once it is established and figured out by the developers it will just spread all over the place. I'd go a long way to maintain URI opaqueness. Heavy tradeoff because it makes the business achieve its goals? These are operational requirements that most real-world HTTP services deal with routinely. Subbu
On Jan 14, 2010, at 1:54 PM, Jan Algermissen wrote: > I think I do not understand why you argue that you cannot have both; why you think you have to trade REST benefits for achieving the goals you describe. By the way, please don't take this as an argument for or against these media types. As proven time and again, this is a hard problem for a number of reasons and the community has no clear consensus. I am only trying to point out that we can't let one view of architecture overtake the rest. Designing applications with awareness of how infrastructure works goes a long way. Subbu
On Jan 14, 2010, at 11:41 PM, Subbu Allamaraju wrote: > > On Jan 14, 2010, at 1:54 PM, Jan Algermissen wrote: > >> I think I do not understand why you argue that you cannot have >> both; why you think you have to trade REST benefits for achieving >> the goals you describe. The coupling created by making particular >> URI patterns known to intermediaries for analytics and routing >> seems to be a heavy trade off. Especially since once it is >> established and figured out by the developers it will just spread >> all over the place. I'd go a long way to maintain URI opaqueness. > > Heavy tradeoff because it makes the business achieve its goals? No, heavy trade off because the business goal could very easily (at least as far as your example goes) be achieved without introducing the coupling. > These are operational requirements that most real-world HTTP > services deal with routinely. Might be - but I understand you to be saying those business goals could not be achieved without making use of shared knowledge about URI patterns. I question that. After all, the value of REST lies in the style-induced, guaranteed system properties and the short term price to be paid for that is considerably high. *Maybe* you gain some near time business advantage by breaking constraints but you are working against the long term goal that motivated the use of REST in the first place. I am all against architectural purity for its own sake but the call for business level goals that justify anything is far too often made too quickly. Jan > > Subbu -------------------------------------- Jan Algermissen Mail: algermissen@... Blog: http://algermissen.blogspot.com/ Home: http://www.jalgermissen.com --------------------------------------
--- In rest-discuss@yahoogroups.com, "wahbedahbe" <andrew.wahbe@...> wrote: > So when a media type is designed around a service rather than a type of client, I question if the result can be called REST. For example, a banking service that spits out a JSON format that simply serializes the account and transaction data structures used internally to represent the service's resources. i.e. if you aren't targeting a specific "type" of client by translating to that client's media type are you violating the constraints of REST? I keep coming back to this statement; you really got some thoughts started. Thanks. It also sheds some light on the problems of using a framework like WCF REST for a REST server: In WCF REST the default representation of an object is its standard XML/JSON serialization. It is very easy to just serialize your internal Data-Transfer-Objects and let that be the representation. In this case you really design around your own service instead of your clients. /Jørn
On Fri, Jan 15, 2010 at 6:56 AM, Jørn Wildt <jw@...> wrote: > > > --- In rest-discuss@yahoogroups.com <rest-discuss%40yahoogroups.com>, > "wahbedahbe" <andrew.wahbe@...> wrote: > > So when a media type is designed around a service rather than a type of > client, I question if the result can be called REST. For example, a banking > service that spits out a JSON format that simply serializes the account and > transaction data structures used internally to represent the service's > resources. i.e. if you aren't targeting a specific "type" of client by > translating to that client's media type are you violating the constraints of > REST? > > I keep comming back to this statement, you really got some thoughts > starting. Thanks. > > It also puts some light on the problems of using a framework like WCF REST > for a REST server: In WCF REST the default representation of an object is > it's standard XML/JSON serialization. It is very easy to just serialize your > internal Data-Transfer-Objects and let that be the representation. In this > case you really design around your own service instead of your clients. > > /Jørn > > While I'm not a big fan of WCF, I think this has less to do with the tooling and more to do with connecting the dots between resources and representations and understanding that all of these are for your clients and that how your client(s) will use them should drive design decisions. So if my client wants the account and transaction data structures (as Andrew alludes to), and that's what it's given, then I don't see what's necessarily wrong with that. Eb
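[Editorial sketch.] The contrast Jørn and Eb are discussing can be made concrete. Below, the first function shows the default-serialization habit (the internal DTO leaks straight out), while the second designs the representation around the client: field names, display values, and a link chosen for the consumer. Every name here is invented for illustration, not taken from WCF or any real banking API:

```python
import dataclasses
import json

@dataclasses.dataclass
class AccountDTO:
    """Internal data-transfer object, shaped by the service's own needs."""
    acct_no: str
    bal_cents: int
    branch_code: str  # internal detail no client asked for

def serialize_dto(dto):
    """The default-framework habit: internals leak straight out."""
    return json.dumps(dataclasses.asdict(dto))

def client_representation(dto):
    """A representation designed for the client: client-facing names,
    display-ready values, and a hypermedia link to follow next."""
    return json.dumps({
        "account": dto.acct_no,
        "balance": "%.2f" % (dto.bal_cents / 100),
        "links": {"transactions": "/accounts/%s/transactions" % dto.acct_no},
    })
```

The second form costs a translation step, but the service can now change `AccountDTO` freely without breaking clients, which is exactly the decoupling the thread is after.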
Jim Webber wrote: > > > Hey Eb, > > > I like this point. From a reach perspective, we should think more of the > > other agents/intermediaries that would have no clue on how to handle this > > media type (if they needed too). Regardless, the receiver (in a lot of > > cases) will need to understand the extensions to an existing media type. > > If a client or intermediary doesn't understand a media type then all > bets are off - the client or intermediary doesn't understand the > service's contract. > > Some clients know about lots of media types (e.g. browsers) some clients > know about few media types (e.g. Restbucks systems). That's the essence > of reach. It's not just that they know or don't know, it's that every now and then an intermediary will block vnd content; this is not unusual in mobile networks. Maybe this is like PUT/DELETE 5 years ago and it will come to pass that vnd types will be allowed to travel, I don't know. In the meantime the workaround is usually to use a generic media type like application/xml. I imagine there are implications to the notion of media types driving contracts when that is done - the test is whether changing the media type in HTTP to a generic type makes an actual difference (if not, I imagine it's a point-to-point integration dressing up as REST). My observation is that something like RDF could work better than custom media types, at the cost of extra abstraction, since RDF can readily support semantic information in the data that can drive a contract. Bill
Bill, can you point me to some references about proxies not allowing app/vnd.* ? It would be quite useful to get those scenarios documented if they're not already, and a summary search hasn't returned anything. -- S > To: jim@... > CC: rest-discuss@yahoogroups.com > From: bill@dehora.net > Date: Fri, 15 Jan 2010 12:56:57 +0000 > Subject: Re: [rest-discuss] about rel and HATEOAS (theoretical question) > > Jim Webber wrote: > > > > > > Hey Eb, > > > > > I like this point. From a reach perspective, we should think more of the > > > other agents/intermediaries that would have no clue on how to handle this > > > media type (if they needed too). Regardless, the receiver (in a lot of > > > cases) will need to understand the extensions to an existing media type. > > > > If a client or intermediary doesn't understand a media type then all > > bets are off - the client or intermediary doesn't understand the > > service's contract. > > > > Some clients know about lots of media types (e.g. browsers) some clients > > know about few media types (e.g. Restbucks systems). That's the essence > > of reach. > > It's not just that they know or don't know, it's that every now and then > an intermediary will block vnd content; this is not unusual in mobile > networks. > > Maybe this is like PUT/DELETE 5 years ago and it will come to pass that > vnd types will be allowed to travel, I don't know. In the meantime the > workaround is usually to use a generic media type like application/xml. > I imagine there are implications to the notion of media types driving > contracts when that is done - the test is whether changing the media > type in HTTP to a generic type makes an actual difference (if not I > imagine it's a point to point integration dressing up as REST). > > My observation is that something like RDF could work better that custom > media types at the cost of extra abstraction since RDF can readily > support semantic information in the data that can drive a contract. 
> > Bill
Recently, I've been thinking about how a coding framework or library can influence the way developers implement applications. What would a coding environment look like if it was meant to encourage results that followed a particular _architectural_ style (not programming style)? IOW, is there a way to craft a framework that constrains developers in ways that results in a REST-ful implementation of the application? I did some digging, but have yet to find any writing on this topic. Here are some "off-the-top-of-my-head" items.

For example, a framework might exhibit these REST-like traits:
- there is a clear separation of concerns between resource identifiers, resources, and representations
- developers must define a resource as the public application interface
- the Uniform Interface is enforced (e.g. those methods are the only public members exposed for a resource)
- developers must always associate one or more representation formats with a resource and/or resource method before the implementation is valid
- there is no way to define and use server-side session state objects

Some HTTP-specific traits might be:
- support for content negotiation is "baked-in"
- support for conditional requests is "baked-in" and automatic
- RPC-like implementation patterns (e.g. gateway URIs) are somehow difficult to implement or are flagged as invalid

Any comments? Is this line of thinking pure folly? Old hat? Already resolved sufficiently?

mca http://amundsen.com/blog/
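[Editorial sketch.] As a thought experiment, the first few traits above could be enforced mechanically at class-definition time. The sketch below is purely illustrative (every name is made up, and it is not modeled on any real framework): a handler class is rejected if it exposes methods outside the uniform interface, or if it registers no representation format.

```python
# Methods a resource handler is allowed to expose publicly.
UNIFORM = {"get", "put", "post", "delete", "head", "options"}

class Resource:
    representations = ()  # media types this resource can emit

    def __init_subclass__(cls):
        # Reject handlers whose public surface strays from the uniform
        # interface (trait: Uniform Interface is enforced).
        public = {name for name, attr in vars(cls).items()
                  if callable(attr) and not name.startswith("_")}
        extra = public - UNIFORM
        if extra:
            raise TypeError("non-uniform methods exposed: %s" % sorted(extra))
        # Reject handlers that declare no representation format
        # (trait: representations must be associated before valid).
        if not cls.representations:
            raise TypeError("%s registers no representation formats"
                            % cls.__name__)
```

A developer who tries to add a `transfer_funds` method, or forgets to declare a media type, gets a `TypeError` the moment the class is defined rather than an un-RESTful service at runtime.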
JAX-RS (Jersey) does a pretty good job IMO. Python's Piston on Django looks promising (...and that's the limit of my experience with REST frameworks I've used in anger). On Fri, Jan 15, 2010 at 12:27 PM, mike amundsen <mamund@...> wrote: > Recently, I've been thinking about how a coding framework or library > can influence the way developers implement applications. What would a > coding environment look like if it was meant to encourage results that > followed a particular _architectural_ style (not programming style). > > IOW, is there a way to craft a framework that constrains developers in > ways that results in a REST-ful implementation of the application? > > I did some digging, but have yet to find any writing on this topic. > > Here are some "off-the-top-of-my-head" items: > > For example, a framework might exhibit these REST-like traits: > - there is a clear separation of concerns between resource > identifiers, resources, and representations > - developers must define a resource as the public application interface > - the Uniform Interface is enforced (e.g. those methods are the only > public members exposed for a resource) > - developers must always associate one or more representation formats > with a resource and/or resource method before the implementation is > valid > - there is no way to define and use server-side session state objects > > Some HTTP-specific traits might be: > - support for content negotiation is "baked-in" > - support for conditional requests is "baked-in" and automatic > - RPC-like implementation patterns (e.g. gateway URIs) are somehow > difficult to implement or are flagged as invalid > > Any comments? Is this line of thinking pure folly? old hat? already > resolved sufficiently? > > mca > http://amundsen.com/blog/ > > > ------------------------------------ > > Yahoo! Groups Links > > > >
Noah: I am familiar w/ a handful of environments that are thought to be HTTP/REST-friendly. I am wondering what it is about these frameworks that give people reason to make that claim (true or false). IOW, what are the abstract traits that make them so? Are these traits shared across libraries/languages/platforms? Maybe a way to start is to document the REST-ful features of each of these libraries and draw conclusions. mca http://amundsen.com/blog/ On Fri, Jan 15, 2010 at 15:47, Noah Campbell <noahcampbell@gmail.com> wrote: > JAX-RS (Jersey) does a pretty good job IMO. Python's Piston on Django > looks promising (...and that's the limit of my experience with REST > frameworks I've used in anger). > > On Fri, Jan 15, 2010 at 12:27 PM, mike amundsen <mamund@...> wrote: >> Recently, I've been thinking about how a coding framework or library >> can influence the way developers implement applications. What would a >> coding environment look like if it was meant to encourage results that >> followed a particular _architectural_ style (not programming style). >> >> IOW, is there a way to craft a framework that constrains developers in >> ways that results in a REST-ful implementation of the application? >> >> I did some digging, but have yet to find any writing on this topic. >> >> Here are some "off-the-top-of-my-head" items: >> >> For example, a framework might exhibit these REST-like traits: >> - there is a clear separation of concerns between resource >> identifiers, resources, and representations >> - developers must define a resource as the public application interface >> - the Uniform Interface is enforced (e.g. 
those methods are the only >> public members exposed for a resource) >> - developers must always associate one or more representation formats >> with a resource and/or resource method before the implementation is >> valid >> - there is no way to define and use server-side session state objects >> >> Some HTTP-specific traits might be: >> - support for content negotiation is "baked-in" >> - support for conditional requests is "baked-in" and automatic >> - RPC-like implementation patterns (e.g. gateway URIs) are somehow >> difficult to implement or are flagged as invalid >> >> Any comments? Is this line of thinking pure folly? old hat? already >> resolved sufficiently? >> >> mca >> http://amundsen.com/blog/ >> >> >> ------------------------------------ >> >> Yahoo! Groups Links >> >> >> >> >
You might check out this post from Erik Wilde: http://dret.typepad.com/dretblog/2009/05/rest-programming-toolbox-requirements.html For me, the bare minimum includes:
- support for GET, PUT, POST, DELETE HTTP methods (at least ... HEAD, OPTIONS nice, too)
- for GET requests, I want to know what media type is preferred (combination of looking at Accept header and/or file extension, etc.)
- for PUT/POST I want to be able to quickly and easily know the incoming Content-Type
- an easy way to parse the requested URL (w/ regex or named sections, etc.)
- the ability to dispatch to a handler function by any combination of the above
- tools for creating representations in the most common media types: HTML (a template language), JSON, Atom, RDF, etc., and serving the proper one based on requested type

In PHP, I don't think there is an obvious option (we've built our own RESTful classes for Zend Framework: http://github.com/pkeane/cola-zend), what I know of RoR seems to meet the criteria, Django seems pretty nicely RESTful, as does the Google App Engine "WebApp." --peter On Fri, Jan 15, 2010 at 2:27 PM, mike amundsen <mamund@...> wrote: > > > Recently, I've been thinking about how a coding framework or library > can influence the way developers implement applications. What would a > coding environment look like if it was meant to encourage results that > followed a particular _architectural_ style (not programming style). > > IOW, is there a way to craft a framework that constrains developers in > ways that results in a REST-ful implementation of the application? > > I did some digging, but have yet to find any writing on this topic. > > Here are some "off-the-top-of-my-head" items: > > For example, a framework might exhibit these REST-like traits: > - there is a clear separation of concerns between resource > identifiers, resources, and representations > - developers must define a resource as the public application interface > - the Uniform Interface is enforced (e.g. 
those methods are the only > public members exposed for a resource) > - developers must always associate one or more representation formats > with a resource and/or resource method before the implementation is > valid > - there is no way to define and use server-side session state objects > > Some HTTP-specific traits might be: > - support for content negotiation is "baked-in" > - support for conditional requests is "baked-in" and automatic > - RPC-like implementation patterns (e.g. gateway URIs) are somehow > difficult to implement or are flagged as invalid > > Any comments? Is this line of thinking pure folly? old hat? already > resolved sufficiently? > > mca > http://amundsen.com/blog/ > >
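Peter's bare-minimum list above (method support, URL parsing with named sections, dispatch by any combination) can be sketched in a few lines. This is an illustrative sketch only, not any particular framework's API; the names (ROUTES, route, dispatch, the item handlers) are all made up for the example:

```python
import re

# Hypothetical routing table: (HTTP method, compiled path regex) -> handler.
ROUTES = []

def route(method, pattern):
    """Register a handler for an HTTP method and a URL pattern with
    named sections, e.g. /items/(?P<item_id>[0-9]+)."""
    def decorator(func):
        ROUTES.append((method.upper(), re.compile(pattern + "$"), func))
        return func
    return decorator

def dispatch(method, path):
    """Find the first route matching both method and path, then call
    its handler with the named groups captured from the path."""
    for m, regex, handler in ROUTES:
        match = regex.match(path)
        if m == method.upper() and match:
            return handler(**match.groupdict())
    return "404 Not Found"

@route("GET", r"/items/(?P<item_id>[0-9]+)")
def get_item(item_id):
    return "item %s" % item_id

@route("DELETE", r"/items/(?P<item_id>[0-9]+)")
def delete_item(item_id):
    return "deleted %s" % item_id
```

For example, dispatch("GET", "/items/42") routes to get_item with item_id="42", while an unregistered method on the same URL falls through to the 404 case.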
I know you know about it already but... OpenRasta enforces registration of resource URIs, resources and media types (through what we call codecs), and enforces an absolute separation between them. OpenRasta is kind of a front-controller framework with http methods matching class methods on resource handlers, and separation from resources and the components that render them (codecs). In other words, multiple URIs may be available for one and only one resource, for which one or many handlers may return such resources, and one to many codecs may provide the various media types a resource supports. It also lets you do HTTP content-type and content-language negotiation out of the box, and charset negotiation in the next version. We don't do caching semantics yet, though; that's for the next version. I'm waiting for users to come back with scenarios so we can implement an API that makes sense to them, rather than one tailored only at being HTTP compliant (although of course that's also a target). -- S > To: rest-discuss@yahoogroups.com > From: mamund@... > Date: Fri, 15 Jan 2010 15:27:20 -0500 > Subject: [rest-discuss] How can a framework/library encourage REST-ful development? > <snip>
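The out-of-the-box content negotiation mentioned above (and Peter's "serving the proper one based on requested type") boils down to ranking the server's supported media types by the q-values in the client's Accept header. A minimal sketch, deliberately simplified: it ignores wildcard ranges like */* and all Accept parameters other than q:

```python
def parse_accept(header):
    """Parse an Accept header into (media_type, q) pairs.
    Simplified: ignores '*' wildcard ranges and non-q parameters."""
    prefs = []
    for part in header.split(","):
        fields = part.strip().split(";")
        media_type = fields[0].strip()
        q = 1.0  # per HTTP, a missing q parameter means q=1
        for param in fields[1:]:
            name, _, value = param.strip().partition("=")
            if name == "q":
                try:
                    q = float(value)
                except ValueError:
                    q = 0.0
        prefs.append((media_type, q))
    return prefs

def negotiate(accept_header, supported):
    """Return the supported media type with the highest client
    preference, or None if nothing acceptable was offered."""
    best, best_q = None, 0.0
    for media_type, q in parse_accept(accept_header):
        if media_type in supported and q > best_q:
            best, best_q = media_type, q
    return best
```

So for Accept: application/xml;q=0.9, application/json against a server that supports JSON and HTML, negotiation picks application/json; a framework would then hand the resource to the matching codec/renderer.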
Hi Peter, Do you mind if we pick nits on this requirement? "an easy way to parse the requested URL (w/ regex or named sections, etc.)" I think binding a URL, and the semantics of that URL, to an implementation should be discouraged. Thoughts? -Noah On Fri, Jan 15, 2010 at 1:33 PM, Peter Keane <pkeane@...> wrote: > <snip>
<snip> You might check out this post from Erik Wilde: </snip> yep, i commented there already. The list of features you provide is pretty close to those mentioned in a few threads here and on the web. Seems that many of these lists focus on the day-to-day details of working with HTTP and being REST-ful along the way. That's cool. I need to do more thinking about _why_ these are listed in the context of REST and what, if anything, can be said about the style itself apart from the protocol in use. I've found a handful of papers on matching software architecture to programming, but nothing yet on _network_ architecture and programming. I'm still digging. mca http://amundsen.com/blog/ On Fri, Jan 15, 2010 at 16:33, Peter Keane <pkeane@mail.utexas.edu> wrote: > <snip>
Thanks for the feedback on my initial thoughts about the REST style and programming frameworks. In order to explore this more w/o cluttering up this list, I started a googlecode project/wiki [1] and related group [2]. I'd be happy to grant anyone interested in participating contributor permissions to the project. Just ping me offline (mamund AT yahoo DOT com) and I'll add you to the project and group. [1] http://code.google.com/p/implementing-rest/ [2] http://groups.google.com/group/implementing-rest mca http://amundsen.com/blog/ On Fri, Jan 15, 2010 at 16:48, mike amundsen <mamund@yahoo.com> wrote: > <snip>
On Fri, Jan 15, 2010 at 3:45 PM, Noah Campbell <noahcampbell@...> wrote: > Hi Peter, > > Do you mind if we pick nits on this requirement? > > "an easy way to parse the requested URL (w/ regex or named sections, etc.)" > > I think the binding of a URL and the semantics of that URL to an > implementation should be discouraged. > > Thoughts? > > Noah- It's a valid point, and one that I have given some thought to. Opaqueness of the URL is valuable if/when it encourages use of HATEOAS (and the myth that REST == "pretty URLs" is hard but important to debunk). I guess I'd have to say that it depends on what you mean by "implementation" and how leaky your implementation abstractions are. I can imagine a nicely abstracted system for which such "semantic" URLs could be quite appropriate (as long as hypertext drives the application state in typical use of the web app). Here's a bit of wisdom from Roy F. on the matter: http://tech.groups.yahoo.com/group/rest-discuss/message/3232 Either way, the server needs to do something useful w/ the URL and thus needs a way to "understand" it. --peter > -Noah > > On Fri, Jan 15, 2010 at 1:33 PM, Peter Keane <pkeane@...> wrote: > > <snip>
I wonder about the importance of opacity to all parties involved in the request/response chain. I can see why clients should assume opacity and I can see why intermediaries should assume opacity. But is it important that the origin server treat the URI as opaque? mca http://amundsen.com/blog/ On Fri, Jan 15, 2010 at 18:25, Peter Keane <pkeane@...> wrote: > <snip>
We [1] are trying to implement most of the things people mention around here as being restful - adopting them where most people find a consensus, and leaving some options where they don't, but picking one behavior as the default. Measured against both Erik's post and Peter's list, it only partially supports Atom/AtomPub so far; no RSS or RDF (language tag processing) out of the box, but it's coming: http://restfulie.caelumobjects.com/restfulie_features Are we on the right path? [1] http://restfulie.caelumobjects.com Regards
On Jan 16, 2010, at 12:32 AM, mike amundsen wrote: > I can see why clients should assume opacity and I can see why > intermediaries should assume opacity. But is it important that the > origin server treat the URI as opaque? No, of course not. In fact - how would you implement a Web application without dissecting the URI to dispatch to the corresponding resource handler and to serve up the right representation? Jan -------------------------------------- Jan Algermissen Mail: algermissen@... Blog: http://algermissen.blogspot.com/ Home: http://www.jalgermissen.com --------------------------------------
<snip> In fact - how would you implement a Web application without dissecting the URI to dispatch to the corresponding resource handler and to serve up the right representation? </snip> And this is where I think some folks get confused. Since most folks spend time building _servers_, they get used to the notion that the URI is laden with important information; they focus on the importance of constructing "pretty" URIs, etc. Then, when talking here about these details, there is a wave of "oh, but URIs should be treated as opaque", etc. It is for this reason that, when I talk about URIs in the context of REST/HTTP, I use the phrase "clients and intermediaries should treat the URIs constructed by servers as opaque identifiers." mca http://amundsen.com/blog/ On Sat, Jan 16, 2010 at 06:31, Jan Algermissen <algermissen1971@...> wrote: > <snip>
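That asymmetry can be made concrete: the origin server dissects URIs to dispatch, while a hypermedia client only ever follows links the server handed it, never constructing them from patterns. A client-side sketch; the JSON link format here ("links" entries with "rel"/"href") and the order resource are made up for illustration, not a standard media type:

```python
import json

def find_link(representation, rel):
    """Return the href of the first link with the given relation.
    The client never inspects or assembles the URI - it just follows it."""
    for link in representation.get("links", []):
        if link.get("rel") == rel:
            return link["href"]
    return None

# A server-minted representation; the client treats the hrefs as opaque.
order = json.loads("""
{
  "status": "open",
  "links": [
    {"rel": "self",   "href": "/orders/8f3a2c"},
    {"rel": "cancel", "href": "/orders/8f3a2c/cancel"}
  ]
}
""")
```

Here find_link(order, "cancel") yields whatever URI the server chose, so the server remains free to restructure its URI space without breaking this client.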
I know, right!?! Now, allowing intermediaries to break the opaqueness isn't a bad thing and I wouldn't want to take that away, but the exercise of treating URIs as opaque identifiers is helpful in understanding HATEOAS. -Noah On Sat, Jan 16, 2010 at 3:31 AM, Jan Algermissen <algermissen1971@...> wrote: > <snip>
Noah, hmm - what is your point exactly? On Jan 16, 2010, at 11:49 PM, Noah Campbell wrote: > I know, right!?! > > Now, allowing intermediaries to break the opaqueness isn't a bad thing Why do you think it is not a bad thing to introduce coupling between servers and intermediaries around the URI structure? Once you do that, the server loses control over its URI space. > and I wouldn't want to take that away, but the exercise of treating > URIs as opaque identifiers is helpful in understanding HATEOAS. Hmm - can it be you misunderstood what I said below? My sentence was targeted towards the 'But is it...' part. Jan > -Noah > > On Sat, Jan 16, 2010 at 3:31 AM, Jan Algermissen <algermissen1971@...> wrote: >> <snip> -------------------------------------- Jan Algermissen Mail: algermissen@... Blog: http://algermissen.blogspot.com/ Home: http://www.jalgermissen.com --------------------------------------
Jan Algermissen wrote: > Noah, > > hmm - what is your point exactly? > > On Jan 16, 2010, at 11:49 PM, Noah Campbell wrote: > >> I know, right!?! >> >> Now, allowing intermediaries to break the opaqueness isn't a bad thing >> > > Why do you think it is not a bad thing to introduce coupling between > servers and intermediaries around the URI structure? Once you do that, > the server loses control over its URI space. > > Some types of reverse proxy mechanisms need to be configured against URI patterns though, no? - Mike
I don't think it's a bad thing because intermediaries can leverage the URI space that you've designed. If you make it completely opaque, then how can you shape traffic based on the URL? For example, splitting static content from application content using /app/... and /static/... in your links can be exploited to make the application scale. I don't think this is outright wrong. On Sat, Jan 16, 2010 at 4:20 PM, Jan Algermissen <algermissen1971@mac.com> wrote: > <snip>
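The /static/ vs /app/ split above, seen from the intermediary's side, is just longest-prefix routing over an agreed URI layout. A sketch of that logic; the prefixes and backend names are hypothetical configuration, not any real proxy's syntax:

```python
# Hypothetical edge-proxy routing table: URI prefix -> backend pool.
# This encodes exactly the coupling under discussion: it only works as
# long as the server and the proxy keep honoring the same URI layout.
PREFIX_BACKENDS = [
    ("/static/", "cache-cluster"),   # long-lived, heavily cacheable content
    ("/app/",    "app-servers"),     # dynamic application content
]
DEFAULT_BACKEND = "app-servers"

def pick_backend(path):
    """Choose a backend by the longest matching URI prefix."""
    best, best_len = DEFAULT_BACKEND, 0
    for prefix, backend in PREFIX_BACKENDS:
        if path.startswith(prefix) and len(prefix) > best_len:
            best, best_len = backend, len(prefix)
    return best
```

The trade-off Jan raises is visible in the code: if the origin server renames /static/, this table silently misroutes until whoever owns the proxy updates it, which is why the coupling is cheap only when one party controls both.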
Where is there a schema for the Atom Publishing Protocol (APP)? * I plan to marshal that into Java classes and then use it for serializing application/atomsvc+xml types... -- ------------------------------------------ Felipe Gacho 10+ Java Programmer CEJUG Senior Advisor
On Jan 17, 2010, at 10:55 AM, Felipe Gacho wrote: > where is a schema for the Atom APP ? http://tools.ietf.org/html/rfc5023#section-8.3.1 Jan > > * I plan to marshal that in Java classes and then use it for > serializing application/atomserv+xml types... > > > > -- > ------------------------------------------ > Felipe Gacho > 10+ Java Programmer > CEJUG Senior Advisor > > > ------------------------------------ > > Yahoo! Groups Links > > > -------------------------------------- Jan Algermissen Mail: algermissen@... Blog: http://algermissen.blogspot.com/ Home: http://www.jalgermissen.com --------------------------------------
On Jan 17, 2010, at 2:28 AM, Noah Campbell wrote:

> I don't think it's a bad thing because intermediaries can leverage the
> URI space that you've designed. If you make it completely opaque then
> how can you shape traffic based on the URL.

Yes, right. I thought of that after I hit 'send'.

However, the violation of the constraint usually becomes painfully visible
when the reverse proxy and its configuration are owned by a party different
from the one owning the Web app. Sometimes you cannot change the Web app
because you cannot immediately adjust the proxy config.

But the use case as such is indeed very common and often inevitable. It
would be worthwhile to analyse what makes it so common and which variations
of it exist. For example, the integration of static content (images etc.)
and Web apps is quite a common use case.

> For example, splitting static content from application content using
> the /app/... /static/... in your links can be exploited to make the
> application scale. I don't think this is outright wrong.

Yes, agreed. I guess one aspect here is what part of the URI is being
inspected. I guess it is less of a violation to inspect a general, more
configuration-level part, as opposed to very specific ones such as looking
for all product URIs with even product ID numbers.

There is a relation between how much of the URI is controlled by the Web
app and how much is controlled by the app container configuration. Usually
I'd expect the proxy config to touch those URI parts that are controlled by
the container config.

Jan

--------------------------------------
Jan Algermissen

Mail: algermissen@...
Blog: http://algermissen.blogspot.com/
Home: http://www.jalgermissen.com
--------------------------------------
On Fri, Jan 15, 2010 at 3:27 PM, mike amundsen <mamund@...> wrote:
>
> Recently, I've been thinking about how a coding framework or library
> can influence the way developers implement applications. What would a
> coding environment look like if it was meant to encourage results that
> followed a particular _architectural_ style (not programming style).
>
> IOW, is there a way to craft a framework that constrains developers in
> ways that results in a REST-ful implementation of the application?
>
> I did some digging, but have yet to find any writing on this topic.
>
> Here are some "off-the-top-of-my-head" items:
>
> For example, a framework might exhibit these REST-like traits:
> - there is a clear separation of concerns between resource
> identifiers, resources, and representations
> - developers must define a resource as the public application interface
> - the Uniform Interface is enforced (e.g. those methods are the only
> public members exposed for a resource)

You seem to be equating resource with programming language object, but not
all languages are OO (and in fact I would argue the truly useful and
productive ones are not).

> Any comments? Is this line of thinking pure folly? old hat? already
> resolved sufficiently?

Perhaps take a look at webmachine, starting specifically with this page:
<http://bitbucket.org/justin/webmachine/wiki/WebmachineMechanics>
You can also read more by going to the top of the wiki there. Fundamentally
it's Alan Dean's HTTP diagram [1] in framework form.

--steve

[1] <http://webmachine.basho.com/diagram.html>
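Mike's "the Uniform Interface is enforced" trait could be sketched, assuming a Java-style framework, roughly like this (all names here are hypothetical, not any real framework's API):

```java
import java.util.*;

// The framework only ever sees this interface, so a handler cannot grow
// arbitrary public RPC-style methods visible to the dispatcher.
interface Resource {
    String get();                    // GET  -> representation
    void put(String representation); // PUT  -> replace state
}

class Order implements Resource {
    private String state = "<order status=\"pending\"/>";
    public String get() { return state; }
    public void put(String representation) { state = representation; }
}

public class UniformInterfaceDemo {
    public static void main(String[] args) {
        // Dispatch only via the uniform interface: the routing table holds
        // Resource references, never concrete Order ones.
        Map<String, Resource> routes = new HashMap<>();
        routes.put("/order/1", new Order());
        Resource r = routes.get("/order/1");
        r.put("<order status=\"paid\"/>");
        System.out.println(r.get());
    }
}
```

This also illustrates Steve's objection: the sketch quietly equates "resource" with "object", which only works in an OO language.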
Steve: <snip> You seem to be equating resource with programming language object, but not all languages are OO </snip> good point. thanks for the pointer to Webmachine. I've added it to the list on the Project Wiki I started to track frameworks w/ REST traits. http://code.google.com/p/implementing-rest/wiki/RESTFrameworks BTW - if anyone would like to suggest frameworks/content to add here, send me an email off-list. mca http://amundsen.com/blog/ On Sun, Jan 17, 2010 at 02:36, Steve Vinoski <vinoski@...> wrote: > > > On Fri, Jan 15, 2010 at 3:27 PM, mike amundsen <mamund@...> wrote: >> >> >> >> Recently, I've been thinking about how a coding framework or library >> can influence the way developers implement applications. What would a >> coding environment look like if it was meant to encourage results that >> followed a particular _architectural_ style (not programming style). >> >> IOW, is there a way to craft a framework that constrains developers in >> ways that results in a REST-ful implementation of the application? >> >> I did some digging, but have yet to find any writing on this topic. >> >> Here are some "off-the-top-of-my-head" items: >> >> For example, a framework might exhibit these REST-like traits: >> - there is a clear separation of concerns between resource >> identifiers, resources, and representations >> - developers must define a resource as the public application interface >> - the Uniform Interface is enforced (e.g. those methods are the only >> public members exposed for a resource) > > You seem to be equating resource with programming language object, but not > all languages are OO (and in fact I would argue the truly useful and > productive ones are not). >> >> Any comments? Is this line of thinking pure folly? old hat? already >> >> resolved sufficiently? 
I believe the expectation is that once a URI is made public it's
always out there. If you want to move someone from an old URL you can
signal a 3xx redirect to the client. You're not really tied to
another agency's configuration.

This behavior is more a property of HTTP than REST, but that's
probably another discussion.

-Noah

On Sun, Jan 17, 2010 at 4:16 AM, Jan Algermissen
<algermissen1971@...> wrote:
>
> On Jan 17, 2010, at 2:28 AM, Noah Campbell wrote:
>
>> I don't think it's a bad thing because intermediaries can leverage the
>> URI space that you've designed. If you make it completely opaque then
>> how can you shape traffic based on the URL.
>
> Yes, right. I thought of that after I hit 'send'.
>
> However, the violation of the constraint usually becomes painfully visible
> when the reverse proxy and its configuration are owned by a party different
> from the one owning the Web app. Sometimes you cannot change the Web app
> because you cannot immediately adjust the proxy config.
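Noah's 3xx point can be sketched as a tiny redirect table on the origin server. This is a hypothetical illustration (invented paths, status rendered as a string); a real server would send a 301 status line plus a Location header:

```java
import java.util.*;

public class RedirectTable {
    // Hypothetical mapping of retired URIs to their replacements.
    static final Map<String, String> MOVED = new HashMap<>();
    static { MOVED.put("/old/orders/42", "/orders/42"); }

    // Returns "301 <new-location>" for moved URIs, "200" otherwise,
    // so published URIs keep working even after the URI space changes.
    static String respond(String path) {
        String target = MOVED.get(path);
        return target != null ? "301 " + target : "200";
    }

    public static void main(String[] args) {
        System.out.println(respond("/old/orders/42"));
        System.out.println(respond("/orders/42"));
    }
}
```

The design point is that the redirect lives with the origin server, so no other agency's configuration has to change when a URI moves.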
On Jan 17, 2010, at 7:14 PM, Noah Campbell wrote:

> I believe the expectation is that once a URI is made public it's
> always out there.

Yes. 'Cool URIs' are related to this issue.

To me it is a bit disturbing that on the one hand, the server is
supposed to be completely in charge of its own URI space, while at the
same time it must sort of assume that once it publishes a URI it is
likely to end up as a pointer in other systems and therefore should
not change (or be redirected).

> If you want to move someone from an old URL you can
> signal a 3xx redirect to the client. You're not really tied to
> another agency's configuration.
>
> This behavior is more a property of HTTP than REST, but that's
> probably another discussion.

Yes (and likely a good one).

Jan

--------------------------------------
Jan Algermissen

Mail: algermissen@...
Blog: http://algermissen.blogspot.com/
Home: http://www.jalgermissen.com
--------------------------------------
2010/1/17 Jan Algermissen <algermissen1971@...> > > > > On Jan 17, 2010, at 10:55 AM, Felipe Gaúcho wrote: > > > where is a schema for the Atom APP ? > > http://tools.ietf.org/html/rfc5023#section-8.3.1 > > Jan > > > > > > * I plan to marshal that in Java classes and then use it for > > serializing application/atomserv+xml types... > > > > > You might want to take a look at the ROME project, specifically the Propono library: http://wiki.java.net/bin/view/Javawsxml/RomePropono --peter > > > > -- > > ------------------------------------------ > > Felipe Gaúcho > > 10+ Java Programmer > > CEJUG Senior Advisor > > > > > > ------------------------------------ > > > > Yahoo! Groups Links > > > > > > > > -------------------------------------- > Jan Algermissen > > Mail: algermissen@... <algermissen%40acm.org> > Blog: http://algermissen.blogspot.com/ > Home: http://www.jalgermissen.com > -------------------------------------- > > >
Thanks.. yes, Abdera and Rome are always on the radar, but I am trying
something new here.. reading and writing the data directly in the
database, without intermediate frameworks ... I got the Atom feed working
fine now: http://fgaucho.dyndns.org:8080/arena-http/atom

After Jfokus I will check whether adopting the same approach for AtomPub
is worthwhile, or whether it is better to just accept the traditional
frameworks :)

* For general purposes, I believe Abdera is the best choice, but for my
little pet project I will push this database-driven idea a bit more.. and
if I succeed with it, I will come back here with the feedback...

* JAXB is not Relax NG friendly... so eventually it will be faster to
rewrite a schema for AtomPub than to try to fix all the errors during the
xjc compilation.. (or to copy the classes from Rome if it is pure AtomPub)

thanks for all the tips.
On Jan 17, 2010, at 10:25 PM, Felipe Gaúcho wrote:

> JAXB is not Relax NG friendly... so eventually it will be faster to
> rewrite a schema for AtomPub

There are tools that generate XSD from Relax NG! I don't have pointers
at hand; I am using Oxygen for such things (oxygenxml.com).

Jan

--------------------------------------
Jan Algermissen

Mail: algermissen@...
Blog: http://algermissen.blogspot.com/
Home: http://www.jalgermissen.com
--------------------------------------
2010/1/17 Jan Algermissen <algermissen1971@...>: > > On Jan 17, 2010, at 10:25 PM, Felipe Gacho wrote: > >> JAXB is not Relax NG friendly... so eventually it will be faster to >> rewrite a schema for the Atom Pub > > There are tools that generate XSD from Relax NG! Trang (<http://code.google.com/p/jing-trang/>) is one such tool. It has worked quite well for me. Peter
Thanks Peter, it sounds great.. I executed it here and it generated a set of schemas.. later I will check how useful they are :) On Sun, Jan 17, 2010 at 10:43 PM, Peter Williams <pezra@...>wrote: > > > 2010/1/17 Jan Algermissen <algermissen1971@...<algermissen1971%40mac.com> > >: > > > > > On Jan 17, 2010, at 10:25 PM, Felipe Gacho wrote: > > > >> JAXB is not Relax NG friendly... so eventually it will be faster to > >> rewrite a schema for the Atom Pub > > > > There are tools that generate XSD from Relax NG! > > Trang (<http://code.google.com/p/jing-trang/>) is one such tool. It > has worked quite well for me. > > Peter > > -- ------------------------------------------ Felipe Gacho 10+ Java Programmer CEJUG Senior Advisor
On Sun, Jan 17, 2010 at 2:24 PM, Jan Algermissen <algermissen1971@...>wrote: > > > > To me it is bit disturbing that on the one hand, the server is > supposed to be completely in charge of its own URI space while at the > same time, it must sort of assume that once it publishes a URI it is > likely to end up as a pointer in other systems and therefore should > not change (or be redirected). > > > Even within the constraints of the REST architectural style?
> Can you elaborate on the database scenario you plan to work on?

sure.. I will present it next week at Jfokus, a Java conference in
Stockholm, Sweden... I am busy with the presentation slides.. but as soon
as I present my work I will post the link to the PDF here and explain
better what I am doing..

basically, I am modeling Atom in the relational database, and using
annotations to move data directly between the HTTP interface and the
persistence layer, without transformations or copies along the way....
On Jan 18, 2010, at 4:37 AM, Eb wrote:

> On Sun, Jan 17, 2010 at 2:24 PM, Jan Algermissen
> <algermissen1971@...> wrote:
>
>> To me it is a bit disturbing that on the one hand, the server is
>> supposed to be completely in charge of its own URI space while at the
>> same time, it must sort of assume that once it publishes a URI it is
>> likely to end up as a pointer in other systems and therefore should
>> not change (or be redirected).
>
> Even within the constraints of the REST architectural style?

Yes, I think so. Though it seems to contradict the client-server
decoupling objective.

An interesting question would be what criteria to use to differentiate
between URIs that make good bookmarks and those that don't.

Jan

--------------------------------------
Jan Algermissen

Mail: algermissen@...
Blog: http://algermissen.blogspot.com/
Home: http://www.jalgermissen.com
--------------------------------------
--- In rest-discuss@yahoogroups.com, Noah Campbell <noahcampbell@...> wrote: > > JAX-RS (Jersey) does a pretty good job IMO. Python's Piston on Django > looks promising (...and that's the limit of my experience with REST > frameworks I've used in anger). > Sorry, but I don't understand comments like this. I'm sure you meant well, so I hope you won't mind me demanding more rigor. You're not contributing any insight or answering the OP's question. Instead, you are dropping names. Could you please elaborate with a short discussion of how the design of Jersey encourages an architectural style? Ditto for Piston. Otherwise, again, I see it as mere namedropping. Especially as you apparently have experience with "REST frameworks" you dislike, you should be able to form a qualitative if not quantitative analysis of what makes them good or bad.
--- In rest-discuss@yahoogroups.com, Peter Keane <pkeane@...> wrote:
>
> You might check out this post from Erik Wilde:
>
> http://dret.typepad.com/dretblog/2009/05/rest-programming-toolbox-requirements.html
>
> For me, the bare minimum includes:
>
> - support for GET, PUT, POST, DELETE http methods (at least ... HEAD,
> OPTIONS nice, too).
> - for GET requests, I want to know what media type is preferred
> (combination of looking at Accept header and/or file extension, etc.)
> - for PUT/POST I want to be able to quickly and easily know the incoming
> Content-type
> - an easy way to parse the requested URL (w/ regex or named sections, etc.)
> - the ability to dispatch to a handler function by any combination of the
> above
> - tools for creating representations in the most common media types: HTML
> (a template language), JSON, Atom, RDF, etc. and serving the proper one
> based on the requested type.
>
> In PHP, I don't think there is an obvious option (we've built our own
> RESTful classes for Zend Framework: http://github.com/pkeane/cola-zend),
> what I know of RoR seems to meet the criteria, Django seems pretty nicely
> RESTful, as does the Google App Engine "WebApp."
>
> --peter

I think you are conflating too many things here. First, you are conflating
low-level plumbing with higher-level concepts. Second, since REST is about
information resources, any API that doesn't have a strategy for
automatically destroying resources after invalid operations is useless,
stupid, and ugly. This is higher-level than garbage-collected memory
management. Stuff like URI parsing seems silly to me to consider as being
on the same level as those other concerns.

The author is treating the problem monolithically; he is not designing a
media type. Instead, he is exposing the entire state space of a session to
the programmer. How do you impose semantics on that???
Again, real-time object-oriented programming is helpful here, despite what
some on this list have said in response (attacking me unfairly by claiming
I was attempting to re-create "distributed objects", rather than using real
math to back up their arguments, or offering anything more than buzzwords
to justify their position).

Modeling communication in a distributed system using composable
Input/Output Automata sequences is a proven technique. State machines can
thus radically simplify the design of any system. Realizing that in a truly
object-oriented system, built from abstract data objects (ADOs), you cannot
have any notion of history allows you to solve problems more modularly.

In short, yes, you can use the stuff listed above, but don't think it is as
simple as AND-joining all those properties together and you get REST.
That's just dumb and silly.
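For what it's worth, Peter Keane's "dispatch by any combination of method and media type" item, quoted earlier in this thread, can be sketched as follows. This is a deliberately naive, hypothetical illustration (invented handler strings, no real Accept-header parsing), not a claim about how Jersey, Piston, or any other framework does it:

```java
public class MiniDispatch {
    // Pick a handler from the HTTP method plus the preferred media type
    // taken from the Accept header. Real content negotiation also weighs
    // q-values; this sketch only checks for a substring.
    static String handle(String method, String accept, String path) {
        if (method.equals("GET") && accept.contains("application/json"))
            return "json representation of " + path;
        if (method.equals("GET"))
            return "html representation of " + path;
        if (method.equals("PUT"))
            return "stored " + path;
        return "405 Method Not Allowed";
    }

    public static void main(String[] args) {
        System.out.println(handle("GET", "application/json", "/orders/1"));
        System.out.println(handle("DELETE", "*/*", "/orders/1"));
    }
}
```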
Used in anger means I've used these frameworks beyond the superficial hello world example. I actually like both. I can definitely see how my comment would be misinterpreted. -Noah On Mon, Jan 18, 2010 at 9:57 AM, johnzabroski <johnzabroski@...> wrote: > > > --- In rest-discuss@yahoogroups.com, Noah Campbell <noahcampbell@...> wrote: >> >> JAX-RS (Jersey) does a pretty good job IMO. Python's Piston on Django >> looks promising (...and that's the limit of my experience with REST >> frameworks I've used in anger). >> > > > Sorry, but I don't understand comments like this. I'm sure you meant well, so I hope you won't mind me demanding more rigor. > > You're not contributing any insight or answering the OP's question. Instead, you are dropping names. > > Could you please elaborate with a short discussion of how the design of Jersey encourages an architectural style? Ditto for Piston. > > Otherwise, again, I see it as mere namedropping. > > Especially as you apparently have experience with "REST frameworks" you dislike, you should be able to form a qualitative if not quantitative analysis of what makes them good or bad. > > > > ------------------------------------ > > Yahoo! Groups Links > > > >
> Atom has reach because lots of systems out there already
> understand how to process it.
> application/vnd.restbucks+xml does not have reach
> because there aren't
> many systems out there that understand it.
There's an angle on this worth mentioning, which is that Atom has reach
because it can aggregate other content either as extensions, mapping, or
enveloping.
I find it very common that people don't design data formats with
aggregation in mind, instead investing in very specific ("rich!") formats.
So you get lots of this
<cup />
<barista />
<order />
and sometimes you get this
<orders>
<order/>
but most people don't plan for this
<list>
<order />
<cup />
which leaves out a whole class of serendipity ("usecases!"). I guess the
same thing goes for business semantics: people mostly code things up as
business logic ("dsls!") instead of relying on generic techniques.
Bill
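Bill's aggregation point can be sketched as a generic, Atom-like envelope: a list whose entries carry their own media type, so a processor can route entries it has never seen. All names here are hypothetical illustrations:

```java
import java.util.*;

// An aggregation-friendly envelope in the spirit of Atom: the list is
// generic, and each entry declares its own media type.
class Entry {
    final String mediaType; // e.g. "application/vnd.restbucks+xml"
    final String payload;   // domain-specific content, opaque to the list
    Entry(String mediaType, String payload) {
        this.mediaType = mediaType;
        this.payload = payload;
    }
}

public class GenericList {
    public static void main(String[] args) {
        List<Entry> list = Arrays.asList(
            new Entry("application/vnd.restbucks+xml", "<order />"),
            new Entry("application/vnd.restbucks+xml", "<cup />"));
        // A generic processor can count, filter, or route entries without
        // understanding the payloads themselves.
        long orders = list.stream()
                          .filter(e -> e.payload.contains("order"))
                          .count();
        System.out.println("entries: " + list.size() + ", orders: " + orders);
    }
}
```

The serendipity Bill describes comes from the envelope being generic: new entry kinds can be aggregated without changing the list format.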
Jim Webber wrote:
>
>
> Hi Eb,
>
> > Why would this make a difference? We keep talking about reach, but
> most (if
> > not all?) media types were designed for a specific client.
>
> Or clients. Atom has reach because lots of systems out there already
> understand how to process it.
>
> application/vnd.restbucks+xml does not have reach because there aren't
> many systems out there that understand it. Nor are there lots of
> libraries to choose from on lots of platforms that implement the
> processing model for that type. But it does have the advantage that it
> works really well for coffee ordering within the Restbucks domain.
>
> Jim
>
>
Hi,
I've been writing my app using a Java Swing client, and a server. The
client asks for data in JSON-format from the server using a RESTful URL
structure, and presents it in the UI. All is well.
Now I'm facing the problem of figuring out what the user is allowed to
do with the data, both based on security (=does the user have the
correct role) and state (=is an action valid given the state). With an
HTML client this would be easy, since then the HTML could change
depending on these things. In my situation, where the client just gets
JSON-data, it seems to me that I have to extend my usage of JSON so that
I can get both the data to view, and a list of what actions the client
is allowed to perform, and then enable/disable UI elements accordingly.
Something like this would be sent from the server:
{"commands":["cmd1","cmd2"],"data":{"foo":"bar"}}
With the above I can both get the data I need to show, and know what the
user is allowed to do with it.
Has anyone done this? Or should I skip directly to XHTML instead and use
<a>'s to let the client know what it can do? And if so, how would the
data be best transferred (key/value data)?
Any tips would be appreciated!
/Rickard
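Rickard's commands-plus-data response could be consumed on the Swing side roughly like this. A minimal sketch with invented command names, skipping the JSON parsing itself (any JSON library would populate the same structures):

```java
import java.util.*;

// Sketch: a response that carries both the data and the set of actions
// ("commands") the server permits, so the client can enable/disable
// buttons without hard-coding authorization or state rules.
class ResourceResponse {
    final Set<String> commands;
    final Map<String, String> data;
    ResourceResponse(Set<String> commands, Map<String, String> data) {
        this.commands = commands;
        this.data = data;
    }
    boolean allows(String command) { return commands.contains(command); }
}

public class CommandDrivenUi {
    public static void main(String[] args) {
        // Pretend the server said: this user may "approve" and "comment",
        // but not "delete" (hypothetical command names).
        ResourceResponse r = new ResourceResponse(
            new HashSet<>(Arrays.asList("approve", "comment")),
            Collections.singletonMap("foo", "bar"));
        // UI wiring: each button's enabled state follows the command list,
        // e.g. approveButton.setEnabled(r.allows("approve")) in Swing.
        System.out.println("approve enabled: " + r.allows("approve"));
        System.out.println("delete enabled: " + r.allows("delete"));
    }
}
```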
On Jan 19, 2010, at 12:51 PM, Rickard Öberg wrote:
> Hi,
>
> I've been writing my app using a Java Swing client, and a server. The
> client asks for data in JSON-format from the server using a RESTful
> URL
> structure, and present it in the UI. All is well.
What do you mean by "RESTful URL structure"?
>
> Now I'm facing the problem of figuring out what the user is allowed to
> do with the data, both based on security (=does the user have the
> correct role) and state (=is an action valid given the state). With an
> HTML client this would be easy, since then the HTML could change
> depending on these things.
The JSON can simply change as well based on the privileges the user
has.
> In my situation, where the client just gets
> JSON-data, it seems to me that I have to extend my usage of JSON so
> that
> I can get both the data to view, and a list of what actions the client
> is allowed to perform, and then enable/disable UI elements
> accordingly.
You should include Authentication information with the request and
determine on that basis what JSON to return. The JSON should drive the
UI with regard to the available actions (==next possible transitions
in the application by the user through your UI).
>
> Something like this would be sent from the server:
> {"commands":["cmd1","cmd2"],"data":{"foo":"bar"}}
Well, maybe - if you think of 'cmd' in terms of 'next transition'. If
your UI reacts to the cmds received, that would be good.
>
> With the above I can both get the data I need to show, and know what
> the
> user is allowed to do with it.
I think that is ok, if your context prevents simpler solutions (e.g.
use HTML+browser in the first place).
>
> Has anyone done this? Or should I skip directly to XHTML instead and
> use
> <a>'s to let the client know what it can do? And if so, how would the
> data be best transferred (key/value data)?
Why did you not create an HTML based UI?
Jan
>
> Any tips would be appreciated!
>
> /Rickard
>
>
> ------------------------------------
>
> Yahoo! Groups Links
>
>
>
--------------------------------------
Jan Algermissen
Mail: algermissen@...
Blog: http://algermissen.blogspot.com/
Home: http://www.jalgermissen.com
--------------------------------------
On Tue, Jan 19, 2010 at 6:16 AM, Bill de hOra <bill@...> wrote:
> > Atom has reach because lots of systems out there already
> > understand how to process it.
> > application/vnd.restbucks+xml does not have reach
> > because there aren't
> > many systems out there that understand it.
>
> There's an angle on this worth mentioning, which is that Atom has reach
> because it can aggregate other content either as extensions, mapping, or
> enveloping.
>
> I find it very common that people don't design data formats with
> aggregation in mind, instead investing in very specific ("rich!") formats.
>
> So you get lots of this
>
> <cup />
> <barrista />
> <order />
>
> and sometimes you get this
>
> <orders>
> <order/>
>
> but most people don't plan for this
>
> <list>
> <order />
> <cup />
>
> which leaves out a whole class of serendipity ("usecases!"). I guess the
> same thing goes for business semantics, people mostly code things up as
> business logic ("dsls!") instead of relying on generic techniques.
>
> Bill
>
>
>
Bill -
I'm not sure I get your point here. What do you mean by "reach"?
Thanks.
Eb
On 2010-01-19 13.05, Jan Algermissen wrote:
>> I've been writing my app using a Java Swing client, and a server. The
>> client asks for data in JSON-format from the server using a RESTful URL
>> structure, and present it in the UI. All is well.
>
> What do you mean by "RESTful URL structure"?
A URL structure that represents resources that can return state and
perform actions on them.
>> In my situation, where the client just gets
>> JSON-data, it seems to me that I have to extend my usage of JSON so that
>> I can get both the data to view, and a list of what actions the client
>> is allowed to perform, and then enable/disable UI elements accordingly.
>
> You should include Authentication information with the request and
> determine on that basis what JSON to return. The JSON should drive the
> UI with regard to the available actions (==next possible transitions in
> the application by the user through your UI).
That was the idea. I was just curious how others have formatted their
JSON (if anyone has done it at all) to include such action info.
>> Something like this would be sent from the server:
>> {"commands":["cmd1","cmd2"],"data":{"foo":"bar"}}
>
> Well, maybe - if you think of 'cmd' in terms of 'next transition'. If
> you UI reacts on the cmds received that would be good.
That's the idea. We could also replace "cmd1" with URLs instead, so
that the client doesn't have to generate URLs, but then we need more
info to be sent, similar to <a>'s (rel+href attributes).
>> With the above I can both get the data I need to show, and know what the
>> user is allowed to do with it.
>
> I think that is ok, if you context prevents simpler solutions (e.g. use
> HTML+browser in the first place).
Yeah, that is not an option. We need the more powerful UI features of
Swing, and also want to be able to connect to several servers at once in
the UI.
/Rickard
> On 2010-01-19 13.05, Jan Algermissen wrote:
> >> I've been writing my app using a Java Swing client, and a server.
> The
> >> client asks for data in JSON-format from the server using a
> RESTful URL
> >> structure, and present it in the UI. All is well.
> >
> > What do you mean by "RESTful URL structure"?
>
> A URL structure that represents resources that can return state and
> perform actions on them.
>
> >> In my situation, where the client just gets
> >> JSON-data, it seems to me that I have to extend my usage of JSON
> so that
> >> I can get both the data to view, and a list of what actions the
> client
> >> is allowed to perform, and then enable/disable UI elements
> accordingly.
> >
> > You should include Authentication information with the request and
> > determine on that basis what JSON to return. The JSON should drive
> the
> > UI with regard to the available actions (==next possible
> transitions in
> > the application by the user through your UI).
>
> That was the idea. I was just curious how others have formatted their
> JSON (if anyone has done it at all) to include such action info.
>
> >> Something like this would be sent from the server:
> >> {"commands":["cmd1","cmd2"],"data":{"foo":"bar"}}
> >
> > Well, maybe - if you think of 'cmd' in terms of 'next transition'.
> If
> > you UI reacts on the cmds received that would be good.
>
> That's the idea. We could also replace "cmd1" with URL's instead, so
> that the client doesn't have to generate URL's, but then we need more
> info to be sent, similar to <a>'s (rel+href attributes).
>
> >> With the above I can both get the data I need to show, and know
> what the
> >> user is allowed to do with it.
> >
> > I think that is ok, if you context prevents simpler solutions
> (e.g. use
> > HTML+browser in the first place).
>
> Yeah, that is not an option. We need the more powerful UI features of
> Swing, and also want to be able to connect to several servers at
> once in
> the UI.
>
Hi Rickard,
You could go for a Firefox plugin (or similar) - that'll get you around
the 'several servers at once' issue, but it doesn't help with the lack
of powerful UI features - although nowadays I find that many of the
things which the browser environment enables - and easily too! - make
it better than Swing ...
Roger
>
> /Rickard
>
>
Eb wrote:
>
>
>
> On Tue, Jan 19, 2010 at 6:16 AM, Bill de hOra <bill@...
> <mailto:bill@...>> wrote:
>
> > Atom has reach because lots of systems out there already
> > understand how to process it.
> > application/vnd.restbucks+xml does not have reach
> > because there aren't
> > many systems out there that understand it.
>
> There's an angle on this worth mentioning, which is that Atom has
> reach
> because it can aggregate other content either as extensions,
> mapping, or
> enveloping.
>
> I find it very common that people don't design data formats with
> aggregation in mind, instead investing in very specific ("rich!")
> formats.
>
> So you get lots of this
>
> <cup />
> <barrista />
> <order />
>
> and sometimes you get this
>
> <orders>
> <order/>
>
> but most people don't plan for this
>
> <list>
> <order />
> <cup />
>
> which leaves out a whole class of serendipity ("usecases!"). I
> guess the
> same thing goes for business semantics, people mostly code things
> up as
> business logic ("dsls!") instead of relying on generic techniques.
>
> Bill
>
>
>
> Bill -
>
> I'm not sure I get your point here. What do you mean by "reach"?
>
> Thanks.
>
> Eb
Essentially just the number of clients that understand the media type.
I tend to look at it in terms of concentration/dilution - i.e. custom
media types are more 'concentrated', in that they have stronger meaning
but to a smaller spread of clients. Equally, generic media types are
more dilute, in that they have weaker meaning but to a larger spread of
clients.
Where you draw the line should depend on how wide a particular
application is intended to be distributed - extending atom is an example
of a good opportunity for compromise.
- Mike
On Jan 19, 2010, at 1:25 PM, Rickard Öberg wrote:
> On 2010-01-19 13.05, Jan Algermissen wrote:
>>> I've been writing my app using a Java Swing client, and a server.
>>> The
>>> client asks for data in JSON-format from the server using a
>>> RESTful URL
>>> structure, and present it in the UI. All is well.
>>
>> What do you mean by "RESTful URL structure"?
>
> A URL structure that represents resources that can return state and
> perform actions on them.
Hmm - but any URL represents a resource by definition, and resources
hold state. I wonder why you even mentioned it. What kind of actions
are you thinking of?
> [...]
>>> Something like this would be sent from the server:
>>> {"commands":["cmd1","cmd2"],"data":{"foo":"bar"}}
>>
>> Well, maybe - if you think of 'cmd' in terms of 'next transition'. If
>> you UI reacts on the cmds received that would be good.
>
> That's the idea. We could also replace "cmd1" with URL's instead, so
> that the client doesn't have to generate URL's, but then we need
> more info to be sent, similar to <a>'s (rel+href attributes).
What do you mean by 'generate URLs'? The client should not do this
(only on the basis of a form or template it receives) because it
introduces coupling on out-of-band information (the knowledge of the
desired URI structure). If you use REST, use it all the way through.
The 'more info to be sent' is the price you pay for the decreased
coupling.
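Jan's point about not generating URLs can be sketched as follows. The representation below is invented for illustration (the rel names and URLs are not from the thread); the client only follows links the server supplies, so the server is free to change its URI structure at any time.

```python
import json

# Hypothetical representation: the server advertises the next valid
# transitions as rel+href pairs, so the client never builds URLs itself.
body = json.loads("""
{
  "data": {"status": "pending"},
  "links": [
    {"rel": "approve", "href": "http://example.org/orders/42/approval"},
    {"rel": "cancel",  "href": "http://example.org/orders/42/cancellation"}
  ]
}
""")

def href_for(doc, rel):
    """Look up the target of a transition by its rel; None if absent."""
    for link in doc.get("links", []):
        if link["rel"] == rel:
            return link["href"]
    return None
```

The documented rel vocabulary is the contract; the hrefs are opaque, which is the 'decreased coupling' paid for with the extra bytes.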
>
>>> With the above I can both get the data I need to show, and know
>>> what the
>>> user is allowed to do with it.
>>
>> I think that is ok, if you context prevents simpler solutions (e.g.
>> use
>> HTML+browser in the first place).
>
> Yeah, that is not an option. We need the more powerful UI features
> of Swing, and also want to be able to connect to several servers at
> once in the UI.
Ok, I see the point.
Jan
>
> /Rickard
On Tue, Jan 19, 2010 at 7:37 AM, Mike Kelly <mike@...> wrote:
> Essentially just the number of clients that understand the media type.
>
> I tend to look at it in terms of concentration/dilution - i.e. custom
> media types are more 'concentrated', in that they have stronger meaning
> but to a smaller spread of clients. Equally, generic media types are
> more dilute, in that they have weaker meaning but to a larger spread of
> clients.
>
> Where you draw the line should depend on how wide a particular
> application is intended to be distributed - extending Atom is an example
> of a good opportunity for compromise.
>
> - Mike

Ok, I buy that more clients will "accept" (potentially) the media type,
but that doesn't really mean they understand it (from a "get stuff done"
perspective), if the media type has been extended to now include
semantics not defined in the original specification of the media type.
The client will just see junk. Maybe "reach" is the wrong word for what
we're trying to describe here, because I find it very misleading
(personally).
On Tue, Jan 19, 2010 at 7:25 AM, Rickard Öberg <rickardoberg@...> wrote:
> That's the idea. We could also replace "cmd1" with URLs instead, so
> that the client doesn't have to generate URLs, but then we need more
> info to be sent, similar to <a>'s (rel+href attributes).

Introduce your own linking structure in the JSON representation.
Nothing is really stopping you from returning HTML-like or Atom-like
attributes in your JSON. (At least I don't think so.)
On Jan 19, 2010, at 12:51 PM, Rickard Öberg wrote:
> Hi,
>
> I've been writing my app using a Java Swing client, and a server.
Are you implementing a program on the client side that has a GUI and
happens to ask several services for some data, or are you implementing
a browser that is supposed to be entirely driven by the
representations it receives and just happens to have a Swing GUI?
Jan
> The
> client asks for data in JSON-format from the server using a RESTful
> URL
> structure, and present it in the UI. All is well.
>
> Now I'm facing the problem of figuring out what the user is allowed to
> do with the data, both based on security (=does the user have the
> correct role) and state (=is an action valid given the state). With an
> HTML client this would be easy, since then the HTML could change
> depending on these things. In my situation, where the client just gets
> JSON-data, it seems to me that I have to extend my usage of JSON so
> that
> I can get both the data to view, and a list of what actions the client
> is allowed to perform, and then enable/disable UI elements
> accordingly.
>
> Something like this would be sent from the server:
> {"commands":["cmd1","cmd2"],"data":{"foo":"bar"}}
>
> With the above I can both get the data I need to show, and know what
> the
> user is allowed to do with it.
>
> Has anyone done this? Or should I skip directly to XHTML instead and
> use
> <a>'s to let the client know what it can do? And if so, how would the
> data be best transferred (key/value data)?
>
> Any tips would be appreciated!
>
> /Rickard
>
>
Eb wrote:
> Ok, I buy that more clients will "accept" (potentially) the media
> type, but that doesn't really mean they understand it (from a "get
> stuff done" perspective), if the media type has been extended to now
> include semantics not defined in the original specification of the
> media type. The client will just see junk. Maybe "reach" is the wrong
> word to define what we're trying to describe here because I find it
> very misleading (personally).

Your concerns point to the essence of the 'reach' concept - the idea
that as you add custom semantics, the number of clients able to
understand decreases, i.e. you are "reaching" fewer clients.

Extending Atom - 'basic' Atom clients will see the standard Atom and
your extra junk, rather than 'just junk'. That is why you would choose
to extend, rather than start from scratch.

I don't find "reach" massively misleading - what about that terminology
causes the problem?

http://www.google.com/search?q=define:+reach
"be in or establish communication with; 'Our advertisements reach
millions'"
"to extend as far as; 'The sunlight reached the wall'; 'Can he reach?'
'The chair must not touch the wall'"

Cheers,
Mike
On 2010-01-19 14.01, Jan Algermissen wrote:
>> I've been writing my app using a Java Swing client, and a server.
>
> Are you implementing a program on the client side that has a GUI and
> happens to ask several services for some data, or are you implementing
> a browser that is supposed to be entirely driven by the representations
> it receives and just happens to have a Swing GUI?

The first. There's nothing general about it at all; the client is
entirely coupled to my server with regard to what to expect. The client
is delivered using Java WebStart, so I also know that the client and
server expectations always match.

/Rickard
When representing links in JSON, mirroring the Atom "link" element works well:
{
"link" :
{
"rel" : "alternate",
"href" : "http://east-nj1.photos.example.org/987/nj1-1234"
}
}
OR
{
"links" :
[
{
"rel" : "alternate",
"href" : "http://east-nj1.photos.example.org/987/nj1-1234"
},
{
"rel" : "http://www.example.org/rels/owner",
"href" : "http://east-nj1.photos.example.org/987"
}
]
}
When sending a representation of the resource to the client, only
include the link elements that are valid at that time. The links
themselves, along with documented "rel" values, should be sufficient
to allow the client state engine to locate and interpret the available
state transitions (actions) for that representation.
You may also find that you need to mimic the XHTML FORM element
in order to communicate to the client what data elements are valid
when sending a representation back to the server. Finally, you may
need to be able to communicate "embed" links that tell the client to
fetch the resource representation and render it in place (think IMG
tag) instead of treating the link as a navigation (A tag).
This all starts to sound a lot like creating a MIME media-type based
on the JSON data format, eh<g>?
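A hedged sketch of such a JSON FORM analogue (the descriptor shape, field names, and URL below are invented for illustration, not an established media type): the server tells the client which fields it will accept, and the client validates and serializes against that descriptor rather than hard-coding knowledge of the server.

```python
import json
from urllib.parse import urlencode

# Hypothetical "form" descriptor mirroring an XHTML FORM: method,
# target, and the fields the server will accept.
order_form = json.loads("""
{
  "method": "POST",
  "href": "http://example.org/orders",
  "fields": [
    {"name": "item",     "type": "string", "required": true},
    {"name": "quantity", "type": "number", "required": false}
  ]
}
""")

def build_request(form, values):
    """Validate values against the form; return (method, href, body)."""
    names = {f["name"] for f in form["fields"]}
    missing = [f["name"] for f in form["fields"]
               if f.get("required") and f["name"] not in values]
    unknown = set(values) - names
    if missing or unknown:
        raise ValueError(f"missing={missing} unknown={sorted(unknown)}")
    return form["method"], form["href"], urlencode(values)
```

As with links, the form is part of the representation, so the server can evolve which fields exist without breaking deployed clients.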
mca
http://amundsen.com/blog/
On Tue, Jan 19, 2010 at 06:51, Rickard Öberg <rickardoberg@...> wrote:
> Hi,
>
> I've been writing my app using a Java Swing client, and a server. The
> client asks for data in JSON-format from the server using a RESTful URL
> structure, and present it in the UI. All is well.
>
> Now I'm facing the problem of figuring out what the user is allowed to
> do with the data, both based on security (=does the user have the
> correct role) and state (=is an action valid given the state). With an
> HTML client this would be easy, since then the HTML could change
> depending on these things. In my situation, where the client just gets
> JSON-data, it seems to me that I have to extend my usage of JSON so that
> I can get both the data to view, and a list of what actions the client
> is allowed to perform, and then enable/disable UI elements accordingly.
>
> Something like this would be sent from the server:
> {"commands":["cmd1","cmd2"],"data":{"foo":"bar"}}
>
> With the above I can both get the data I need to show, and know what the
> user is allowed to do with it.
>
> Has anyone done this? Or should I skip directly to XHTML instead and use
> <a>'s to let the client know what it can do? And if so, how would the
> data be best transferred (key/value data)?
>
> Any tips would be appreciated!
>
> /Rickard
>
>
On 1/19/2010 5:47 AM, Eb wrote:
> On Tue, Jan 19, 2010 at 7:25 AM, Rickard Öberg <rickardoberg@...> wrote:
>> That's the idea. We could also replace "cmd1" with URLs instead,
>> so that the client doesn't have to generate URLs, but then we need
>> more info to be sent, similar to <a>'s (rel+href attributes).
>
> Introduce your own linking structure in the JSON representation.
> Nothing is really stopping you from returning HTML-like or Atom-like
> attributes in your JSON. (At least I don't think so.)

The JSON Schema specification is worth looking at if you want to use
JSON RESTfully in a way that is visible to multiple agents. It allows
you to describe the linking elements in your JSON so that agents can
navigate your data without being tightly coupled to your server.
http://tools.ietf.org/html/draft-zyp-json-schema-01#section-6

In some ways, JSON Schema is kind of the JSON equivalent of Atom; it
certainly takes a much different approach (allows for much terser data
representations), but still with the goal of providing truly
hypertext-enabled data representations.

Thanks,
Kris Zyp
SitePen
(503) 806-1841
http://sitepen.com
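A hedged sketch of the general idea behind such schema-declared links (the property name, rel, and URL below are invented for illustration, not taken from the draft): the schema declares href templates, and an agent substitutes instance properties into them, so links can be followed without the instance data carrying full URLs.

```python
import re

# Hypothetical hyper-schema-style link declaration: it says how to build
# the "owner" link from properties of any matching instance.
schema_links = [{"rel": "owner", "href": "http://example.org/users/{userId}"}]
instance = {"userId": "987", "title": "nj1-1234"}

def resolve(template, instance):
    """Substitute {prop} markers in an href template with instance values."""
    return re.sub(r"\{(\w+)\}", lambda m: str(instance[m.group(1)]), template)
```

The coupling moves from the client code into the (dereferenceable) schema document.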
On Tue, Jan 19, 2010 at 8:04 AM, Mike Kelly <mike@...> wrote:
> Your concerns point to the essence of the 'reach' concept - the idea
> that as you add custom semantics, the number of clients able to
> understand decreases, i.e. you are "reaching" fewer clients.
>
> Extending Atom - 'basic' Atom clients will see the standard Atom and
> your extra junk, rather than 'just junk'. That is why you would choose
> to extend, rather than start from scratch.

:) Good point on the concept of "extend". To gain mileage with Atom,
you should be using it such that your clients can derive some value
from "basic" Atom. With that in mind, I can accept the "reach"
available with using Atom.
On Jan 19, 2010, at 2:34 PM, Kris Zyp wrote:
> The JSON Schema specification is worth looking at if you want to
> use JSON RESTfully in a way that is visible to multiple agents.

Which reminds me of the task on my todo-list: I wanted to comment on
the proposed way to associate a schema with an instance. I think the
draft proposes the wrong parameter (describedby). It would be better
to re-use HTML's 'profile' parameter[1][2], as suggested here[3].

Personally, I like profiles to define identifiers for bundles of
extensions that constitute a contract, but referring to a JSON schema
is conceptually not different.

I am not sure the profile parameter would be used in Content-Type
headers, though. I think it is more useful for conneg. Thus I'd use it
in Accept headers or type attributes of link elements.

Jan

[1] http://tools.ietf.org/html/rfc3236#section-8
[2] http://www.w3.org/TR/html401/struct/global.html#h-7.4.4.3
[3] http://buzzword.org.uk/2009/draft-inkster-profile-parameter-00.html
On 1/19/2010 7:06 AM, Jan Algermissen wrote:
> I think the draft proposes the wrong parameter (describedby). It
> would be better to re-use HTML's 'profile' parameter[1][2], as
> suggested here[3].

Excellent, thank you for the links; that does make sense. I'll edit
that for the next draft.

> I am not sure the profile parameter would be used in Content-Type
> headers, though. I think it is more useful for conneg. Thus I'd use
> it in Accept headers or type attributes of link elements.

Yes, I agree that it could be useful in Accept headers.

Thanks,
Kris Zyp
SitePen
(503) 806-1841
http://sitepen.com
On 1/19/2010 4:24 PM, Robert Brewer wrote:
> That's funny; I would have said JSON Schema is kind of the JSON
> equivalent of XML Schema <wink>. Shoji [1] is much closer to being
> the "JSON equivalent of Atom" in my head, since it actually provides
> concrete, structured types rather than a schema for describing types
> in general. But then you knew I was going to say that ;)

Yes, you are definitely right: Shoji is closer to Atom in structure
and approach. I should have mentioned Shoji, sorry about that. JSON
Schema's meta-definition strategy is definitely a different approach.
However, they all do provide a means for user agents to follow links
in data.

Kris Zyp
SitePen
(503) 806-1841
http://sitepen.com
Kris Zyp wrote:
> The JSON Schema specification is worth looking at if you want to use
> JSON RESTfully in a way that is visible to multiple agents. It allows
> you to describe the linking elements in your JSON so that agents can
> navigate your data without being tightly coupled to your server.
> http://tools.ietf.org/html/draft-zyp-json-schema-01#section-6
>
> In some ways, JSON Schema is kind of the JSON equivalent of Atom; it
> certainly takes a much different approach (allows for much terser data
> representations), but still with the goal of providing truly
> hypertext-enabled data representations.

That's funny; I would have said JSON Schema is kind of the JSON
equivalent of XML Schema <wink>. Shoji [1] is much closer to being the
"JSON equivalent of Atom" in my head, since it actually provides
concrete, structured types rather than a schema for describing types
in general. But then you knew I was going to say that ;)

Robert Brewer
fumanchu@...

[1] http://www.aminus.org/rbre/shoji/shoji-draft-01.txt
This may be a naive question, as I've only just joined the
"rest-discuss" list in the past few days.

I'm contemplating how to apply a REST approach to use cases where the
primary client is likely not the browser, but other enterprise
software that might otherwise be using SOAP services.

Looking around the 'net, I've found answers to a lot of questions, but
here's one I'm stumped about.

Are there recommended approaches for supporting single-sign-on (SSO)
with REST over HTTP for the scenario I've described?

-Eric
On Wed, Jan 20, 2010 at 3:29 PM, Eric Johnson <eric@...> wrote:
> Are there recommended approaches for supporting single-sign-on (SSO)
> with REST over HTTP for the scenario I've described?

Authentication is always a question that comes up early on when
looking at RESTful architectures. You'll want to investigate what HTTP
itself offers in that realm (specifically Basic and Digest
authentication). It doesn't address SSO per se, but gets at what needs
to happen as part of the request-response cycle.

Beyond that, I would take a good look at OpenID and OAuth. These are
the two technologies developed as "open" standards that point to the
right way of doing Auth/Authz on the web.

That's a kind of vague answer, but will (I hope) point you in a good
direction for further research.

--peter
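As a minimal sketch of the HTTP Basic scheme mentioned above (credentials here are the well-known example pair from the HTTP authentication spec, not real ones): the client attaches the same Authorization header to every request, so no server-side session is needed.

```python
import base64

def basic_auth_header(user, password):
    """Build the value of an HTTP Basic Authorization header."""
    token = base64.b64encode(f"{user}:{password}".encode("utf-8"))
    return "Basic " + token.decode("ascii")

# Example credentials from the spec's canonical illustration.
header = basic_auth_header("Aladdin", "open sesame")
```

Because the header travels with each request, the interaction stays stateless, which is one reason Basic and Digest fit REST better than cookie sessions; they do not solve SSO by themselves, as noted above.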
Hello,

I am struggling with how to protect resources that are exposed through
a service. Assuming the authentication part is 'done', it still seems
necessary to have fairly detailed control over who can read/write
attributes and/or add/remove resources to collections.

To a certain extent this same question is raised in
http://stackoverflow.com/questions/1408571/restful-authorization
(please ignore, in that post, the remarks on CRUD, Active Record and
the like... that is not what the question is about).

My question is really about what best practices are around to have
relatively fine-grained control over 'who' can do 'what' with a given
resource - control that goes beyond authorizing the REST verbs being
used on a resource. A few simple examples:

- The price of a given order item should be 'readable' by the
'creator' of the order item, while an 'administrator' should be able
to update the price
- A collection of order states (not a separate resource but part of
the order) should be readable by the 'creator'. An 'administrator'
should be able to add states
- ...

I hope I made myself clear. Any pointers to material covering these
types of issues are warmly welcomed.

Thanks,
Peter
Hi all, I was wondering if there was interest in creating a blog which would list relevant REST activities on a weekly basis? Following all REST-related development on the WWW is somewhat of a hassle since there's a lot to follow: this discussion group, a lot of blogs covering both REST and non-REST stuff and a lot of other pages (projects, frameworks, news articles, scientific papers...) which are hard to search for since "REST" is not a very good query. What I'd like is a brief overview of only and all things REST that happened in the last week and to be able to receive that overview through my feed reader. A blog feels right for delivery, but raises the question of who would maintain it and how. A collaborative approach where everyone would send links to a single person who would compile the blog post would work. An alternative would be creating a wiki page for collaborative link gathering, but that leaves the question of how the overview would be delivered on a weekly basis. Another alternative would be to use this group for both gathering links (as replies to a single request-for-links weekly post) and publishing the weekly overview (as a new weekly post). Anyway, what do you think? Cheers, Ivan
On Thu, Jan 21, 2010 at 12:40 AM, pgp.coppens <pc.subscriptions@...> wrote: > Hello > > I am struggling with how to protect resources that are exposed through a service. > > Assuming the authentication part is 'done' it still seems necessary to > have a fairly detailed control on who can read/write attributes and/or > add/remove resources to collections. To be honest, this doesn't have anything, per se, to do with REST. That is, REST doesn't bring anything to the table to address it, nor does it limit what can or should be done. The only "obligation", perhaps, of REST is discussion about a mechanism(s) for providing Authentication and Identification information. Authorization is a domain problem, not necessarily an architectural issue (at the level that REST sits, at least). The only real area where there is any potential crossover is simply that two clients may well get two different representations for the same resource. For example, the Purchaser of a product might be able to see item, quantity and price on an Order, but a Shipper may well simply be able to see item and quantity. But as long as the roles are consistent, this isn't a problem. There's nothing wrong with a Shipper having a "read only" and "limited" view of a resource compared to another role. You can see, though, that a REST system is pretty much indifferent to the roles and their effects, and beyond being able to identify users, actual processing and limitations of roles are done at the domain level. Regards, Will Hartung (willh@...)
Hi Peter, Will's right when he says "REST" has nothing to do with this. But there are regular ways of dealing with this using HTTP. > - The price of a given order item should be 'readable' by the 'creator' of the order item while an 'administrator' should be able to update the price Using one of the HTTP authentication mechanisms (or OpenID, OAuth even) the creator will happily receive 401 when applying some verbs (e.g. PUT, DELETE) to a resource (e.g. pricing) but find they get 200 OK responses to other verbs (e.g. GET). The administrator might find they receive 200 OK responses to all of those operations. > - A collection of order states (not a separate resource but part of the order) should be readable by the 'creator'. An 'administrator' should be able to add states How the order is projected onto the Web in terms of resources is an implementation decision. You might choose to have the collection of order states physically written down on the same piece of paper in your back end, but you could still happily project them as different logical resources onto the Web. Hypermedia is your friend should you want to link them together. Eg: Administrator: GET /order/1234 200 OK.... <order> <atom:link href="http://myservice/order/1234/state" rel="order-state" type="application/x.ordering+xml"/> <line-item> ... </line-item> </order> Creator: GET /order/1234 200 OK... <order> <line-item> ... </line-item> </order> Creator, after some creative thought, tries: GET http://myservice/order/1234/state 401 Unauthorized... Nonetheless, administrators will have more chance of a 200 OK response than creators, just as above. Does that help at all? Jim
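Jim's role-dependent projection can be sketched in a few lines. This is only an illustration under stated assumptions: the function name, role names, and URI are hypothetical, not part of any service discussed in the thread.

```python
# Sketch of projecting the same back-end order differently per role:
# only administrators are shown the hypermedia link to the state resource.
# (render_order and the role strings are hypothetical names for illustration.)

def render_order(order_id, role):
    """Build an XML representation of an order, varying links by role."""
    parts = ["<order>"]
    if role == "administrator":
        # Administrators get the link to the order-state resource;
        # creators simply never see it in their representation.
        parts.append(
            '  <atom:link href="http://myservice/order/%s/state" '
            'rel="order-state" type="application/x.ordering+xml"/>' % order_id
        )
    parts.append("  <line-item> ... </line-item>")
    parts.append("</order>")
    return "\n".join(parts)
```

A creator who guesses the state URI anyway would still be stopped by the server's authorization check (the 401 in Jim's example); hiding the link is projection, not protection.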
The most effective way to handle authorization (not authentication) of resources over HTTP is to map access control to the resource URI + the HTTP Method for a user (or group). I often use the following "model" (here in XML) to express authorization rights in HTTP apps: <user id="user"> <access href="/orders/" action="GET,HEAD,OPTIONS" /> </user> <user id="manager"> <access href="/orders/" action="GET,HEAD,OPTIONS,POST,PUT" /> </user> <user id="administrator"> <access href="/orders/" action="GET,HEAD,OPTIONS,POST,PUT,DELETE" /> </user> Typical optimizations I employ are: <access href="/orders/" action="*" /> <!-- all methods allowed --> <access href="/orders/" action="!" /> <!-- no methods allowed --> <access href="/*" action="*" /> <!-- all URIs, all methods (handy for a sys-admin role) --> Regular expressions come in handy here, too. I also optimize the "user id" to include either a username (possibly expressed as an email address, FOAF URI, etc.) or groupname (users mapped to groups) and the use of an "owner" moniker to allow owners (creators) unique access to a particular resource where that makes sense. mca http://amundsen.com/blog/ On Thu, Jan 21, 2010 at 16:46, Jim Webber <jim@...> wrote: > Hi Peter, > > Will's right when he says "REST" has nothing to do with this. But there are regular ways of dealing with this using HTTP. > >> - The price of a given order item should be 'readable' by the 'creator' of the order item while an 'administrator' should be able to update the price > > Using one of the HTTP authentication mechanisms (or OpenID, OAuth even) the creator will happily receive 401 when applying some verbs (e.g. PUT, DELETE) to a resource (e.g. pricing) but find they get 200 OK responses to other verbs (e.g. GET). The administrator might find they receive 200 OK responses to all of those operations. > >> - A collection of order states (not a separate resource but part of the order) should be readable by the 'creator'. An 'administrator' should be able to add states > > How the order is projected onto the Web in terms of resources is an implementation decision. You might choose to have the collection of order states physically written down on the same piece of paper in your back end, but you could still happily project them as different logical resources onto the Web. Hypermedia is your friend should you want to link them together. Eg: > > Administrator: GET /order/1234 > 200 OK.... > <order> > <atom:link href="http://myservice/order/1234/state" rel="order-state" type="application/x.ordering+xml"/> > <line-item> ... </line-item> > </order> > > Creator: GET /order/1234 > 200 OK... > <order> > <line-item> ... </line-item> > </order> > > Creator, after some creative thought, tries: GET http://myservice/order/1234/state > 401 Unauthorized... > > Nonetheless, administrators will have more chance of a 200 OK response than creators, just as above. > > Does that help at all? > > Jim
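Mike's URI + method model can be enforced with a small checker. A minimal sketch, with the rules document and the helper name being my own (hypothetical), implementing the '*' (all methods), '!' (no methods), and trailing-* URI wildcard conventions he describes:

```python
# Sketch: check a (user, URI, method) triple against an XML rules document
# in the shape Mike describes. RULES_XML and is_allowed are hypothetical
# names for illustration; the conventions ('*', '!', '/*') are from his post.
import re
import xml.etree.ElementTree as ET

RULES_XML = """
<rules>
  <user id="user"><access href="/orders/" action="GET,HEAD,OPTIONS"/></user>
  <user id="manager"><access href="/orders/" action="GET,HEAD,OPTIONS,POST,PUT"/></user>
  <user id="administrator"><access href="/*" action="*"/></user>
</rules>
"""

def is_allowed(rules_xml, user_id, href, method):
    root = ET.fromstring(rules_xml)
    for user in root.findall("user"):
        if user.get("id") != user_id:
            continue
        for access in user.findall("access"):
            # Turn the href pattern into a regex prefix match;
            # '*' in the pattern matches any URI segment(s).
            pattern = "^" + re.escape(access.get("href")).replace(r"\*", ".*")
            if not re.match(pattern, href):
                continue
            action = access.get("action")
            if action == "!":
                return False   # explicitly no methods
            if action == "*":
                return True    # all methods
            return method in action.split(",")
    return False  # default deny: unknown users and unmatched URIs get nothing
```

In a real app the rules would be parsed once and cached rather than re-parsed per request, and the "owner" moniker Mike mentions would need the resource's creator looked up at check time.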
What about allowing folks to submit links to a feed aggregator? One could submit a single link of an interesting post/article or could submit a feed link to a blog/site that can be fetched regularly. It might also be interesting to establish and publish tags (#rest, etc.) authors can use to mark their content for bots that can use these tags to find relevant material and add the links to the aggregator. mca http://amundsen.com/blog/ On Thu, Jan 21, 2010 at 15:54, izuzak <izuzak@...> wrote: > Hi all, > > I was wondering if there was interest for creating a blog which would list relevant REST activities on a weekly basis? Following all REST-related development on the WWW is somewhat a hassle since there's a lot to follow: this discussion group, a lot of blogs covering both REST and non-REST stuff and a lot of other pages (projects, frameworks, news articles, scientific papers...) which are hard to search for since "REST" is not a very good query. > > What I'd like is a brief overview of only and all things REST that happened in the last week and to be able to receive that overview through my feed reader. A blog feels right for delivery, but raises the question of who would maintain it and how. A collaborative approach where everyone would send links to a single person which would compile the blog post would work. An alternative would be creating a wiki page for collaborative link gathering, but leaves the question of how the overview would be delivered on a weekly basis. Another alternative would be to use this group for both gathering links (as replies to a single request-for-links weekly post) and publishing the weekly overview (as a new weekly post). > > Anyway, what do you think? > > Cheers, > Ivan
Hi Peter, Thanks for the response. On 01/20/2010 02:35 PM, Peter Keane wrote: > On Wed, Jan 20, 2010 at 3:29 PM, Eric Johnson <eric@... > <mailto:eric@...>> wrote: > > I'm contemplating how to apply a REST approach to use cases where the > primary client is likely not the browser, but other enterprise > software > that might otherwise be using SOAP services. > > Are there recommended approaches for supporting single-sign-on (SSO) > with REST over HTTP for the scenario I've described? > > Authentication is always a question that comes up early-on when > looking at RESTful architectures. You'll want to investigate what > HTTP itself offers in that realm (specifically Basic and Digest > authentication). It doesn't address SSO per-se, but gets at what > needs to happen as part of the request-response cycle. I've done enough with the standard HTTP authentication to recognize that what you're saying is theoretically possible, but if all I've got that's a cross-platform standard is "Basic" and "Digest", that really doesn't get me single sign-on - there's no guarantee that the current client has the credentials on hand. Were I implementing SOAP services, I'd probably be supporting/using SAML, for example. > > Beyond that, I would take a good look at OpenID and OAuth. These are > the two technologies developed as "open" standards that point to the > right way of doing Auth/Authz on the web. OpenID and OAuth look promising. I was aware of OpenID. It looks like OpenID is more suited for a browser interface, where clients expect HTML responses that contain prompts for the user. For my enterprise usage, where the REST clients aren't the browser, that means a lot of extra "scraping" work for clients, at best. Am I misreading that spec? OAuth looks like it is intended for the program-as-client side of the problem space. That sounds like just what I need, in that I can then write programs that consume custom media types in the service of our enterprise software.
Yet I note that, as yet, oauth is not an IETF published draft. Poking around the internet, I didn't see any immediately obvious reports about its expected future (imminent success? substantial changes expected? slow death... unlikely!). Before I rush to suggest that my company do a lot of work around oauth, what are the caveats for using it? > > That's a kind of vague answer, but will (I hope) point you in a good > direction for further research. Definitely helpful. I've been doing research. Still more to do. Thanks! -Eric.
Hello Eric, > OpenID and oauth look promising. I was aware of OpenID. It looks like > OpenID is more suited for a browser interface, where clients expect HTML > responses that contain prompts for the user. For my enterprise usage, > where the REST clients aren't the browser, that means a lot of extra > "scraping" work for clients, at best. Am I mis-reading that spec? Ah, well, OpenID was originally developed for user-centric use cases, but there's no reason that you can't use it for computer-to-computer scenarios either. It's just exchanging a bunch of hypermedia documents (XHTML forms) between three parties really. In fact Ian, Savas, and I have developed an example doing just this for our book (apologies to the list for harping on about an incomplete book, again). > Before I rush to suggest that my company do a lot of work around oauth, > what are the caveats for using it? Nothing, apart from the fact that it's still a bit of a moving target, AFAIK. (And at the risk of incurring the wrath-of-rest, it's in our book.) Jim
Mike - that's not a bad idea also. Getting all relevant REST-related content on a regular basis is the goal and without people contributing in some way - I don't see it happening. Not to be nit-picky, but I myself would like a single post/page listing all the relevant articles with a short one-line overview (versus a stream of links with possible duplicates and no structure). Something like the XMPP Roundup - http://blog.xmpp.org/index.php/2009/09/xmpp-roundup-12/. Jim Webber also pointed me to Scott Banwart's "Distributed Weekly" blog which basically covers what I had in mind - http://rogue-technology.com/blog/. Ideally, the blog would be only REST oriented and cover more stuff, e.g. posts on the REST discussion group. I don't mind creating and maintaining a new wiki/blog/whatever-we-like-most for this purpose if people here feel it's something that adds value and would contribute. Anyone else like any of these ideas? :) Thanks, Ivan On Thu, Jan 21, 2010 at 23:51, mike amundsen <mamund@...> wrote: > What about allowing folks to submit links to a feed aggregator? One > could submit a single link of an interesting post/article or could > submit a feed link to a blog/site that can be fetched regularly. It > might also be interesting to establish and publish tags (#rest, etc.) > authors can use to mark their content for bots that can use these tags > to find relevant material and add the links to the aggregator. > > mca > http://amundsen.com/blog/ > > > > > On Thu, Jan 21, 2010 at 15:54, izuzak <izuzak@...> wrote: >> Hi all, >> >> I was wondering if there was interest for creating a blog which would list relevant REST activities on a weekly basis? Following all REST-related development on the WWW is somewhat a hassle since there's a lot to follow: this discussion group, a lot of blogs covering both REST and non-REST stuff and a lot of other pages (projects, frameworks, news articles, scientific papers...)
which are hard to search for since "REST" is not a very good query. >> >> What I'd like is a brief overview of only and all things REST that happened in the last week and to be able to receive that overview through my feed reader. A blog feels right for delivery, but raises the question of who would maintain it and how. A collaborative approach where everyone would send links to a single person which would compile the blog post would work. An alternative would be creating a wiki page for collaborative link gathering, but leaves the question of how the overview would be delivered on a weekly basis. Another alternative would be to use this group for both gathering links (as replies to a single request-for-links weekly post) and publishing the weekly overview (as a new weekly post). >> >> Anyway, what do you think? >> >> Cheers, >> Ivan
On Jan 21, 2010, at 9:54 PM, izuzak wrote: > Hi all, > > I was wondering if there was interest for creating a blog which > would list relevant REST activities on a weekly basis? Why not use the REST wiki: http://rest.blueoxen.net/cgi-bin/wiki.pl?WebWeekly and see where that goes? Jan > Following all REST-related development on the WWW is somewhat a > hassle since there's a lot to follow: this discussion group, a lot > of blogs covering both REST and non-REST stuff and a lot of other > pages (projects, frameworks, news articles, scientific papers...) > which are hard to search for since "REST" is not a very good query. > > What I'd like is a brief overview of only and all things REST that > happened in the last week and to be able to receive that overview > through my feed reader. A blog feels right for delivery, but raises > the question of who would maintain it and how. A collaborative > approach where everyone would send links to a single person which > would compile the blog post would work. An alternative would be > creating a wiki page for collaborative link gathering, but leaves > the question of how the overview would be delivered on a weekly > basis. Another alternative would be to use this group for both > gathering links (as replies to a single request-for-links weekly > post) and publishing the weekly overview (as a new weekly post). > > Anyway, what do you think? > > Cheers, > Ivan ----------------------------------- Jan Algermissen, Consultant Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
On Fri, Jan 22, 2010 at 11:30, Jan Algermissen <algermissen1971@...> wrote: > > On Jan 21, 2010, at 9:54 PM, izuzak wrote: > >> Hi all, >> >> I was wondering if there was interest for creating a blog which would list >> relevant REST activities on a weekly basis? > > Why not use the REST wiki: > http://rest.blueoxen.net/cgi-bin/wiki.pl?WebWeekly and see where that goes? > > Jan Sounds like a good idea to me. Let's give it a try next week. Ivan
On Fri, Jan 22, 2010 at 3:30 AM, Jan Algermissen <algermissen1971@...> wrote: > > Why not use the REST wiki: http://rest.blueoxen.net/cgi-bin/wiki.pl?WebWeekly > and see where that goes? One issue: It is not obvious how one would subscribe to just WebWeekly pages. Personally, I would want these weekly updates to show up in my feed reader but not the other pages from the wiki. (Actually, the RSS feed for the wiki is completely broken for me at the moment. Error message says `Can't call method "loadUser" on an undefined value at /data/www/perl2/PurpleWiki/Syndication/Rss.pm line 80.`) I like the idea of a weekly roundup (assuming subscribing to it is easy) but I think a REST planet (like Mike suggested) would be nice too. Peter
I'm hearing that before issuing an HTTP PUT request from JavaScript, more recent browsers are sending out an HTTP OPTIONS request on the target resource to verify that the PUT method is supported. If the OPTIONS request returns 200 OK and specifies that PUT is allowed, everything is alright. If not, the browser will not execute the PUT (whether or not the resource supports it). Is this sane behavior from a UA, to check for just one method? Bill
<snip> I'm hearing that before issuing a HTTP PUT request from JavaScript, more recent browsers are sending out HTTP OPTIONS on the request resource to verify that the PUT method is supported. </snip> I've not heard this before. I just did a quick scan of some server logs and find no evidence that this is happening right now in a couple apps that use PUT via XMLHttpRequest. mca http://amundsen.com/blog/ On Fri, Jan 22, 2010 at 11:48, Bill de hOra <bill@...> wrote: > > I'm hearing that before issuing a HTTP PUT request from JavaScript, more > recent browsers are sending out HTTP OPTIONS on the request resource to > verify that the PUT method is supported. If the OPTIONS request returns > with a 200 OK and specifies that PUT is allowed everything is alright. > If not the browser will not execute the PUT (whether or not the resource > supports it). Is this sane behavior from a UA, to check for just one method? > > Bill
-----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 1/22/2010 9:48 AM, Bill de hOra wrote: > > > > I'm hearing that before issuing a HTTP PUT request from > JavaScript, more recent browsers are sending out HTTP OPTIONS on > the request resource to verify that the PUT method is supported. If > the OPTIONS request returns with a 200 OK and specifies that PUT is > allowed everything is alright. If not the browser will not execute > the PUT (whether or not the resource supports it). Is this sane > behavior from a UA, to check for just one method? > Just to make sure this is clear, this is for cross-origin XHR requests, per http://www.w3.org/TR/access-control/. These requests were not possible before, and the OPTIONS request is to ensure that the server is prepared to handle requests from the browser that used to be blocked, to prevent new security breaches. Kris > > > Bill > > - -- Kris Zyp SitePen (503) 806-1841 http://sitepen.com -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.9 (MingW32) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ iEYEARECAAYFAktZ2d8ACgkQ9VpNnHc4zAx4oQCeIbnNwM/bCX5jMDAz7J9zgttg i1EAn3eC0LhLHVL4v77cAbExP319HQMh =bj7m -----END PGP SIGNATURE-----
Bill de hOra wrote: > > I'm hearing that before issuing a HTTP PUT request from JavaScript, > more recent browsers are sending out HTTP OPTIONS on the request > resource to verify that the PUT method is supported. If the OPTIONS > request returns with a 200 OK and specifies that PUT is allowed > everything is alright. If not the browser will not execute the PUT > (whether or not the resource supports it). Is this sane behavior from > a UA, to check for just one method? > I think it's incorrect behavior. Many httpds are hard-coded to respond to OPTIONS requests by sending the Allow header. But, there's no reason not to make Allow available in response to HEAD requests. Otherwise you get into the issue of what sort of entity to return with OPTIONS, and that's undefined territory. Whereas, if the request simply intends to check a resource header, no entity is needed. So I send Allow in response to both GET and HEAD, and keep OPTIONS unimplemented until I've figured out what to do with it (a backend management interface, perhaps). IMO, what a UA SHOULD do, is to see if a HEAD request generates an Allow response header before falling back to the less-desirable OPTIONS. Do it right, instead of hard-coding an archaic and inefficient solution to retrieving Allow headers. -Eric
Eric Johnson wrote: > Before I rush to suggest that my company do a lot of work around oauth, > what are the caveats for using it? It's tricky for mobile and machine driven systems - the point of Oauth is to put a human in the loop to avoid the password anti-pattern and it more or less has a baked in assumption the human is using a browser. Myself and a colleague wrote an RFC for a 2-legged model to deal with those issues last year [1], I heard that Twitter are adopting it. OAuth2.0/Wrap are interesting things to watch [2]. Bill [1] http://tools.ietf.org/html/draft-dehora-farrell-oauth-accesstoken-creds-01 [2] http://radar.oreilly.com/2010/01/whats-going-on-with-oauth.html
As a server implementer, why should I have to implement OPTIONS just to support PUT for clients which incorrectly treat the Allow header as authoritative? I have enough on my plate... -Eric "Eric J. Bowman" wrote: > > Bill de hOra wrote: > > > > I'm hearing that before issuing a HTTP PUT request from JavaScript, > > more recent browsers are sending out HTTP OPTIONS on the request > > resource to verify that the PUT method is supported. If the OPTIONS > > request returns with a 200 OK and specifies that PUT is allowed > > everything is alright. If not the browser will not execute the PUT > > (whether or not the resource supports it). Is this sane behavior > > from a UA, to check for just one method? > > > > I think it's incorrect behavior. Many httpds are hard-coded to > respond to OPTIONS requests by sending the Allow header. But, > there's no reason not to make Allow available in response to HEAD > requests. Otherwise you get into the issue of what sort of entity to > return with OPTIONS, and that's undefined territory. > > Whereas, if the request simply intends to check a resource header, no > entity is needed. So I send Allow in response to both GET and HEAD, > and keep OPTIONS unimplemented until I've figured out what to do with > it (a backend management interface, perhaps). > > IMO, what a UA SHOULD do, is to see if a HEAD request generates an > Allow response header before falling back to the less-desirable > OPTIONS. Do it right, instead of hard-coding an archaic and > inefficient solution to retrieving Allow headers. > > -Eric >
On Fri, Jan 22, 2010 at 9:22 AM, Bill de hOra <bill@...> wrote: > It's tricky for mobile and machine driven systems - the point of Oauth > is to put a human in the loop to avoid the password anti-pattern and it > more or less has a baked in assumption the human is using a browser. > Myself and a colleague wrote an RFC for a 2-legged model to deal with > those issues last year [1], I heard that Twitter are adopting it. > OAuth2.0/Wrap are interesting things to watch [2]. I have no experience with it, but from what I saw with OAuth, it didn't seem abusive to machine systems, rather it required the machine systems to understand the credential exchange requirements of a specific provider. For an an in house system, this didn't seem like much of a burden. For a "generic" client, then, certainly. A generic client would need to "know" about the nuances of specific providers. Once that hurdle was passed, it looked pretty good overall to me. Regards, Will Hartung (willh@...)
On Fri, Jan 22, 2010 at 9:24 AM, Eric J. Bowman <eric@...> wrote: > As a server implementer, why should I have to implement OPTIONS just to > support PUT for clients which incorrectly treat the Allow header as > authoritative? I have enough on my plate... Particularly if the UA just got a payload with links that specify PUT, the UA then "already knows" what it can do, so why should it be asking again? Also, why make the extra request? The UA can't know whether or not the server is going to allow such a request. If the server doesn't support PUT, then it will (should) inform the client properly when it's tried. It's not like the UA can cache the OPTIONS response. So, why invoke new overhead? And why not do the same for GET, POST and DELETE? Makes no sense to me. Bad assumption, and weak to non-existent protection. Regards, Will Hartung (willh@...)
<snip> Just to make sure this is clear, this is for cross-origin XHR requests, per http://www.w3.org/TR/access-control/. </snip> Oh, yeah; forgot about this draft - thanks for the reminder, Kris. An additional preflight [1] request via OPTIONS is used on all cross-origin requests other than GET, HEAD, and POST. [1] http://www.w3.org/TR/2009/WD-cors-20090317/#preflight-request0 mca http://amundsen.com/blog/ On Fri, Jan 22, 2010 at 12:01, Kris Zyp <kris@...> wrote: > > > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA1 > > > > On 1/22/2010 9:48 AM, Bill de hOra wrote: > > > > > > > > I'm hearing that before issuing a HTTP PUT request from > > JavaScript, more recent browsers are sending out HTTP OPTIONS on > > the request resource to verify that the PUT method is supported. If > > the OPTIONS request returns with a 200 OK and specifies that PUT is > > allowed everything is alright. If not the browser will not execute > > the PUT (whether or not the resource supports it). Is this sane > > behavior from a UA, to check for just one method? > > > > Just to make sure this is clear, this is for cross-origin XHR > requests, per http://www.w3.org/TR/access-control/. These requests > were not possible before, and the OPTIONS request is to ensure that > the server is prepared to handle requests from the browser that used > to be blocked, to prevent new security breaches. > Kris > > > > > > Bill > > > > > > - -- > Kris Zyp > SitePen > (503) 806-1841 > http://sitepen.com > -----BEGIN PGP SIGNATURE----- > Version: GnuPG v1.4.9 (MingW32) > Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ > > iEYEARECAAYFAktZ2d8ACgkQ9VpNnHc4zAx4oQCeIbnNwM/bCX5jMDAz7J9zgttg > i1EAn3eC0LhLHVL4v77cAbExP319HQMh > =bj7m > -----END PGP SIGNATURE-----
-----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 1/22/2010 10:39 AM, Will Hartung wrote: > > > [snip] It's not like the UA can cache the OPTIONS response. So, why > invoke new overhead? And why not do the same for GET, POST and > DELETE? > > Makes no sense to me. Bad assumption, and weak to non-existent > protection. > If you'd like to read a more detailed rationale for (as well as against) CORS (cross-origin resource sharing), you can peruse the webapps mail archives: http://lists.w3.org/Archives/Public/public-webapps/ (searching for cors helps narrow down). The spec itself does include a little bit of rationale, but it is very brief: http://www.w3.org/TR/access-control/#design-decision-faq - -- Kris Zyp SitePen (503) 806-1841 http://sitepen.com -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.9 (MingW32) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ iEYEARECAAYFAktZ5esACgkQ9VpNnHc4zAydHQCglShUoHsSsWscqfyZDJLdvl1D idwAn3cgA32gOq0TWJcxWz0lpsWhN3Jn =hNCB -----END PGP SIGNATURE-----
Thanks Kris. But I'm pretty confused. Should browsers be doing this unilaterally or should the server flag it understands this constraining of http? Bill Kris Zyp wrote: > > > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA1 > > > On 1/22/2010 10:39 AM, Will Hartung wrote: >> >> >> [snip] It's not like the UA can cache the OPTIONS response. So, why >> invoke new overhead? And why not do the same for GET, POST and >> DELETE? >> >> Makes no sense to me. Bad assumption, and weak to non-existent >> protection. >> > > If you'd like to read a more detailed rationale for (as well against) > CORS (cross-origin request sharing), you can peruse the webapps mail > archives: > http://lists.w3.org/Archives/Public/public-webapps/ (searching for > cors helps narrow down). The spec itself does include a little bit of > rationale, but it is very brief: > http://www.w3.org/TR/access-control/#design-decision-faq > > - -- > Kris Zyp > SitePen > (503) 806-1841 > http://sitepen.com > -----BEGIN PGP SIGNATURE----- > Version: GnuPG v1.4.9 (MingW32) > Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ > > iEYEARECAAYFAktZ5esACgkQ9VpNnHc4zAydHQCglShUoHsSsWscqfyZDJLdvl1D > idwAn3cgA32gOq0TWJcxWz0lpsWhN3Jn > =hNCB > -----END PGP SIGNATURE----- > >
-----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 1/22/2010 1:36 PM, Bill de hOra wrote: > Thanks Kris. > > But I'm pretty confused. Should browsers be doing this > unilaterally or should the server flag it understands this > constraining of http? The intent is that the OPTIONS request allows the server to indicate that it can safely handle HTTP requests from the web page without incorrect assumptions about authority implied by the included cookies or authorization headers. All of the constraints specified by the CORS spec are for the purpose of preventing CSRF attacks. CSRF is already a big problem on the web, and if unconstrained cross-origin requests could be made by malicious web sites (which include target host cookies, and thus many application servers would consider it be an authorized request) then CSRF problems would certainly increase. Therefore the specification's goal is to open up the ability to make cross-origin XHR requests in such a way that servers can opt-in and avoid introducing new CSRF problems. Does that make sense? - -- Kris Zyp SitePen (503) 806-1841 http://sitepen.com -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.9 (MingW32) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ iEYEARECAAYFAktaJaIACgkQ9VpNnHc4zAyeGwCfer8hJLWHCZpr5pah0M3NddxF KfkAn2kJlyTsQx8iu8JzQZ9B0h21Oqt7 =BTH8 -----END PGP SIGNATURE-----
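The opt-in Kris describes can be sketched as a server-side preflight decision. This is a simplified illustration, not a complete CORS implementation: the origin list and function name are my assumptions, and a real server would also validate Access-Control-Request-Headers.

```python
# Sketch: a server opting in to cross-origin requests by answering the
# preflight OPTIONS with Access-Control-* headers. Without those headers
# the browser refuses to send the actual cross-origin request, which is
# how CORS prevents new CSRF exposure on servers that never opted in.

TRUSTED_ORIGINS = {"http://trusted.example.com"}  # hypothetical policy

def handle_preflight(origin, requested_method):
    """Return (status, headers) for a preflight OPTIONS request."""
    if origin in TRUSTED_ORIGINS and requested_method in {"GET", "PUT", "DELETE"}:
        return 200, {
            "Access-Control-Allow-Origin": origin,
            "Access-Control-Allow-Methods": "GET, PUT, DELETE",
            # Lets the UA cache this preflight result for an hour,
            # answering Will's concern about per-request overhead.
            "Access-Control-Max-Age": "3600",
        }
    # No Access-Control headers: the browser will block the real request.
    return 200, {}
```

Note that the grant headers, not the status code, carry the decision; a preflight for an untrusted origin can still return 200 with no Access-Control headers.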
Right... the wiki solves only one part of the "problem" (collecting relevant links), but does not solve the periodic distribution. As I said, I don't mind setting up a blog to which I'd publish the weekly links if the wiki part seems to work for collecting information. So, let's give the wiki a try first and see how it goes. Ivan --- In rest-discuss@yahoogroups.com, Peter Williams <pezra@...> wrote: > > On Fri, Jan 22, 2010 at 3:30 AM, Jan Algermissen > <algermissen1971@...> wrote: > > > > Why not use the REST wiki: http://rest.blueoxen.net/cgi-bin/wiki.pl?WebWeekly > > and see where that goes? > > One issue: It is not obvious how one would subscribe to just WebWeekly > pages. Personally, would want these weekly updates to show up in my > feed read but not the other pages from the wiki. > > (Actually, the rss feed for the wiki is completely broken for me at > the moment. Error message says `Can't call method "loadUser" on an > undefined value at /data/www/perl2/PurpleWiki/Syndication/Rss.pm line > 80.`) > > I like the idea of a weekly roundup (assuming subscribing to it is > easy) but i think a REST planet (like Mike suggested) would be nice > too. > > Peter >
When a client is in a steady state, what makes up the meaning of the state (what information does the client use to determine the meaning)? a) The current representation (plus the understanding of the media types) b) The current representation (plus the understanding of the media types) and the request the client made (the knowledge of the semantics of the resource that the representation has been obtained from) c) b) and knowledge about all the requests it made before (this is wrong from a REST POV, but I include it for completeness) Note that b) includes knowledge about any conneg that might have happened. E.g. the client might have asked for GET /products/6676 Accept: application/xhtml+xml;profile="http://foo.org/my-html-product-representation-uformats-profile" and the server might have just answered with 200 OK Content-Type: application/xhtml+xml knowing that it serves the requested profile. (It did not say: your request for HTML with that profile is 406 Not Acceptable.) From the pure representation it is not detectable that the server sent this as a response to a request for 'a product representation'. My understanding is b) but I am not entirely sure. Jan
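Jan's example can be reduced to a toy negotiation function (naive substring matching, purely illustrative; the profile URI is the one from his request) showing when the server answers 200 with the profiled representation rather than 406 Not Acceptable:

```java
public class Conneg {
    public static final String SERVED_TYPE = "application/xhtml+xml";
    public static final String SERVED_PROFILE =
        "http://foo.org/my-html-product-representation-uformats-profile";

    // Decide the status code: 200 if the Accept header matches the one
    // representation this server offers, 406 otherwise.
    public static int negotiate(String acceptHeader) {
        if (acceptHeader == null) {
            return 200; // no preference expressed: serve the default
        }
        boolean typeOk = acceptHeader.contains(SERVED_TYPE);
        boolean profileOk = !acceptHeader.contains("profile=")
            || acceptHeader.contains(SERVED_PROFILE);
        return (typeOk && profileOk) ? 200 : 406;
    }
}
```

Note that the response itself carries only `Content-Type: application/xhtml+xml`, which is exactly Jan's point: the fact that the profile was honoured is visible in the request context, not in the representation.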
I've never seen a ;profile attribute for a media type before, where does this come from? My apologies if I've overlooked it... As to the issue of steady-states, consider the browser. The user follows one link, but the next steady-state may only be achieved by dereferencing pertinent URIs within the initial representation. Your post implies that there's a 1:1 relationship between that initial representation's retrieval and the next steady-state. -Eric "algermissen1971" wrote: > > When a client is in a steady state, what makes up the meaning of the > state (what information does the client use to determine the meaning)? > > a) The current representation (plus the understanding of the media > types) b) The current representation (plus the understanding of the > media types) and the request the client made (the knowledge of the > semantics of the resource that the representation has been obtained > from) c) b) and knowledge about all the requests it made before (this > is wrong from a REST POV, but I include it for completeness) > > Note that b) includes knowledge about any conneg that might have > happened. E.g. the client might have asked for > > GET /products/6676 > Accept: application/xhtml > +xml;profile="http://foo.org/my-html-product-representation-uformats-profile > > and the server might have just answered with > > 200 Ok > Content-Type: application/xhtml+xml > > knowing that it serves the requested profile. (it did not say: your > request for HTML with tht profile is 406 Not Acceptable). From the > pure representation it is not detectable that the server sent this as > a response to a request for 'a product representation' > > > My understanding is b) but I am not entirely sure. > > > Jan > > >
--- In rest-discuss@yahoogroups.com, "Eric J. Bowman" <eric@...> wrote: > > I've never seen a ;profile attribute for a media type before, where > does this come from? My apologies if I've overlooked it... See references in <http://tech.groups.yahoo.com/group/rest-discuss/message/14612> It was just one aspect of my question, though. Consider this: is the media type of a representation telling you enough to understand the current state, or do you need the context of the request also? When you follow some <link rel="/featured-product" href="/products/3776"/> link and the server sends an HTML page back, do you understand the meaning of the steady state from looking at the Content-Type header, or do you add in the information that the server told you the resource you just requested is 'the featured product'? > As to the issue of steady-states, consider the browser. The user > follows one link, but the next steady-state may only be achieved by > dereferencing pertinent URIs within the initial representation. Your > post implies that there's a 1:1 relationship between that initial > representation's retrieval and the next steady-state. No. At least I did not want to imply that. The steady state is reached when all the outstanding sub-requests have been made, and then, when the client has no pending requests - on what basis does it determine the meaning of the state? In a machine client this would mean: "what information is taken into account to determine the next transition to follow? Just the media types of the steady state or also the context of the request?" Jan > > -Eric > > "algermissen1971" wrote: > > > > When a client is in a steady state, what makes up the meaning of the > > state (what information does the client use to determine the meaning)? 
> > > > a) The current representation (plus the understanding of the media > > types) b) The current representation (plus the understanding of the > > media types) and the request the client made (the knowledge of the > > semantics of the resource that the representation has been obtained > > from) c) b) and knowledge about all the requests it made before (this > > is wrong from a REST POV, but I include it for completeness) > > > > Note that b) includes knowledge about any conneg that might have > > happened. E.g. the client might have asked for > > > > GET /products/6676 > > Accept: application/xhtml > > +xml;profile="http://foo.org/my-html-product-representation-uformats-profile > > > > and the server might have just answered with > > > > 200 Ok > > Content-Type: application/xhtml+xml > > > > knowing that it serves the requested profile. (it did not say: your > > request for HTML with tht profile is 406 Not Acceptable). From the > > pure representation it is not detectable that the server sent this as > > a response to a request for 'a product representation' > > > > > > My understanding is b) but I am not entirely sure. > > > > > > Jan > > > > > > >
--- In rest-discuss@yahoogroups.com, "algermissen1971" <algermissen1971@...> wrote: > > > > --- In rest-discuss@yahoogroups.com, "Eric J. Bowman" <eric@> wrote: > > > > I've never seen a ;profile attribute for a media type before, where > > does this come from? My apologies if I've overlooked it... > > See references in <http://tech.groups.yahoo.com/group/rest-discuss/message/14612> > > It was just one aspect of my question, though. > > Consider this: is the media type of a representation telling you enough to understand the current state or do you need the context of the request also? When you follow some <link rel="/featured-product" href="/products/3776"/> link and the servers sends an HTML page back do you understand the meaning of the steady state from looking at the Content-Type header or do you add-in the information that the server told you the resource you just requested is 'the featured product'? Another way to ask this is: When you bookmark /products/3776 and later go back to it, do you know it represents a product? Or: If you send your friend the link: /products/3776, do you say "Check this out" or "Check this cool product out"? And, when you receive "Check out /products/3776" do you know /products/3776 represents a product when you look at the representation? jan > > > > > As to the issue of steady-states, consider the browser. The user > > follows one link, but the next steady-state may only be achieved by > > dereferencing pertinent URIs within the initial representation. Your > > post implies that there's a 1:1 relationship between that initial > > representation's retrieval and the next steady-state. > > No. At least I did not want to imply that. The steady state is reached when all the outstanding sub-requests have been made and then, when the client has no pending requests - on what basis does it determine the meaning of the state? 
> > In a machine client this would mean: "what information is taken into account to determine the next transition to follow? Just the media types of the steady state or also the context of the request? > > Jan > > > > > -Eric > > > > "algermissen1971" wrote: > > > > > > When a client is in a steady state, what makes up the meaning of the > > > state (what information does the client use to determine the meaning)? > > > > > > a) The current representation (plus the understanding of the media > > > types) b) The current representation (plus the understanding of the > > > media types) and the request the client made (the knowledge of the > > > semantics of the resource that the representation has been obtained > > > from) c) b) and knowledge about all the requests it made before (this > > > is wrong from a REST POV, but I include it for completeness) > > > > > > Note that b) includes knowledge about any conneg that might have > > > happened. E.g. the client might have asked for > > > > > > GET /products/6676 > > > Accept: application/xhtml > > > +xml;profile="http://foo.org/my-html-product-representation-uformats-profile > > > > > > and the server might have just answered with > > > > > > 200 Ok > > > Content-Type: application/xhtml+xml > > > > > > knowing that it serves the requested profile. (it did not say: your > > > request for HTML with tht profile is 406 Not Acceptable). From the > > > pure representation it is not detectable that the server sent this as > > > a response to a request for 'a product representation' > > > > > > > > > My understanding is b) but I am not entirely sure. > > > > > > > > > Jan > > > > > > > > > > > >
algermissen1971 wrote: > When a client is in a steady state, what makes up the meaning of the state (what information does the client use to determine the meaning)? > > a) The current representation (plus the understanding of the media types) > b) The current representation (plus the understanding of the media types) and the request the client made (the knowledge of the semantics of the resource that the representation has been obtained from) > c) b) and knowledge about all the requests it made before (this is wrong from a REST POV, but I include it for completeness) > Why is c necessarily wrong from a REST POV? I would say the 'meaning' of the state is derived from the path taken through the flow of your hypertext-driven application, so it would include all state transitions taken from a given 'entry point' in your application. - Mike
--- In rest-discuss@yahoogroups.com, Mike Kelly <mike@...> wrote: > > algermissen1971 wrote: > > When a client is in a steady state, what makes up the meaning of the state (what information does the client use to determine the meaning)? > > > > a) The current representation (plus the understanding of the media types) > > b) The current representation (plus the understanding of the media types) and the request the client made (the knowledge of the semantics of the resource that the representation has been obtained from) > > c) b) and knowledge about all the requests it made before (this is wrong from a REST POV, but I include it for completeness) > > > > Why is c necessarily wrong from a REST POV? Because the meaning of a given state in a RESTful application does not depend on whatever happened before. For example, consider an IO library where you need to remember that you opened a file. You read the next chunk because you remember it is still open, not because the state of the file descriptor itself offers a transition that lets you read a chunk. With REST you do not have a priori knowledge about the states the service (the file descriptor) goes through. You can understand each state in isolation. Jan > > I would say the 'meaning' of the state is derived from the path taken > through the flow of your hypertext-driven application, so it would > include all state transitions taken from a given 'entry point' in your > application. > > - Mike >
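Jan's contrast might be rendered as a toy client step (all names here are mine, not from the thread): the next transition is chosen from the links in the current representation alone, with no memory of earlier requests, so any steady state can be understood and resumed in isolation.

```java
import java.util.Map;
import java.util.Optional;

public class HypermediaClient {
    // Given only the current representation's links (relation -> URI),
    // select the URI for the desired transition. Nothing here depends
    // on how the client arrived at this state.
    public static Optional<String> nextRequest(Map<String, String> links,
                                               String wantedRel) {
        return Optional.ofNullable(links.get(wantedRel));
    }
}
```

A stateful file-descriptor-style client would instead need a field remembering "the file is open"; this one needs no field at all.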
algermissen1971 wrote: > --- In rest-discuss@yahoogroups.com, Mike Kelly <mike@...> wrote: > >> algermissen1971 wrote: >> >>> When a client is in a steady state, what makes up the meaning of the state (what information does the client use to determine the meaning)? >>> >>> a) The current representation (plus the understanding of the media types) >>> b) The current representation (plus the understanding of the media types) and the request the client made (the knowledge of the semantics of the resource that the representation has been obtained from) >>> c) b) and knowledge about all the requests it made before (this is wrong from a REST POV, but I include it for completeness) >>> >>> >> Why is c necessarily wrong from a REST POV? >> > > Because the meaning of a given state in a RESTful application does not depend on whatever happened before. > The client/application state.. ? I'm not so sure given that 'hypertext is the engine of application state' - the only way for the client to understand the 'meaning' of its current state is in the context of the application flow (i.e. series of link relations) which led up to it. - Mike
(response to off-list message about off-list messages and rest-discuss) I had the same problem here, I forgot how I solved it, but now I can't use the Web interface -- only e-mail -- to post to rest-discuss. I wish I could remember, seeing as how you're stuck with the opposite. I'm going to hold off on answering your question until I have the demo I've been working on furiously for the past two weeks, posted. It addresses a variety of issues regarding REST, XForms, and client-side XSLT. Due to the demo's nature as client-side XSLT, I'm wearing my application logic on my sleeve as hypertext (which goes beyond the hypertext constraint, as opposed to breaking said constraint). The application logic, whether the XSLT transformation is run on the client or the server, is entirely based on link relations. In fact, the easiest part of distilling out a static example from my dynamic project turned out to be dismantling content negotiation. All I did was edit my Atom source files' link relation @hrefs by adding filename extensions. Which frees the demo up to run a directory browser so you can introspect my files, including the content of dot files. Since the application logic is based on link relations, changing the URLs in the source Atom files also changed the URLs in the XHTML output without my having to change the XSLT stylesheet. I think my demo will answer a lot of the questions you've been asking lately about machine clients, or at least give you plenty of food for thought. Bear with me for a few days, I almost posted it already but I've torn the httpd apart again... the httpd running the demo is a FOSS project I've adopted and adapted to be a prototyping framework for REST development. The source code for the httpd is posted as part of the demo. Mostly not my httpd code, but everything else is, save a line of XSLT and an XPath statement my partner contributed when I got stuck. 
While it looks like the same demo I've been posting occasionally for 3 1/2 years, there came a point where I took off my architect hat and decided to write the damn code myself, after failing to explain to many different coders how to base application logic on link relations. I'll put my markup and CSS up against anyone's, and with more practice I'll be saying that about my XSLT as well. This should lay to rest the recurring concerns about my skillz every time I try to contribute something of my knowledge (Web developer since Dec. 1993) somewhere. Like yesterday on www-tag when I was more-or-less accused of inventing the Content-Script-Type header, in response to my polite suggestion that it be considered within the context of an ongoing discussion. Sigh. Of course, my demo implements said header, and shows just exactly how it can be used to solve the issue of distinguishing between scripted and unscripted (X)HTML, and the issue of privilege escalation that I'm patiently trying to explain to W3C folks where media type identifiers other than text/plain are concerned. Jan, I really should point out that you're saying "media type" where you really mean "media type identifier", but the two are not the same thing. I've only recently decided that pointing this out isn't nit-picking, but crucial to understanding Web architecture (which REST is a subset of). Anyway, I'll save my elaboration on why media type identifiers only describe containers for some other time. You're leaning in a direction I reject out of hand, which is that a media type identifier is a contract. Contracts reside within the containers. -Eric
Sorry for the double-post, list. I think y'all know how it goes with rest-discuss posting some days, right? ;-) -Eric
On Jan 24, 2010, at 2:07 PM, Mike Kelly wrote: > the only way for the client to understand the 'meaning' of its > current state is in the context of the application flow (i.e. series > of link relations) which led up to it. Correct me, but this is exactly what REST prevents. A client can use the URI of any steady-state and just proceed through the application from that point on without the need for any knowledge about prior interactions. If it can't, the representation is just badly designed. (Which makes a nice principle for media type design) Jan ----------------------------------- Jan Algermissen, Consultant Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
I have a representation with a collection; each item of the collection has a rel="itself" link targeting its details representation.... example: the collection: http://fgaucho.dyndns.org:8080/arena-http/competition following the itself link: http://fgaucho.dyndns.org:8080/arena-http/competition/_1929737737 Question: in this second URI, should the link disappear? * I am trying to map the state of the resources directly in the database, and that's why the question. It is hard to make it truly dynamic in the database.. and I hate to waste server time adding and removing links to the resource representations :) * It is possible to have a second list in the sub-resource table in the database, basically with the same links but with different names just to fit the style.......... -- ------------------------------------------ Felipe Gacho 10+ Java Programmer CEJUG Senior Advisor
Jan Algermissen wrote: > On Jan 24, 2010, at 2:07 PM, Mike Kelly wrote: > > >> the only way for the client to understand the 'meaning' of its >> current state is in the context of the application flow (i.e. series >> of link relations) which led up to it. >> > > > Correct me, but this is exactly what REST prevents. A client can use > the URI of any steady-state and just proceed through the application > from that point on without the need for any knowledge about prior > interactions. If it can't, the representation is just badly designed. > Are we drawing a distinction here between steady-state and entry-point? From my understanding - you are describing the latter, but not necessarily the former. I'm thinking purely from a client perspective. A server shouldn't be aware of any such 'context' since there is no shared state. - Mike
On Jan 24, 2010, at 4:46 PM, Mike Kelly wrote: > Jan Algermissen wrote: >> On Jan 24, 2010, at 2:07 PM, Mike Kelly wrote: >> >> >>> the only way for the client to understand the 'meaning' of its >>> current state is in the context of the application flow (i.e. >>> series of link relations) which led up to it. >>> >> >> >> Correct me, but this is exactly what REST prevents. A client can >> use the URI of any steady-state and just proceed through the >> application from that point on without the need for any knowledge >> about prior interactions. If it can't, the representation is just >> badly designed. >> > > Are we drawing a distinction here between steady-state and entry-point? Hmm, IMHO each steady state is a potential entry point. Jan > From my understanding - you are describing the latter, but not > necessarily the former. > > I'm thinking purely from a client perspective. A server shouldn't be > aware of any such 'context' since there is no shared state. > > - Mike ----------------------------------- Jan Algermissen, Consultant Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
On Jan 24, 2010, at 4:22 PM, Felipe Gacho wrote: > I have a representation with a collection, each item of the collection > has a rel="itself" link targeting its details representation.... > > example: > > the collection: http://fgaucho.dyndns.org:8080/arena-http/competition > > following the itself link: > http://fgaucho.dyndns.org:8080/arena-http/competition/_1929737737 Do you mean rel="self"? > > Question: in this second URI, should the link disappear? Do you mean in the representation of the entry? That's up to you. It might be useful in scenarios where the entry XML ends up on disk (when you lose the request URI); then the self link tells you where the entry XML came from. > > * I am trying to map the state of the resources directly in the > database, and that's why the question. It is hard to make it truly > dynamic in the database.. In my experience, it is often a waste of time to try to be very dynamic/general/automatic. Just ask the underlying DAO for the data and populate the Atom entry object 'by hand'. That's fine. > and I hate to waste server time adding and > removing links to the resource representations :) Hmm - are you sure populating another XML element is significant in a networked environment (the network call consuming orders of magnitude more CPU cycles than adding the element)? > * It is possible to have a second list in the sub-resource table in > the database, basically with the same links but with different names > just to fit the style.......... I do not understand. Can you explain? HTH, Jan > > -- > ------------------------------------------ > Felipe Gacho > 10+ Java Programmer > CEJUG Senior Advisor ----------------------------------- Jan Algermissen, Consultant Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
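Jan's save-to-disk argument can be made concrete with a minimal sketch (class and method names invented; only the rel="self" convention and the Atom namespace are standard): an entry carrying its own self link still identifies its origin after the request URI is gone.

```java
public class EntryLinks {
    // Build an Atom-style entry that carries its own origin URI.
    public static String entryWithSelf(String selfUri, String title) {
        return "<entry xmlns=\"http://www.w3.org/2005/Atom\">"
            + "<link rel=\"self\" href=\"" + selfUri + "\"/>"
            + "<title>" + title + "</title></entry>";
    }

    // Recover the origin URI from a stored entry. Naive string scanning
    // keeps the sketch short; a real client would use an XML parser.
    public static String selfHref(String entryXml) {
        String marker = "rel=\"self\" href=\"";
        int start = entryXml.indexOf(marker) + marker.length();
        return entryXml.substring(start, entryXml.indexOf('"', start));
    }
}
```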
On Jan 24, 2010, at 4:51 PM, Jan Algermissen wrote: > > On Jan 24, 2010, at 4:46 PM, Mike Kelly wrote: > >> Jan Algermissen wrote: >>> On Jan 24, 2010, at 2:07 PM, Mike Kelly wrote: >>> >>> >>>> the only way for the client to understand the 'meaning' of its >>>> current state is in the context of the application flow (i.e. >>>> series of link relations) which led up to it. >>>> >>> >>> >>> Correct me, but this is exactly what REST prevents. A client can >>> use the URI of any steady-state and just proceed through the >>> application from that point on without the need for any knowledge >>> about prior interactions. If it can't, the representation is just >>> badly designed. >>> >> >> Are we drawing a distinction here between steady-state and entry-point? > > Hmm, IMHO each steady state is a potential entry point. I knew it wasn't my clever observation but that Roy wrote it somewhere... here is the link: <http://tech.groups.yahoo.com/group/rest-discuss/message/5841> Jan > > Jan > > >> From my understanding - you are describing the latter, but not >> necessarily the former. >> >> I'm thinking purely from a client perspective. A server shouldn't be >> aware of any such 'context' since there is no shared state. >> >> - Mike ----------------------------------- Jan Algermissen, Consultant Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
> Do you mean rel="self"? yes... more or less... because it is inspired by AtomPub but it is not AtomPub... it is my domain-specific link... I eventually can use Atom instead of a proprietary one, but in general my application doesn't use Atom ........ > Do you mean in the representation of the entry? That's up to you. It might > be useful in scenarios where the entry XML ends up on disk (when you lose > the request URI); then the self link tells you where the entry XML came from. ok....... > In my experience, it is often a waste of time to try to be very dynamic/general/automatic. Just ask the underlying DAO for the data and populate the Atom entry object 'by hand'. That's fine. the problem here is: to maintain a large amount of code just to do these copies... it seems fine in a small example, but for hundreds of entities that will eventually be modified in the future it is a mess to have copies and transformations between the DAO and the HTTP layer... I created a small framework to traverse the data without extra processing.... > Hmm - are you sure populating another XML element is significant in a networked environment ... not significant in terms of processing, but for sure in terms of software maintenance... > I do not understand. Can you explain? I have an entity with a collection field, and I have a sub-entity that inherits the collection from its parent. I can eventually have a second collection in the subclass, overwriting the parent one, just for changing the names, but I now see it will add more problems than solutions.. thanks for the tips.......... * after the presentation at Jfokus I will publish the slides and also some blogs to show what I have here.. for now I am still focusing on a good presentation at Stockholm... any of you at Jfokus, Sweden this week, please tell me and we can share a beer or two over REST topic :)
On Jan 24, 2010, at 7:10 PM, Felipe Gacho wrote: >> Do you mean rel="self"? > > yes... more and less.. because it is inspired in the atom pub but it > is not atom pub.. it is my domain specific link... > You can use 'self' even if you do not use Atom. Link relations are independent of the kind of representation they occur in. > I eventually can use atom instead of a proprietary one.. but in > general my application doesn't use atom ........ > > >> Do you mean in the representation of the entry? That's up to you. >> It might >> be useful in scenarios where the entry XML ends up on disk (when >> you loose >> the request URI) then the self tells you where the entry XML came >> from. > > ok....... > >> In my experience, it is often a waste of time to try to be very >> dynamic/general/automatic. Just ask the underlying DAO for the data >> and pupulate the Atom entry object 'by hand'. That's fine. > > the problem here is: to maintain a large amount of code just to do > this copies .. it seems fine in a small example, but for hundreds of > entities that eventually will be modified in the future is a mess to > have copy and transformations between the DAo and the HTTP layer...... Factor out a super class... > I created a small framework to traverse the data without extra > processing.... > > >> Hmm - are you sure populating another XML element is significant in >> a networked environment ... > > not significant in terms of processing, but for sure in terms of > software maintenance... > > >> Do not understand. Can you explain? > > > I have an entity with a collection field. and I have a sub-entity that > inherits the collection from its parent. I can eventually have a > second collection in the subclass, overwriting the parent one, just > for changing the names.. but I am now see it will add more problems > than solutions.. > > thanks for the tips.......... > > * after the presentation at Jfokus I will publish the slides and also > some blogs to show what I have here.. 
> for now I am still focusing on a > good presentation at Stockholm... Good luck. > > any of you at Jfokus, Sweden this week, please tell me and we can > share a beer or two over REST topic :) > Thanks, but, sorry, I am not there.. Jan ----------------------------------- Jan Algermissen, Consultant Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
> > > > * I am trying to map the state of the resources direct in the > > database, and that's why the question. It is hard to make it truly > > dynamic in the database.. > You can use non-SQL databases which are -usually- better suited for dynamic structures. > In my experience, it is often a waste of time to try to be very > dynamic/general/automatic. Just ask the underlying DAO for the data > and populate the Atom entry object 'by hand'. That's fine. > I partially agree here: while default behavior is overridden in a lot of applications, if well thought of, it can be used out of the box without much of a config: i.e. XStream's defaults were a milestone in XML serialization library history for Java. Regards
> You can use non-sql databases which are -usually- better suited for dynamic structures.
You mean an OO DB? I evaluated that option, but unfortunately the
community around such tools is too small :) but it is always an
option....

The point here is not how you persist the data but how you change it
on the fly... (unless I can have wildcards in a type of database)
> I partially agree here: while default behavior is overridden in a lot of applications, if well thought out, it can be used out of the box without much configuration: e.g. XStream's defaults were a milestone in the history of XML serialization libraries for Java.
please point to an example...
my current code for reading a resource:
@Override
@TransactionAttribute(TransactionAttributeType.NOT_SUPPORTED)
public T read(Class<T> type, final Object primaryKey)
        throws IllegalStateException, IllegalArgumentException {
    return manager.find(type, primaryKey);
}
and the code to expose it on the web:
@GET
@Produces({ MediaType.APPLICATION_XML, MediaType.APPLICATION_JSON })
@Path("{name}")
public PujCompetitionDetailsEntity read(@PathParam("name") String name) {
    return detailsFacade.read(PujCompetitionDetailsEntity.class, name);
}
Thanks for all the replies. (I tried to post a follow-up message, but for some reason it did not make it onto the list. Probably messed up somehow.)

So, I agree (obviously) that it is more an implementation problem than an architectural one. That said, eventually architectures expose most of their value by being implemented :), so to rephrase my question: does anyone have any pointers to information that deals with implementing fine-grained authorization at the REST level? The environment is not relevant for me right now, as I am just trying to grasp possible abstractions that make sense in this context.

Personally I am trying to get something going by combining JSR 311 with Spring Security, but it gets a bit hairy if one wants to keep authorization a cross-cutting concern that is mainly specified declaratively. Furthermore, I don't have a good way of coping with, e.g., a PUT of a resource where an attribute is not provided. How do I distinguish an attempt to 'nullify' that attribute from a client that is PUTting the resource without having access to it, while at the same time keeping the authorization code out of the REST processing logic?

Any help warmly appreciated!

Peter

--- In rest-discuss@yahoogroups.com, Jim Webber <jim@...> wrote:
>
> Hi Peter,
>
> Will's right when he says "REST" has nothing to do with this. But there are regular ways of dealing with this using HTTP.
>
> > - The price of a given order item should be 'readable' by the 'creator' of the order item while an 'administrator' should be able to update the price
>
> Using one of the HTTP authentication mechanisms (or OpenID, or even OAuth), the creator will happily receive 401 when applying some verbs (e.g. PUT, DELETE) to a resource (e.g. pricing) but find they get 200 OK responses to other verbs (e.g. GET). The administrator might find they receive 200 OK responses to all of those operations.
> > - A collection of order states (not a separate resource but part of the order) should be readable by the 'creator'. An 'administrator' should be able to add states
>
> How the order is projected onto the Web in terms of resources is an implementation decision. You might choose to have the collection of order states physically written down on the same piece of paper in your back end, but you could still happily project them as different logical resources onto the Web. Hypermedia is your friend should you want to link them together. E.g.:
>
> Administrator: GET /order/1234
> 200 OK....
> <order>
>   <atom:link href="http://myservice/order/1234/state" rel="order-state" type="application/x.ordering+xml"/>
>   <line-item> ... </line-item>
> </order>
>
> Creator: GET /order/1234
> 200 OK...
> <order>
>   <line-item> ... </line-item>
> </order>
>
> Creator, after some creative thought, tries: GET http://myservice/order/1234/state
> 401 Unauthorized...
>
> Nonetheless, administrators will have more chance of a 200 OK response than creators, just as above.
>
> Does that help at all?
>
> Jim
>
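Jim's point boils down to a tiny rule table: authorization simply shows up as different status codes per verb. A hedged sketch of that idea (the roles and the rule table are invented for illustration; the 401 follows Jim's example):

```java
// Toy illustration of per-verb authorization: the same resource
// answers with different status codes depending on role and verb.
// Roles and the allowed-verb sets are hypothetical.
import java.util.Set;

final class PricingAuthorizer {
    // creators may only read the price; administrators may also change it
    static int statusFor(String role, String httpMethod) {
        Set<String> adminVerbs = Set.of("GET", "PUT", "DELETE");
        Set<String> creatorVerbs = Set.of("GET");
        Set<String> allowed = role.equals("administrator") ? adminVerbs : creatorVerbs;
        return allowed.contains(httpMethod) ? 200 : 401;
    }
}
```

The client never needs to know the policy up front; it simply discovers it through the responses.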
> please point to an example...
Convention over Configuration: Erlang's OTP relies on default function names
in order to work; otherwise you have to wire things up on your own. Rails is
the best-known example, built around a lot of conventions.

For the Java platform, Restfulie follows this path in the controller: the
same JAX-RS code you mentioned could, in Restfulie for Java, become:
public T read(Object key) {
    return manager.find(key);
}

public T retrieve(T object) {
    return dao.read(object.getPrimaryKey());
}
Of course, it depends on a lot of conventions - but they make your code much
simpler:
a) the DAO object knows how to handle types generically
b) the controller methods that know how to handle resources have some
default names (retrieve, update, create, delete, head, ...)

All of this can be implemented - including removing both methods and
requiring zero code for this specific behavior, if that's the conventional
one.
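What such a convention-driven dispatcher might do under the hood - only a sketch of the idea, not Restfulie's actual implementation, and all names invented:

```java
// Sketch of convention over configuration: find the controller
// method by its conventional name ("read", "create", ...) via
// reflection, so no per-method routing annotations are needed.
import java.lang.reflect.Method;

final class ConventionDispatcher {
    // maps an action name to the identically-named controller method
    static Object dispatch(Object controller, String action, Object arg) throws Exception {
        Method handler = controller.getClass().getMethod(action, Object.class);
        return handler.invoke(controller, arg);
    }
}

class CompetitionController {
    // conventional name: "read" handles a GET by primary key
    public Object read(Object key) {
        return "competition:" + key;
    }
}
```

With a convention like this, adding a new entity means adding only the controller method, not any routing code.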
cheers
Guilherme Silveira
Caelum | Ensino e Inovação
http://www.caelum.com.br/
my current code for reading a resource:
>
> @Override
> @TransactionAttribute(TransactionAttributeType.NOT_SUPPORTED)
> public T read(Class<T> type, final Object primaryKey)
> throws IllegalStateException, IllegalArgumentException {
> return manager.find(type, primaryKey);
> }
>
> and the code to expose it on the web:
>
> @GET
> @Produces( { MediaType.APPLICATION_XML, MediaType.APPLICATION_JSON })
> @Path("{name}")
> public PujCompetitionDetailsEntity read(@PathParam("name") String name) {
> return detailsFacade.read(PujCompetitionDetailsEntity.class, name);
> }
>
>
>
Hi all,

As Jan suggested in the previous post http://tech.groups.yahoo.com/group/rest-discuss/message/14627, let's use the REST wiki for collecting interesting REST-related links and distributing them on a weekly basis.

There are currently two wiki pages:

1. The list of all weekly summaries at http://rest.blueoxen.net/cgi-bin/wiki.pl?RESTWeekly. Every week on Monday, I (or someone else) will create a new page for that week and add it to the list.
2. The wiki page for this week is at http://rest.blueoxen.net/cgi-bin/wiki.pl?RESTWeekly_Jan_25_2010

So, if we find anything REST-related worth reading - just add a link to the wiki page, and I'll post the links to this group when the week is over, and perhaps create a blog for feed-based distribution.

Cheers!
Ivan
just for curiosity... (if you posted a real code)....
- the code is digesting the exceptions ??
- how your code knows the return type (json, xml, xhtml, etc)?
as far as I see, these are the only differences between the two pieces of code..
* yes, I can use that trick to inspect the generic type, but I preferred to
pass it as a reference due to other purposes ...
2010/1/25 Guilherme Silveira <guilherme.silveira@...>
> > please point to an example...
> Convention over Configuration: Erlang's OTP relies on default function names
> in order to work; otherwise you have to wire things up on your own. Rails is
> the best-known example, built around a lot of conventions.
>
> For the Java platform, Restfulie follows this path in the controller: the
> same JAX-RS code you mentioned could, in Restfulie for Java, become:
>
> public T read(Object key) {
> return manager.find(key);
> }
>
> public T retrieve(T object) {
> return dao.read(object.getPrimaryKey());
> }
>
>
> Of course, it depends on a lot of conventions - but they make your code much
> simpler:
> a) the DAO object knows how to handle types generically
> b) the controller methods that know how to handle resources have some
> default names (retrieve, update, create, delete, head, ...)
>
> All of this can be implemented - including removing both methods and
> requiring zero code for this specific behavior, if that's the conventional
> one.
>
> cheers
>
> Guilherme Silveira
> Caelum | Ensino e Inovação
> http://www.caelum.com.br/
>
>
> my current code for reading a resource:
>>
>> @Override
>> @TransactionAttribute(TransactionAttributeType.NOT_SUPPORTED)
>> public T read(Class<T> type, final Object primaryKey)
>> throws IllegalStateException, IllegalArgumentException {
>> return manager.find(type, primaryKey);
>> }
>>
>> and the code to expose it on the web:
>>
>> @GET
>> @Produces( { MediaType.APPLICATION_XML, MediaType.APPLICATION_JSON })
>> @Path("{name}")
>> public PujCompetitionDetailsEntity read(@PathParam("name") String name) {
>> return detailsFacade.read(PujCompetitionDetailsEntity.class, name);
>> }
>>
>>
>>
>
>
--
------------------------------------------
Felipe Gacho
10+ Java Programmer
CEJUG Senior Advisor
Hello Felipe!
> - the code is digesting the exceptions ??

RuntimeExceptions are hidden there

> - how your code knows the return type (json, xml, xhtml, etc)?

Content-negotiation, the server knows which types it can handle..
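Roughly, "the server knows which types it can handle" means matching the request's Accept header against the types the resource can produce. A toy sketch of that matching (real JAX-RS/Jersey also honours q-values and wildcards, which this version ignores):

```java
// Minimal content negotiation: return the first media type in the
// Accept header that the resource can produce, or null (which a
// server would map to 406 Not Acceptable).
import java.util.List;

final class ContentNegotiator {
    static String negotiate(String acceptHeader, List<String> producible) {
        for (String candidate : acceptHeader.split(",")) {
            // drop any parameters such as ";q=0.8"
            String type = candidate.trim().split(";")[0];
            if (producible.contains(type)) return type;
        }
        return null;
    }
}
```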
Regards
Guilherme Silveira
Caelum | Ensino e Inovação
http://www.caelum.com.br/
2010/1/25 Felipe Gacho <fgaucho@...>
>
>
> just for curiosity... (if you posted a real code)....
>
> - the code is digesting the exceptions ??
> - how your code knows the return type (json, xml, xhtml, etc)?
>
> as fair I see this are the only difference between the two codes..
>
> * yes, I can use that trick to inspect the generic type, but I preferred to
> pass as reference due to other purposes ...
>
>
> 2010/1/25 Guilherme Silveira <guilherme.silveira@...>
>
> > please point to an example...
>> Convention over Configuration: Erlang's OTP relies on default function
>> names in order to work; otherwise you have to wire things up on your own. Rails is
>> the best-known example, built around a lot of conventions.
>>
>> For the Java platform, Restfulie follows this path in the controller: the
>> same JAX-RS code you mentioned could, in Restfulie for Java, become:
>>
>> public T read(Object key) {
>> return manager.find(key);
>> }
>>
>> public T retrieve(T object) {
>> return dao.read(object.getPrimaryKey());
>> }
>>
>>
>> Of course, it depends on a lot of conventions - but they make your code
>> much simpler:
>> a) the DAO object knows how to handle types generically
>> b) the controller methods that know how to handle resources have some
>> default names (retrieve, update, create, delete, head, ...)
>>
>> All of this can be implemented - including removing both methods and
>> requiring zero code for this specific behavior, if that's the conventional
>> one.
>>
>> cheers
>>
>> Guilherme Silveira
>> Caelum | Ensino e Inovação
>> http://www.caelum.com.br/
>>
>>
>> my current code for reading a resource:
>>>
>>> @Override
>>> @TransactionAttribute(TransactionAttributeType.NOT_SUPPORTED)
>>> public T read(Class<T> type, final Object primaryKey)
>>> throws IllegalStateException, IllegalArgumentException {
>>> return manager.find(type, primaryKey);
>>> }
>>>
>>> and the code to expose it on the web:
>>>
>>> @GET
>>> @Produces( { MediaType.APPLICATION_XML, MediaType.APPLICATION_JSON })
>>> @Path("{name}")
>>> public PujCompetitionDetailsEntity read(@PathParam("name") String name) {
>>> return detailsFacade.read(PujCompetitionDetailsEntity.class, name);
>>> }
>>>
>>>
>>
>
>
> --
> ------------------------------------------
> Felipe Gacho
> 10+ Java Programmer
> CEJUG Senior Advisor
>
>
>
> > - the code is digesting the exceptions ??
> RuntimeExceptions are hidden there

ok, but what is returned to the client in case of such exceptions?

> > - how your code knows the return type (json, xml, xhtml, etc)?
> Content-negotiation, the server knows which types it can handle..

ok, like default Jersey.. (annotating XML and JSON for Jersey is redundant, but I got into the habit of including the annotations explicitly anyway.. I just type them without thinking whether I should or not.. :) .. there are a lot of unnecessary annotations in my code, a way of keeping myself aware of the code's surroundings......)

Restfulie sounds pretty cool.. and how about the HATEOAS part? the dynamic part... how do you add or remove links from the representations? is it done programmatically?
On Jan 25, 2010, at 7:18 AM, Felipe Gacho wrote:
> @GET
> @Produces( { MediaType.APPLICATION_XML, MediaType.APPLICATION_JSON })
As a side note (in order to promote RESTful REST):
You should not use application/xml or application/json because these
violate REST's message self descriptiveness constraint (unless you
really only want to transfer XML or JSON without any additional, out-
of-band semantics).
Instead, use specific media types (even if you have to mint your own)
or at least use a profile parameter[1] on the generic media type to
put the out-of-band information in-band.
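Concretely, the two options could appear as @Produces values like these (the "arena" vocabulary is Felipe's hypothetical domain, and the profile URI is made up for the sketch):

```java
// The two alternatives to a bare application/xml that Jan mentions:
final class ArenaMediaTypes {
    // option 1: mint a specific media type for the vocabulary
    static final String ARENA_XML = "application/arena+xml";

    // option 2: keep the generic type, but name the vocabulary
    // in-band via a profile parameter
    static final String XML_WITH_PROFILE =
        "application/xml;profile=\"http://example.org/arena\"";
}
```

Either way, the message now says what its content means, not just how it is serialized.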
> @Path("{name}")
> public PujCompetitionDetailsEntity read(@PathParam("name") String
> name) {
> return detailsFacade.read(PujCompetitionDetailsEntity.class, name);
> }
Jan
[1] <http://tech.groups.yahoo.com/group/rest-discuss/message/14612>
-----------------------------------
Jan Algermissen, Consultant
Mail: algermissen@...
Blog: http://www.nordsc.com/blog/
Work: http://www.nordsc.com/
-----------------------------------
yes, yes, I learned that recently and I plan to change the plain:
application/xml
for something like:
application/arena+xml
2010/1/25 Jan Algermissen <algermissen1971@...>:
>
> On Jan 25, 2010, at 7:18 AM, Felipe Gacho wrote:
>
>> @GET
>> @Produces( { MediaType.APPLICATION_XML, MediaType.APPLICATION_JSON })
>
> As a side note (in order to promote RESTful REST):
>
> You should not use application/xml or application/json because these violate
> REST's message self descriptiveness constraint (unless you really only want
> to transfer XML or JSON without any additional, out-of-band semantics).
>
> Instead, use specific media types (even if you have to mint your own) or at
> least use a profile parameter[1] on the generic media type to put the
> out-of-band information in-band.
>
>
>> @Path("{name}")
>> public PujCompetitionDetailsEntity read(@PathParam("name") String name) {
>> return detailsFacade.read(PujCompetitionDetailsEntity.class, name);
>> }
>
>
> Jan
>
>
> [1] <http://tech.groups.yahoo.com/group/rest-discuss/message/14612>
>
>
>
>
> -----------------------------------
> Jan Algermissen, Consultant
>
> Mail: algermissen@...
> Blog: http://www.nordsc.com/blog/
> Work: http://www.nordsc.com/
> -----------------------------------
>
>
>
>
--
------------------------------------------
Felipe Gacho
10+ Java Programmer
CEJUG Senior Advisor
--- In rest-discuss@yahoogroups.com, Jan Algermissen <algermissen1971@...> wrote: > > > On Jan 24, 2010, at 4:51 PM, Jan Algermissen wrote: > > > > > On Jan 24, 2010, at 4:46 PM, Mike Kelly wrote: > > > >> Jan Algermissen wrote: > >>> On Jan 24, 2010, at 2:07 PM, Mike Kelly wrote: > >>> > >>> > >>>> the only way for the client to understand the 'meaning' of its > >>>> current state is in the context of the application flow (i.e. > >>>> series of link relations) which led up to it. > >>>> > >>> > >>> > >>> Correct me, but this is exactly what REST prevents. A client can > >>> use the URI of any steady-state and just proceed through the > >>> application from that point on without the need for any knowledge > >>> about prior interactions. If it can't, the representation is just > >>> badly designed. > >>> > >> > >> Are we drawing a distinction here between steady-state and entry- > >> point? > > > > Hmm, IMHO each steady state is a potential entry point. > > I knew it wasn't my clever observation but that Roy wrote it > somewhere... here is the link: > <http://tech.groups.yahoo.com/group/rest-discuss/message/5841> > Another relevant Roy writing (wow this list feels like theological discussions referencing Bible passages sometimes!) is in section 6.3.4.2 of his thesis where he talks about cookies messing up the back button. Your application state machine should allow arbitrary jumps (via "typed in" urls, back buttons, etc.) and so that implies that the application state shouldn't depend on the path taken. At the same time I can think there is "client state" that is independent of "application state". For example, the size of the window in a browser, the content that an atompub client is trying to publish are some examples. This client state affects the execution of the application, combining with the application state to affect client behavior. e.g. 
the application could display differently depending on the window size, or the AtomPub client could pick different feeds from the service doc based on client state.

I think what is important is that the application state should never rely on the client state. One possible approach to ensuring this is to enforce that the application state never affects/changes the client state, but I don't think this is necessary or realistic. For example, the content that a client is trying to publish might depend on something that it previously fetched. The HTML5 client-side SQL store is another example of mutable client state (I think).

So I think that application state can affect the client state, but as long as your application states never depend on the client state, you can jump around to arbitrary application states without things breaking. This does constrain how you do things, though; for example, you have to write your HTML5 application not to depend on the state of client-side SQL tables.

I'm making up my own terminology here, as I'm not aware of a precedent. And maybe this "client state" is really part of the application state -- but that doesn't seem right to me and doesn't match my experience.

Regards,

Andrew
Andrew:

I treat client state as volatile, optional data - similar to cached data. IOW, I write my code to work with any existing client state if it's available and, if not, to lead the client into creating the needed state to complete the operation.

A shopping cart is an example. When a client visits a URI, the server might check to see if there is any client state data (stored selections in the cart) to display. If not, the client is offered the proper link(s) to create some cart data.

If I have any multiple-step operations (a la "Wizard UI"), I'll inspect the client state on each step to see where in the process they currently are. If no state is available (or some of the data is missing or has expired), the client is directed to the proper URI (possibly the start URI) and led along as needed.

If a client attempts a write operation on the server (POST/PUT) without the required client state data (missing or invalid state data), the client is given the appropriate response code and offered one or more URIs to help deliver the expected data before re-attempting the write operation.

The "client state data" might be complete data on the client (for custom clients) or a cookie identifier/authentication identifier (for common Web browsers). In the case of common browsers, I store any complex client state data on the server as a user resource (instead of a user-agent resource) to get around data storage limits in current browsers and to make sure users can log in via different agents and still retrieve their data. Now that local data storage APIs are available for scripted browsers, I hope to move some of this data back to the client.

All these examples might include an authentication step, but that's treated as an additional layer on the set of state transitions.

In my experience, adopting these patterns from the outset takes more implementation time, but results in a more stable and, in the long run, more adaptable application.
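The pattern described above can be sketched in a few lines (all class names and link relations are invented; CartResponse is a stand-in for a real HTTP response):

```java
// Sketch of "client state as volatile optional data": if the state
// is missing, answer with a link that leads the client into creating
// it instead of failing outright.
import java.util.Optional;

final class CartResponse {
    final int status;
    final String body;
    CartResponse(int status, String body) { this.status = status; this.body = body; }
}

final class CartHandler {
    static CartResponse get(Optional<String> cartState) {
        if (cartState.isPresent()) {
            return new CartResponse(200, "cart: " + cartState.get());
        }
        // no state yet: offer the link that creates it
        return new CartResponse(200, "<link rel=\"create-cart\" href=\"/cart\"/>");
    }

    static CartResponse put(Optional<String> cartState, String update) {
        if (cartState.isEmpty()) {
            // missing required client state: point the client at the start URI
            return new CartResponse(409, "<link rel=\"start\" href=\"/cart\"/>");
        }
        return new CartResponse(200, "cart: " + update);
    }
}
```

The key property is that every response either completes the operation or hands back a link toward the state the client still needs.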
mca http://amundsen.com/blog/ On Mon, Jan 25, 2010 at 13:44, wahbedahbe <andrew.wahbe@...> wrote: > > > --- In rest-discuss@yahoogroups.com, Jan Algermissen <algermissen1971@...> wrote: >> >> >> On Jan 24, 2010, at 4:51 PM, Jan Algermissen wrote: >> >> > >> > On Jan 24, 2010, at 4:46 PM, Mike Kelly wrote: >> > >> >> Jan Algermissen wrote: >> >>> On Jan 24, 2010, at 2:07 PM, Mike Kelly wrote: >> >>> >> >>> >> >>>> the only way for the client to understand the 'meaning' of its >> >>>> current state is in the context of the application flow (i.e. >> >>>> series of link relations) which led up to it. >> >>>> >> >>> >> >>> >> >>> Correct me, but this is exactly what REST prevents. A client can >> >>> use the URI of any steady-state and just proceed through the >> >>> application from that point on without the need for any knowledge >> >>> about prior interactions. If it can't, the representation is just >> >>> badly designed. >> >>> >> >> >> >> Are we drawing a distinction here between steady-state and entry- >> >> point? >> > >> > Hmm, IMHO each steady state is a potential entry point. >> >> I knew it wasn't my clever observation but that Roy wrote it >> somewhere... here is the link: >> <http://tech.groups.yahoo.com/group/rest-discuss/message/5841> >> > > Another relevant Roy writing (wow this list feels like theological discussions referencing Bible passages sometimes!) is in section 6.3.4.2 of his thesis where he talks about cookies messing up the back button. > > Your application state machine should allow arbitrary jumps (via "typed in" urls, back buttons, etc.) and so that implies that the application state shouldn't depend on the path taken. > > At the same time I can think there is "client state" that is independent of "application state". For example, the size of the window in a browser, the content that an atompub client is trying to publish are some examples. 
This client state affects the execution of the application, combining with the application state to affect client behavior. e.g. the application could display differently depending on the window size or the atompub client could pick different feeds from the service doc based on client state. > > I think was is important is that the application state should never rely on the client state value. One possible approach to ensuring this is to enforce that the application state should never affect/change the client state, but I don't think this is necessary or realistic. For example, the content that a client is trying to publish might depend on something that it previously fetched. The HTML5 client-side SQL store is another example of mutable client state (I think). > > So I think that application state can effect the client state, but as long as your application states never depend on the client state, you can jump around to arbitrary application states without things breaking. This does constrain how you do things though, for example, you have to write your HTML5 application to not depend on the state of client-side SQL tables. > > I'm making up my own terminology here as I'm not aware of a precedent. And maybe, this "client state" is really part of the application state -- but that doesn't seem right to me and doesn't match my experience. > Regards, > > Andrew > > > > ------------------------------------ > > Yahoo! Groups Links > > > >
On Jan 25, 2010, at 7:44 PM, wahbedahbe wrote:

> At the same time I can think there is "client state" that is
> independent of "application state".

Yes. It is (part of) the state of the state machine that is the client-side program (or the human user).

The interesting aspect here is what the appropriate programming model is for aligning the client-side state machine and the user-agent state machine (which is driven by the server). The situation is comparable to a program that is not primarily driven by a GUI but occasionally hands control over to a GUI event-processing loop.

Jan

-----------------------------------
Jan Algermissen, Consultant

Mail: algermissen@...
Blog: http://www.nordsc.com/blog/
Work: http://www.nordsc.com/
-----------------------------------
Felipe, you can also use media profiles:
http://buzzword.org.uk/2009/draft-inkster-profile-parameter-00.html
Guilherme Silveira
Caelum | Ensino e Inovação
http://www.caelum.com.br/
2010/1/25 Felipe Gacho <fgaucho@gmail.com>
>
>
> yes, yes, I learned that recently and I plan to change the plain:
>
> application/xml
>
> for something like:
>
> application/arena+xml
>
> 2010/1/25 Jan Algermissen <algermissen1971@...>:
>
> >
> > On Jan 25, 2010, at 7:18 AM, Felipe Gacho wrote:
> >
> >> @GET
> >> @Produces( { MediaType.APPLICATION_XML, MediaType.APPLICATION_JSON })
> >
> > As a side note (in order to promote RESTful REST):
> >
> > You should not use application/xml or application/json because these
> violate
> > REST's message self descriptiveness constraint (unless you really only
> want
> > to transfer XML or JSON without any additional, out-of-band semantics).
> >
> > Instead, use specific media types (even if you have to mint your own) or
> at
> > least use a profile parameter[1] on the generic media type to put the
> > out-of-band information in-band.
> >
> >
> >> @Path("{name}")
> >> public PujCompetitionDetailsEntity read(@PathParam("name") String name)
> {
> >> return detailsFacade.read(PujCompetitionDetailsEntity.class,
> name);
> >> }
> >
> >
> > Jan
> >
> >
> > [1] <http://tech.groups.yahoo.com/group/rest-discuss/message/14612>
> >
> >
> >
> >
> > -----------------------------------
> > Jan Algermissen, Consultant
> >
> > Mail: algermissen@...
> > Blog: http://www.nordsc.com/blog/
> > Work: http://www.nordsc.com/
> > -----------------------------------
> >
> >
> >
> >
>
> --
> ------------------------------------------
> Felipe Gacho
> 10+ Java Programmer
> CEJUG Senior Advisor
>
>
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

I was curious whether there are opinions regarding the congruence of forms (HTML <form>s) and links. It would seem to me that forms are an important part of hypertext navigation, and could be characterized as user-input-parameterized links, at least with GET-based forms. However, I was curious whether others agree with this perspective. If a form is a type of link, why does HTML not define a "rel" attribute for forms? OpenSearch [1] also defines navigation to URIs that are parameterized by user-supplied data, and it does include a "rel" attribute. From a navigational perspective, is there any fundamental difference between HTML forms and OpenSearch links (other than that HTML obviously prescribes more of the presentation aspect)? If forms can be viewed as link mechanisms, should this only apply to GET-based forms, or can POST-driven, user-parameterized data submission also be considered a form of navigable link?

The reason I ask is that I am trying to feel out whether the approach in JSON Schema of allowing user-parameterized data submission [2] as an extension of a link definition is appropriate. With this approach, links can include additional information, extending them to take user-provided data to form URIs, and to specify the request method. IMO, this is an elegant way of minimizing constructs while still allowing URIs to be parameterized with user input like forms, but I wasn't sure whether others felt this would violate any principles of REST.

[1] http://www.opensearch.org/Specifications/OpenSearch/1.1
[2] http://tools.ietf.org/html/draft-zyp-json-schema-01#section-6.1.1.3

Thanks,

- --
Kris Zyp
SitePen
(503) 806-1841
http://sitepen.com
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.9 (MingW32)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/

iEYEARECAAYFAktgkW0ACgkQ9VpNnHc4zAwykACdHO/ORNeFFy5g07rzy90MzPSr
zPMAoJmmg/UQJizuvys79ULqWfKTKmlY
=4uDy
-----END PGP SIGNATURE-----
On Jan 27, 2010, at 8:18 PM, Kris Zyp wrote:

> I was curious if there are opinions regarding the congruence of forms
> (HTML <form>s) and links. It would seem to me that forms are an
> important part of hypertext navigation, and could be characterized as
> a user-input-parameterized link, at least with GET-based forms.

Yes, exactly.

> However, I was curious if others are in agreement on this perspective.

Yes, I guess everyone is :-)

> If a form is a type of link, why does HTML not define a "rel"
> attribute for forms?

Because it is targeted towards a human user, who can get the 'rel' from the written text. See AtomPub's <collection> and <accept> elements for a machine-readable form.

> OpenSearch [1] also defines navigation to URIs
> that are parameterized by user-supplied data, and it does include a
> "rel" attribute.

AFAIK, the rel attribute has a slightly different meaning in OpenSearch than it does in links.

> From a navigational perspective, is there any
> fundamental difference between HTML form and OpenSearch links (other
> than that HTML obviously prescribes more of the presentation aspect)?

No. Only semantic differences.

> If forms can be viewed as link mechanisms, should this only be applied
> to GET-based forms, or can POST-driven, user-parameterized data also be
> considered a form of a navigable link?

It is equivalent in the sense that it takes you to a next state, but POST requests are not idempotent, and the client must be aware that it changes the state of the server.

> The reason I ask is that I am trying to feel out whether the approach in
> JSON Schema of allowing user-parameterized data submission [2] as an
> extension of a link definition is appropriate. With this approach,
> links can include additional information, extending them to take
> user-provided data to form URIs, and to specify the request method.
> IMO, this is an elegant way of minimizing constructs while still
> allowing URIs to be parameterized with user input like forms, but I
> wasn't sure whether others felt this would violate any principles of REST.

I'd need to look into that proposal, but can't right now.

Jan

> [1] http://www.opensearch.org/Specifications/OpenSearch/1.1
> [2] http://tools.ietf.org/html/draft-zyp-json-schema-01#section-6.1.1.3

-----------------------------------
Jan Algermissen, Consultant

Mail: algermissen@...
Blog: http://www.nordsc.com/blog/
Work: http://www.nordsc.com/
-----------------------------------
Kris: I view FORM elements as templated hypermedia links. in HTML the FORM element is unique as it is the only link element that allows selection of the HTTP method (GET, POST), too. I have built a few internal apps that use a generic <link> element that contains nested <input> elements. This sounds similar to what you are thinking about. FWIW, I use the "rel" attribute on my invented link elements, too. In the end this is about defining a media type. The elements you use and the allowed child elements are up to you. I will say that I've had mixed feelings about allowing a "method" attribute. I've been toying with the idea of detailing any method semantics against either the element itself or a "rel" attribute. mca http://amundsen.com/blog/ On Wed, Jan 27, 2010 at 14:18, Kris Zyp <kris@...> wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA1 > > I was curious if there are opinions regarding the congruence of forms > (HTML <form>s) and links. It would seem to me that forms are an > important part of hypertext navigation, and could be characterized as > a user-input-parameterized link, at least with GET-based forms. > However, I was curious if others are in agreement on this perspective. > If a form is a type of link, why does HTML not define a "rel" > attribute for forms? OpenSearch [1] also defines navigation to URIs > that are parameterized by user-supplied data, but it does include > "rel" attribute. From a navigational perspective, is there any > fundamental difference between HTML form and OpenSearch links (other > than that HTML obviously prescribes more of the presentation aspect)? > If forms can be viewed as link mechanisms, should this only be applied > to GET-based forms, or can POST-driven user parametrized data also be > considered a form of a navigable link? > > The reason I ask, is I am trying to feel out if the approach in JSON > schema, of allowing user-parametrized data submission [2] as an > extension of a link definition is appropriate. 
With this approach, > links can include additional information, extending them to take user > provided data to form URIs, and specify the request method. IMO, this > is an elegant way of minimizing constructs while still allowing for > URIs to be parameterized with user input like forms, but I wasn't sure > if others felt this would violate any principles of REST. > > [1] http://www.opensearch.org/Specifications/OpenSearch/1.1 > [2] http://tools.ietf.org/html/draft-zyp-json-schema-01#section-6.1.1.3 > > Thanks, > > - -- > Kris Zyp > SitePen > (503) 806-1841 > http://sitepen.com
On Jan 27, 2010, at 8:58 PM, mike amundsen wrote: > I will say that I've had mixed feelings about allowing a "method" > attribute. I've been toying with the idea of detailing any method > semantics against either the element itself or a "rel" attribute. Yes, I think so, too. Since the client understands the semantics of the transition it is taking anyway[1] there is usually no need to specify the method in the hypertext. Where it does make sense is in GET vs POST situations as OpenText does because having a link to a result set (GET) and a link to a newly created resource that is the result set (POST) has no immediate impact on how you handle the result set. Jan [1] Except for crawlers, but these may only use GET anyway ----------------------------------- Jan Algermissen, Consultant Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
Kris: <snip> Where it does make sense is in GET vs POST situations as OpenText... </snip> Not sure I follow you there. What I meant is that (in the following examples) adding a "method" attribute does not seem to help anything. In addition, adding the method within the representation may bind the server needlessly. <link rel="search" href="..."> <input name="year" /> <input name="month" /> </link> <link rel="buy" href="..."> <input name="product" /> <input name="quantity" /> </link> <link rel="cancel" href="..."> <input name="order-id" /> </link> Finally, if I understand your example, my first sample markup might support both GET and POST, with POST creating a resource that can be recalled later. mca http://amundsen.com/blog/ On Wed, Jan 27, 2010 at 15:26, Jan Algermissen <algermissen1971@...> wrote: > > On Jan 27, 2010, at 8:58 PM, mike amundsen wrote: > >> I will say that I've had mixed feelings about allowing a "method" >> attribute. I've been toying with the idea of detailing any method >> semantics against either the element itself or a "rel" attribute. > > Yes, I think so, too. Since the client understands the semantics of > the transition it is taking anyway[1] there is usually no need to > specify the method in the hypertext. > Where it does make sense is in GET vs POST situations as OpenText > does because having a link to a result set (GET) and a link to > a newly created resource that is the result set (POST) has no > immediate impact on how you handle the result set. > > Jan > > > > [1] Except for crawlers, but these may only use GET anyway > > > > ----------------------------------- > Jan Algermissen, Consultant > > Mail: algermissen@... > Blog: http://www.nordsc.com/blog/ > Work: http://www.nordsc.com/ > -----------------------------------
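[Editorial aside: the templated-link markup above can be sketched as follows. The URI and input names are hypothetical, GET is assumed for the transition, and Python is used purely for illustration.]

```python
import xml.etree.ElementTree as ET
from urllib.parse import urlencode

# Templated-link markup in the style of mca's examples
# (URI and input names are made up for illustration).
markup = """
<link rel="search" href="http://example.org/orders">
  <input name="year" />
  <input name="month" />
</link>
"""

def expand_link(xml_text, values):
    """Fill the nested <input> slots with user-supplied values and
    return (rel, request URI); GET is assumed for the transition."""
    link = ET.fromstring(xml_text)
    names = [i.get("name") for i in link.findall("input")]
    query = urlencode([(n, values[n]) for n in names])
    return link.get("rel"), link.get("href") + "?" + query

rel, uri = expand_link(markup, {"year": "2010", "month": "01"})
print(rel, uri)  # search http://example.org/orders?year=2010&month=01
```

Note that nothing in the markup names a method: the client picks the transition based on the "rel" semantics, which is exactly the point being debated in this thread.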
On 1/27/2010 1:26 PM, Jan Algermissen wrote: > > > > On Jan 27, 2010, at 8:58 PM, mike amundsen wrote: > > > I will say that I've had mixed feelings about allowing a "method" > > attribute. I've been toying with the idea of detailing any method > > semantics against either the element itself or a "rel" attribute. > > Yes, I think so, too. Since the client understands the semantics of > the transition it is taking anyway[1] there is usually no need to > specify the method in the hypertext. > To make sure I understand correctly, are you suggesting that ideally link relations would specify (in the link registry) which method should be used to navigate to the target URI? So perhaps a "create" relation could be registered that specifies "POST" as the method to use? Would this be problematic for custom relations whose appropriate method the client may not be aware of? Currently, the Atom link registry does not specify any defined methods for any of the existing relations that are registered. Even the "edit" relation doesn't specify a method; it is still just a GETable resource that one can apply the uniform interface to. - -- Kris Zyp SitePen (503) 806-1841 http://sitepen.com
On Jan 27, 2010, at 9:45 PM, mike amundsen wrote: > Kris: > > <snip> > Where it does make sense is in GET vs POST situations as OpenText... Doh - 'OpenSearch' of course > </snip> > Not sure I follow you there. > > What I meant is (in the following examples), adding a "method" > attribute does not seem to help anything. In addition, adding the > method within the representation may bind the server needlessly. > > <link rel="search" href="..."> > <input name="year" /> > <input name="month" /> > </link> > > <link rel="buy" href="..." /> > <input name="product" /> > <input name="quantity " /> > </link> > > <link rel="cancel" href="" /> > <input name="order-id" /> > </link> > > Finally, if i understand your example, my first sample markup might > support both GET and POST w/ POST creating a resource that can be > recalled later. Yes, that is what I meant. Jan > > mca > http://amundsen.com/blog/ > > > > > On Wed, Jan 27, 2010 at 15:26, Jan Algermissen <algermissen1971@... > > wrote: >> >> On Jan 27, 2010, at 8:58 PM, mike amundsen wrote: >> >>> I will say that I've had mixed feelings about allowing a "method" >>> attribute. I've been toying with the idea of detailing any method >>> semantics against either the element itself or a "rel" attribute. >> >> Yes, I think so, too. Since the client understands the semantics of >> the transition it is taking anyway[1] there is usually no need to >> specify the method in the hypertext. >> Where it does make sense is in GET vs POST situations as OpenText >> does because having a link to a result set (GET) and a link to >> a newly created resource that is the result set (POST) has no >> immediate impact on how you handle the result set. >> >> Jan >> >> >> >> [1] Except for crawlers, but these may only use GET anyway >> >> >> >> ----------------------------------- >> Jan Algermissen, Consultant >> >> Mail: algermissen@... 
>> Blog: http://www.nordsc.com/blog/ >> Work: http://www.nordsc.com/ >> ----------------------------------- ----------------------------------- Jan Algermissen, Consultant Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
Kris: I agree - linking methods only to "rel" values can be a problem. However, defining this within the media-type is not. And it may be a case of defining what clients can expect for each method rather than restricting clients to using selected methods: "rel='search'" - GET will return 200 w/ the results - POST will return 201 w/ Location header pointing to the results resource - DELETE returns 200 OK if a valid search resource URI is passed All other methods return 405 mca http://amundsen.com/blog/ On Wed, Jan 27, 2010 at 15:44, Kris Zyp <kris@...> wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA1 > > > > On 1/27/2010 1:26 PM, Jan Algermissen wrote: >> >> >> >> On Jan 27, 2010, at 8:58 PM, mike amundsen wrote: >> >> > I will say that I've had mixed feelings about allowing a "method" >> > attribute. I've been toying with the idea of detailing any method >> > semantics against either the element itself or a "rel" attribute. >> >> Yes, I think so, too. Since the client understands the semantics of >> the transition it is taking anyway[1] there is usually no need to >> specify the method in the hypertext. >> > > To make sure I understand correctly, are you suggesting that ideally > link relations would specify (in the link registry) which method > should be used to navigate to the target URI? So perhaps a "create" > relation could be registered that specifies "POST" as the method to > use? Would this be problematic for custom relations that the client > may not be aware of the appropriate method for? Currently, the Atom > link registry does not specify any defined methods for any of the > existing relations that are registered. Even the "edit" relation > doesn't specify a method, it is still just GETable resource that one > can apply the uniform interface to. 
> > - -- > Kris Zyp > SitePen > (503) 806-1841 > http://sitepen.com > -----BEGIN PGP SIGNATURE----- > Version: GnuPG v1.4.9 (MingW32) > Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ > > iEYEARECAAYFAktgpagACgkQ9VpNnHc4zAxKswCgqEHhKdR8hgWrc8Yiy0ZrH3as > IeYAoKe9tvA7UZDSzGIIO41uQVBy3nYP > =MhL1 > -----END PGP SIGNATURE----- > >
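[Editorial aside: mca's "what clients can expect for each method" idea reads naturally as a lookup table in the media-type spec. A minimal sketch, assuming hypothetical rel names and taking the status codes from the 'search' example above:]

```python
# Per-rel, per-method expectations a media type might document.
# Only 'search' follows the example in the thread; everything else
# falls through to 405 (Method Not Allowed).
EXPECTATIONS = {
    "search": {
        "GET": 200,     # returns the results
        "POST": 201,    # creates a results resource; Location header set
        "DELETE": 200,  # removes a previously created search resource
    },
}

def expected_status(rel, method):
    """Return the documented status for (rel, method), or 405 otherwise."""
    return EXPECTATIONS.get(rel, {}).get(method.upper(), 405)

print(expected_status("search", "GET"))    # 200
print(expected_status("search", "PATCH"))  # 405
```

Kris's objection below is precisely that such a table couples the media type to one protocol's methods and status codes.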
On 1/27/2010 1:56 PM, mike amundsen wrote: > > > Kris: > > I agree - linking methods only to "rel" values can be a problem. > However, defining this within the media-type is not. And it may be > a case of defining what clients can expect for each method rather > than restricting clients to using selected methods: > > "rel='search'" - GET will return 200 w/ the results - POST will > return 201 w/ Location header pointing to the results resource - > DELETE returns 200 OK if a valid search resource URI is passed All > other methods return 405 I thought that ideally relations would be media-type independent (have the same semantics across media types). Furthermore, I would think it would be desirable for media types to generally be protocol independent. Specifying what methods and what status codes are returned for a relation in a media type definition seems like it would violate both of these principles. - -- Kris Zyp SitePen (503) 806-1841 http://sitepen.com
On Jan 27, 2010, at 9:44 PM, Kris Zyp wrote: > On 1/27/2010 1:26 PM, Jan Algermissen wrote: > > On Jan 27, 2010, at 8:58 PM, mike amundsen wrote: > > > I will say that I've had mixed feelings about allowing a "method" > > > attribute. I've been toying with the idea of detailing any method > > > semantics against either the element itself or a "rel" attribute. > > > > Yes, I think so, too. Since the client understands the semantics of > > the transition it is taking anyway[1] there is usually no need to > > specify the method in the hypertext. > > > > To make sure I understand correctly, are you suggesting that ideally > link relations would specify (in the link registry) which method > should be used to navigate to the target URI? Yes. With GET being the default. The relation type describes the semantics of the link target, and these semantics include what kind of domain goals the client can achieve with requests to the target resource. If you see <link rel="cart" href="./shopping-cart"/> and the type spec for 'cart' tells you that a DELETE request on it empties the cart, then you know that DELETE ./shopping-cart achieves the domain goal of emptying the cart. I started to describe that in http://www.nordsc.com/blog/?p=8 (last paragraph, where it says: "express the operations that are currently stated in the form of prose in hypermedia specifications in terms of the above predicate conjunctions and HTTP operations") > So perhaps a "create" > relation could be registered that specifies "POST" as the method to > use? No, link relations denote resource semantics, not actions. The resource semantics include what is achieved by what requests on the resource. If you say <link rel="lock" href="/445/lock"/> these semantics might be that PUT creates the lock and DELETE removes it. > Would this be problematic for custom relations that the client > may not be aware of the appropriate method for?
The client needs to implement the semantics anyway. However, if the HTTP connector implementation you use does not support all methods, dump it and get a better one. > Currently, the Atom > link registry does not specify any defined methods for any of the > existing relations that are registered. If there is no such definition, no domain operations are involved. GET makes sense by default. > Even the "edit" relation > doesn't specify a method, it is still just GETable resource that one > can apply the uniform interface to. No. The semantics of edit and edit-media are defined in the AtomPub specs, and there the effect of PUT and DELETE on either of them is specified. Jan > > - -- > Kris Zyp > SitePen > (503) 806-1841 > http://sitepen.com ----------------------------------- Jan Algermissen, Consultant Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
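[Editorial aside: Jan's point that a client "implements the semantics anyway" can be sketched as a client-side registry mapping each known relation to the domain goals it supports and the HTTP method that achieves each goal. The 'cart' and 'lock' entries follow Jan's examples; the registry itself is a hypothetical construct.]

```python
# A client's built-in knowledge of relation semantics: for each rel,
# which HTTP request achieves which domain goal on the target resource.
REL_SEMANTICS = {
    "cart": {"empty": "DELETE", "view": "GET"},
    "lock": {"acquire": "PUT", "release": "DELETE", "inspect": "GET"},
}

def request_for(rel, goal):
    """Map a (relation, domain goal) pair to the HTTP method to use.
    Plain navigation with GET is always available as the default."""
    return REL_SEMANTICS.get(rel, {}).get(goal, "GET")

print(request_for("cart", "empty"))    # DELETE
print(request_for("lock", "acquire"))  # PUT
print(request_for("unknown", "view"))  # GET
```

An unrecognized rel degrades gracefully to GET, which matches Jan's answer to the "custom relations" question: you can always navigate, but non-safe methods require understanding the relation's spec.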
On Jan 27, 2010, at 9:44 PM, Kris Zyp wrote: > So perhaps a "create" > relation Note that rels like 'edit' 'edit-media' or 'search' do not mean 'edit action here' but 'this is the edit-resource' Jan ----------------------------------- Jan Algermissen, Consultant Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
> Note that rels like 'edit' 'edit-media' or 'search' do not mean 'edit action Yep, verb-noun stuff here. Wondering if there is any place for a verb in Uniform Interface apps? mca http://amundsen.com/blog/ On Wed, Jan 27, 2010 at 16:03, Jan Algermissen <algermissen1971@...> wrote: > > On Jan 27, 2010, at 9:44 PM, Kris Zyp wrote: > >> So perhaps a "create" >> relation > > Note that rels like 'edit' 'edit-media' or 'search' do not mean 'edit action > here' but 'this is the edit-resource' > > Jan > > > > > ----------------------------------- > Jan Algermissen, Consultant > > Mail: algermissen@... > Blog: http://www.nordsc.com/blog/ > Work: http://www.nordsc.com/ > ----------------------------------- > > > >
On Jan 27, 2010, at 10:04 PM, mike amundsen wrote: > Wondering if there is any place for a verb in Uniform Interface apps? Yes, first word of request :-) Sorry, could not resist. Jan > > mca > http://amundsen.com/blog/ > > > > > On Wed, Jan 27, 2010 at 16:03, Jan Algermissen <algermissen1971@... > > wrote: >> >> On Jan 27, 2010, at 9:44 PM, Kris Zyp wrote: >> >>> So perhaps a "create" >>> relation >> >> Note that rels like 'edit' 'edit-media' or 'search' do not mean >> 'edit action >> here' but 'this is the edit-resource' >> >> Jan >> >> >> >> >> ----------------------------------- >> Jan Algermissen, Consultant >> >> Mail: algermissen@... >> Blog: http://www.nordsc.com/blog/ >> Work: http://www.nordsc.com/ >> ----------------------------------- >> >> >> >> > > > ------------------------------------ > > Yahoo! Groups Links > > > ----------------------------------- Jan Algermissen, Consultant Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
<snip> >> Wondering if there is any place for a verb in Uniform Interface apps? > > Yes, first word of request :-) </snip> such a joker! mca http://amundsen.com/blog/ On Wed, Jan 27, 2010 at 16:07, Jan Algermissen <algermissen1971@...> wrote: > > On Jan 27, 2010, at 10:04 PM, mike amundsen wrote: > >> Wondering if there is any place for a verb in Uniform Interface apps? > > Yes, first word of request :-) > > Sorry, could not resist. > > Jan > > >> >> mca >> http://amundsen.com/blog/ >> >> >> >> >> On Wed, Jan 27, 2010 at 16:03, Jan Algermissen <algermissen1971@...> >> wrote: >>> >>> On Jan 27, 2010, at 9:44 PM, Kris Zyp wrote: >>> >>>> So perhaps a "create" >>>> relation >>> >>> Note that rels like 'edit' 'edit-media' or 'search' do not mean 'edit >>> action >>> here' but 'this is the edit-resource' >>> >>> Jan >>> >>> >>> >>> >>> ----------------------------------- >>> Jan Algermissen, Consultant >>> >>> Mail: algermissen@... >>> Blog: http://www.nordsc.com/blog/ >>> Work: http://www.nordsc.com/ >>> ----------------------------------- >>> >>> >>> >>> >> >> >> ------------------------------------ >> >> Yahoo! Groups Links >> >> >> > > ----------------------------------- > Jan Algermissen, Consultant > > Mail: algermissen@... > Blog: http://www.nordsc.com/blog/ > Work: http://www.nordsc.com/ > ----------------------------------- > > > >
-----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 1/27/2010 2:01 PM, Jan Algermissen wrote: > > > > On Jan 27, 2010, at 9:44 PM, Kris Zyp wrote: > > > > On 1/27/2010 1:26 PM, Jan Algermissen wrote: > > > >> On Jan 27, 2010, at 8:58 PM, mike amundsen wrote: > >>> I will say that I've had mixed feelings about allowing a >>> "method" attribute. I've been toying with the idea of detailing >>> any method semantics against either the element itself or a >>> "rel" attribute. > >> Yes, I think so, too. Since the client understands the semantics >> of the transition it is taking anyway[1] there is usually no need >> to specify the method in the hypertext. > > > To make sure I understand correctly, are you suggesting that > ideally link relations would specify (in the link registry) which > method should be used to navigate to the target URI? > >> Yes. With GET being the default. The relation type describes the >> semantics of the link target and these semantics include what >> kind of domain goals the client can achive with requests to the >> target resource. > >> If you see <link rel="cart" href="./shopping-cart"/> and the type >> spec for 'cart' tells you that a DELETE request on it empties the >> cart then you know that > >> DELETE ./shopping-cart achieves the domain goal of emptying the >> cart. > >> I started to describe that in http://www.nordsc.com/blog/?p=8 >> <http://www.nordsc.com/blog/?p=8> > >> (last paragraph where it says: "express the operations that are >> currently stated in the form of prose in hypermedia >> specifications in terms of the above predicate conjunctions and >> HTTP operations") > > So perhaps a "create" relation could be registered that specifies > "POST" as the method to use? > >> No, link relations denote resource semantics, not actions. The > resource >> semantics include what is achived by what requests on the >> resource. 
> I am confused, I thought you were saying that a link relation would define what actions/verbs could be applied to the resource. Would it be better if there was a "collection" relation that said that you could GET and POST to it? >> If you say > >> <link rel="lock" href="/445/lock"/> these semantics might be that >> PUT creates the lock and DELETE removes it. > > Would this be problematic for custom relations that the client may > not be aware of the appropriate method for? > >> The client needs to implement the semantics anyway. However, if >> the HTTP connector implementation you use does not support all >> methods, dump it and get a better one. I am not sure I understand. If a user agent gives the user a selection between different links/relations to follow, they should still be able to follow the link without understanding the link relation definition, shouldn't they? > > Currently, the Atom link registry does not specify any defined > methods for any of the existing relations that are registered. > >> If there is no such definition, no domain operations are >> involved. GET makes sense by default. > > Even the "edit" relation doesn't specify a method, it is still just > GETable resource that one can apply the uniform interface to. > >> No. The semantics of edit and edit-media are defined in the >> AtomPub specs and there the effect of a PUT and DELETE to either >> of them is specified. The effects described by AtomPub are just reiterating the already prescribed effects defined by the uniform interface of the HTTP protocol, aren't they? I don't see anything here [1] that suggests that a target of the "edit" resource deviates from normal uniform interface behavior.
http://bitworking.org/projects/atom/rfc5023.html#edit-via-PUT - -- Kris Zyp SitePen (503) 806-1841 http://sitepen.com -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.9 (MingW32) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ iEYEARECAAYFAktgrIgACgkQ9VpNnHc4zAwFlgCfX8v+iUHBqv1cJUZpcosLUxut rV0An3522rURI4Hbw4YpNko4PUeeMWPp =Jg5B -----END PGP SIGNATURE-----
On Jan 27, 2010, at 10:13 PM, Kris Zyp wrote: > > On 1/27/2010 2:01 PM, Jan Algermissen wrote: > > > > > > > > On Jan 27, 2010, at 9:44 PM, Kris Zyp wrote: > > > > > > > > On 1/27/2010 1:26 PM, Jan Algermissen wrote: > > > > > > > >> On Jan 27, 2010, at 8:58 PM, mike amundsen wrote: > > > >>> I will say that I've had mixed feelings about allowing a > >>> "method" attribute. I've been toying with the idea of detailing > >>> any method semantics against either the element itself or a > >>> "rel" attribute. > > > >> Yes, I think so, too. Since the client understands the semantics > >> of the transition it is taking anyway[1] there is usually no need > >> to specify the method in the hypertext. > > > > > > To make sure I understand correctly, are you suggesting that > > ideally link relations would specify (in the link registry) which > > method should be used to navigate to the target URI? You can always *navigate* there with GET. If any other method achieves some 'operation' then that must be specified as part of the relation's semantics. (which are the semantics of the target resource in the given context) > > > >> Yes. With GET being the default. The relation type describes the > >> semantics of the link target and these semantics include what > >> kind of domain goals the client can achieve with requests to the > >> target resource. > > > >> If you see <link rel="cart" href="./shopping-cart"/> and the type > >> spec for 'cart' tells you that a DELETE request on it empties the > >> cart then you know that > > > >> DELETE ./shopping-cart achieves the domain goal of emptying the > >> cart.
> > > >> I started to describe that in http://www.nordsc.com/blog/?p=8 > >> <http://www.nordsc.com/blog/?p=8> > > > >> (last paragraph where it says: "express the operations that are > >> currently stated in the form of prose in hypermedia > >> specifications in terms of the above predicate conjunctions and > >> HTTP operations") > > > > So perhaps a "create" relation could be registered that specifies > > "POST" as the method to use? > > > >> No, link relations denote resource semantics, not actions. The > > resource > >> semantics include what is achieved by what requests on the > >> resource. > > > I am confused, I thought you were saying that a link relation would > define what actions/verbs could be applied to the resource. The description of the relation defines the meaning of the target resource, IOW: the end of the 'foo' link is a 'bar' resource. You achieve this and that by PUTing to such 'bar' resources. > Would it > be better if there was a "collection" relation that said that you > could GET and POST to it? yes, exactly. In this sense, there is no difference between <link rel="collection" href="..."> and <collection href="..."> Which is why I called it "(named) hypermedia context" in the blog [sorry, but I just had no better word for it] > > >> If you say > > > >> <link rel="lock" href="/445/lock"/> these semantics might be that > >> PUT creates the lock and DELETE removes it. > > > > Would this be problematic for custom relations that the client may > > not be aware of the appropriate method for? > > > >> The client needs to implement the semantics anyway. However, if > >> the HTTP connector implementation you use does not support all > >> methods, dump it and get a better one. > > I am not sure I understand. If a user agent gives the user a selection > between different links/relations to follow, they should still be able > to follow the link without understanding the link relation definition, > shouldn't they?
With GET yes, for other methods, they'd better understand the effect the request will have, eh? > > > > Currently, the Atom link registry does not specify any defined > > methods for any of the existing relations that are registered. > > > >> If there is no such definition, no domain operations are > >> involved. GET makes sense by default. > > > > Even the "edit" relation doesn't specify a method, it is still just > > GETable resource that one can apply the uniform interface to. > > > >> No. The semantics of edit and edit-media are defined in the > >> AtomPub specs and there the effect of a PUT and DELETE to either > >> of them is specified. > > The effects described by AtomPub are just reiterating the already > prescribed effects defined by the uniform interface of the HTTP > protocol, aren't they? No, they do (a bit) more. See media entry aspects. > I don't see anything here [1] that suggests > that a target of the "edit" resource deviates from normal uniform > interface behavior. I guess that is because the target resources do not have semantics outside the HTTP domain. AtomPub sort of re-specifies parts of what is already inherent in HTTP - and also overspecifies things that contradict HTTP. For example, a server cannot be constrained (from HTTP POV) as to what to return for a request on a certain resource. AtomPub does that and thus limits server evolvability because it couples it to a client expectation that is not part of HTTP. (E.g. 
the spec says that requests to collection resource return Atom feed documents) Jan > > http://bitworking.org/projects/atom/rfc5023.html#edit-via-PUT > - -- > Kris Zyp > SitePen > (503) 806-1841 > http://sitepen.com > -----BEGIN PGP SIGNATURE----- > Version: GnuPG v1.4.9 (MingW32) > Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ > > iEYEARECAAYFAktgrIgACgkQ9VpNnHc4zAwFlgCfX8v+iUHBqv1cJUZpcosLUxut > rV0An3522rURI4Hbw4YpNko4PUeeMWPp > =Jg5B > -----END PGP SIGNATURE----- > > > > ----------------------------------- Jan Algermissen, Consultant Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
Hi, I've been looking for advice on how to design the representation
of a collection of references.
Supposing I have Seller resources, a list of which can be accessed at
/Sellers, and an individual Seller can be accessed at Sellers/{id}.
Also, I have Account resources. A list of Accounts linked to a Seller
can be accessed at /Sellers/{id}/Accounts. An individual account can be
found at /Sellers/{id1}/Accounts/{id2}
Just for now, let's assume I want an XML representation.
GET /Sellers/{id1}/Accounts/{id2} could return:
<Account id={id2} uri="/Sellers/{id1}/Accounts/{id2}"
attr1="abc" attr2="xyz" />
GET /Sellers/{id1}/Accounts could return:
<Accounts uri="/Sellers/{id1}/Accounts">
<Account id={id2} uri="/Sellers/{id1}/Accounts/{id2}"
attr1="abc" attr2="xyz" />
<Account id={id4} uri="/Sellers/{id1}/Accounts/{id4}"
attr1="efg" attr2="uvw" />
</Accounts>
And because I'm being pragmatic and know my clients are usually
interested in Accounts when dealing with a seller...
GET /Sellers/{id1} could return:
<Seller id={id1} uri="/Sellers/{id1}" attrA="mno"
attrB="pqr">
<Accounts uri="/Sellers/{id1}/Accounts">
<Account id={id2} uri="/Sellers/{id1}/Accounts/{id2}"
attr1="abc" attr2="xyz" />
<Account id={id4} uri="/Sellers/{id1}/Accounts/{id4}"
attr1="efg" attr2="uvw" />
</Accounts>
</Seller>
This works fine to my mind so long as the Account representation is not
too complex (or more importantly, large)... but what if it was? What if
an individual Account was so big it was inefficient to return all its
information when listing it (either as a standalone Accounts element
from /Sellers/{id1}/Accounts or a child element of Seller returned from
/Sellers/{id1})?
If I knew that 95% of the time my clients would have sufficient
information if each Account element in a list contained just an id, a
uri and a name attribute, I could decide to return only that basic data
when Accounts are listed.
But here is my question, finally, should that still be represented as:
<Accounts><Account ... /><Account ... /></Accounts>
Or should I acknowledge it is not an account and change it to something
like:
<AccountRefs><AccountRef ... /><AccountRef ... /></AccountRefs>
Which may have to be retrieved from /Sellers/{id1}/AccountRefs
The benefit of the first option is that the client would just deal with
Account resources. To create a new Account they could PUT it to
/Sellers/{id1}/Accounts, the URI for which is readily available. They
would have to realise though that the representation they PUT to that
URI is probably different from one they retrieve when they get a list of
Accounts (though it would be the same as the one they get when
retrieving an individual account).
The benefit of the second approach is that it is clear that the list
does not contain full blown Account details. Also, if I wanted to write
a schema describing the XML I could have different definitions for the
full "representation" of an Account resource and that of a
reference to an Account. However, this approach makes the creation of a
new Account more complex, as I in theory should PUT an Account somewhere
(but nothing in the list representation tells me where)... and I also
need to explicitly create the AccountRef (or have the server create it
as the Account is created).
I have swapped between which approach I prefer, though at the moment
I'm liking the first (i.e. sticking with Account). My reasoning...
It is just another representation of Account and I'm not aware of
anything that says each "view" of a resource must contain the
same information.
What do you think? Am I just missing the point altogether? Is this all
discussed in great depth elsewhere that I couldn't find (sorry if it
is)?
Thanks for your time.
Piers
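[Editorial aside: Piers's first option (list entries are partial Account representations carrying a uri attribute) can be sketched like this. The URIs and attribute names follow his examples but are otherwise invented; the helper merely collects the URIs a client would GET when the summary data is not enough.]

```python
import xml.etree.ElementTree as ET

# A summary listing in the style of GET /Sellers/{id1}/Accounts:
# each Account carries only id, uri, and name (hypothetical values).
LIST_XML = """
<Accounts uri="/Sellers/1/Accounts">
  <Account id="2" uri="/Sellers/1/Accounts/2" name="acme" />
  <Account id="4" uri="/Sellers/1/Accounts/4" name="other" />
</Accounts>
"""

def full_account_uris(list_xml):
    """Collect the uri of each Account in the list; a client GETs
    these when the summary attributes are insufficient."""
    root = ET.fromstring(list_xml)
    return [a.get("uri") for a in root.findall("Account")]

print(full_account_uris(LIST_XML))
# ['/Sellers/1/Accounts/2', '/Sellers/1/Accounts/4']
```

Under this reading the list is just another representation of the same Account resources, which is the position Piers says he is currently leaning toward.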
piers_lawson wrote:
> Hi I've been looking for advice on how to design the representation
> of a collection of references.
> Supposing I have Seller resources, a list of which can be accessed
> at /Sellers, and an individual Seller can be accessed at Sellers/{id}.
> Also, I have Account resources. A list of Accounts linked to a Seller
> can be accessed at /Sellers/{id}/Accounts. An individual account can
> be found at /Sellers/{id1}/Accounts/{id2}
> Just for now, let's assume I want an XML representation.
>
> GET /Sellers/{id1}/Accounts/{id2} could return:
> <Account id={id2} uri="/Sellers/{id1}/Accounts/{id2}" attr1="abc"
> attr2="xyz" />
>
> GET /Sellers/{id1}/Accounts could return:
> <Accounts uri="/Sellers/{id1}/Accounts">
> <Account id={id2} uri="/Sellers/{id1}/Accounts/{id2}"
> attr1="abc" attr2="xyz" />
> <Account id={id4} uri="/Sellers/{id1}/Accounts/{id4}"
> attr1="efg" attr2="uvw" />
> </Accounts>
Another issue to keep in mind is cache invalidation. When you duplicate information like Account attributes in the collection resource, you must invalidate the collection whenever you invalidate any contained resource.
> And because I'm being pragmatic and know my clients are usually
> interested in Accounts when dealing with a seller...
> GET /Sellers/{id1} could return:
> <Seller id={id1} uri="/Sellers/{id1}" attrA="mno" attrB="pqr">
> <Accounts uri="/Sellers/{id1}/Accounts">
> <Account id={id2} uri="/Sellers/{id1}/Accounts/{id2}"
> attr1="abc" attr2="xyz" />
> <Account id={id4} uri="/Sellers/{id1}/Accounts/{id4}"
> attr1="efg" attr2="uvw" />
> </Accounts>
> </Seller>
Exponentially ditto here. HTTP is optimized for large-grain resources, but a large part of that depends on caching.
> This works fine to my mind so long as the Account representation
> is not too complex (or more importantly, large)... but what if it
> was? What if an individual Account was so big it was inefficient
> to return all its information when listing it (either as a
> standalone Accounts element from /Sellers/{id1}/Accounts
Then you should break it up into smaller resources, like /Accounts/{id2}/address/, and include links to those sub-resources in the response to /Accounts/{id2}/
> or a child element of Seller returned from /Sellers/{id1})?
Another good reason not to duplicate information from elements to their containers.
> ...
> Also, if I wanted to write a schema describing the XML
> I could have different definitions for the full
> "representation" of an Account resource and that of a
> reference to an Account. However, this approach makes
> the creation of a new Account more complex, as I in
> theory should PUT an Account somewhere (but nothing in
> the list representation tells me where)...
If instead of a new <AccountRef> class you just included a normal ol' URI, then that would be an exact reference to the full representation. You can also include a URI (or URI template) that tells you where to PUT a new Account.
Robert Brewer
fumanchu@...
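[Editor's note: Robert's suggestion of advertising a URI template that tells the client where to PUT a new Account can be sketched in a few lines. The attribute name and template syntax below are illustrative, not from the original post, and the expansion uses a plain string substitution rather than a full URI-template library.]

```python
import re

# A collection representation might advertise where new members go, e.g.
#   <Accounts uri="/Sellers/42/Accounts"
#             create-template="/Sellers/42/Accounts/{id}" />
# (attribute names here are made up for illustration).

def expand(template: str, **values) -> str:
    """Naive single-level expansion of {name} placeholders."""
    def repl(match):
        return str(values[match.group(1)])
    return re.sub(r"\{(\w+)\}", repl, template)

create_template = "/Sellers/42/Accounts/{id}"
new_id = "a7"  # in practice something like uuid4().hex
put_uri = expand(create_template, id=new_id)
# The client would now PUT the new Account representation to put_uri,
# so nothing about creation needs to be hard-coded out-of-band.
```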
"piers_lawson" wrote:
> Hi I've been looking for advice on how to design the representation of a collection of references.
Look no further than RFC 4287.
> If I knew that 95% of the time my clients would have sufficient information if each Account element in a list contained just an id, a uri and a name attribute, I could decide to return only that basic data when Accounts are listed.
This is a corollary to atom:summary vs. atom:content: you can have a feed which contains atom:summary information, while atom:content has @src, which provides a link to the standalone Atom Entry's atom:content. Atom Feed documents are particularly well-suited to listing a collection of names and IDs, where each contains, or links to, more specific data.
> What do you think? Am I just missing the point altogether? Is this all discussed in great depth elsewhere that I couldn't find (sorry if it is)?
How you model your data is up to you; to me, it doesn't sound like anything Atom can't handle. If you don't use Atom, then you can at least use standard link relations on your hyperlinks, or @src to refer to documents within a collection, or other standard methods of linking hypertext documents. Try modeling your data as Atom first, though. You may wind up being pleased with the results you get from using standard media types and link relations.
-Eric
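[Editor's note: to make the RFC 4287 suggestion concrete, here is one possible (hypothetical) mapping of the Accounts collection onto an Atom feed, built with Python's xml.etree. Each Account becomes an entry whose link points at the standalone resource; the urn:example IDs and fixed timestamp are placeholders.]

```python
import xml.etree.ElementTree as ET

ATOM = "http://www.w3.org/2005/Atom"
ET.register_namespace("", ATOM)

def account_feed(seller_id, accounts):
    """accounts: list of (account_id, title) pairs -- an illustrative shape."""
    feed = ET.Element(f"{{{ATOM}}}feed")
    ET.SubElement(feed, f"{{{ATOM}}}id").text = f"urn:example:sellers:{seller_id}:accounts"
    ET.SubElement(feed, f"{{{ATOM}}}title").text = "Accounts"
    ET.SubElement(feed, f"{{{ATOM}}}updated").text = "2010-01-28T00:00:00Z"
    for acct_id, title in accounts:
        entry = ET.SubElement(feed, f"{{{ATOM}}}entry")
        ET.SubElement(entry, f"{{{ATOM}}}id").text = f"urn:example:accounts:{acct_id}"
        ET.SubElement(entry, f"{{{ATOM}}}title").text = title
        ET.SubElement(entry, f"{{{ATOM}}}updated").text = "2010-01-28T00:00:00Z"
        # The "summary view" stays in the feed; the full representation
        # lives at the linked URI, like atom:content/@src.
        ET.SubElement(entry, f"{{{ATOM}}}link", rel="alternate",
                      href=f"/Sellers/{seller_id}/Accounts/{acct_id}")
    return feed

xml = ET.tostring(account_feed("id1", [("id2", "Account 2"), ("id4", "Account 4")]),
                  encoding="unicode")
```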
On Jan 28, 2010, at 3:51 AM, Robert Brewer wrote:
> piers_lawson wrote:
>> Hi I've been looking for advice on how to design the representation
>> of a collection of references.
>> Supposing I have Seller resources, a list of which can be accessed
>> at /Sellers, and an individual Seller can be accessed at Sellers/
>> {id}.
>> Also, I have Account resources. A list of Accounts linked to a Seller
>> can be accessed at /Sellers/{id}/Accounts. An individual account can
>> be found at /Sellers/{id1}/Accounts/{id2}
>> Just for now, let's assume I want an XML representation.
>>
>> GET /Sellers/{id1}/Accounts/{id2} could return:
>> <Account id={id2} uri="/Sellers/{id1}/Accounts/{id2}" attr1="abc"
>> attr2="xyz" />
>>
>> GET /Sellers/{id1}/Accounts could return:
>> <Accounts uri="/Sellers/{id1}/Accounts">
>> <Account id={id2} uri="/Sellers/{id1}/Accounts/{id2}"
>> attr1="abc" attr2="xyz" />
>> <Account id={id4} uri="/Sellers/{id1}/Accounts/{id4}"
>> attr1="efg" attr2="uvw" />
>> </Accounts>
>
> Another issue to keep in mind is cache invalidation. When you
> duplicate information like Account attributes in the collection
> resource, you must invalidate the collection every time you
> invalidate each contained resource.
>
Yes, good point (and often not discussed). Sometimes, you might happen
to know that the referenced resources are immutable (e.g. event
history). Then the situation is different.
>> And because I'm being pragmatic and know my clients are usually
>> interested in Accounts when dealing with a seller...
>> GET /Sellers/{id1} could return:
>> <Seller id={id1} uri="/Sellers/{id1}" attrA="mno" attrB="pqr">
>> <Accounts uri="/Sellers/{id1}/Accounts">
>> <Account id={id2} uri="/Sellers/{id1}/Accounts/{id2}"
>> attr1="abc" attr2="xyz" />
>> <Account id={id4} uri="/Sellers/{id1}/Accounts/{id4}"
>> attr1="efg" attr2="uvw" />
>> </Accounts>
>> </Seller>
>
> Exponentially ditto here. HTTP is optimized for large-grain
> resources, but a large part of that depends on caching.
Yes. And instead of fearing to-be-dereferenced references
(<http://www.nordsc.com/blog/?p=152>) we should make use of caching
and conditional requests where it makes sense.
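[Editor's note: the conditional-request idea can be sketched as server-side logic. The function shape and in-memory validator below are illustrative, not from any of the posts; a real server would also handle If-Modified-Since, weak validators, etc.]

```python
def conditional_get(request_headers, current_etag, render_body):
    """Return (status, headers, body) for a GET with ETag revalidation."""
    if request_headers.get("If-None-Match") == current_etag:
        # The client's cached copy is still fresh: no body, just revalidate.
        return 304, {"ETag": current_etag}, b""
    # Otherwise send the full representation with its current validator.
    return 200, {"ETag": current_etag}, render_body()

# A client that cached the collection under ETag "v7" revalidates cheaply:
status, headers, body = conditional_get(
    {"If-None-Match": '"v7"'}, '"v7"', lambda: b"<Accounts>...</Accounts>")
```

This is why embedding Account summaries in the collection need not be feared: when nothing changed, the repeated GET costs a header round-trip, not a payload transfer.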
>
>> This works fine to my mind so long as the Account representation
>> is not too complex (or more importantly, large)... but what if it
>> was? What if an individual Account was so big it was inefficient
>> to return all its information when listing it (either as a
>> standalone Accounts element from /Sellers/{id1}/Accounts
>
> Then you should break it up into smaller resources, like /Accounts/
> {id2}/address/, and include links to those sub-resources in the
> response to /Accounts/{id2}/
>
>> or a child element of Seller returned from /Sellers/{id1})?
>
> Another good reason not to duplicate information from elements to
> their containers.
I'd generally put minimal information into the container to enable
display. For example like Atom does with titles on links and with the
summary element. If the content of both sticks to describing the
nature of the target (which should not really change) you should not
have any freshness problems.
>
>> ...
>> Also, if I wanted to write a schema describing the XML
>> I could have different definitions for the full
>> "representation" of an Account resource and that of a
>> reference to an Account. However, this approach makes
>> the creation of a new Account more complex, as I in
>> theory should PUT an Account somewhere (but nothing in
>> the list representation tells me where)...
>
> If instead of a new <AccountRef> class you just included a normal
> ol' URI, then that would be an exact reference to the full
> representation. You can also include a URI (or URI template) that
> tells you where to PUT a new Account.
Yes.
Jan
-----------------------------------
Jan Algermissen, Consultant
Mail: algermissen@...
Blog: http://www.nordsc.com/blog/
Work: http://www.nordsc.com/
-----------------------------------
On Wed, Jan 27, 2010 at 11:07 PM, Eric J. Bowman <eric@...>wrote:
>
>
> "piers_lawson" wrote:
> >
> > Hi I've been looking for advice on how to design the representation
> > of a collection of references.
> >
>
> Look no further than RFC 4287.
>
>
I agree that Atom is a useful model for understanding how to represent
collections. But, every time I try to use it in anger, I keep running into
some frustrations:
* Atom is XML only -- I find JSON much easier for a general client base to
deal with.
* Even when I'm willing to deal with XML formats, it is pretty clumsy to
represent application-specific content types -- you're pretty much
talking about custom XML namespaces, or about leveraging the XML "any"
elements in the Atom schema, which also basically negates the idea that
a generic client is going to be able to understand much about a
particular message.
* There are a bunch of required attributes in the Atom "envelope" around the
content that make perfect sense for the original use case (blog postings)
but do not always make sense for particular application use cases.
* Yes, the concept of a standardized <link> element is quite useful for
expressing relationships between representations (and I like the idea
that you can define your own "rel" values). But ...
  - You can have the same benefits in roll-your-own representations by
    emulating the <link> semantics in some appropriate local syntax.
  - Generic clients are not going to understand the semantics of custom
    "rel" values anyway, so what is the value of packaging them in the
    standard Atom envelope?
Bottom line, I like the overall design paradigm that Atom defines, with well
defined semantics for the various CRUD operations that are almost universal
across applications. But, adhering to 4287 (versus my own custom collection
representation that shares the spirit of the <link> element for cross
referencing) doesn't seem to add huge amounts of value. You're right, a
generic Atom client might understand the coarse grained message structure of
an "application/atom+xml;type=feed" media type, but I have a hard time
seeing why this matters if the client can't understand the actual
representation inside the <atom:content> element (or the equivalent custom
XML elements that the "any" construct in the schema allows) anyway.
For my own APIs (at least so far), I've followed these principles:
* Design a custom representation, with a custom media type,
for each individual data object "class".
* Within the representation, include an extensible structure
akin to the Atom <link> element, to define relationships
between representations (as well as links to trigger state changers)
but without the extra baggage of "required" Atom elements that
are not relevant to the application specific use cases
(let alone the limitation to use XML).
* Design an additional representation, with a "plural" name in
the media type, for collections of the same type of data object.
In XML, this would typically be something like a <customers>
element around zero or more <customer> elements, with a
media type like "application/vnd.com.example.myapp.Customers+xml"
(or +json) for the collection, and
"application/vnd.com.example.myapp.Customer+xml"
for the representation of an individual customer.
I am aware that there are reasonable-sounding arguments for using "standard"
representation formats when they are available. I'm just not sold on the
value, at least for the APIs I have been involved in so far.
Craig McClanahan
Craig McClanahan wrote:
> Eric J. Bowman wrote:
> > "piers_lawson" wrote:
> > > Hi I've been looking for advice on how to design the representation of a collection of references.
> > Look no further than RFC 4287.
> I agree that Atom is a useful model for understanding how to represent collections. But, every time I try to use it in anger, I keep running into some frustrations:
>
> * Atom is XML only -- I find JSON much easier for a general client base to deal with.
Atom is namespace-extensible and follows a schema. Unless and until JSON actually becomes a hypertext format, I'll stick to using hypertext and standard link relations in REST applications. I hold the opposite view from yours on the XML-vs-JSON issue, entirely. There is no standard means to represent collections of data as JSON. Whereas there is application/atom+xml, which keeps that out-of-band knowledge wrapped up in a common-knowledge standard media type, where it belongs -- not in an API description.
> * Even when I'm willing to deal with XML formats, it is pretty clumsy to represent application-specific content types -- you're pretty much talking about custom XML namespaces, or about leveraging the XML "any" elements in the Atom schema, which also basically negates the idea that a generic client is going to be able to understand much about a particular message.
No, I'm not even suggesting that. Atom provides a wrapper for XML of any schema within atom:summary and atom:content elements. All a generic client, or intermediary, needs to know is right there in the media type identifier of the response.
The user's agent will need to know how to render the standard-or-nonstandard vocabulary presented in the contents, but it needs no special knowledge of protocol-layer messaging beyond that provided by Atom and Atom Protocol -- while an intermediary only needs knowledge of the protocol-layer messaging but no special knowledge of application-specific vocabulary.
The original question Piers was asking was also seeking advice on how to create and update collection members. If you are dealing with data that can be modeled in Atom, you can use Atom Protocol to handle creating and updating members in a standardized fashion, regardless of the vocabulary used within atom:summary or atom:content elements. I don't know how any "generic" client is supposed to understand any application-specific messaging that isn't described by the use of a standard media type.
> * There are a bunch of required attributes in the Atom "envelope" around the content that make perfect sense for the original use case (blog postings) but do not always make sense for particular application use cases.
You're right, Atom isn't always the answer. But, it certainly helps to start with modeling a system like Piers' in Atom, since it's a pretty close fit out-of-the-box. Patient records have a patient ID and a patient Name and contact info. In Atom, these would be under <author> elements instead of <patient> elements. The question is whether it causes so much confusion for the system to call patients "authors" that it outweighs the benefit of being able to build the system using standard libraries for well-known media types like Atom. That same cost-benefit analysis needs to be applied to every instance where a working model built with Atom is suboptimal to the goals of the system.
So build a working model of the system, using Atom if it's even remotely close to workable, because then you know exactly what shortcomings are being addressed by the selection of some other standard, or any custom, media types you proceed to choose or create. Atom is hardly limited to "blogging"; it applies to any sort of record where it's important to know who said what to whom, when. If the actual data is a PDF or something, it can be handled as an attachment.
> * Yes, the concept of a standardized <link> element is quite useful for expressing relationships between representations (and I like the idea that you can define your own "rel" values).
But, there's nothing standard about an application-specific link relation.
> But ...
> - You can have the same benefits in roll-your-own representations by emulating the <link> semantics in some appropriate local syntax.
No, you can't. Link semantics and syntax are clearly spelled out by the relevant standards which describe them (as defined by the media type identifier used), and there is expected to be a new registry for standardized link relations to be used in HTTP Link header syntax. REST quite clearly calls for the use of standard or standardizable methods and media types, the combination of which encapsulates all out-of-band knowledge of an API within common knowledge. Nonstandard vocabularies are allowed inside of standard media types, but embedding standard link relations inside nonstandard vocabularies defeats the entire purpose.
> - Generic clients are not going to understand the semantics of custom "rel" values anyway, so what is the value of packaging them in the standard Atom envelope?
Who said anything about custom rel values? The OP is clearly describing a problem that's easily solved by the standard link relations defined within Atom.
Granted, those link relations can be used in other media types, but if those other media types aren't standard then the system isn't built in the REST style. If a patient can have an atom:id, then so can a Doctor. It follows, then, that a medical recordkeeping system would need a vocabulary of link relations specific to that type of system, for linking Doctors and patients -- a patient record of a Dr. visit might have a link to the atom:id of the referring Dr., which would need a different link relation than that of the attending Dr. But, Atom allows for such extension, and such a vocabulary, if generic enough, could see adoption elsewhere. After which, a standard emerges establishing consensus for a Medical Records extension vocabulary for Atom.
Now that I have said something about custom rel values, why would a generic client, or intermediary, need to understand application specifics like referring vs. attending physician? All I care about is that the (human|machine) user's agent can interpret this information. A hypertext application can easily be built based on those link relations, which allows for the correct display and manipulation of system resources. The user's agent is thus taught all it needs to know, and the link relations don't need to be standardized for it to work. REST requires the standardization of these link relations for purposes beyond just making the application work -- to make it work for others in an interoperable way over time requires that the out-of-band knowledge encapsulated in these link relations is (or becomes) common knowledge.
> Bottom line, I like the overall design paradigm that Atom defines, with well defined semantics for the various CRUD operations that are almost universal across applications. But, adhering to 4287 (versus my own custom collection representation that shares the spirit of the <link> element for cross referencing) doesn't seem to add huge amounts of value. You're right, a generic Atom client might understand the coarse grained message structure of an "application/atom+xml;type=feed" media type, but I have a hard time seeing why this matters if the client can't understand the actual representation inside the <atom:content> element (or the equivalent custom XML elements that the "any" construct in the schema allows) anyway.
I don't see why a client can't be instructed, via an XSLT transformation, how to present any custom XML vocabulary presented within atom:summary/atom:content elements as HTML documents. At the wire level, protocol interaction simply doesn't require knowledge of any custom vocabulary within representations of standard media types.
> For my own APIs (at least so far), I've followed these principles:
>
> * Design a custom representation, with a custom media type, for each individual data object "class".
OK, you're not the first person I've heard promoting this design meme, but you will be the first person I ask why... Where do you find any support for this notion within REST or anything else Dr. Fielding has written about REST? The REST style calls for obeying the standards used. I'm assuming you're using the HTTP protocol, in which case RFC 2616 applies, along with its SHOULD NOT admonishment against using unregistered media types. I've written extensively in recent threads about all the things Roy and REST do say about media type proliferation, so aside from going against RFC 2616 this design pattern does not have any conceivable support from within REST, which calls for the re-use of existing standard media types. That there are people out there who are minting media types for each representation type on their system boggles my mind. This goes against all custom and practice in Web Development, where the closest you'll come to an example was Atom Protocol's adding the type parameter to the application/atom+xml media type identifier.
Additionally, you suggest that Atom is only useful for blogging, instead of its actual intent to be an extensible, all-purpose re-usable wrapper, the very notion of which seems alien to the mint-new-media-types-for-everything approach. You also suggest a 1:1 relationship between media types and media type identifiers, which doesn't exist, and isn't required unless media type identifiers are being mis-used as "contracts". REST development starts with modeling resources; once modeled, those resources are assigned an appropriate media type. Only once all existing media types are deemed inappropriate is it OK to forge ahead with minting a new one. This is designing within the constraints of REST. The unbounded-creativity approach of treating each of your representations as a unique snowflake, without considering the possibility that a standard media type may be a perfect fit, is in direct conflict with the REST style.
> * Within the representation, include an extensible structure akin to the Atom <link> element, to define relationships between representations (as well as links to trigger state changers) but without the extra baggage of "required" Atom elements that are not relevant to the application specific use cases (let alone the limitation to use XML).
I certainly don't see how any generic client is going to understand what you suggest. To quote from REST: "The trade-off, though, is that a uniform interface degrades efficiency, since information is transferred in a standardized form rather than one which is specific to an application's needs." If you're unwilling to make this trade-off, then perhaps REST isn't the right architecture for your system. What you consider "baggage" is considered by plenty of others as the essential bare-minimum information pertinent to who said what to whom, when. While some of this data may be irrelevant to your specific application, the use of a standard media type allows for serendipitous re-use.
Atom does force you to provide the who-said-what-to-whom-when metadata, even if you aren't using it in your interface, because it's critical to allowing others to use your API for purposes you never conceived. I also have no idea what you mean when you say Atom is limited to XML.
> * Design an additional representation, with a "plural" name in the media type, for collections of the same type of data object. In XML, this would typically be something like a <customers> element around zero or more <customer> elements, with a media type like "application/vnd.com.example.myapp.Customers+xml" (or +json) for the collection, and "application/vnd.com.example.myapp.Customer+xml" for the representation of an individual customer.
You've now minted two media types that are specific to your application and do not represent common knowledge. Your API is thus driven by out-of-band knowledge presented in its documentation, rather than a shared understanding of common media types and link relations. I don't understand how a generic client, or intermediary, can understand anything about your payload other than what's implied by application/xml, because RFC 3023 allows for the extension of that media type identifier. However, no such corollary exists in RFC 4627, so your +json media type identifiers will be treated by generic clients, or intermediaries, as an invalid declaration and may forward or interpret the message as text/plain or application/octet-stream, or remove the Content-Type header entirely. I also have no idea what you mean by "typically be something like..." If you're just using Atom, and not Atom Protocol, then <feed> wraps <entry> unless <entry> is the root element, and either way they have the same media type. In Atom Protocol, it is optional to use a type parameter; both type=feed and type=entry are defined. I don't bother with type=entry, but I do use type=feed.
These are subtypes, and this is the proper way to handle collections/members, rather than assigning two separate media type identifiers -- how is any generic client, or intermediary, to understand that *.Customer+xml and *.Customers+xml are in any way related to one another? These are separate, opaque identifiers rather than an identical media type with separate, opaque subtypes.
> I am aware that there are reasonable-sounding arguments for using "standard" representation formats when they are available. I'm just not sold on the value, at least for the APIs I have been involved in so far.
REST doesn't call this an optional constraint -- self-descriptive messaging is critical to the entire style. If you're relying on out-of-band information based on the syntax of custom media type identifiers then specific knowledge of your API is required for any intermediary or client to make heads or tails of its payload. If you're using standard media types, then generic clients may be instructed how to deal with any uncommon knowledge about your API, using hypertext representations. Nobody has managed to sell me on any pragmatic value to disregarding the self-descriptive messaging constraint. There is value in allowing intermediaries and generic clients to determine the nature of a payload by keying on standard media type identifiers. It comes with a tradeoff of efficiency that I'm willing to make in order to gain the positive benefits of scaling and serendipitous re-use, which is why I use REST. If there were some hidden value in disregarding the self-descriptive messaging constraint that disproves REST's utility, then I wouldn't pay REST any mind, but so far no such inherent contradiction has been exposed. IOW, my response has nothing to do with dogma, and everything to do with why the Web succeeded in the first place.
-Eric
--- In rest-discuss@yahoogroups.com, mike amundsen <mamund@...> wrote:
>
> > Note that rels like 'edit' 'edit-media' or 'search' do not mean 'edit action'
>
> Yep, verb-noun stuff here.
>
> Wondering if there is any place for a verb in Uniform Interface apps?
>
> mca
> http://amundsen.com/blog/
>
Absolutely! Client-side events are often named after verbs (e.g. change, blur, focus, load). If your hypermedia format is event-driven, that's where your verbs will be. Oh well, ya... the HTTP verbs too (as Jan said). A role of hypermedia can be the filtering and conversion of the client-side events/verbs to server requests/verbs.
Andrew
Hi All,
I was wondering how you would implement a RESTful CAPTCHA solution, especially in the context of the strict interpretation of 'statelessness' many on this list use. I would assume the 'naive' solution would be to send a hash of the answer to the user and then use both the hash (which the user returns to the server) and the answer on the server side to see if they match. But that exposes the server to replay attacks. The approach I would take (which violates the 'strict interpretation' of statelessness) would be to create a new resource for the captcha on the server and compare the incoming answer with that, destroying the resource when done.
I am unsure what approach would satisfy both requirements. My intention for starting this discussion is not to inflame, but to better understand the 'strict' statelessness POV.
Thanks,
Alexandros.
How about this:
When a person requests the representation to fill out (the one w/ the captcha rendering):
- generate a captcha image and create a new resource for it (/captchas/a1s2d3f3g4)
- in the response representation, include an embed link (<img />) to the generated captcha resource _and_ a hidden input that contains the same URI
When the user POSTs the representation back, it can include a pointer to the captcha resource (from the hidden input) as well as the user's input text. You can use that pointer to validate the input text passed by the user.
The images could be generated ahead of time or on demand. The URIs could be completely random or a hash of the actual captcha text to short-cut the validation process. The captcha URIs can be marked as single-use and even limited time-valid to cut down on flooding, etc.
mca
http://amundsen.com/blog/
On Sat, Jan 30, 2010 at 10:07, Alexandros Marinos <al3xgr@...> wrote:
> Hi All,
>
> I was wondering how you would implement a RESTful CAPTCHA solution, especially in the context of the strict interpretation of 'statelessness' many on this list use. I would assume the 'naive' solution would be to send a hash of the answer to the user and then use both the hash (which the user returns to the server) and the answer on the server side to see if they match. But that exposes the server to replay attacks. The approach I would take (which violates the 'strict interpretation' of statelessness) would be to create a new resource for the captcha on the server and compare the incoming answer with that, destroying the resource when done.
>
> I am unsure what approach would satisfy both requirements. My intention for starting this discussion is not to inflame, but to better understand the 'strict' statelessness POV.
>
> Thanks,
> Alexandros.
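[Editor's note: the single-use, time-limited captcha resources described above might look like this in-memory sketch. The class and method names are invented for illustration; a real deployment would back this with shared storage and serve the image bytes at the generated URI.]

```python
import time
import uuid

class CaptchaStore:
    """Single-use, time-limited captcha answers keyed by resource URI."""

    def __init__(self, ttl_seconds=300):
        self.ttl = ttl_seconds
        self._answers = {}  # uri -> (answer, created_at)

    def create(self, answer):
        # New resource per challenge, e.g. /captchas/a1s2d3f3g4; the URI
        # is embedded both as the <img /> src and as the hidden input.
        uri = f"/captchas/{uuid.uuid4().hex}"
        self._answers[uri] = (answer, time.time())
        return uri

    def validate(self, uri, user_input):
        # pop() makes the resource single-use: a replayed POST finds nothing.
        entry = self._answers.pop(uri, None)
        if entry is None:
            return False
        answer, created = entry
        return time.time() - created <= self.ttl and user_input == answer

store = CaptchaStore()
uri = store.create("x7k2q")
store.validate(uri, "x7k2q")   # True the first time
store.validate(uri, "x7k2q")   # False: the resource was consumed
```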
mike amundsen wrote:
> How about this:
>
> When a person requests the representation to fill out (the one w/ the captcha rendering)
> - generate a captcha image and create a new resource for it (/captchas/a1s2d3f3g4)
> - in the response representation, include an embed link (<img />) to the generated captcha resource _and_ a hidden input that contains the same URI
That would mean that every access to the form would have to be via POST (because generating the image resource is a side-effect and therefore not allowed for GET) so you can't simply link to the form.
Regards,
Simon
I have several implementations that write logs, audit trails, and create other data (that may or may not be exposed as a resource) for later use on every GET. The important thing is that the same GET can be repeated many times w/o any harm and will return (from the client's POV) the same expected results. This includes records used to track single-use URI tokens and could also be used to generate and track Captcha information. I don't see a problem w/ HTTP specs or REST style in this.
mca
http://amundsen.com/blog/
On Sat, Jan 30, 2010 at 20:08, Simon Reinhardt <simon.reinhardt@...> wrote:
> mike amundsen wrote:
>> How about this:
>>
>> When a person requests the representation to fill out (the one w/ the captcha rendering)
>> - generate a captcha image and create a new resource for it (/captchas/a1s2d3f3g4)
>> - in the response representation, include an embed link (<img />) to the generated captcha resource _and_ a hidden input that contains the same URI
>
> That would mean that every access to the form would have to be via POST (because generating the image resource is a side-effect and therefore not allowed for GET) so you can't simply link to the form.
>
> Regards,
> Simon
Hi,
I'm planning to develop a webservice, and I'd like to try the RESTful
architecture.
The service is about downloading some data from the server to a device
attached to the local computer. The client needs to retrieve the command
from the server and then send the device's response to the server
to check its validity. Then the server says whether it is ok or not.
Device      Client      Server
              ---->      Get command
  <-----      <-----
  ---->       ---->      Response from device
              <-----     Response from server indicating
                         whether the execution is ok or not
It would be like: the client calls authenticate on the device. Then the server
sends the command to be sent to the device for authentication. The
client sends this command to the device and the response is sent back to
the server. The server then replies.
I have thought on:
/device/{id} as resource
/device/{id}/authenticate
GET will retrieve the command and blank state
<command> value </command>
<state> not defined </state>
PUT will send the response and get the real state
---> <response> value </response>
<--- <state> not defined </state>
I don't know if this is REST. Would it be better to create separate resources, such as:
/device/{id}/authenticate/command (only GET available)
/device/{id}/authenticate/response (only PUT available)
/device/{id}/authenticate (only GET available, for status)
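The two-step flow above could be sketched like this (a sketch only, using the URIs from the proposal; `transport` and `device` are hypothetical stand-ins for the HTTP client and the device I/O):

```python
# Sketch of the proposed two-step flow: GET the command from the server,
# relay it to the device, then PUT the device's answer back.
# 'transport' and 'device' are hypothetical callables so the flow can be
# shown without a real server or device attached.

def authenticate(device_id, transport, device):
    # Step 1: fetch the command the server prepared for this device.
    status, command = transport("GET", "/device/%s/authenticate" % device_id, None)
    if status != 200:
        raise RuntimeError("could not fetch command: %d" % status)
    # Step 2: relay the command to the device, send its answer back,
    # and return the state the server decided on.
    answer = device(command)
    status, state = transport("PUT", "/device/%s/authenticate" % device_id, answer)
    return state  # e.g. "ok" or "not ok", as decided by the server
```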
Any help is welcome.
TA
Does the media type define when the client reaches a steady state, or is this a client configuration issue?

There is definitely overlap: HTML surely at least implies that a steady state is reached when all inline media has been retrieved, but I can tell my client not to download, for example, images and thus reach a steady state earlier.

But would it also be conceptually ok to configure a client to do more until a steady state is reached? That is, to perform some other requests using particular domain semantics of an accessed service? For example, think of a client configuration that would cause the client to always execute several searches when it reaches the home page of an online store that provides OpenSearch forms.

Jan

-----------------------------------
Jan Algermissen, Consultant
Mail: algermissen@...
Blog: http://www.nordsc.com/blog/
Work: http://www.nordsc.com/
-----------------------------------
George,
On Jan 30, 2010, at 1:38 PM, George wrote:
> Hi,
>
> I'm planning to develop a webservice, and I like to try the RESTful
> architecture.
>
> The service is about downloading some data from the server to a device
> attached on the local computer. The client need to retrieve the command
> from the server and then send the response of the device to the server
> to check its validity. Then the server says if it is ok or not.
I think I do not understand what you are up to. Why does the client fetch the command for the device from the server?
>
> Device client Server
> ----> Get command
> <----- <-----
>
> ----> ----> Response from device
> <----- Response from server indicating
> if it is ok or not the execution
>
> It would be like: client calls authenticate of device. then the server
> sends the command to be sent to the device for authentication.
HTTP authentication is orthogonal. Use one of the HTTP standard authentication solutions.
> The
> client send this command to the device and the response is sent back to
> the server. The server then replies.
>
> I have thought on:
> /device/{id} as resource
> /device/{id}/authenticate
> GET will retrieve the command and blank state
> <command> value </command>
> <state> not defined </state>
> PUT will send the response and get the real state
> ---> <response> value </response>
> <--- <state> not defined </state>
>
> I don't know if this is REST. Is it better to create another resource as:
> /device/{id}/authenticate/command (only GET available)
> /device/{id}/authenticate/response (only PUT available)
> /device/{id}/authenticate (only GET available, for status)
>
> Any help is welcome.
Can you explain your requirements? I am having trouble understanding what you are trying to do.
Jan
> TA
>
>
>
>
>
>
>
>
> ------------------------------------
>
> Yahoo! Groups Links
>
>
>
-----------------------------------
Jan Algermissen, Consultant
Mail: algermissen@...
Blog: http://www.nordsc.com/blog/
Work: http://www.nordsc.com/
-----------------------------------
Hi,
Let me try to explain it a little further.
On 01/02/2010 9:02, Jan Algermissen wrote:
> George,
>
> On Jan 30, 2010, at 1:38 PM, George wrote:
>
>> Hi,
>>
>> I'm planning to develop a webservice, and I like to try the RESTful
>> architecture.
>>
>> The service is about downloading some data from the server to a device
>> attached on the local computer. The client need to retrieve the command
>> from the server and then send the response of the device to the server
>> to check its validity. Then the server says if it is ok or not.
>
> I think I do not understand what you are up to. Why does the client fetch the command for the device from the server?
The system is intended to control a hardware device. The issue is that
the device only accepts a subset of commands, based on some cryptographic
features.
I don't want the command set and the cryptographic keys to be on the
client, because then I would have to replicate the keys on every client
and security could be compromised.
Each command is encrypted with different keys depending on the device it
is directed to. So the first issue is that the server needs to know the
device in order to open the session with the correct set of keys. After
that, the client gets the command (encrypted and MACed with server keys);
this command is sent to the device, which will respond. The response has
some crypto material that needs to be checked on the server. Then the
client gets an ACK or NACK depending on the device's answer (whether the
command was executed correctly or not, and whether the device owns the
correct set of keys and is not a fake device).
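A minimal server-side sketch of the keep-keys-on-the-server idea described above; the key table and MAC construction here are hypothetical stand-ins, not the real protocol's ciphers:

```python
import hmac
import hashlib

# Sketch: each device has its own server-held key, commands are MACed
# with it, and the device's response is verified before the client gets
# an ACK/NACK. No key ever leaves the server; the client only relays.
DEVICE_KEYS = {"device-1": b"per-device-secret"}  # hypothetical key store

def mac(key, data):
    return hmac.new(key, data, hashlib.sha256).hexdigest()

def issue_command(device_id, command):
    # What the client would GET: the command plus its MAC.
    key = DEVICE_KEYS[device_id]
    return {"command": command, "mac": mac(key, command.encode())}

def check_response(device_id, response, response_mac):
    # Only a genuine device (holding the right key) can MAC its response.
    key = DEVICE_KEYS[device_id]
    expected = mac(key, response.encode())
    return "ACK" if hmac.compare_digest(expected, response_mac) else "NACK"
```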
>
>
>
>>
>> Device client Server
>> ----> Get command
>> <-----<-----
>>
>> ----> ----> Response from device
>> <----- Response from server indicating
>> if it is ok or not the execution
>>
>> It would be like: client calls authenticate of device. then the server
>> sends the command to be sent to the device for authentication.
>
> HTTP authentication is orthogonal. Use one of the HTTP standard authentication solutions.
Authentication is done based on the crypto protocol that I explained above.
>
>
>> The
>> client send this command to the device and the response is sent back to
>> the server. The server then replies.
>>
>> I have thought on:
>> /device/{id} as resource
>> /device/{id}/authenticate
>> GET will retrieve the command and blank state
>> <command> value</command>
>> <state> not defined </state>
>> PUT will send the response and get the real state
>> ---> <response> value</response>
>> <--- <state> not defined </state>
>>
>> I don't know if this is REST. Is it better to create another resource as:
>> /device/{id}/authenticate/command (only GET available)
>> /device/{id}/authenticate/response (only PUT available)
>> /device/{id}/authenticate (only GET available, for status)
>>
>> Any help is welcome.
>
> Can you explain your requirements? I am having trouble understanding what you are trying to do.
The issue is that I need to get a command and then check the answer to
that command. This will be done in two steps, and I don't know how to map
that into resources.
Thanks... I hope it is clearer now.
CU
Jorge
> Jan
>
>
>
>> TA
>
> -----------------------------------
> Jan Algermissen, Consultant
>
> Mail: algermissen@...
> Blog: http://www.nordsc.com/blog/
> Work: http://www.nordsc.com/
> -----------------------------------
>
>
>
Hey all,

The first week of This Week in REST is over (I think it was a successful test run) and you can find the collected links for last week (Jan 25, 2010 to Jan 31, 2010) on:
* the REST wiki - http://rest.blueoxen.net/cgi-bin/wiki.pl?RESTWeekly_Jan_25_2010
* the blog I've started for feed-based distribution - http://thisweekinrest.wordpress.com/2010/02/01/this-week-in-rest-volume-1-jan-25-2010-jan-31-2010/

For contributing links this week (please do!) visit http://rest.blueoxen.net/cgi-bin/wiki.pl?RESTWeekly_Feb_1_2010

Cheers,
Ivan
Jan Algermissen wrote:
> Does the media type define when the client reaches a steady state or
> is this a client configuration issue?

Media type.

> There is definitely overlap: HTML surely at least implies that a
> steady state is reached when all inline media has been retrieved but
> I can tell my client to not download, for example, images and thus
> reach a steady state earlier.

Yes, but that behavior is defined by the media type -- HTML defines @alt for inline images and the <noscript> element to provide graceful fallback based on client configurations of !image and/or !script, which the media type explicitly allows for.

> But would it also be conceptually ok to configure a client to do more
> until a steady state is reached? That is, perform some other requests
> using particular domain semantics of an accessed service?

Sure. You're just applying the layered-system constraint on the client side. Thus, your client state becomes separated from application state. Application state is entirely derived from the retrieved representation, while actual client state is opaque to the server component which sent the representation, because it "cannot 'see' beyond the immediate layer with which [it interacts]," to paraphrase REST.

> For example, think of a client configuration that would cause the
> client to always execute several searches when the home page of an
> online store is reached that provides OpenSearch forms?

I think that's out of scope for REST, which is only concerned with the interactions between connectors in a system. What the user's agent actually does with the retrieved representations is opaque behind the uniform interface. The media type tells the client all it really needs to know about rendering the document to obtain application state, doesn't it? For example, a user may employ an accessibility agent which performs specific actions based on content. Or a Web accelerator which follows and pre-caches DNS responses and retrieved representations that a user hasn't yet requested, or what you suggested. I just don't think REST cares about any of that. I would call these examples "enhanced client state", but I don't see how any of that affects "application state" as envisioned by the server through its media-type declaration.

-Eric
Jan Algermissen wrote:
> On Jan 24, 2010, at 4:46 PM, Mike Kelly wrote:
>> Jan Algermissen wrote:
>>> On Jan 24, 2010, at 2:07 PM, Mike Kelly wrote:
>>>> the only way for the client to understand the 'meaning' of its
>>>> current state is in the context of the application flow (i.e.
>>>> series of link relations) which led up to it.
>>>
>>> Correct me, but this is exactly what REST prevents. A client can
>>> use the URI of any steady state and just proceed through the
>>> application from that point on without the need for any knowledge
>>> about prior interactions. If it can't, the representation is just
>>> badly designed.
>>
>> Are we drawing a distinction here between steady-state and entry-point?
>
> Hmm, IMHO each steady state is a potential entry point.

I can't agree with that - and I don't think the quote you dug up is referring to this specific issue. For a machine client, RESTing at a given steady state and establishing its 'meaning' is a much more delicate affair than for a human client. We have fuzzy methods of inference and contextualization with humans that we just don't have the luxury of with machines.

I think we need to agree on the definition of 'meaning' in this context, because to me it includes more than just the current set of available link relations, and since we're not supposed to type resources, the only approach I can see working is predefined application flows (which may or may not contain several 'steady states') driven by link relations.

- Mike
On Feb 1, 2010, at 1:38 PM, Mike Kelly wrote:
> I think we need to agree on the definition of 'meaning' in this
> context.. because, to me, it includes more than just the current set of
> available link relations, and since we're not supposed to type resources
> - the only approach I can see working is predefined application flows

Predefined application flows violate the hypermedia constraint and couple the server in a way that REST deliberately aims to avoid. As you know from my questions on this list around this issue, I have been banging my head against that wall, but the more it hurt, the clearer the picture became (for me at least) :-)

It is really as simple as this:

o The client needs an understanding of the set of media types that the service uses, IOW, the set of types the client has to be able to deal with. (This is what I think makes up a service type for discovery and 'governance' purposes.)

o For any request the client sends, it must expect any HTTP response code and a representation in any of the set of media types. It is completely up to the client to deal with whatever it receives. 4xx is not a 'broken contract' but an *essential* part of the contract.

o The server has the obligation not to lie about the links it sends; e.g. using <img src=""/> to point to an Atom feed can be considered a sign of a broken server.

o The server must keep resource semantics stable (and steady-state (== bookmarkable) resources should be persistent).

Bottom line: it is really about putting *all* change handling into the clients (and actively expecting change) to allow independent server evolution and avoid the need for communication between server and client owners. The funny thing for me was that, once I actively accepted these consequences, it turned out there is really not so much that can change about a server given the above obligations. REST externalises everything about an API that *can* be kept stable and lets the client deal with the remaining instabilities.
Jan

> (which may or may not contain several 'steady states') driven by link
> relations.

-----------------------------------
Jan Algermissen, Consultant
Mail: algermissen@...
Blog: http://www.nordsc.com/blog/
Work: http://www.nordsc.com/
-----------------------------------
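Jan's point that the client "must expect any HTTP response code and a representation of any of the set of media types" could be sketched as a runtime dispatch; the handler names and media types below are illustrative only, not part of any post:

```python
# Minimal sketch of a client that dispatches on whatever it receives at
# runtime, rather than hard-coding one expected media type per URI.
# Handler names and the media-type table are hypothetical.

def handle_atom(body):
    return ("atom", body)

def handle_uri_list(body):
    return ("uri-list", body.splitlines())

def handle_unknown(body):
    # Unknown media type: still not a broken contract, just unhandled.
    return ("unknown", None)

HANDLERS = {
    "application/atom+xml": handle_atom,
    "text/uri-list": handle_uri_list,
}

def process_response(status, content_type, body):
    # 4xx is part of the contract, not a broken contract: the client
    # decides what to do with it.
    if 400 <= status < 500:
        return ("client-error", status)
    media_type = content_type.split(";")[0].strip()
    handler = HANDLERS.get(media_type, handle_unknown)
    return handler(body)
```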
Jan Algermissen wrote:
> On Feb 1, 2010, at 1:38 PM, Mike Kelly wrote:
>> I think we need to agree on the definition of 'meaning' in this
>> context.. because, to me, it includes more than just the current set of
>> available link relations, and since we're not supposed to type resources
>> - the only approach I can see working is predefined application flows
>
> Predefined application flows violate the hypermedia constraint and couple the server in a way that REST deliberately aims to avoid.

Predefined application flows like AtomPub violate the hypermedia constraint?

> It is really as simple as this:
>
> o The client needs an understanding of the set of media types that the service uses, IOW, the set of types the client has to be able to deal with. (this is what I think makes up a service type for discovery and 'governance' purposes)

I would choose to say 'set of representations' over 'set of media types'. Those representations will define types of link relations, which in turn define the application flow(s). Over time, other links/flows can be added, resources moved, media types added, etc. This is what I have understood as the evolvability aspect of REST wrt the hypertext constraint.

> The funny thing for me was that, once actively accepting these consequences, it turns out there is really not so much that can change about a server given the above obligations.

Ok - are you saying this is a Good Thing?

- Mike
Subscribed! thanks. mca http://amundsen.com/blog/ On Mon, Feb 1, 2010 at 05:29, izuzak <izuzak@...> wrote: > Hey all, > > The first week of This Week in REST is over (I think it was a successful test run) and you can find the collected links for last week (Jan 25 2010 Jan 31 2010) on: > * the REST wiki - http://rest.blueoxen.net/cgi-bin/wiki.pl?RESTWeekly_Jan_25_2010 > * the blog I've started for feed-based distribution - http://thisweekinrest.wordpress.com/2010/02/01/this-week-in-rest-volume-1-jan-25-2010-jan-31-2010/ > > For contributing links this week (please do!) visit http://rest.blueoxen.net/cgi-bin/wiki.pl?RESTWeekly_Feb_1_2010 > > Cheers, > Ivan > > > > ------------------------------------ > > Yahoo! Groups Links > > > >
Eric,

On Feb 1, 2010, at 11:50 AM, Eric J. Bowman wrote:
> Jan Algermissen wrote:
>> Does the media type define when the client reaches a steady state or
>> is this a client configuration issue?
>
> Media type.
>
>> There is definitely overlap: HTML surely at least implies that a
>> steady state is reached when all inline media has been retrieved but
>> I can tell my client to not download, for example, images and thus
>> reach a steady state earlier.
>
> Yes, but that behavior is defined by the media type -- HTML defines
> @alt for inline images and the <noscript> element to provide graceful
> fallback based on client configurations of !image and/or !script which
> the media type explicitly allows for.
>
>> But would it also be conceptually ok to configure a client to do more
>> until a steady state is reached? That is, perform some other requests
>> using particular domain semantics of an accessed service?
>
> Sure. You're just applying the layered-system constraint on the client
> side. Thus, your client state becomes separated from application state.
> Application state is entirely derived from the retrieved representation,
> while actual client state is opaque to the server component which sent
> the representation, because it "cannot 'see' beyond the immediate layer
> with which [it interacts]," to paraphrase REST.
>
>> For example, think of a client configuration that would cause the
>> client to always execute several searches when the home page of an
>> online store is reached that provides OpenSearch forms?
>
> I think that's out of scope for REST, which is only concerned with the
> interactions between connectors in a system. What the user's agent
> actually does with the retrieved representations is opaque behind the
> uniform interface. The media type tells the client all it really needs
> to know about rendering the document to obtain application state,
> doesn't it?
>
> For example, a user may employ an accessibility agent which performs
> specific actions based on content. Or a Web accelerator which follows
> and pre-caches DNS responses and retrieved representations that a user
> hasn't yet requested, or what you suggested. I just don't think REST
> cares about any of that. I would call these examples "enhanced client
> state", but I don't see how any of that affects "application state" as
> envisioned by the server through its media-type declaration.

Yepp, that is pretty much how I see it also.

Jan

> -Eric

-----------------------------------
Jan Algermissen, Consultant
Mail: algermissen@...
Blog: http://www.nordsc.com/blog/
Work: http://www.nordsc.com/
-----------------------------------
On Feb 1, 2010, at 2:54 PM, Mike Kelly wrote:
> Jan Algermissen wrote:
>> On Feb 1, 2010, at 1:38 PM, Mike Kelly wrote:
>>> I think we need to agree on the definition of 'meaning' in this
>>> context.. because, to me, it includes more than just the current set of
>>> available link relations, and since we're not supposed to type resources
>>> - the only approach I can see working is predefined application flows
>>
>> Predefined application flows violate the hypermedia constraint and couple the server in a way that REST deliberately aims to avoid.
>
> Predefined application flows like AtomPub violate the hypermedia constraint?

Yes. Roy confirmed that (recent post on the atom-protocol list).

>> It is really as simple as this:
>>
>> o The client needs an understanding of the set of media types that the service uses, IOW, the set of types the client has to be able to deal with. (this is what I think makes up a service type for discovery and 'governance' purposes)
>
> I would choose to say 'set of representations' over 'set of media
> types'.

I am not getting what you mean. You cannot specify the set of representations - they can vary at will.

> Those representations will define types of link relations, which
> in turn define the application flow(s). Over time - other links/flows
> can be added, resources moved, media-types added etc. This is what I
> have understood as evolveability aspect of REST wrt the hypertext
> constraint.

I'll dig into this later.

>> The funny thing for me was that, once actively accepting these consequences, it turns out there is really not so much that can change about a server given the above obligations.
>
> Ok - are you saying this is a Good Thing?

YES!! REST separates so nicely what can change from what can be kept stable in the face of evolving requirements. It is magical - now that I see it :-)

Jan

> - Mike

-----------------------------------
Jan Algermissen, Consultant
Mail: algermissen@...
Blog: http://www.nordsc.com/blog/
Work: http://www.nordsc.com/
-----------------------------------
Jorge:

Assume:
- You have on file several hundred photos of animals to use as captcha tests.
- You have stored metadata about each image, including the "animal name" (kitten, dog, buffalo, etc.), possibly as EXIF w/ the file or in some other storage medium.
- You and the user-agent both support the HTML media type (for this example).
- Upon request, your server can select one of the images at random and assign it a unique URI based on a hash of the filename and the requesting User-Agent header.

Your server can then respond to requests for /i-am-human.html with a representation that includes the URI of the image as both an <img /> tag and a hidden input to be returned by the user-agent. When the HTML FORM is posted, the server can validate the hash to see if that photo exists and the offered text matches the photo on file and, if it does, validate the captcha.

The HTTP conversation could go like this:

*** REQUEST
GET /i-am-human.html HTTP/1.1
Host: www.example.org
Accept: text/html
User-Agent: common-browser/1.0

*** RESPONSE
HTTP/1.1 200 OK
Date: ...
Server: smart-server/9.0
...
Content-Type: text/html
Content-Length: nnn

....
<form method="post" action="http://www.example.org/prove-it">
<input type="hidden" name="captcha-image" value="http://www.example.org/captcha-images/a1s2d3f4g5.png" />
User Name: <input type="text" name="user-name" value="" />
Animal: <input type="text" name="captcha-animal" value="" />
<img src="http://www.example.org/captcha-images/a1s2d3f4g5.png" title="What animal appears in this photo?" />
<input type="submit" />
</form>
...

*** REQUEST
POST /prove-it HTTP/1.1
Host: www.example.org
Content-Type: application/x-www-form-urlencoded
User-Agent: common-browser/1.0

captcha-image=http%3A%2F%2Fwww.example.org%2Fcaptcha-images%2Fa1s2d3f4g5.png&user-name=Mike&captcha-animal=kitten

*** RESPONSE
HTTP/1.1 400 Bad Request
Date: ...
Server: smart-server/9.0
...
Content-Type: text/html
Content-Length: nnn

...
<p class="error">Sorry, that was not a photo of a kitten. You must not be human.</p>
...

There are lots of possible variations on this approach.

mca
http://amundsen.com/blog/

On Mon, Feb 1, 2010 at 12:48, Jorge <george.news@...> wrote:
> Hi
>
> --- In rest-discuss@yahoogroups.com, mike amundsen <mamund@...> wrote:
>> How about this:
>>
>> When a person requests the representation to fill out (the one w/ the
>> captcha rendering)
>> - generate a captcha image and create a new resource for it
>> (/captchas/a1s2d3f3g4)
>> - in the response representation, include an embed link (<img />) to the
>> generated captcha resource _and_ a hidden input that contains the same URI
>>
>> When the user POSTs the representation back, it can include a pointer to the
>> captcha resource (from the hidden input) as well as the user's input text.
>> You can use that pointer to validate the input text passed by the user.
>>
>> The images could be generated ahead of time or on demand. The URIs could be
>> completely random or a hash of the actual captcha text to short-cut the
>> validation process. The captcha URIs can be marked as single-use and even
>> limited time-valid to cut down on flooding, etc.
>
> Could you please explain how you foresee the resources?
>
> POST --> /captchas returns /captchas/a1s2d3f3g4 with representation
>
> Then, what is the validation resource? And is it a POST, PUT, ...?
>
> TA
>
>> mca
>> http://amundsen.com/blog/
>>
>> On Sat, Jan 30, 2010 at 10:07, Alexandros Marinos <al3xgr@...> wrote:
>> > Hi All,
>> >
>> > I was wondering how you would implement a RESTful CAPTCHA solution,
>> > especially in the context of the strict interpretation of 'statelessness'
>> > many on this list use. I would assume the 'naive' solution would be to send
>> > a hash of the answer to the user and then use both the hash (which the user
>> > returns to the server) and the answer on the server side to see if they
>> > match.
But that exposes the server to replay attacks. The approach I would >> > take (which violates the 'strict interpretation' of statelessness) would be >> > to create a new resource for the captcha on the server and compare the >> > incoming answer with that, destroying the resource when done. >> > >> > I am unsure what approach would satisfy both requirements. My intention for >> > starting this discussion is not to inflame, but to better understand the >> > 'strict' statelessness POV. >> > >> > Thanks, >> > Alexandros. >> > >> > >> > >> > > >
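The hash-based scheme mca describes (image URI derived from a hash of the filename and the requesting User-Agent) could be sketched like this; the secret salt and helper names are hypothetical, not part of the original post:

```python
import hashlib

# Sketch of mca's idea: derive the captcha image URI from a hash of the
# image filename and the requesting User-Agent, so the POSTed hidden
# input can be validated without server-side session state.
# SECRET is a hypothetical server-side salt that keeps clients from
# computing valid hashes themselves.
SECRET = b"server-side-secret"

def captcha_uri(filename, user_agent):
    h = hashlib.sha256(SECRET + filename.encode() + user_agent.encode())
    return "/captcha-images/%s.png" % h.hexdigest()[:10]

def validate(posted_uri, answer, images, user_agent):
    # images maps filename -> animal name (the stored metadata).
    for filename, animal in images.items():
        if captcha_uri(filename, user_agent) == posted_uri:
            return answer.strip().lower() == animal
    return False  # no image on file hashes to that URI
```

A single-use or time-limited variant would additionally record issued hashes, which is where the statelessness debate above comes in.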
Thank you for your replies.
I think both Craig and Eric agree that a collection of "links"
to resources should be returned to the client in a form that is not
simply a cut-down version of the actual resource, but is clearly a link.
Eric then goes further to suggest that by re-using a well known
specification, anybody writing a client gets the benefit of reusing
knowledge (and possibly tools) they already have.
I will look more closely at the Atom specification to see how well it
fits my situation... though I have one immediate question:
As stated originally, I wanted my representation of the Seller resource
to contain both the Seller information and the collection of links to
Accounts. I don't want the Seller representation to be a Service
Document that contains pointers to the feeds, which the client would
then have to GET separately. How would you mix one or more feeds into
the representation of a resource? Would you have something along the
lines of:
<seller .....>
<otherInfo1 />
<otherInfo2 />
<feed xmlns="http://www.w3.org/2005/Atom">
<title>Accounts</title>
...
<entry>...</entry>
<entry>...</entry>
</feed>
</seller>
If so, what media type should be used? Neither
"application/atomserv+xml" nor "application/atom+xml" seems
appropriate.
I understand the benefits of re-using a standard but worry about the
verbosity compared to a custom representation. I also wonder about the
real benefits when the client will have to be built to understand the
"foreign markup" anyway if it is to be of any use to an end
user.
I think I might be sold more on the idea if I could see an example of
this embedding of feeds into a representation or a resource.
Thank you for your time
Piers
You might want to check out the draft of the "Inlining" extension for Atom:
http://tools.ietf.org/html/draft-mehta-atom-inline-01

I see it's expired as of 2009-12-31, but it might give you some ideas on how to approach the problem.

mca
http://amundsen.com/blog/

On Mon, Feb 1, 2010 at 18:48, piers_lawson <Piers@...> wrote:
> Thank you for your replies.
>
> I think both Craig and Eric agree that a collection of "links" to resources
> should be returned to the client in a form that is not simply a cut-down
> version of the actual resource, but is clearly a link. Eric then goes
> further to suggest that by re-using a well known specification, anybody
> writing a client gets the benefit of reusing knowledge (and possibly tools)
> they already have.
>
> I will look more closely at the Atom specification to see how well it fits
> my situation... though I have one immediate question:
>
> As stated originally, I wanted my representation of the Seller resource to
> contain both the Seller information and the collection of links to Accounts.
> I don't want the Seller representation to be a Service Document that
> contains pointers to the feeds, which the client would then have to GET
> separately. How would you mix one or more feeds into the representation of a
> resource? Would you have something along the lines of:
>
> <seller .....>
> <otherInfo1 />
> <otherInfo2 />
> <feed xmlns="http://www.w3.org/2005/Atom">
> <title>Accounts</title>
> ...
> <entry>...</entry>
> <entry>...</entry>
> </feed>
> </seller>
>
> If so, what media type should be used? Neither "application/atomserv+xml"
> nor "application/atom+xml" seems appropriate.
>
> I understand the benefits of re-using a standard but worry about the
> verbosity compared to a custom representation. I also wonder about the real
> benefits when the client will have to be built to understand the "foreign
> markup" anyway if it is to be of any use to an end user.
> > I think I might be sold more on the idea if I could see an example of this > embedding of feeds into a representation or a resource. > > Thank you for your time > > Piers > > >
--- In rest-discuss@yahoogroups.com, Jan Algermissen <algermissen1971@...> wrote: > On Feb 1, 2010, at 2:54 PM, Mike Kelly wrote: > > Jan Algermissen wrote: > >> On Feb 1, 2010, at 1:38 PM, Mike Kelly wrote: > >>> I think we need to agree on the definition of 'meaning' in this > >>> context.. because, to me, it includes more than just the current set of > >>> available link relations, and since we're not supposed to type resources > >>> - the only approach I can see working is predefined application flows > >>> > >> > >> Predefined application flows violate the hypermedia constraint and couple the server in a way that REST deliberately aims to avoid. > >> > > > > Predefined application flows like AtomPub violate the hypermedia constraint? > > Yes. Roy confirmed that (recent post on atom-protocol list) Really? Can you link to that? Was it the "MUST a collection be returned as an Atom feed?" thread? I didn't read any of Roy's comments that way... or maybe I don't understand what you mean by "AtomPub's predefined flow". Do you mean the Service -> Feed -> Entry hierarchy? You said yourself in that thread that <collection> and <image> played conceptually similar roles. Isn't Page -> Image a similar two-level hierarchy? What is wrong with that? I guess what I'm wondering is if AtomPub really defines an "application flow" or do client writers mistake the hierarchy for one? Regards, Andrew
On Feb 2, 2010, at 4:16 AM, wahbedahbe wrote: > > --- In rest-discuss@yahoogroups.com, Jan Algermissen <algermissen1971@...> wrote: >> On Feb 1, 2010, at 2:54 PM, Mike Kelly wrote: >>> Jan Algermissen wrote: >>>> On Feb 1, 2010, at 1:38 PM, Mike Kelly wrote: >>>>> I think we need to agree on the definition of 'meaning' in this >>>>> context.. because, to me, it includes more than just the current set of >>>>> available link relations, and since we're not supposed to type resources >>>>> - the only approach I can see working is predefined application flows >>>>> >>>> >>>> Predefined application flows violate the hypermedia constraint and couple the server in a way that REST deliberately aims to avoid. >>>> >>> >>> Predefined application flows like AtomPub violate the hypermedia constraint? >> >> Yes. Roy confirmed that (recent post on atom-protocol list) > > > Really? Can you link to that? Was it the "MUST a collection be returned as an Atom feed?" thread? Yep. http://www.imc.org/atom-protocol/mail-archive/msg11487.html > > I didn't read any of Roy's comments that way... or maybe I don't understand what you mean by "AtomPub's predefined flow". The requirement that a GET on a collection returns an Atom feed. That is unRESTful coupling because the client must not rely on such information but react to whatever it gets at runtime. > > Do you mean the Service -> Feed -> Entry hierarchy? Yes, that's what it comes down to. > > You said yourself in that thread that <collection> and <image> played conceptually similar roles. Isn't Page -> Image a similar two-level hierarchy? What is wrong with that? <collection href=""> points to 'a collection', that is ok. But predefining the media type that comes back from the collection is not. Might well be RSS or text/uri-list > > I guess what I'm wondering is if AtomPub really defines an > "application flow" or do client writers mistake the hierarchy for one? 
A truly RESTful client would do a GET on the collection and treat any response as 'correct' from a server POV (except for non-Collection representations, for example an audio file). Only if we do that is the server's independent evolvability preserved.

Jan

> Regards,
>
> Andrew

-----------------------------------
Jan Algermissen, Consultant
Mail: algermissen@...
Blog: http://www.nordsc.com/blog/
Work: http://www.nordsc.com/
-----------------------------------
On Feb 2, 2010, at 12:48 AM, piers_lawson wrote:
>
>
> Thank you for your replies.
>
> I think both Craig and Eric agree that a collection of "links" to resources should be returned to the client in a form that is not simply a cut-down version of the actual resource, but is clearly a link. Eric then goes further to suggest that by re-using a well known specification, anybody writing a client gets the benefit of reusing knowledge (and possibly tools) they already have.
>
> I will look more closely at the Atom specification to see how well it fits my situation... though I have one immediate question:
>
> As stated originally, I wanted my representation of the Seller resource to contain both the Seller information and the collection of links to Accounts. I don't want the Seller representation to be a Service Document that contains pointers to the feeds, which the client would then have to GET separately. How would you mix one or more feeds into the representation of a resource? Would you have something along the lines of:
>
> <seller .....>
>
> <otherInfo1 />
>
> <otherInfo2 />
>
> <feed xmlns="http://www.w3.org/2005/Atom">
> <title>Accounts</title>
>
> ...
>
> <entry>...</entry>
>
> <entry>...</entry>
>
> </feed>
>
> </seller>
>
> If so, what media type should be used? Neither "application/atomsvc+xml" nor "application/atom+xml" seems appropriate.
>
> I understand the benefits of re-using a standard but worry about the verbosity compared to a custom representation.
Since you have a custom <seller> anyhow, I'd - for the sake of clarity - just use custom markup for the account links, too. You might still provide an extra resource that is the collection of accounts and provide a feed for that.
e.g.
<seller>
...
<accounts href="">
<account ..... />
<account .... />
</accounts>
</seller>
Jan
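A client consuming that kind of markup could look roughly like this; the element names, attributes and URIs are illustrative assumptions, not a defined format.

```python
# Hypothetical <seller> document in the spirit of the sketch above; all
# names and URIs are made up for illustration.
import xml.etree.ElementTree as ET

SAMPLE = """\
<seller>
  <name>ACME Ltd</name>
  <accounts href="http://example.org/sellers/1/accounts">
    <account href="http://example.org/accounts/17"/>
    <account href="http://example.org/accounts/42"/>
  </accounts>
</seller>
"""

def account_links(seller_xml):
    """Return (collection_uri, [account_uris]) extracted from a <seller> doc."""
    root = ET.fromstring(seller_xml)
    accounts = root.find("accounts")
    if accounts is None:
        return None, []
    return (accounts.get("href"),
            [a.get("href") for a in accounts.findall("account")])
```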
> I also wonder about the real benefits when the client will have to be built to understand the "foreign markup" anyway if it is to be of any use to an end user.
>
> I think I might be sold more on the idea if I could see an example of this embedding of feeds into a representation of a resource.
>
> Thank you for your time
>
> Piers
>
>
>
>
-----------------------------------
Jan Algermissen, Consultant
Mail: algermissen@...
Blog: http://www.nordsc.com/blog/
Work: http://www.nordsc.com/
-----------------------------------
Jan Algermissen wrote: > On Feb 2, 2010, at 4:16 AM, wahbedahbe wrote: > > >> --- In rest-discuss@yahoogroups.com, Jan Algermissen <algermissen1971@...> wrote: >> >>> On Feb 1, 2010, at 2:54 PM, Mike Kelly wrote: >>> >>>> Jan Algermissen wrote: >>>> >>>>> On Feb 1, 2010, at 1:38 PM, Mike Kelly wrote: >>>>> >>>>>> I think we need to agree on the definition of 'meaning' in this >>>>>> context.. because, to me, it includes more than just the current set of >>>>>> available link relations, and since we're not supposed to type resources >>>>>> - the only approach I can see working is predefined application flows >>>>>> >>>>>> >>>>> Predefined application flows violate the hypermedia constraint and couple the server in a way that REST deliberately aims to avoid. >>>>> >>>>> >>>> Predefined application flows like AtomPub violate the hypermedia constraint? >>>> >>> Yes. Roy confirmed that (recent post on atom-protocol list) >>> >> Really? Can you link to that? Was it the "MUST a collection be returned as an Atom feed?" thread? >> > > Yep. http://www.imc.org/atom-protocol/mail-archive/msg11487.html > That post is about over-specification, not predefinition. And leads me to ask - What is an 'adequate specification', if not predefinition? - Mike
On Feb 2, 2010, at 11:25 AM, Mike Kelly wrote: > [...] > That post is about over-specification, not predefinition. And leads me > to ask - What is an 'adequate specification', if not predefinition? I understood Roy to be saying that it is an over-specification that AtomPub requires GETs on collections to return an Atom feed. Such a predefinition is an over-specification in RESTland. Jan > > - Mike ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
Jan Algermissen wrote: > [...] > I understood Roy to be saying that it is an over-specification that AtomPub requires GETs on collections to return an Atom feed. > > Such a predefinition is an over-specification in RESTland. > > Jan Maybe so, but the use of the term "over"-specification implies that there is in fact an appropriate degree of specification, i.e. a predefined flow with a more suitable level of liberalism. - Mike
When a client sends a POST request and receives a 201 Created...

a) is the POST response body the steady state?
b) is it implied by the HTTP spec that the client will do a GET on the Location, and is that the steady state?
c) is this up to the media type that contained the link to the POST-accepting resource?

For the other return codes I think this:

200 OK - steady state is the POST response
202 Accepted - steady state is available at the link provided by the body of the 202 response
303 See Other - steady state is available at the resource that the Location points to

Jan
Boing... any idea?
On 01/02/2010 10:28, George wrote:
>
>
> Hi,
>
> Let's try to explain it a little further.
>
> On 01/02/2010 9:02, Jan Algermissen wrote:
> > George,
> >
> > On Jan 30, 2010, at 1:38 PM, George wrote:
> >
> >> Hi,
> >>
> >> I'm planning to develop a webservice, and I like to try the RESTful
> >> architecture.
> >>
> >> The service is about downloading some data from the server to a device
> >> attached on the local computer. The client needs to retrieve the command
> >> from the server and then send the response of the device to the server
> >> to check its validity. Then the server says if it is ok or not.
> >
> > I think I do not understand what you are up to. Why does the client
> fetch the command for the device from the server?
>
> The system is foreseen to control a hardware device. The issue is that
> the device only accepts a subset of commands, based on some cryptographic
> features.
>
> I don't want the command set and the cryptographic keys to be on the
> client, as that way I would have to replicate the keys on every client and
> the security could be compromised.
>
> Each command is encrypted with different keys depending on the device it
> is directed to. So the first issue is that the server needs to know
> the device so as to open the session with the correct set of keys. After
> that, the client gets the command (encrypted and MACed with server keys);
> this command is sent to the device, which will respond. The response has
> some crypto material that needs to be checked on the server. Then the
> client gets an ACK or NACK depending on the answer from the device
> (whether the command was executed correctly or not, and whether the device
> owns the correct set of keys and is not a fake device).
>
> >
> >
> >
> >>
> >> Device         Client                         Server
> >>                   |----- Get command ---------->|
> >>    |<-- command --|<----- command --------------|
> >>    |-- response ->|----- Response from device ->|
> >>                   |<---- Response from server --|
> >>                          indicating if the
> >>                          execution is ok or not
> >>
> >> It would be like: the client calls authenticate on the device. Then the server
> >> sends the command to be sent to the device for authentication.
> >
> > HTTP authentication is orthogonal. Use one of the HTTP standard
> authentication solutions.
>
> Authentication is done based on the crypto protocol that I explained above.
>
> >
> >
> >> The
> >> client sends this command to the device and the response is sent back to
> >> the server. The server then replies.
> >>
> >> I have thought on:
> >> /device/{id} as resource
> >> /device/{id}/authenticate
> >> GET will retrieve the command and a blank state
> >> <command>value</command>
> >> <state>not defined</state>
> >> PUT will send the response and get the real state
> >> ---> <response>value</response>
> >> <--- <state>not defined</state>
> >>
> >> I don't know if this is REST. Is it better to create another
> resource as:
> >> /device/{id}/authenticate/command (only GET available)
> >> /device/{id}/authenticate/response (only PUT available)
> >> /device/{id}/authenticate (only GET available for status)
> >>
> >> Any help is welcome.
> >
> > Can you explain your requirements? I am having trouble understanding
> what you are trying to do.
>
> The issue is that I need to get a command and then check the answer from
> that command. This will be done in 2 steps, and I don't know how to map
> that into resources.
>
> Thanks... hope now is clearer.
>
> CU
> Jorge
>
> > Jan
> >
> >
> >
> >> TA
> >>
> >>
> >>
> >>
> >>
> >>
> >>
> >>
> >
> > -----------------------------------
> > Jan Algermissen, Consultant
> >
> > Mail: algermissen@... <mailto:algermissen%40acm.org>
> > Blog: http://www.nordsc.com/blog/ <http://www.nordsc.com/blog/>
> > Work: http://www.nordsc.com/ <http://www.nordsc.com/>
> > -----------------------------------
> >
> >
> >
>
>
I just skimmed through this, but it seems similar to what SyncML
does - is that the case?
_________________________________________________
Melhores cumprimentos / Beir beannacht / Best regards
António Manuel dos Santos Mota
http://card.ly/amsmota
_________________________________________________
2010/2/2 George <george.news@...>:
> Boing... any idea?
>
On 02/02/2010 14:31, António Mota wrote:
> I just read this on the diagonal, but it seems similar to what SyncML
> does, is that the case?
SyncML seems to be used to synchronize devices' (mobiles, handhelds, ...)
information, such as contacts. My case is not such a thing, as the
commands are, for instance, RS232 commands to be sent to a local device.
Thanks. Anyway, I will read a little more about SyncML (out of curiosity).
See you
>
--- In rest-discuss@yahoogroups.com, Jan Algermissen <algermissen1971@...> wrote: > > When a client sends a POST request and receives a 201 Created... > > a) is the POST response body the steady state > > b) is it implied by HTTP spec that the client will do a GET on the Location and is that the steady state? > > c) is this up to the media type that contained the link to the POST-accepting resource? > > > > For other return codes I think this: > > 200 Ok - steady state is the POST response > 202 Accepted - steady state is available at the link provided by the body of the 202 response > 303 See Other - steady state is available ta resource that the Location points to > > Jan > I think the HTTP spec is pretty clear that 201 responses SHOULD have a body which constitutes your steady state. Section 9.5: If a resource has been created on the origin server, the response SHOULD be 201 (Created) and contain an entity which describes the status of the request and refers to the new resource, and a Location header (see section 14.30) Section 10.2.2: The newly created resource can be referenced by the URI(s) returned in the entity of the response, with the most specific URI for the resource given by a Location header field. The response SHOULD include an entity containing a list of resource characteristics and location(s) from which the user or user agent can choose the one most appropriate. The entity format is specified by the media type given in the Content-Type header field. I've always considered the Location header in 201 responses as information for intermediaries rather than the driver of application state. But I know I'm in the minority on that: a lot of people don't even have a body in their 201 responses (which to me is a violation of the spec). Regards, Andrew
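One way to encode that reading as code, hedged: the 201 entity, when present, is taken as the steady state; if a server omits it (arguably violating the SHOULD), the client falls back to a GET on Location. The `fetch` callable is a stand-in for a real HTTP GET and is an assumption of this sketch, not part of the spec.

```python
# Sketch of where the steady state lives after a POST, following the
# interpretation above; not a normative rule.

def steady_state_after_post(status, headers, body, fetch):
    """Return the representation the client should treat as steady state."""
    if status == 200:
        return body                        # steady state is the POST response
    if status == 201:
        if body:                           # the SHOULD-mandated entity
            return body
        return fetch(headers["Location"])  # fallback: dereference Location
    if status == 202:
        return body                        # entity should point at a status monitor
    if status == 303:
        return fetch(headers["Location"])  # steady state lives elsewhere
    raise ValueError("unhandled status for POST: %d" % status)
```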
--- In rest-discuss@yahoogroups.com, Jan Algermissen <algermissen1971@...> wrote: > [...] > > A truly RESTful client would do a GET on the collection and treat any response as 'correct' from a server POV (except for non-Collection representations, for example an audio file). > > Only if we do that is the server's independent evolvability preserved.
Ok... I agree with the above, but how do you define 'collection'? To me it's something that has a set of 'entries'. So you get the hierarchy, don't you? A 'good' client can/should be flexible on the media types, though (and the spec shouldn't try to restrict them). Perhaps the variability in media type means that the entries may be exclusively inline for some representations, which just means that the hierarchy might not map to your addressable resources, but it's still there. Andrew
But SHOULD is not SHALL; it is a recommendation, not an imposition... So not having a body on a 201 could not be considered a violation of the spec... Which indeed makes things more complicated when determining what the steady state should be... _________________________________________________ Melhores cumprimentos / Beir beannacht / Best regards António Manuel dos Santos Mota http://card.ly/amsmota _________________________________________________ 2010/2/2 wahbedahbe <andrew.wahbe@...> > [...] > I've always considered the Location header in 201 responses as information > for intermediaries rather than the driver of application state. But I know > I'm in the minority on that: a lot of people don't even have a body in their > 201 responses (which to me is a violation of the spec). > Regards, > > Andrew
Well, basically yes, it is for synchronizing devices, but it can be
extended to other things. For example, we use (among other things) the
Alert messages to pass what are called Command&Control messages between
two systems.
Now this is quite incompatible with the idea of REST, because SyncML
is based on Command elements; it's not resource oriented. What you can
do is use SyncML as the payload of REST messages, as you would with
any other media type, and let the service implementations deal with
the SyncML itself.
But even if this doesn't apply to your scenario, reading the spec MAY
give you some good ideas...
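The "SyncML as payload" idea might look roughly like this; the resource URI is made up, and the request is only built here, not sent.

```python
# Sketch of carrying a SyncML document as just another media type in a
# REST message; the URI below is an illustrative assumption.
# application/vnd.syncml+xml is SyncML's registered XML media type.

SYNCML_TYPE = "application/vnd.syncml+xml"

def build_syncml_post(resource_uri, syncml_doc):
    """Return (method, uri, headers, body) transferring a SyncML payload."""
    body = syncml_doc.encode("utf-8")
    headers = {
        "Content-Type": SYNCML_TYPE + "; charset=utf-8",
        "Content-Length": str(len(body)),
    }
    return ("POST", resource_uri, headers, body)
```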
_________________________________________________
Melhores cumprimentos / Beir beannacht / Best regards
António Manuel dos Santos Mota
http://card.ly/amsmota
_________________________________________________
2010/2/2 George <george.news@...>:
>
> On 02/02/2010 14:31, António Mota wrote:
>>
>> I just read this on the diagonal, but it seems similar to what SyncML
>> does, is that the case?
>
> SyncML seems to be used to synchronize devices' (mobiles, handhelds, ...) information, such as contacts. My case is not such a thing, as the commands are, for instance, RS232 commands to be sent to a local device.
>
> Thanks. Anyway I will read a little further SyncML (curiosity)
>
> See you
>>
>
>
I'll give it a try. I thought I could do it using a REST approach, but it
seems it is not going to be possible.
Let's read and see if a flash of inspiration comes ;)
See you
On 02/02/2010 18:10, António Mota wrote:
> Well, basically yes, is for synchronizing devices, but can be extended
> to other things. For example, we use (among other things) the Alert
> messages to pass what is called Command&Control messages between two
> systems.
>
> Now this is quite incompatible with the idea of REST, because SyncML
> is based on Command Elements, it's not resource oriented. What you can
> do is to use SyncML as payloads of REST messages, like you'll do with
> any other media-type, and let the service implementations deal with
> the SyncML itself.
>
> But even if this doesn't apply to your scenario, reading the spec MAY
> give you some good ideas...
>
> _________________________________________________
>
> Melhores cumprimentos / Beir beannacht / Best regards
>
> António Manuel dos Santos Mota
>
> http://card.ly/amsmota
> _________________________________________________
>
>
>
>
> 2010/2/2 George<george.news@...>:
>>
>> On 02/02/2010 14:31, António Mota wrote:
>>>
>>> I just skimmed through this, but it seems similar to what SyncML
>>> does; is that the case?
>>
>> SyncML seems to be used to synchronize devices' (mobiles, handhelds, ...) information, such as contacts. My case is not such a thing, as the commands are, for instance, RS232 commands to be sent to a local device.
>>
>> Thanks. Anyway, I will read a little more about SyncML (out of curiosity).
>>
>> See you
>>
>>
>>
>>>
>>> _________________________________________________
>>>
>>> Melhores cumprimentos / Beir beannacht / Best regards
>>>
>>> António Manuel dos Santos Mota
>>>
>>> http://card.ly/amsmota
>>> _________________________________________________
>>>
>>>
>>>
>>>
>>> 2010/2/2 George<george.news@...>:
>>>>
>>>> Boing... any idea?
>>>>
>>>> On 01/02/2010 10:28, George wrote:
>>>>>
>>>>>
>>>>> Hi,
>>>>>
>>>>> Let's try to explain it a little further.
>>>>>
>>>>> On 01/02/2010 9:02, Jan Algermissen wrote:
>>>>> > George,
>>>>> >
>>>>> > On Jan 30, 2010, at 1:38 PM, George wrote:
>>>>> >
>>>>> >> Hi,
>>>>> >>
>>>>> >> I'm planning to develop a webservice, and I like to try the RESTful
>>>>> >> architecture.
>>>>> >>
>>>>> >> The service is about downloading some data from the server to a device
>>>>> >> attached on the local computer. The client need to retrieve the command
>>>>> >> from the server and then send the response of the device to the server
>>>>> >> to check its validity. Then the server says if it is ok or not.
>>>>> >
>>>>> > I think I do not understand what you are up to. Why does the client
>>>>> fetch the command for the device from the server?
>>>>>
>>>>> The system is intended to control a hardware device. The issue is that
>>>>> the device only accepts a subset of commands, based on some cryptographic
>>>>> features.
>>>>>
>>>>> I don't want the command set and the cryptographic keys to be on the
>>>>> client, as that way I would have to replicate the keys on every client
>>>>> and security could be compromised.
>>>>>
>>>>> Each command is encrypted with different keys depending on the device it
>>>>> is directed to. So the first issue is that the server needs to know which
>>>>> device it is, so as to open the session with the correct set of keys.
>>>>> After that, the client gets the command (encrypted and MACed with server
>>>>> keys); this command is sent to the device, which will respond. The
>>>>> response contains some crypto material that needs to be checked on the
>>>>> server. The client then gets an ACK or NACK depending on the device's
>>>>> answer (whether the command was executed correctly, and whether the
>>>>> device owns the correct set of keys and is not a fake device).
>>>>>
>>>>> >
>>>>> >
>>>>> >
>>>>> >>
>>>>> >> Device     Client     Server
>>>>> >>              ----->   Get command
>>>>> >> <-----     <-----
>>>>> >>
>>>>> >> ----->       ----->   Response from device
>>>>> >>            <-----     Response from server indicating
>>>>> >>                       whether the execution was OK or not
>>>>> >>
>>>>> >> It would be like: client calls authenticate of device. then the server
>>>>> >> sends the command to be sent to the device for authentication.
>>>>> >
>>>>> > HTTP authentication is orthogonal. Use one of the HTTP standard
>>>>> authentication solutions.
>>>>>
>>>>> Authentication is done based on the crypto protocol that I explained above.
>>>>>
>>>>> >
>>>>> >
>>>>> >> The
>>>>> >> client send this command to the device and the response is sent back to
>>>>> >> the server. The server then replies.
>>>>> >>
>>>>> >> I have thought on:
>>>>> >> /device/{id} as resource
>>>>> >> /device/{id}/authenticate
>>>>> >> GET will retrieve the command and a blank state
>>>>> >> <command>value</command>
>>>>> >> <state>not defined</state>
>>>>> >> PUT will send the response and get the real state
>>>>> >> ---> <response>value</response>
>>>>> >> <--- <state>not defined</state>
>>>>> >>
>>>>> >> I don't know if this is REST. Is it better to create another
>>>>> resource as:
>>>>> >> /device/{id}/authenticate/command (only GET available)
>>>>> >> /device/{id}/authenticate/response (only PUT available)
>>>>> >> /device/{id}/authenticate (only GET available for status)
>>>>> >>
>>>>> >> Any help is welcome.
>>>>> >
>>>>> > Can you explain your requirements? I am having trouble understanding
>>>>> what you are trying to do.
>>>>>
>>>>> The issue is that I need to get a command and then check the answer from
>>>>> that command. This will be done in 2 steps, and I don't know how to map
>>>>> that into resources.
>>>>>
>>>>> Thanks... I hope it is clearer now.
>>>>>
>>>>> CU
>>>>> Jorge
>>>>>
>>>>> > Jan
>>>>> >
>>>>> >
>>>>> >
>>>>> >> TA
>>>>> >>
>>>>> >>
>>>>> >>
>>>>> >>
>>>>> >>
>>>>> >>
>>>>> >>
>>>>> >>
>>>>> >
>>>>> > -----------------------------------
>>>>> > Jan Algermissen, Consultant
>>>>> >
>>>>> > Mail: algermissen@...<mailto:algermissen%40acm.org>
>>>>> > Blog: http://www.nordsc.com/blog/<http://www.nordsc.com/blog/>
>>>>> > Work: http://www.nordsc.com/<http://www.nordsc.com/>
>>>>> > -----------------------------------
>>>>> >
>>>>> >
>>>>> >
>>>>>
>>>>>
>>>>
>>>>
>>>>
>>>
>>
>>
>
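As a rough illustration of the two-step exchange George describes (the client relays opaque, server-keyed blobs between device and server), here is a hedged Python sketch. Every name in it (`SERVER_KEY`, `server_get_command`, and so on) is hypothetical, and the HMAC "protocol" is only a stand-in for whatever real crypto the device uses:

```python
# In-memory stub of the two-step authenticate flow. The keys live only on
# the server; the client sees nothing but opaque blobs it relays over RS232.
import hmac
import hashlib
import os

SERVER_KEY = os.urandom(16)  # held on the server, never shipped to clients

def server_get_command(device_id):
    """Server side: what GET /device/{id}/authenticate/command might return."""
    command = b"AUTH:" + device_id.encode()
    mac = hmac.new(SERVER_KEY, command, hashlib.sha256).hexdigest().encode()
    return command + b"|" + mac  # opaque from the client's point of view

def device_respond(blob):
    """Device side: stand-in for the device's cryptographic response."""
    command, mac = blob.rsplit(b"|", 1)
    return mac  # a real device would compute a proof with its own keys

def server_check_response(device_id, response):
    """Server side: what PUT /device/{id}/authenticate/response might check."""
    expected = hmac.new(SERVER_KEY, b"AUTH:" + device_id.encode(),
                        hashlib.sha256).hexdigest().encode()
    return "ACK" if hmac.compare_digest(expected, response) else "NACK"

# The client merely shuttles blobs back and forth; keys stay server-side:
blob = server_get_command("42")
answer = device_respond(blob)
print(server_check_response("42", answer))  # ACK
```

The point of the sketch is the split of responsibilities: the two resources (`.../authenticate/command` and `.../authenticate/response`) each carry one leg of the exchange, which maps the two steps onto GET and PUT cleanly.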
Mike, thank you for pointing out the draft inlining spec... it was interesting.

Jan, I started off the thread suggesting a format similar to the one that you put forward as one of my alternatives. I have been convinced by the idea that really the collection should not contain Account elements but some sort of link. The inlining idea is good in that it formalises combining a link with some data from the target resource... at least it would be if the spec had moved forward from being a draft ;-)

However, what I'm really pushing for is an example of publishing a feed (or even feeds) within other content, for example within the seller element from my last post. If I end up with something similar to my last post (i.e. a seller element that contains a feed element) I'm back to a custom vocabulary... at which point I can't see the advantage of using Atom feeds... a generic client would never find the feeds in the first place (because my seller element doesn't look anything like a Service document)... Any thoughts?
Have you looked at the threading extensions for Atom? http://www.ietf.org/rfc/rfc4685.txt

I'm not a big fan of the service document. As far as generic clients go, I think they can infer the editability of a resource by the presence of a link rel='edit' without needing to consult a service document. The response to a PUT request is what's authoritative, not any sort of service document. Allow headers work just as well to provide a hint as to acceptable methods. Where I do use a "service document" in an otherwise-Atom Protocol system, I repurpose Google's XML sitemaps so I have a hierarchical format, instead of using a root-level flat-file document to define a hierarchy.

You can also have feeds of feeds in Atom, so I don't really understand the disadvantage you're seeing. I didn't propose Atom as the magical solution to your problem, merely as a way forward. Once you've identified its actual shortcomings, you're free to abandon it, but it will give you an awful lot of ideas on how to use link relations properly. See it through, though, since it provides a useful guideline on how to arrange and link collections. You can use standard link relations, even if you use your own vocabulary as a resource model. Better to use standard media types, though.

-Eric

"piers_lawson" wrote:
>
> Mike, thank you for pointing out the draft inlining spec... it was
> interesting.
>
> Jan, I started off the thread suggesting a format similar to the one
> that you put forward as one of my alternatives. I have been convinced
> by the idea that really the collection should not contain Account
> elements but some sort of link. The inlining idea is good in that it
> formalises combining a link with some data from the target
> resource... at least it would be if the spec had moved forward from
> being a draft ;-)
>
> However, what I'm really pushing for is an example of publishing a
> feed (or even feeds) within other content, for example within the
> seller element from my last post. If I end up with something similar
> to my last post (i.e. a seller element that contains a feed element)
> I'm back to a custom vocabulary... at which point I can't see the
> advantage of using Atom feeds... a generic client would never find
> the feeds in the first place (because my seller element doesn't look
> anything like a Service document)... Any thoughts?
>
Thanks for the quick response Eric. I'm certainly not dismissing Atom yet, and I hope I'm learning a lot from it and this discussion.

You mention a feeds-of-feeds approach... would that be a <feed> element that represents the parent resource, which then contains an <entry> for each "collection"? With the <entry> containing a link to the feed for that child collection (and possibly also containing an <inline> to provide a high-level view of the child feed's content)?

I think what I'm missing is some good examples of using Atom outside of the basic scenario. So given a resource that has properties of its own, and perhaps is linked to at least one, possibly two, collections of other resources... how would you represent it? What XML would you envisage being returned when the parent is requested?

Thanks everyone for your time!
"piers_lawson" wrote:
>
> You mention a feed of feed approach... would that be a <feed> element
> that represents the parent resource, which then contained an <entry>
> for each "collection"? With the <entry> containing a link to the feed
> for that child collection (and possibly also containing an <inline>
> to provide a high level view of the child feed's content)?
>

Yes. If the client wants to access a feed within a feed, it can follow the link rel='related' instead of the link rel='self' of an entry in the original feed. The link rel='self' just points to the first entry in the sub-collection. This is the first I've heard of <inline>, so I can't answer as to that.

-Eric
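Eric's rel='related' versus rel='self' distinction can be shown in a few lines of Python. The sample feed and URIs below are invented purely for illustration:

```python
# Pick the sub-feed link out of a feed-of-feeds entry: rel='related' leads
# to the child collection's own feed, rel='self' to an individual entry.
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"

doc = """<feed xmlns="http://www.w3.org/2005/Atom">
  <entry>
    <title>Accounts</title>
    <link rel="self" href="/seller/1/accounts/first-entry"/>
    <link rel="related" href="/seller/1/accounts/"/>
  </entry>
</feed>"""

root = ET.fromstring(doc)
entry = root.find(ATOM + "entry")
# Follow rel='related' to reach the sub-collection's own feed:
related = next(link.get("href") for link in entry.findall(ATOM + "link")
               if link.get("rel") == "related")
print(related)  # /seller/1/accounts/
```

A generic Atom client that understands the `related` relation can descend the hierarchy without any custom vocabulary.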
Ok ya... very bad wording on my part. Not a violation of the spec -- more just "disregard" for the spec. So "SHOULD" is to be treated as: "you really ought to unless you really know what you are doing and you have a good, valid reason not to", not "this is a good idea that you might want to consider". I find that most folks seem to be omitting a body in 201 responses for no reason other than they don't want to be bothered -- they are treating POST as being exempt from HATEOAS, not even using the Location header to redirect or anything like that. It's just RPC with the procedure call returning a URI that happens to be in the Location header. Pet peeve of mine...

Anyways, I've found the use of "Location" in 201 and 3xx to be a bit strange. On the one hand, you'd think that the intent of using it in 201 is to redirect the client to the new resource while indicating that a new resource has been created. But I've never been able to find a definitive statement on whether 201 should result in a redirect. The only text that comes close is in the text describing the Location header itself:

    The Location response-header field is used to redirect the recipient
    to a location other than the Request-URI for completion of the
    request or identification of a new resource. For 201 (Created)
    responses, the Location is that of the new resource which was created
    by the request. For 3xx responses, the location SHOULD indicate the
    server's preferred URI for automatic redirection to the resource.

It's not clear to me if the use of the word 'redirect' in the first sentence means an automatic request here or not -- you'd think it would be more explicit in the description of 201 if it was. I read it as to be used for automatic redirection in the 3xx case and identification of the new resource in the 201 case.
And AFAIK, no widely-used HTTP clients or browsers auto-redirect on 201.

So if you don't redirect, then this leaves the client in either the steady state defined by the body or, if there was no body in the response, in the same steady state it was in before the request (sort of like a 204 response).

That's my take anyways -- would love to know if anyone has authoritative info on whether 201 was supposed to redirect (and the body was meant as a "backup", as in the 3xx case).

Andrew

2010/2/2 António Mota <amsmota@...>

> But SHOULD is not SHALL; it is a recommendation, not an imposition... So not
> having a body on a 201 could not be considered a violation of the spec...
> Which indeed makes things more complicated in determining what a
> steady-state should be...
>
> _________________________________________________
>
> Melhores cumprimentos / Beir beannacht / Best regards
>
> António Manuel dos Santos Mota
>
> http://card.ly/amsmota
> _________________________________________________
>
>
>
> 2010/2/2 wahbedahbe <andrew.wahbe@...>
>
>>
>> --- In rest-discuss@yahoogroups.com <rest-discuss%40yahoogroups.com>, Jan
>> Algermissen <algermissen1971@...> wrote:
>> >
>> > When a client sends a POST request and receives a 201 Created...
>> >
>> > a) is the POST response body the steady state
>> >
>> > b) is it implied by the HTTP spec that the client will do a GET on the
>> Location and is that the steady state?
>> >
>> > c) is this up to the media type that contained the link to the
>> POST-accepting resource?
>> >
>> > For other return codes I think this:
>> >
>> > 200 OK - steady state is the POST response
>> > 202 Accepted - steady state is available at the link provided by the
>> body of the 202 response
>> > 303 See Other - steady state is available at the resource that the
>> Location points to
>> >
>> > Jan
>> >
>>
>> I think the HTTP spec is pretty clear that 201 responses SHOULD have a
>> body which constitutes your steady state.
>> Section 9.5: >> If a resource has been created on the origin server, the response >> SHOULD be 201 (Created) and contain an entity which describes the >> status of the request and refers to the new resource, and a >> Location header (see section 14.30) >> Section 10.2.2: >> The newly created resource can be referenced by the URI(s) >> returned in the entity of the response, with the most specific URI >> for the resource given by a Location header field. The response >> SHOULD include an entity containing a list of resource >> characteristics and location(s) from which the user or user agent >> can choose the one most appropriate. The entity format is >> specified by the media type given in the Content-Type header field. >> >> I've always considered the Location header in 201 responses as information >> for intermediaries rather than the driver of application state. But I know >> I'm in the minority on that: a lot of people don't even have a body in their >> 201 responses (which to me is a violation of the spec). >> Regards, >> >> Andrew >> >> >> > > -- Andrew Wahbe
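Andrew's reading of 201 handling can be sketched as a small, hedged Python example. The dict-shaped responses and the function name are invented for illustration, not any real client API:

```python
# Client-side rule for 201 Created, per the discussion above: the entity
# body (if any) is the new steady state; with no body, the client's state
# is unchanged and Location merely identifies the new resource (there is
# no automatic redirect on 201).

def steady_state_after_201(current_state, response):
    if response.get("body"):        # entity describing the created resource
        return response["body"]
    return current_state            # like a 204: application state unchanged

r1 = {"status": 201, "headers": {"Location": "/orders/7"}, "body": "<order id='7'/>"}
r2 = {"status": 201, "headers": {"Location": "/orders/8"}, "body": None}

print(steady_state_after_201("<orders/>", r1))  # <order id='7'/>
print(steady_state_after_201("<orders/>", r2))  # <orders/>
```

The second case is exactly the "bodiless 201" pattern Andrew objects to: the client learns a URI but gets no hypermedia to drive its next state.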
Andrew Wahbe wrote: > > Ok ya... very bad wording on my part. Not a violation of the spec -- > more just "disregard" for the spec. So "SHOULD" is to be treated as: > "you really ought to unless you really know what you are doing and > you have a good, valid reason not to", not "this is a good idea that > you might want to consider". (...) > The thing is, HTTP isn't constrained to being used on the Web. So there are plenty of things in there which, in the context of the Web, become a "MUST" even though the spec says "SHOULD". For example, the use of registered media types. If your context is intranet, then knock yourself out disrespecting the SHOULD by creating new media types willy- nilly. If your context is the Web, then how is an intermediary to tell the difference between one application/vnd.customers+xml and another? Such a collision is likely on the Web, so consider the use of registered media types a MUST. If your context is intranet, then g'head. These SHOULD instead of MUST situations in the spec aren't loopholes. Off the Web, I send Content-Location with 201 responses instead of Location, 'cuz it makes more sense to _me_ that way... But, I haven't read 2616bis lately (really need to quit putting that off, now), so I can't really comment on the issue, beyond pointing out that the Web turns a lot of HTTP's SHOULDs into MUSTs, for all intents and purposes. -Eric
On Feb 3, 2010, at 5:56 AM, Andrew Wahbe wrote: > > > Ok ya... very bad wording on my part. Not a violation of the spec -- more just "disregard" for the spec. So "SHOULD" is to be treated as: "you really ought to unless you really know what you are doing and you have a good, valid reason not to", not "this is a good idea that you might want to consider". I find that most folks seem to be omitting a body in 201 responses for no reason other than they don't want to be bothered -- they are treating POST as being exempt to HATEOAS, not even using the location header to redirect or anything like that. It's just RPC with the procedure call returning a URI that happens to be in the location header. Pet peeve of mine... > > Anyways, I've find the use of "Location" in 201 and 3xx to be a bit strange. On the one hand, you'd think that the intent of using it in 201 is to redirect the client to the new resource while indicating that a new resource has been created. But I've never been able to find a definitive statement on whether 201 should result in a redirect. The only text that comes close is in the text describing the location header itself: > The Location response-header field is used to redirect the recipient > to a location other than the Request-URI for completion of the > request or identification of a new resource. For 201 (Created) > > responses, the Location is that of the new resource which was created > > by the request. For 3xx responses, the location SHOULD indicate the > server's preferred URI for automatic redirection to the resource. > It's not clear to me if the use of the word 'redirect' in the first sentence means an automatic request here or not -- you'd think it would be more explicit in the description of 201 if it was. I read it as to be used for automatic redirection in the 3xx case and identification of the new resource in the 201 case. 
> And AFAIK, no widely-used HTTP clients or browsers do auto-redirect on 201

I found this: http://www.w3.org/Protocols/HTTP/1.1/rfc2616bis/issues/#i61 (which suggests that the original intention was location of created resource OR redirect target).

> So if you don't redirect, then this leaves the client in either the steady
> state defined by the body or, if there was no body in the response, in the
> same steady state it was in before the request (sort of like a 204 response).

Yes. Interestingly, this means that the client's application state does not change although it made a request. But what if the client has this current state:

GET /book

<documentIndex>
  <chapter href="chapter1.xml"/>
  <chapter href="chapter2.xml"/>
  <appendix href="appendix1.xml"/>
  <chapters href="chapters/"/>
</documentIndex>

and then it does

DELETE /book/appendix1.xml

204 No Content

The client's state is now 'wrong' in the sense that the third link should be removed. Since the server cannot know what state the client was in before the DELETE request, it cannot really assist by sending the updated state.

Should the client take care of 'adjusting' the state itself?

On the contrary, if you did

POST /book/chapters/

<newChapter>

the server would usually say "Hey, this has updated some resource, look:"

303 See Other
Location: /book

Hmmm - so it is probably wise to do this then:

DELETE /book/appendix1.xml

303 See Other
Location: /book

> That's my take anyways -- would love to know if anyone has authoritative
> info on whether 201 was supposed to redirect (and the body was meant as a
> "backup", as in the 3xx case).

See link above.

Jan

> Andrew
>
> 2010/2/2 António Mota <amsmota@...>
>
> > But SHOULD is not SHALL; it is a recommendation, not an imposition... So
> > not having a body on a 201 could not be considered a violation of the
> > spec... Which indeed makes things more complicated in determining what a
> > steady-state should be...
>
> _________________________________________________
>
> Melhores cumprimentos / Beir beannacht / Best regards
>
> António Manuel dos Santos Mota
>
> http://card.ly/amsmota
> _________________________________________________
>
>
>
> 2010/2/2 wahbedahbe <andrew.wahbe@...>
>
> > --- In rest-discuss@yahoogroups.com, Jan Algermissen <algermissen1971@...> wrote:
> > >
> > > When a client sends a POST request and receives a 201 Created...
> > >
> > > a) is the POST response body the steady state
> > >
> > > b) is it implied by the HTTP spec that the client will do a GET on the
> > > Location and is that the steady state?
> > >
> > > c) is this up to the media type that contained the link to the
> > > POST-accepting resource?
> > >
> > > For other return codes I think this:
> > >
> > > 200 OK - steady state is the POST response
> > > 202 Accepted - steady state is available at the link provided by the
> > > body of the 202 response
> > > 303 See Other - steady state is available at the resource that the
> > > Location points to
> > >
> > > Jan
> >
> > I think the HTTP spec is pretty clear that 201 responses SHOULD have a
> > body which constitutes your steady state.
> > Section 9.5:
> >     If a resource has been created on the origin server, the response
> >     SHOULD be 201 (Created) and contain an entity which describes the
> >     status of the request and refers to the new resource, and a
> >     Location header (see section 14.30)
> > Section 10.2.2:
> >     The newly created resource can be referenced by the URI(s)
> >     returned in the entity of the response, with the most specific URI
> >     for the resource given by a Location header field. The response
> >     SHOULD include an entity containing a list of resource
> >     characteristics and location(s) from which the user or user agent
> >     can choose the one most appropriate. The entity format is
> >     specified by the media type given in the Content-Type header field.
> >
> > I've always considered the Location header in 201 responses as
> > information for intermediaries rather than the driver of application
> > state. But I know I'm in the minority on that: a lot of people don't
> > even have a body in their 201 responses (which to me is a violation of
> > the spec).
> > Regards,
> >
> > Andrew
>
> --
> Andrew Wahbe

-----------------------------------
Jan Algermissen, Consultant
NORD Software Consulting

Mail: algermissen@...
Blog: http://www.nordsc.com/blog/
Work: http://www.nordsc.com/
-----------------------------------
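Jan's DELETE-then-303 idea, seen from the client side, might be sketched like this. The dict-shaped responses and the `handle_delete_response` helper are hypothetical, for illustration only:

```python
# After DELETE, a 303 See Other lets the server point the client at the
# updated parent resource, instead of leaving it with stale state as a
# bare 204 would.

def handle_delete_response(response, follow):
    """On 303, fetch the Location to obtain the fresh steady state."""
    if response["status"] == 303:
        return follow(response["headers"]["Location"])
    return None  # 204: no guidance; the client must adjust state itself

# Stand-in for a GET on the redirected-to resource:
fresh = {"/book": "<documentIndex>(appendix1 removed)</documentIndex>"}

result = handle_delete_response(
    {"status": 303, "headers": {"Location": "/book"}},
    follow=lambda uri: fresh[uri],
)
print(result)  # <documentIndex>(appendix1 removed)</documentIndex>
```

The trade-off is an extra round trip in exchange for the server, rather than the client, being responsible for the post-DELETE steady state.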
On Wed, Feb 3, 2010 at 2:29 AM, Jan Algermissen <algermissen1971@...> wrote:

> > So if you don't redirect then this leaves the client in either the steady
> > state defined by the body or, if there was no body in the response, in
> > the same steady state it was in before the request (sort of like a 204
> > response).
>
> Yes. Interestingly, this means that the client's application state does not
> change although it made a request. But what if the client has this current
> state:
>
> GET /book
>
> <documentIndex>
>   <chapter href="chapter1.xml"/>
>   <chapter href="chapter2.xml"/>
>   <appendix href="appendix1.xml"/>
>   <chapters href="chapters/"/>
> </documentIndex>
>
> and then it does
>
> DELETE /book/appendix1.xml
>
> 204 No Content
>
> The client's state is now 'wrong' in the sense that the third link should
> be removed. Since the server cannot know what state the client was in
> before the DELETE request, it cannot really assist by sending the updated
> state.
>
> Should the client take care of 'adjusting' the state itself?
>
> On the contrary, if you did
>
> POST /book/chapters/
>
> <newChapter>
>
> the server would usually say "Hey, this has updated some resource, look:"
>
> 303 See Other
> Location: /book
>
> Hmmm - so it is probably wise to do this then:
>
> DELETE /book/appendix1.xml
>
> 303 See Other
> Location: /book
>

Good point. One thing about PUT and DELETE is that, other than Atom, there aren't a lot of (any?) good examples of how to use them in hypermedia. AFAIK, HTML5 is just saying to add them as an option on <form>, which is a bit silly -- there doesn't seem to be any sense in a single hypertext construct being used for GET, POST, PUT and DELETE. The use cases and application flows for the different methods are just so different.

Let's consider Atom as an example here (as that's all we've got).
If your client is reading a feed and decides to delete an entry and gets a 204 back, does the client have to GET the feed again before it can take any other actions on the feed? Or can it assume that the DELETE operation had the consequences outlined by HTTP, AtomPub and the edit relation and make an assumption about the current state of the feed? In other words: maybe 204 doesn't mean "the steady state hasn't changed" but rather "the steady state has been adjusted as defined by the media type and the method"? Regards, Andrew
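The "steady state adjusted by media type and method" reading Andrew floats could look roughly like this on the client side; entries are modeled as (edit-URI, title) tuples purely for illustration:

```python
# After DELETE + 204 on an entry's edit URI, the client adjusts its cached
# feed state itself, as implied by HTTP, AtomPub, and the edit relation,
# rather than re-fetching the whole feed.

def apply_delete(feed_entries, deleted_edit_uri, status):
    if status == 204:  # server reported success with no entity
        return [e for e in feed_entries if e[0] != deleted_edit_uri]
    return feed_entries  # on an error status, assume nothing changed

feed = [("/feed/1/edit", "first"), ("/feed/2/edit", "second")]
print(apply_delete(feed, "/feed/1/edit", 204))  # [('/feed/2/edit', 'second')]
```

Whether a client may rely on this local adjustment, instead of a fresh GET, is exactly the question being debated here.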
http://www.jfokus.se/jfokus/preso/jf-10_DomainDrivenRESTWeb-Services.pdf

Thanks for all the discussions here... every opinion influenced my work in progress. Now I am designing and implementing the HATEOAS workflow engine... let's see...

* There is some terminology and conceptual abuse in my slides, but live I commented on that (like why application/xml is not enough for REST, etc.)...

--
------------------------------------------
Felipe Gaúcho
10+ Java Programmer
CEJUG Senior Advisor
This wound up being a long post, so I'm going to start by pre-stating that HTTP is an application protocol, and by prepeating my summary as thesis: application steady states are derived from HTTP messaging (think of a browser displaying a default 'broken image' icon, a different steady-state than one where the image was retrieved and rendered) separately from the state (and media type) of the resource that provided the container representation.

Andrew Wahbe wrote:
>
> Good point. One thing about PUT and DELETE is that, other than Atom,
> there aren't a lot (any?) good examples of how to use them in
> hypermedia. AFAIK, HTML5 is just saying to add them as an option on
> <form> which is a bit silly
>

Last I checked, adding XForms to HTML 5 was still under active consideration. I don't consider Atom a good example of using PUT and DELETE in hypermedia, because the hypermedia isn't actually instructing the client what to do.

I have mentioned my XForms Atom Protocol client, coming soon to a demo near you... My initial posting of the demo won't include a nifty XForms interface, though. The pragmatic reason for distilling out a static demo from my ongoing project is to fully explore the notion of XForms client implementation -- XForms relies on XHTML or SVG as a host language, so it's a matter of client capability that isn't exposed in headers, meaning you can't conneg around the problem, so I'm kinda stuck...

The demo will also (eventually) have a sandbox for publishing text/plain files (until it gets spammed, anyway) that y'all can play with, which allows for PUT and DELETE. Obviously, text/plain isn't a media type which describes PUT and DELETE, but the interface is XForms, which adds support for those methods into the host-language media type -- the media types of the host language and the target don't matter to my sandbox API, so long as XForms is allowed by the host language...
The directory contents will be modeled as XBEL, using application/xbel +xml, and served (like the rest of the demo) using client-side XSLT to turn that into XHTML + Xforms. The sandbox API, unlike Atom Protocol, is RESTful and does it without minting any media (sub)types, by using an existing media type already defined for collections of URIs of any media type... If logged in using HTTP Digest, the username is used as the port 80 process ID (group 'user' not 'www'). File creation is handled via POST to the collection, as with Atom Protocol. Users may only DELETE files they have created, but may edit (PUT) files created by others, based on the standard UNIX file and directory permissions already constraining the behavior of the httpd. The "sandbox API" is what I've been using for years to update .xsl, .css and .js files on my previous demos (without the newly-added hypertext constraint). If I need to close the sandbox, I just 'chmod -w sandbox', or make the server 410 with '-r' (at directory or file level). Allowing curl to PUT and DELETE is easy on an httpd, making it REST requires a couple of extra steps, though. It's pretty cool, but I promise to stop fiddling with it and post my demo without... You can, of course, utilize the sandbox without the Xforms interface (this 'paleowiki' is even fun to use with curl). But, in terms of "learning the API" you can see it plain as day in the server-generated hypertext application that self-documents it, and by introspecting response codes with a protocol analyzer. So g'head (soon) and write a quickie sandbox API client using libcurl, or just play with it using curl. I don't care if you don't follow my provided hypertext, just that I've met REST's burden of providing it, so you can figure out how to use curl by using 'view source'. > > -- there doesn't seem to be any sense in a single hypertext construct > being used for GET, POST, PUT and DELETE. 
> The use cases and
> application flows for the different methods are just so different.
>

I agree; it's the immediate drawback I notice in:

http://www.w3.org/2007/03/html-forms/

Which I'm going to give a whirl, anyway. I prefer the raw Xforms MVC design pattern. I have nothing against MVC per se, only its all-too-common usage on the Web in systems which break the identification of resources constraint. Putting MVC on the client makes it a RESTful design pattern. The 'XRX' design pattern mates an Xforms frontend to an Xquery backend; this design pattern may easily be implemented as a decoupled REST system.

I can't get enough of that buzz I get writing Xforms applications for RESTful systems -- the ability to rapidly prototype Web systems for visualization and analysis, without using Javascript, blows me away. If only I could figure out how to serve my Xforms applications to generic clients, say by having Xforms included within HTML 5, I'd be thrilled...

>
> Let's consider Atom as an example here (as that's all we've got). If
> your client is reading a feed and decides to delete an entry and gets
> a 204 back, does the client have to GET the feed again before it can
> take any other actions on the feed? Or can it assume that the DELETE
> operation had the consequences outlined by HTTP, AtomPub and the edit
> relation and make an assumption about the current state of the feed?
>

Yes (sorta), and no. A client can't assume a 2xx response to mean 'success' because of 202. If the server has no intention of performing the DELETE, then it should use a 4xx response. OTOH, a server isn't beholden to its own 2xx response to a DELETE, so "caveat client" applies. This goes for any media type, not just Atom, btw. The server's response to a DELETE request is only the server's response to the request (a shadow in Plato's cave). The effect on the resource (the true form) can only be determined via subsequent request.
Most servers will change their response to 404, when the correct response is usually a 410. But, even that depends on the context of the DELETE request... What if the DELETE request had a Link header whose link relation and content describe the proper forwarding for the resource? If the deleted resource has been combined with another, then a 307 redirect is called for. However, if the resource has been moved to a new location, the response would be a 301 redirect. Even if the client has no control over this, the response code to the DELETE request itself isn't authoritative of much.

>
> In other words: maybe 204 doesn't mean "the steady state hasn't
> changed" but rather "the steady state has been adjusted as defined by
> the media type and the method"?
>

It means neither. It only informs the client that the DELETE request didn't result in an error. Only the responses to GET, HEAD or OPTIONS requests tell you anything about resource state. Any of these requests made subsequent to a DELETE request will give a response code indicating the state of the resource -- not found, gone, moved or (perhaps) merged, etc. (Or, in the case of a collection feed, perhaps all members were deleted, or just the collection, or both.)

REST "application state" is held entirely on the client. A REST system has literally infinite "applications," defined as "what the user is trying to do." The application presents the user with a representation of resource state -- that's the first "steady state," when a form with a delete button finishes loading and styling. That delete form allows users to construct their own choice for the next state transition. (Other possible state transition selections I'll ignore include, but are not limited to: menu links, mailto links, links to help with the form, etc. which may also appear as part of the form's steady-state.)

When a user hits the delete button, a DELETE request is sent to the identified resource. Now, all heck breaks loose!
We haven't a clue as to the result of this action beyond "request accepted" or "request rejected," which have nothing to do with resource state -- but everything to do with *application* state.

What happens when the response is 4xx? There's no rule that says you have to treat that 4xx response as the next application state (even though you may do so). If you do, there's no reason the body of the 4xx response can't look identical to the last steady-state, although you may want to add some text (or styling) indicating the failure. You aren't limited to presenting your application within 200 responses.

If you don't want to load a new page, then Xforms can catch the response code and restyle the page accordingly. For example, if the response was 4xx, color the filename text red, gray it out, and make it not selectable. There -- the application just transitioned to a new steady-state without following any links (loading a new page). (The sandbox API returns 202 Accepted to DELETE, does chmod -r and removes the file from the XBEL index, causing a 410 response to subsequent requests -- who says my server has to allow you to delete your death threats...? ;-)

Likewise, if the response is 2xx, Xforms can turn right around and make a HEAD request to the allegedly-deleted resource. If the response to that is 4xx, the filename text is removed from the delete form's select box (the sandbox API just reloads the XBEL file regardless of response). There -- the application transitioned to a new steady-state without following any links, again.

HTTP is an application messaging protocol. The URI which responded with the delete form, has that delete form as its resource state.
But, the client's application state varies based on the state of each resource in the directory it represents in its select box as a list of filenames (or the state of a linked XBEL-represented index resource, in the sandbox API case -- the XBEL document is the Xforms 'Model', so refreshing the model shows the outcome of the DELETE request, which will be 304 on failure due to matching Etag).

So the state of such a delete application transitions from one steady-state to another based on the user selecting a filename and clicking the delete button -- regardless of whether your delete form is updating dynamically, or you are using it as a representation of the success and failure states expressed in response to the DELETE (by following the link, i.e. loading a new 4xx response page, dereferencing the deleted URL, whatevah), or media type used (I've given a general idea on how this is done, plus specifics of how it's done in my forthcoming sandbox API, without mentioning Atom).

About the only REST no-no with a 204 response is to present the user with that response as the next application state -- a blank page. So the client needs to have some logic, i.e. make a HEAD request and dynamically restyle, or make a GET request and present the user with a "success" application state wrapped in a 4xx response (which breaks no REST constraints -- resource state and application state aren't the same thing).

In a nut, this behavior is out-of-scope to media type; it's all about HTTP as application (not transport) protocol. Application steady states are derived from HTTP messaging (think of a browser displaying a default 'broken image' icon, a different steady-state than one where the image was retrieved and rendered) separately from the state of the resource that provided the container representation.

</ramble_on>

Watching Pagey "Ramble On" on my "It Might Get Loud" DVD too much,

Eric
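Eric's "2xx to DELETE, then verify with HEAD" flow can be sketched as plain client logic. This is a hypothetical sketch, not his Xforms code -- the function and state names here are made up for illustration:

```python
def next_app_state(delete_status, head_status=None):
    """Decide the client's next application state after a DELETE.

    A 2xx to DELETE only means the request wasn't rejected; the
    resource's true state is learned from a follow-up HEAD request.
    All names here are illustrative, not part of any real API.
    """
    if 400 <= delete_status < 500:
        # Request rejected: keep the same view, but flag the entry
        # (e.g. gray it out), as Eric suggests.
        return "mark-entry-failed"
    if 200 <= delete_status < 300:
        # Accepted -- but verify, since 202 promises nothing.
        if head_status is not None and 400 <= head_status < 500:
            return "remove-entry-from-list"   # 404/410: really gone
        return "keep-entry-pending"           # still there (or unknown)
    return "unexpected-response"
```

The point of the sketch is that the state transition is driven by HTTP messaging, not by trusting the DELETE response alone.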
Thanks Eric,

Could you post a small sample showing what you mean exactly? So a feed that contains one (or even better, two) feeds... and both the parent feed and the child feeds contain some additional data beyond the standard fields supported by Atom. I think an example would really solidify it in my mind.

Thank you for your input so far!
OK, it's rough around the edges yet, but it's time I posted it. There are a few ways in. If you're interested in the httpd project, start here, any browser will do:

http://charger.bisonsystems.net/
http://charger-admin.bisonsystems.net/

Or, you can jump into the project directly, but compatibility is currently limited to Firefox:

http://charger.bisonsystems.net/xmltest/index.xht

Yeah, there's a button to push. Sorry, I'm working on Xforms capability in an unobtrusive fashion, but development of same is required to be obtrusive.

Normally, I restrict all the internals stuff to localhost access, but for this demo I've created the charger-admin alias. Don't worry, the whole thing is read-only, so you can't do any damage. (I hope... :-)

The purpose is to demo a whole buncha stuff. The only links that work (apart from the directory browser) are the ones in body content, and the 'View' menu (the button for which takes you back into the directory browser, btw), but the variants are under construction so there's nothing to see there at this time.

Drilling down to individual posts, comment threads, and standalone comments does work. And, all from one XSLT stylesheet. That's the part where the application logic and XHTML template are cached on the client, and based entirely on standard elements and link relations. The applicability of this, of course, goes far beyond the demo's nature as a weblog.

Sandbox API is next, after the /date service is finished (a simple REST service to transform ISO 8601 date strings into human-readable form). All further notes are there in the project. Q&A taken here, on-topic to the RESTful nature of the design (and proposed design, as described by the Accept and Allow headers). I haven't figured out what I want to do with OPTIONS on this project, yet.

-Eric
Hello Eric,

Feedback coming in... just a few clicks around (Safari) and noticed content "negotiation" through URI template: http://charger.bisonsystems.net/xmltest/2006/aug/09/11.axml (also a 404 here)

Regards

Guilherme Silveira
Caelum | Ensino e Inovação
http://www.caelum.com.br/

On Sat, Feb 6, 2010 at 8:33 PM, Eric J. Bowman <eric@...> wrote:
>
>
> OK, it's rough around the edges yet, but it's time I posted it. There
> are a few ways in. If you're interested in the httpd project, start
> here, any browser will do:
>
> http://charger.bisonsystems.net/
> http://charger-admin.bisonsystems.net/
>
> Or, you can jump into the project directly, but compatibility is
> currently limited to Firefox:
>
> http://charger.bisonsystems.net/xmltest/index.xht
>
> Yeah, there's a button to push. Sorry, I'm working on Xforms capability
> in an unobtrusive fashion, but development of same is required to be
> obtrusive.
>
> Normally, I restrict all the internals stuff to localhost access, but
> for this demo I've created the charger-admin alias. Don't worry, the
> whole thing is read-only, so you can't do any damage. (I hope... :-)
>
> The purpose is to demo a whole buncha stuff. The only links that work
> (apart from the directory browser) are the ones in body content, and
> the 'View' menu (the button for which takes you back into the directory
> browser, btw), but the variants are under construction so there's
> nothing to see there at this time.
>
> Drilling down to individual posts, comment threads, and standalone
> comments does work. And, all from one XSLT stylesheet. That's the
> part where the application logic and XHTML template are cached on the
> client, and based entirely on standard elements and link relations.
> The applicability of this, of course, goes far beyond the demo's nature
> as a weblog.
>
> Sandbox API is next, after the /date service is finished (a simple REST
> service to transform ISO 8601 date strings into human-readable form).
> > All further notes are there in the project. Q&A taken here, on-topic > to the RESTful nature of the design (and proposed design, as described > by the Accept and Allow headers). I haven't figured out what I want to > do with OPTIONS on this project, yet. > > -Eric > >
> > Feedback coming in... just a few clicking around (safari) and noticed > content "negotiation" through URI template: > http://charger.bisonsystems.net/xmltest/2006/aug/09/11.axml (also a > 404 here) > Yeah, that stuff isn't built yet. I had to get creative with filename extensions, but that's a contrivance of the demo. -Eric
The default httpd response is the domain parking page, which you can see in action here: http://way.groo.vi/ Nanoweb has lots of cool stuff that I'm trying not to break, and some solid ideas for httpd implementation, including an .nwaccess format that's bass-ackwards from .htaccess but which I prefer. -Eric
Hey all, Volume 2 of This week in REST is up on the REST wiki - http://rest.blueoxen.net/cgi-bin/wiki.pl?RESTWeekly_Feb_1_2010 and the blog - http://bit.ly/almkw1 For contributing links this week visit http://rest.blueoxen.net/cgi-bin/wiki.pl?RESTWeekly_Feb_8_2010 Enjoy :) Cheers, Ivan
"piers_lawson" wrote: > > Could you post a small sample showing what you mean exactly. So a > feed that contains one (or even better two feeds)... and for the > parent feed and the child feeds they contain some additional data > beyond the standard fields supportted by Atom. I think an example > would really solidify it in my mind. > I'm working on your example. The demo I posted shows what I'm talking about, but I haven't worked the rest of my solution into the demo yet. When I have, I'll bump here. I'm almost finished (I think) making the XSLT also function in Opera (using a holistic approach to the cross- browser XSLT problem, rather than using @mode and an engine test, i.e. making some code specific to Mozilla's Transformiix processor or Opera's libxslt). After I've made the XSLT work with WebKit, I'll work those other tricks up my sleeve for dealing with collections into the demo. Then I'll make the XSLT work with IE, so I can get back to my original problem, which is dealing with Xforms. IE has a couple of nice Xforms extensions available. The result of this work, for me, will be a development style based on the union of subsets of XSLT 1.0, EXSLT and Xforms that works on the Web. And, a coding style (which can be validated using RELAX NG + Schematron) for hitting that cross-browser sweet spot, which results in code that also works for server-side transformation (if nothing else works, libxslt can be used on the server). Anyway, I ought to be able to provide an interesting collection example for you later this week. -Eric
Am 01.02.10 19:17, schrieb mike amundsen:
> *** REQUEST
> POST /prove-it HTTP/1.1
> Host: www.example.org
> Content-Type: application/x-www-form-urlencoded
> User-Agent: common-browser/1.0
>
> captcha-image=http%3A%2F%2Fwww.example.org%2Fcaptcha-images%2Fa1s2d3f4g5.png&user-name=Mike&captcha-animal=kitten
>
> *** RESPONSE
> HTTP/1.1 400 Bad Request
> Date: ...
> Server: smart-server/9.0
> ...
> Content-Type: text/html
> Content-Length: nnn
>
> ...
> <p class="error">Sorry, that was not a photo of a kitten. You must not
> be human.</p>

You used 400 Bad Request as the response, which makes sense. I have used 412 Precondition Failed until now, and I wonder if there is any reason to prefer either one.

Philipp Meier
All, as there's going to be a session on OpenRasta at the Mix conference, I'm thinking of organizing a small get-together over there for all ReSTafarians. Anyone around between 14th and 18th of March?

Seb
Hi Seb,

I'll be there again...

Dave Evans

-----Original Message-----
From: rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com] On Behalf Of Sebastien Lambla
Sent: 11 February 2010 11:52
To: rest-discuss@yahoogroups.com
Subject: [rest-discuss] Las Vegas for Mix10, any restafarians around?

All, as there's going to be a session on OpenRasta at the Mix conference, I'm thinking of organizing a small get-together over there for all ReSTafarians. Anyone around between 14th and 18th of March?

Seb
Volume 3 of This week in REST is up on the REST wiki - http://rest.blueoxen.net/cgi-bin/wiki.pl?RESTWeekly_Feb_8_2010 and the blog - http://bit.ly/9S2HM3 For contributing links this week visit http://rest.blueoxen.net/cgi-bin/wiki.pl?RESTWeekly_Feb_15_2010 How are you guys liking TWIR so far? Any suggestions on making it better? Thanks, Ivan
Which is the best way to deal with content negotiation? I mean that the server could detect and handle the content type requested by the client.

As far as I know I could use the following mechanisms:

Using the Accept or Content-Type HTTP headers
Using a format request parameter, clients can request a specific content type, as in http://something/resource?format=xml
Using the file extension in the URI, as in http://something/resource.xml.

Thanks in advance.
One of these doesn't belong: > > Using the ACCEPT or CONTENT_TYPE HTTP headers > Using a format request parameter, clients can request a specific > content type, as in http:/something/resource?format=xml Using the > file extension in the URI, as in http:/something/resource.xml. > The middle one. You have a negotiated resource: http://something/resource The resource analyzes the ACCEPT header of an incoming request, and chooses the appropriate variant, let's say it negotiates between: http://something/resource.xml http://something/resource.svg Or something. The server responds with the correct Content-Type for the variant it returns, and includes the URL in a Content-Location header. Don't forget to Vary: Accept. -Eric
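The variant selection Eric describes can be sketched server-side. This is a simplified illustration, not any real server's code -- it parses q-values but skips wildcard subtypes (`image/*`) and most of the Accept grammar:

```python
def choose_variant(accept_header, variants):
    """Pick the best variant for an Accept header value.

    `variants` maps media type -> variant URL. A rough sketch:
    no wildcard subtypes other than */*, no `level` parameters.
    """
    prefs = []
    for part in accept_header.split(","):
        fields = part.strip().split(";")
        mtype = fields[0].strip()
        q = 1.0
        for f in fields[1:]:
            name, _, val = f.strip().partition("=")
            if name == "q":
                q = float(val)
        prefs.append((q, mtype))
    prefs.sort(reverse=True)            # highest quality first
    for q, mtype in prefs:
        if q <= 0:
            continue
        if mtype == "*/*":
            return next(iter(variants.items()))
        if mtype in variants:
            return mtype, variants[mtype]
    return None                         # 406 Not Acceptable
```

Per Eric's answer, the server would then respond with the chosen variant's own Content-Type, a Content-Location header naming the variant URL, and Vary: Accept.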
On Mon, Feb 15, 2010 at 7:57 AM, javier <vrhj2000@...> wrote: > Which is the best way to deal with content negotiation? > I mean that the server could detect and handle the content > type requested by the client > > As far I know I could use the following mechanisms > > Using the ACCEPT or CONTENT_TYPE HTTP headers > Using a format request parameter, clients can request a specific content type, as in http:/something/resource?format=xml > Using the file extension in the URI, as in > http:/something/resource.xml. There's also the use of the Link header + "alternate" relation in a response, permitting the server to inform the client what representations it has available. I think it's safe to say that there's now consensus in the HTTP community that this is the most general solution. Mark.
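For the Link-header approach Mark mentions, a client needs to pull the "alternate" relations out of a response header. A rough sketch (the parsing here is deliberately naive -- it splits on commas and would mishandle target URLs that contain them):

```python
import re

def parse_link_alternates(link_header):
    """Extract rel="alternate" targets (and their media types) from
    a Link header value, e.g.:
      </resource.xml>; rel="alternate"; type="application/xml"
    A sketch only; a full parser would follow the spec's quoting rules.
    """
    alternates = []
    for entry in link_header.split(","):
        m = re.match(r'\s*<([^>]+)>(.*)', entry)
        if not m:
            continue
        target, params = m.groups()
        rel = re.search(r'rel="([^"]*)"', params)
        typ = re.search(r'type="([^"]*)"', params)
        if rel and "alternate" in rel.group(1).split():
            alternates.append((target, typ.group(1) if typ else None))
    return alternates
```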
On Feb 16, 2010, at 3:43 PM, Mark Baker wrote: > On Mon, Feb 15, 2010 at 7:57 AM, javier <vrhj2000@...> wrote: >> Which is the best way to deal with content negotiation? >> I mean that the server could detect and handle the content >> type requested by the client >> >> As far I know I could use the following mechanisms >> >> Using the ACCEPT or CONTENT_TYPE HTTP headers >> Using a format request parameter, clients can request a specific content type, as in http:/something/resource?format=xml >> Using the file extension in the URI, as in >> http:/something/resource.xml. > > There's also the use of the Link header + "alternate" relation in a > response, permitting the server to inform the client what > representations it has available. I think it's safe to say that > there's now consensus in the HTTP community that this is the most > general solution. I'd like to add a pointer to http://tools.ietf.org/html/rfc2295 , especially the Alternates header: http://tools.ietf.org/html/rfc2295#section-8.3 Jan > > Mark. > > > ------------------------------------ > > Yahoo! Groups Links > > > ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
I'm working on an HTTP service-oriented system. I'm trying to figure out
how to communicate validation exceptions from the server to the client. For
example, if I have a resource that uses the URI template: /foo/{id}, where
{id} has to be a number. Is the right status in those cases 400/Bad
Request? How about for an incorrectly formatted date query parameter?
Does anyone have further advice about contents of a body format (some form
of json/xml) that would embody the details of the invalid parameters?
This type of validation is second nature to web-based form entry and HTML
based frameworks, but doing it data-oriented is new (at least to me). Are
there common ways to represent validation errors?
Thanks
-Solomon Duskis
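Solomon's /foo/{id} case can be sketched as a tiny dispatcher. One assumption to flag: this treats a non-numeric id as 404 ("no such resource"), which is only one of several defensible answers debated in the replies (400 and 422 are the others); the function name and path prefix are illustrative:

```python
def status_for_path(path):
    """Map a /foo/{id} request path to a status code, where {id}
    must be a number. A sketch of one policy: a non-numeric id
    identifies nothing, so it gets 404 rather than 400; 400 is
    reserved for requests that are malformed as HTTP.
    """
    prefix = "/foo/"
    if not path.startswith(prefix):
        return 404
    ident = path[len(prefix):]
    return 200 if ident.isdigit() else 404
```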
Given that the creation of APIs that ignore the hypermedia constraint is undeniably an ongoing activity wouldn't it be nice to give those a name in order to differentiate them from RESTful APIs? I think that it can in various circumstances be argued to be to have HTTP based APIs that define some URI layout and maybe only use generic media types. It is certainly more useful than doing the same API with RMI, SOAP or CORBA. However, given there is no name for them, they simple become 'REST-APIs' which obfuscates the fact that systems using those APIs will *not* have the properties induced by REST. If we name them, differentiation between the two becomes a lot easier. What I lack is a compelling name that sounds cool enough so API owners can still be proud even if they are not (yet) in the REST-Olymp of things :-) ? Any ideas? HTTP-base APIs Fixed-URI-APIs Both *not* cool.... Jan ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
On Feb 16, 2010, at 9:40 PM, Solomon Duskis wrote:
>
>
> I'm working on an HTTP service-oriented system. I'm trying to figure out how to communicate validation exceptions from the server to the client. For example, if I have a resource that uses the URI template: /foo/{id}, where {id} has to be a number. Is the right status in those cases 400/Bad Request? How about for an incorrectly formatted date query parameter?
I used 422 Unprocessable Entity for that (but on a form data POST request)
What about a 404 with an HTML body explaining the cause?
>
> Does anyone have further advice about contents of a body format (some form of json/xml) that would embody the details of the invalid parameters?
>
I like to use HTML with a well-known micro format that enables programs to parse out aspects of the message.
> This type of validation is second nature to web-based form entry and HTML based frameworks, but doing it data-oriented is new (at least to me). Are there common ways to represent validation errors?
>
400 Bad Request (syntactical error)
409 Conflict (logical error) [this is bending the meaning a little]
415 Unsupported Media Type (server does not know request body type)
422 Unprocessable Entity (semantic error)
Jan
>
> Thanks
>
> -Solomon Duskis
>
>
>
-----------------------------------
Jan Algermissen, Consultant
NORD Software Consulting
Mail: algermissen@...
Blog: http://www.nordsc.com/blog/
Work: http://www.nordsc.com/
-----------------------------------
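Jan's four codes read naturally as a small dispatch table; a sketch (the classification keys here are illustrative, not any standard):

```python
def status_for_validation_error(kind):
    """Map a validation-failure class to the status codes Jan lists.

    Keys are made-up labels for this sketch:
      "syntax"     -> body isn't parseable at all
      "conflict"   -> clashes with current resource state
      "media-type" -> server can't read this Content-Type
      "semantic"   -> well-formed, but the values are wrong
    """
    table = {
        "syntax": 400,
        "conflict": 409,
        "media-type": 415,
        "semantic": 422,
    }
    return table.get(kind, 400)   # default to the generic 400
```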
Hi Jan,

Thanks for the response! There are quite a few WebDAV status codes that I didn't know about.

Regarding the following:

> <snip>
>
> > Does anyone have further advice about contents of a body format (some form of json/xml) that would embody the details of the invalid parameters?
>
> I like to use HTML with a well-known micro format that enables programs to parse out aspects of the message.

Do you have examples of such a well-known micro format?

-Solomon
On Feb 16, 2010, at 10:04 PM, Solomon Duskis wrote:
> Hi Jan,
>
> Thanks for the response! There are quite a few WebDAV status codes that I didn't know about.
>
> Regarding the following:
> > <snip>
> > > Does anyone have further advice about contents of a body format (some form of json/xml) that would embody the details of the invalid parameters?
> >
> > I like to use HTML with a well-known micro format that enables programs to parse out aspects of the message.
>
> Do you have examples of such a well-known micro format?

You could say that you constrain the HTML to include

<p class="error"></p>
<p class="suggested-fix"></p>
<p class="stack-trace"></p>

nothing fancy really, just something to make sure a machine can do a little bit with it instead of dumping it right into the log :-)

Jan

> -Solomon

-----------------------------------
Jan Algermissen, Consultant
NORD Software Consulting

Mail: algermissen@...
Blog: http://www.nordsc.com/blog/
Work: http://www.nordsc.com/
-----------------------------------
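A client-side sketch of consuming Jan's microformat with Python's stdlib parser (assumptions: the class names are exactly those from his post, and the classed elements aren't nested inside one another):

```python
from html.parser import HTMLParser

class ErrorMicroformat(HTMLParser):
    """Pull Jan's suggested classes (error, suggested-fix,
    stack-trace) out of an HTML error body.
    """
    FIELDS = {"error", "suggested-fix", "stack-trace"}

    def __init__(self):
        super().__init__()
        self.fields = {}
        self._current = None

    def handle_starttag(self, tag, attrs):
        cls = dict(attrs).get("class")
        if cls in self.FIELDS:
            self._current = cls
            self.fields[cls] = ""

    def handle_data(self, data):
        if self._current:
            self.fields[self._current] += data

    def handle_endtag(self, tag):
        self._current = None

def parse_error_body(html):
    """Return a dict of whichever microformat fields were present."""
    p = ErrorMicroformat()
    p.feed(html)
    return p.fields
```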
I include a definition of an error message in all the media-type formats I
use. This error item could appear within the body of any response from the
server:
XML
<error>
<code />
<message />
<details />
</error>
JSON
{"error":{"code":"","message":"","details":""}}
XHTML (used w/ Atom, too)
<div class="error">
<span class="code"></span>
<span class="message"></span>
<span class="details"></span>
</div>
mca
http://amundsen.com/blog/
On Tue, Feb 16, 2010 at 16:04, Solomon Duskis <sduskis@...> wrote:
>
>
> Hi Jan,
>
> Thanks for the response! There are quite a few WebDAV status codes that I
> didn't know about.
>
> Regarding the following:
>
>
>> <snip>
>>
>
>
>> >
>> > Does anyone have further advice about contents of a body format (some
>> form of json/xml) that would embody the details of the invalid parameters?
>> >
>>
>> I like to use HTML with a well-known micro format that enables programs to
>> parse out aspects of the messge.
>
>
> Do you have examples of such a well-known micro format?
>
> -Solomon
>
>
>
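mike's error item is simple enough to emit from stdlib serializers; a sketch (element and key names are taken from his post, everything else is illustrative):

```python
import json
from xml.etree.ElementTree import Element, SubElement, tostring

def error_json(code, message, details):
    """Serialise the error item in mike's JSON shape."""
    return json.dumps({"error": {"code": code,
                                 "message": message,
                                 "details": details}})

def error_xml(code, message, details):
    """Serialise the same item in his XML shape."""
    root = Element("error")
    for tag, text in (("code", code), ("message", message),
                      ("details", details)):
        SubElement(root, tag).text = text
    return tostring(root, encoding="unicode")
```

Because the same item can appear in any response body, a client can look for it uniformly regardless of which representation it negotiated.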
A while back I put together the following set of "not quite REST" style definitions for a discussion I was having with some colleagues:

RPC
---
- HTTP-based RPC (SOAP or POX over HTTP etc.)
- only meets the client-server constraint

Unlinked Resource Oriented
--------------------------
- has well-defined resources at distinct URIs and uses HTTP methods properly. But representations are just serialized "data" with no links
- meets client-server, layered system, caching, stateless (usually -- some stray here via cookies etc.), identification of resources, manipulation of resources through representations
- partially meets self-descriptive messages in that HTTP meta-data is used properly but a non-standard media type or "generic" media type (XML, JSON) or otherwise "anemic" format is used

Resource Oriented
-----------------
Like Unlinked Resource Oriented but with links in the data.
- this is tricky on the constraints met. Meets the same constraints as above; self-descriptive messages still not met because of non-standard or generic format. Arguably only partially meets HATEOAS even though there are links.

The distinction between "Resource Oriented" and REST is subtle. This is basically many of the APIs out there that miss the mark in media type design. Yes there are links, and so the client is still decoupled from the URI layout, but the client is bound to details of the service in other ways Roy outlines in his "REST APIs must be hypertext-driven" blog posting. The result is that you still have clients that are very coupled to the service, as the media type does not enable the client to be driven without other out-of-band info. In short, you can't just throw URIs into arbitrary JSON and get hypermedia (though that is what you often see in this style).

Anyways, that was what I came up with. It's not quite the "3 out of 4 of the interface constraints" style you propose -- I actually can't say that I see that too often.
I would define that as an API where the representation is in a well defined, standard format that is not hypermedia -- what is a good example of this kind of format (that is used in APIs)? Regards, Andrew --- In rest-discuss@yahoogroups.com, Jan Algermissen <algermissen1971@...> wrote: > > Given that the creation of APIs that ignore the hypermedia constraint is undeniably an ongoing activity wouldn't it be nice to give those a name in order to differentiate them from RESTful APIs? > > I think that it can in various circumstances be argued to be to have HTTP based APIs that define some URI layout and maybe only use generic media types. It is certainly more useful than doing the same API with RMI, SOAP or CORBA. > > However, given there is no name for them, they simple become 'REST-APIs' which obfuscates the fact that systems using those APIs will *not* have the properties induced by REST. > > If we name them, differentiation between the two becomes a lot easier. > > > What I lack is a compelling name that sounds cool enough so API owners can still be proud even if they are not (yet) in the REST-Olymp of things :-) ? > > Any ideas? > > > HTTP-base APIs > Fixed-URI-APIs > > Both *not* cool.... > > Jan > > > > ----------------------------------- > Jan Algermissen, Consultant > NORD Software Consulting > > Mail: algermissen@... > Blog: http://www.nordsc.com/blog/ > Work: http://www.nordsc.com/ > ----------------------------------- >
On Feb 16, 2010, at 10:37 PM, wahbedahbe wrote:
>
> Resource Oriented
> -----------------

That's actually quite good, I think. Though 'resource orientation' nowadays implies REST :-) Oh well...

Jan
On Feb 16, 2010, at 7:24 AM, Eric J. Bowman wrote: > One of these doesn't belong: > > > > > Using the ACCEPT or CONTENT_TYPE HTTP headers > > Using a format request parameter, clients can request a specific > > content type, as in http:/something/resource?format=xml > > Using the file extension in the URI, as in http:/something/resource.xml. > > > > The middle one. Interesting. I claim there's no difference at all between options two and three, as at least from a REST perspective there's no difference between a query parameter and a file extension. I wonder why you'd disagree? Stefan -- Stefan Tilkov, http://www.innoq.com/blog/st/
On Feb 16, 2010, at 9:40 PM, Solomon Duskis wrote:
> I'm working on an HTTP service-oriented system. I'm trying to figure out how to communicate validation exceptions from the server to the client. For example, if I have a resource that uses the URI template: /foo/{id}, where {id} has to be a number. Is the right status in those cases 400/Bad Request? How about for a an incorrectly formatted date query parameter?
>
>
> Does anyone have further advice about contents of a body format (some form of json/xml) that would embody the details of the invalid parameters?
>
The second time today I'm writing something to this effect: AFAICT from your description, a resource /foo/123 exists and a resource /foo/abc doesn't; that sounds like a clear 404 to me. As Jan points out, some descriptive text in HTML, with or without a microformat, seems reasonable.
Stefan
--
Stefan Tilkov, http://www.innoq.com/blog/st/
Recess uses 409 for validation errors. I think it fits well.
409 Conflict
The request could not be completed due to a conflict with the current state
of the resource. This code is only allowed in situations where it is
expected that the user might be able to resolve the conflict and resubmit
the request. The response body SHOULD include enough information for the
user to recognize the source of the conflict. Ideally, the response entity
would include enough information for the user or user agent to fix the
problem; however, that might not be possible and is not required.
[...]
- Kev
On Tue, Feb 16, 2010 at 12:40 PM, Solomon Duskis <sduskis@...> wrote:
>
>
> I'm working on an HTTP service-oriented system. I'm trying to figure out
> how to communicate validation exceptions from the server to the client. For
> example, if I have a resource that uses the URI template: /foo/{id}, where
> {id} has to be a number. Is the right status in those cases 400/Bad
> Request? How about for a an incorrectly formatted date query parameter?
>
> Does anyone have further advice about contents of a body format (some form
> of json/xml) that would embody the details of the invalid parameters?
>
> This type of validation is second nature to web-based form entry and HTML
> based frameworks, but doing it data-oriented is new (at least to me). Are
> there common ways to represent validation errors?
>
>
> Thanks
>
> -Solomon Duskis
>
>
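The options discussed in this thread (404 when the id segment identifies nothing, 400 for a malformed query parameter, plus a descriptive body) can be sketched roughly as follows. The function name and the JSON error-body shape are illustrative only, not from any standard or framework:

```python
import json
from datetime import date

def validate_request(id_segment: str, date_param: str):
    """Return (status, body) for a request to /foo/{id}?date={date_param}."""
    if not id_segment.isdigit():
        # /foo/abc identifies no resource at all -> 404, with an explanatory body
        return 404, json.dumps({"error": "no such resource", "id": id_segment})
    try:
        date.fromisoformat(date_param)
    except ValueError:
        # the resource exists, but the query parameter is malformed -> 400
        return 400, json.dumps({"error": "invalid date", "param": date_param})
    return 200, json.dumps({"id": int(id_segment)})

print(validate_request("abc", "2010-02-16")[0])  # 404
print(validate_request("123", "16/02/2010")[0])  # 400
```

Whether the second case deserves 400 or 409 is exactly the disagreement in this thread; only the 404-vs-conflict distinction is settled by the definitions quoted above.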
Rigid? Static? Fixed?

- Kev

On Tue, Feb 16, 2010 at 12:39 PM, Jan Algermissen <algermissen1971@...> wrote:
> Given that the creation of APIs that ignore the hypermedia constraint is
> undeniably an ongoing activity, wouldn't it be nice to give those a name in
> order to differentiate them from RESTful APIs?
>
> I think that it can in various circumstances be argued to be useful to have
> HTTP-based APIs that define some URI layout and maybe only use generic media
> types. It is certainly more useful than doing the same API with RMI, SOAP or
> CORBA.
>
> However, given there is no name for them, they simply become 'REST APIs',
> which obfuscates the fact that systems using those APIs will *not* have the
> properties induced by REST.
>
> If we name them, differentiation between the two becomes a lot easier.
>
> What I lack is a compelling name that sounds cool enough so API owners can
> still be proud even if they are not (yet) in the REST-Olymp of things :-) ?
>
> Any ideas?
>
> HTTP-based APIs
> Fixed-URI APIs
>
> Both *not* cool....
>
> Jan
>
> -----------------------------------
> Jan Algermissen, Consultant
> NORD Software Consulting
>
> Mail: algermissen@...
> Blog: http://www.nordsc.com/blog/
> Work: http://www.nordsc.com/
> -----------------------------------
On Tue, Feb 16, 2010 at 12:53 PM, Kev Burns <kevburnsjr@...> wrote:
> Recess uses 409 for validation errors. I think it fits well.
>
> 409 Conflict
> [...]

That sounds like a good plan overall, but for this specific instance, I don't think it's valid. In this case, there is no resource state to be conflicted with, as the actual resource identity is the problem. Having an explanation in the body of a 404 is a better choice for this specific case.

Regards,

Will Hartung
(willh@...)
I call them POD services, for Plain Old Data.

To: algermissen1971@mac.com
CC: rest-discuss@yahoogroups.com
From: kevburnsjr@...
Date: Tue, 16 Feb 2010 12:55:47 -0800
Subject: Re: [rest-discuss] A Name for "3 out of 4 REST constraints" APIs?

Rigid? Static? Fixed?

- Kev

On Tue, Feb 16, 2010 at 12:39 PM, Jan Algermissen <algermissen1971@...> wrote:
> Given that the creation of APIs that ignore the hypermedia constraint is
> undeniably an ongoing activity wouldn't it be nice to give those a name in
> order to differentiate them from RESTful APIs?
> [...]
Ah, the old Addressable Resources Sans Engine Of Application State conundrum. No, nothing comes to mind.

ian

--- In rest-discuss@yahoogroups.com, Jan Algermissen <algermissen1971@...> wrote:
> Given that the creation of APIs that ignore the hypermedia constraint is
> undeniably an ongoing activity wouldn't it be nice to give those a name in
> order to differentiate them from RESTful APIs?
> [...]
CR - Cloud Resource
Additional thought:

In a sense, the APIs in question do make use of hypermedia. They just use it at design time by making the server's state machine static. This information is then published as *hypermedia* for *design time* consumption instead of runtime consumption.

The consequence is of course that client and server are coupled by their original design[1]. However, systems still benefit from the simplicity induced by this 'style', so it should not be ruled out as such, just be correctly understood.

An appropriate name would maybe have to emphasize the static nature of the use of hypermedia?

Resource Orientation with Static Hypermedia (ROSH)
Resource Oriented and Coupled (ROAC)

etc.

Jan

[1] http://tech.groups.yahoo.com/group/rest-discuss/message/8377

On Feb 16, 2010, at 9:39 PM, Jan Algermissen wrote:
> Given that the creation of APIs that ignore the hypermedia constraint is
> undeniably an ongoing activity wouldn't it be nice to give those a name in
> order to differentiate them from RESTful APIs?
> [...]

-----------------------------------
Jan Algermissen, Consultant
NORD Software Consulting

Mail: algermissen@...
Blog: http://www.nordsc.com/blog/
Work: http://www.nordsc.com/
-----------------------------------
--- In rest-discuss@yahoogroups.com, Jan Algermissen <algermissen1971@...> wrote:
>
> Additional thought:
>
> In a sense, the APIs in question do make use of hypermedia. They just use it at design time by making the server's state machine static. This information is then published as *hypermedia* for *design time* consumption instead of runtime consumption.
>
> The consequence is of course that client and server are coupled by their original design[1]. However, systems still benefit from the induced simplicity by this 'style' so it should not be ruled out as such, just be correctly understood.
>
>
> An appropriate name would maybe have to emphasize the static nature of the use of hypermedia?
>
> Resource Orientation with Static Hypermedia (ROSH)
> Resource Oriented and Coupled (ROAC)
>
> etc.
>
> Jan
>
> [1] http://tech.groups.yahoo.com/group/rest-discuss/message/8377
>
I'm not sure if design-time, "static hypermedia" makes sense. From the message you quoted:
Hypermedia means the placement of controls within the
presentation of information
If the links and controls aren't in the data communicated at runtime, then I think it's a stretch to call it "hypermedia".
How about "Pure Data Resources" or "Data-Only Resources"?
Andrew
> Interesting. I claim there's no difference at all between options two
> and three, as at least from a REST perspective there's no difference
> between a query parameter and a file extension.
>
> I wonder why you'd disagree?

Because this is a common misconception that arises from the notion of URI opacity. REST's notion of opaque URIs does not somehow override RFC 3986, which transparently declares quite some difference between path and query. Roy would say, "[Q]uery is not a substitute for identification of resources."

http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven#comment-720

Just think about how any httpd handles incoming requests. If the query is being used as a filename extension (violating the Identification of Resources constraint), then you'd have some processing to do in order to attach the query data to the path data before continuing. Using a filename extension requires no such query processing at the server. So clearly, they are not the same.

Since URIs are opaque, there is no way for an intermediary to determine that the query is being mis-used as a filename extension. It is well known that intermediaries tend not to cache responses to requests containing queries. The use of a filename extension as a filename extension is well understood by intermediaries, whereas using a query as a filename extension is not.

-Eric
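The path/query distinction Eric is pointing at is visible in any generic RFC 3986 parser, without server-specific knowledge; a minimal sketch using Python's standard library:

```python
from urllib.parse import urlsplit

# A generic RFC 3986 parser separates the hierarchical path from the
# non-hierarchical query without any server-specific knowledge.
parts = urlsplit("http://example.org/resource.pdf?lang=en")
print(parts.path)   # /resource.pdf  (hierarchical; carries the extension)
print(parts.query)  # lang=en        (separate, non-hierarchical component)

# When the format is pushed into the query instead, the path alone no
# longer distinguishes the variant; the server must parse the query first.
parts2 = urlsplit("http://example.org/resource?format=pdf")
print(parts2.path)   # /resource
print(parts2.query)  # format=pdf
```

This only demonstrates the syntactic split the RFC defines; whether that settles the REST question is exactly what the rest of the thread argues about.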
On Feb 17, 2010, at 9:34 PM, Eric J. Bowman wrote:
>> Interesting. I claim there's no difference at all between options two
>> and three, as at least from a REST perspective there's no difference
>> between a query parameter and a file extension.
>>
>> I wonder why you'd disagree?
>
> Because this is a common misconception that arises from the notion of
> URI opacity. REST's notion of opaque URIs does not somehow override RFC
> 3986, which transparently declares quite some difference between path
> and query. Roy would say, "[Q]uery is not a substitute for
> identification of resources."
>
> http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven#comment-720

I don't interpret this the same way; it would be interesting to get Roy's take on this.

> Just think about how any httpd handles incoming requests. If the query
> is being used as a filename extension (violating the Identification of
> Resources constraint), then you'd have some processing to do in order
> to attach the query data to the path data before continuing. Using a
> filename extension requires no such query processing at the server.

I'm not sure I get what you're saying. Using or not using something like mod_rewrite to turn "?format=pdf" into ".pdf" does not change anything with regards to RESTful HTTP usage from my understanding.

> So clearly, they are not the same. Since URIs are opaque, there is no
> way for an intermediary to determine that the query is being mis-used
> as a filename extension. It is well known that intermediaries tend not
> to cache responses to requests containing queries.

That used to be true, particularly for Squid, but is no longer the case. Even so, other intermediaries don't cache anything by default; I don't see how that would support your argument either.

> The use of a
> filename extension as a filename extension is well understood by
> intermediaries, whereas using a query as a filename extension is not.

IMO any intermediary making such a distinction is broken. But in any case, I'd be interested which ones this applies to?

Best,
Stefan

> -Eric
> > The use of a
> > filename extension as a filename extension is well understood by
> > intermediaries, whereas using a query as a filename extension is not.

As Stefan pointed out: if an intermediary knows that the specific resources provided by your server work in that way, that is something that breaks the uniform interface. The HTTP protocol + URIs did not specify that; that is private knowledge from your own system, so your intermediary will break with other systems.

In reality, a lot of people will use it because they find content negotiation through any headers that affect the Vary response header difficult to implement. Creating a pure Rails app, conneg through Accept actually works only partially because of well-known problems in some browsers, which is a pity: it works for the browser, but if your client and intermediary layers work properly, your application will not. I would go for the Accept, Accept-Language, User-Agent and Alternates headers (with a hypermedia representation in the response body when providing a 406).

The question would be: why do people still go for the .xml, .json and ?format alternatives? In my opinion, these alternatives only exist because of the lack of a REST client API that supports conneg. Developers use HTTP client libraries to access resources, believing they are benefiting from REST advantages; therefore they do no conneg at all, just GET whatever.xml. Their libraries do not support content negotiation unless you do it on your own.

But now my answer will start to run off the desired topic and get into what I would love to see and implement in trendy HTTP client libraries in order to make them REST, not just HTTP.

Regards

Guilherme Silveira
Let's put this much more simply. Say we have:

/resource?format=pdf
/resource?format=html

What happens if I curl each one in succession? Well, curl will save the first as 'resource' and the second as 'resource.1'. If conneg is enabled at /resource, and we use curl to send appropriate Accept headers, it will still save the files as 'resource' and 'resource.1', because curl ignores the query -- whether it's part of the requested URL or sent in a Content-Location response header.

Question: Is curl broken, or is the server getting RFC 3986 wrong, thereby violating REST's Identification of Resources constraint, which requires us (for all intents and purposes) to obey RFC 3986?

"All URI references are parsed by generic syntax parsers when used. (...) URI scheme specifications can define opaque identifiers by disallowing use of slash characters, question mark characters, and the URIs "scheme:." and "scheme:..""

The HTTP scheme is *not* opaque, because it allows '/', '.', '..' and '?'. I don't care if the generic-syntax URI parser is in curl or in an intermediary's client connector or in the origin's server connector. It must handle the request in a non-generic fashion (i.e. by using mod_rewrite) to correlate a query with a file format, whereas using a filename extension is an unambiguous method of solving this problem, which has the benefit of "flowing naturally, in harmony with the system". To the extent that curl will know to save resource.pdf and resource.html to disk, as will anything based on generic URI parsing. The saved PDF representation may be opened in the proper application automatically.

The common understanding encapsulated in RFC 3986 is that, "The query component contains non-hierarchical data...". I contend that the format of a representation is part of a system's hierarchical data.
The proof is right there in curl, which sees no hierarchy when it's saving 'resource' and 'resource.1' to disk, while obviously recognizing .pdf and .html as part of a hierarchy of representations, whether by direct dereferencing or by presence in a Content-Location response header.

The demo I recently posted shows exactly what is meant by URI opacity in REST. The client-side XSLT code (which describes the API) could care less about the URI allocation scheme, aside from sometimes checking for the presence of a fragment (like '/' and '?', the octothorpe is a reserved character in URIs), or generating relative URLs based on the hierarchy of my allocation scheme. But it's still treating URIs opaquely -- links are derived from the link relations in the source documents, which are read from @href's. There is no analysis of those @href contents by keyword or filename extension to determine "resource type". Clients aren't required to know any specifics of a URI's pattern. That would be coupling, which is diametrically opposed to the notion of URI opacity in REST.

Don't get carried away by saying that my treating URIs as hierarchical, or testing whether they have fragments or not, amounts to a failure to treat URIs as opaque. Can you point to some constraint in REST that I'm breaking? Can you find any support from Roy to back up this notion that URI opacity means that query and filename extension are the same? Are there any generic URI parsers you can point me to, which treat query strings opaquely? Or do they all consider '?' to be a reserved character?

I simply find no support in REST for this extreme notion that URIs can't be treated as hierarchical, or that it's an error to consider a query string any differently than a filename extension. In fact, the only reference I can find in REST to anything being "opaque" is cookies.
So the onus is not on me to prove that Roy isn't being clear as daylight when he says that "query is not a substitute for identification of resources", but on you who think it could possibly mean any different, to support *your* arguments. :-)

-Eric
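The filename behavior Eric attributes to curl -- deriving an output name from the last path segment and ignoring the query entirely -- can be sketched as follows. This is not curl's actual source, just the generic-parsing behavior described above, and the fallback name is an assumption:

```python
from urllib.parse import urlsplit
import posixpath

def save_name(url: str) -> str:
    """Derive an output filename from the last path segment of the URL;
    the query string plays no part in the name (the behavior described
    above for generic URI parsers)."""
    path = urlsplit(url).path
    return posixpath.basename(path) or "index"  # fallback name is assumed

print(save_name("http://example.org/resource.pdf"))         # resource.pdf
print(save_name("http://example.org/resource?format=pdf"))  # resource
```

With the extension in the path, the two variants save to distinct, meaningfully-named files; with the format in the query, both variants collapse to the same name.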
> The demo I recently posted shows exactly what is meant by URI opacity
> in REST. The client-side XSLT code (which describes the API) could
> care less about the URI allocation scheme, aside from sometimes
> checking for the presence of a fragment (like '/' and '?', the
> octothorpe is a reserved character in URIs), or generating relative
> URLs based on the hierarchy of my allocation scheme.

There is a REST mismatch in my XSLT code: sometimes I'm doing substring-before($known-string) when generating URLs. This is a pragmatic solution to the limitations of XPath 1 -- I'd rather do substring-before(last-instance-of-'/'), which would be a more opaque handling of the URL, but the resulting code would be more complex than would be worth maintaining. So a comment is placed in the code, noting the deficiency.

-Eric
For those who have no hypermedia support at all: Almost There REST?

> In a sense, the APIs in question do make use of hypermedia. They just use
> it at design time by making the server's state machine static. This
> information is then published as *hypermedia* for *design time* consumption
> instead of runtime consumption.

Unfortunately, that sounds like all REST framework manuals I've read so far (including my own) that include hypermedia support: late binding on URIs but early binding on protocol design.

Noting that binding is always there - it's just a matter of how bound you are - calling it "Resource Oriented and Coupled" might be too harsh.

I believe your current series of blog posts might help by creating procurement examples with less binding in that way.

Regards

Guilherme Silveira
Caelum | Ensino e Inovação
http://www.caelum.com.br/

On Wed, Feb 17, 2010 at 10:04 AM, Jan Algermissen <algermissen1971@...> wrote:
> Additional thought:
>
> In a sense, the APIs in question do make use of hypermedia. They just use
> it at design time by making the server's state machine static. This
> information is then published as *hypermedia* for *design time* consumption
> instead of runtime consumption.
> [...]
> I'm not sure if design-time, "static hypermedia" makes sense. From the
> message you quoted:
> Hypermedia means the placement of controls within the
> presentation of information
> If the links and controls aren't in the data communicated at runtime, then
> I think it's a stretch to call it "hypermedia".

If I got it right, the API in question uses hypermedia to represent state transitions that were agreed upon a priori by both server and consumers.

I believe the difference here is that if such state transitions were made through resource handling on the HTTP level, using well known and accepted relation types, your server has published a resource that can be used by clients that did not need to write specific code to deal with it. So, hypermedia is there, it's just that the client is consuming it in a less-but-still-early-binding fashion.

Jan, is that the case?

Regards

> How about "Pure Data Resources" or "Data-Only Resources"?
>
> Andrew
Query strings shouldn't have anything to do with content negotiation, and it is an error to couple the two together. Consider my not-yet-ready-for-prime-time Simple Web Service: say I want to know what day of the week St. Patrick's Day falls on this year, in English:

http://en.wiski.org/date?iso=2010-03-17

Wednesday? That sucks. Friday's best...

Notice the output is XHTML. Well, that's just for now. Eventually, that URI will negotiate between these two variants (these URLs will appear in Content-Location):

http://en.wiski.org/date.xht?iso=2010-03-17
http://en.wiski.org/date.json?iso=2010-03-17

The query won't be considered in conneg, only the path. The query is handled separately. The best advice I can give anyone asking about conneg is to keep the query out of it, if for no other reason than for separation of concerns -- hierarchical data like output format belongs in the path.

The following request will eventually redirect based on language negotiation:

http://wiski.org/date?iso=2010-03-17

The workflow of redirecting to the proper language prefix, then negotiating an output format, only works if the query is passed along as an appendage. To consider the query within those workflows would go against the notion of URI opacity, and require the implementation of additional steps. As it is, the URIs are self-explanatory: a resource, /date, having two variants /date.xht and /date.json, is what's being queried.

The /date resource is described simply as a converter of ISO 8601 date strings to human language. What machine language that human language is returned in isn't part of the /date resource description. It has no relevance to the query being made. It's part of the protocol layer. If I make an OPTIONS request (eventually) to http://en.wiski.org/date?iso=2010-03-17, I'll get back a list containing /date.xht?iso=2010-03-17 and /date.json?iso=2010-03-17 in a 300 Multiple Choices response.
In the future, I can omit the Accept header and request /date.json directly for any query.

Considering the query when performing conneg adds an entire layer of complexity. It's best ignored, in favor of conneg based on the Accept header, with variants as their own resources with their own URLs using filename extensions. After all, output format is part of the protocol layer (late binding of representation to resource). Protocol operations (conneg) shouldn't involve query strings -- only headers. If a server can't perform content negotiation before parsing the query, then content negotiation isn't being performed at the protocol layer; it's based on non-opaque knowledge of the query string itself, and that knowledge cannot be deciphered by any intermediary.

So I can't stress enough: base conneg on Accept headers and filename extensions, never query strings. Query strings shouldn't have anything to do with protocol-layer concerns; they're strictly application-layer concerns to be considered *after* content negotiation has occurred. In the case of my /date service, conneg may occur twice before the server even knows what handler to pass that query *to*. As opposed to having a single handler deciphering the query string and deciding output format at the application layer, I dispatch the query to a handler as determined by the Accept header. Huge architectural difference, there.

Finally, no media type identifier describes registered "queries" for the media type. They do, however, in many cases, describe registered filename extensions. Using query as filename extension simply goes against common best practice, whereas REST embraces common best practices.

-Eric
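The dispatch Eric describes -- pick a variant from the Accept header before the query is ever inspected -- can be sketched roughly like this. The variant table and media-type matching are deliberately simplistic and illustrative; real conneg would honor q-values:

```python
# Map of media types to variant URLs; illustrative only, mirroring the
# /date.xht and /date.json variants discussed above.
VARIANTS = {
    "application/xhtml+xml": "/date.xht",
    "application/json": "/date.json",
}

def negotiate(path: str, accept: str) -> str:
    """Pick a variant URL from the Accept header.  The query string is
    never inspected here; it is carried along as an appendage.
    (No q-value handling -- a sketch, not a real negotiator.)"""
    for media_type, variant in VARIANTS.items():
        if media_type in accept:
            return variant
    return path  # no acceptable variant; a real server would send 406

print(negotiate("/date", "application/json"))  # /date.json
```

The point of the sketch is only where the decision happens: the handler is chosen from headers alone, and the query is attached afterwards, unmodified.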
On Feb 18, 2010, at 4:12 AM, Guilherme Silveira wrote:
> For those who have no hypermedia support at all: Almost There REST?
>
> > In a sense, the APIs in question do make use of hypermedia. They just use it at design time by making the server's state machine static. This information is then published as *hypermedia* for *design time* consumption instead of runtime consumption.
>
> Unfortunately, sounds like all REST framework manuals I've read so far (including my own) that include hypermedia support: late binding on URIs but early binding on protocol design.
>
> Noting that binding is always there - it's just a matter of how bound you are

Sorry, I am having trouble understanding. Can you re-word?

> - calling it "Resource Oriented and Coupled" might be too harsh.
>
> I believe your current series of blog posts might help creating procurement examples with less binding in that way.

About to show how I think that would work, but I keep being distracted by other things.

Jan

> Regards
>
> Guilherme Silveira
> Caelum | Ensino e Inovação
> http://www.caelum.com.br/
> [...]

-----------------------------------
Jan Algermissen, Consultant
NORD Software Consulting

Mail: algermissen@...
Blog: http://www.nordsc.com/blog/
Work: http://www.nordsc.com/
-----------------------------------
On Feb 18, 2010, at 4:15 AM, Guilherme Silveira wrote: > > > I'm not sure if design-time, "static hypermedia" makes sense. From the message you quoted: > Hypermedia means the placement of controls within the > presentation of information > If the links and controls aren't in the data communicated at runtime, then I think it's a stretch to call it "hypermedia". > If I got it right, the API in question uses hypermedia to represent state transitions that were agreed upon a priori by both server and consumers. Yes, that is what I mean. I know it is not really correct to put it that way, but it occurred to me that a description of the available URIs at design time is in some sense similar to such a description at run time. It is 'just' the amount of design time coupling induced that differs. Consider WADL at runtime (which is RESTful[1] because you can consider it a form) vs. WADL at design time. If we manage to make designers of 3/4 APIs aware of the coupling they introduce and show them the tradeoffs involved, that would be beneficial in many ways: - clear up the confusion in general - emphazise what RESTfulness means - educate how coupled 3/4 systems actually are - provide a ground for analyzing the tradeoffs of 3/4 systems, e.g. I might choose not to go through the 'pain' of the media type design and maintenance and instead put my resources in place, descipe the URIs and payloads and ready is my service. As long as I make *very* clear that there is a lot of coupling going on and that I effectively cannot change my service without consulting the client owner the resukt will be a manageable system. For some situations, the very short time to production and benefits in 'debugability' make 3/4 systems far superior to any form of RPC. Just be sure you do not think it is REST. 
> > I believe the difference here is that if such state transitions were made through resource handling on the HTTP level, using well-known and accepted relation types, your server has published a resource that can be used by clients that did not need to write specific code to deal with it. > So, hypermedia is there, it's just that the client is consuming it in a less-but-still-early-binding fashion. > > Jan, is that the case? Hmm, sorry - I did not get the essence of your point. Try again? Jan [1] Though I do not like to use such a generic 'form' mechanism; I prefer business-level documents > > Regards > > > How about "Pure Data Resources" or "Data-Only Resources"? > > Andrew ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
In procurement domains/processes there usually exist order documents and order-change documents (see UBL, for example). Suppose I create an order by POSTing an order document to some order-accepting resource:

POST /orders/
Content-Type: application/procurement+xml;type=order

<order ... />

201 Created
Location: /orders/1
Content-Location: /orders/1
ETag: "1234"
Content-Type: application/procurement+xml;type=order

<order .../>

What do you think about viewing an order-change document as a diff I can apply to an order:

PATCH /orders/1
Content-Type: application/procurement+xml;type=order-change
If-Match: "1234"

<orderChange .../>

200 OK
Content-Location: /orders/1
ETag: "1235"
Content-Type: application/procurement+xml;type=order

<order .../>

I think the quite open semantics of PATCH at least do not forbid that. Any thoughts? Jan ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
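Jan's If-Match/ETag-guarded PATCH can be sketched in a few lines. This is a hypothetical server-side model, not UBL or any real framework: the order is a plain dict standing in for the XML document, and PreconditionFailed stands in for a 412 response.

```python
class PreconditionFailed(Exception):
    """Stands in for a 412 Precondition Failed response."""

class OrderResource:
    def __init__(self, order):
        self.order = order          # dict standing in for the order XML
        self.etag = '"1234"'

    def patch(self, if_match, change):
        # Reject the diff if the client's ETag is stale.
        if if_match != self.etag:
            raise PreconditionFailed(if_match)
        self.order.update(change)   # apply the order-change "diff"
        self.etag = '"%d"' % (int(self.etag.strip('"')) + 1)
        return self.order, self.etag

res = OrderResource({"item": "widget", "quantity": 10})
order, etag = res.patch('"1234"', {"quantity": 12})
print(order["quantity"], etag)   # 12 "1235"
```

A second PATCH sent with the old ETag '"1234"' would now raise PreconditionFailed, which is exactly the lost-update protection the If-Match header buys in Jan's exchange.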
On Feb 18, 2010, at 11:15 AM, Sebastien Lambla wrote: > Quick note on this... > > Hypermedia controls are part of a hypermedia document. That's things linking to things. > > If you pre-define statically all the URIs for your state transitions, you *do not have* hypermedia controls. You have static endpoints a la SOAP. And that's because you don't have hypermedia documents.

Yes, sure, I am bending the point. But a design-time form is still not very different from a run-time form. Except for the induced coupling, of course. I am not trying to say that the other APIs are somehow also REST. I am trying to explicitly label them as something, so that we can say: Foo APIs are Foo APIs, *NOT* REST APIs. If you have two children, you do not call one of them Jim and the other one "Jim with black hair" or "Small Jim" or whatever.

Jan

> *if* you delegated some of those endpoint definitions to some discovery mechanism a la UDDI, then maybe you've managed to reintroduce the hypermedia documents, but then you've gone full circle in reinventing the wheel. > [...]

----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
Quick note on this...

Hypermedia controls are part of a hypermedia document. That's things linking to things.

If you pre-define statically all the URIs for your state transitions, you *do not have* hypermedia controls. You have static endpoints a la SOAP. And that's because you don't have hypermedia documents.

*If* you delegated some of those endpoint definitions to some discovery mechanism a la UDDI, then maybe you've managed to reintroduce the hypermedia documents, but then you've gone full circle in reinventing the wheel.

> CC: andrew.wahbe@...; rest-discuss@yahoogroups.com > To: guilherme.silveira@... > From: algermissen1971@... > Date: Thu, 18 Feb 2010 08:38:01 +0100 > Subject: Re: [rest-discuss] Re: A Name for "3 out of 4 REST constraints" APIs? > [...]
On Thu, Feb 18, 2010 at 21:15, Sebastien Lambla <seb@...> wrote: > Hypermedia controls are part of a hypermedia document. That's things linking to things. Linking is only part of it, of course. How about: Interactive Resource Orientation vs. Non-interactive? (Interactive at least denotes a semblance of non-static interfaces.) Regards, Alex -- Project Wrangler, SOA, Information Alchemist, UX, RESTafarian, Topic Maps --- http://shelter.nu/blog/ --- http://www.google.com/profiles/alexander.johannesen ---
On Thu, Feb 18, 2010 at 2:20 AM, Jan Algermissen <algermissen1971@...> wrote: > In procurement domains/processes, there usually exist order document and order-change > documents (see UBL for example). [...] > What do you think about viewing an order-change document as a diff I can apply to an order: > > PATCH /orders/1 One problem might be that frequently order changes should be resources (usually called Change Orders) in their own right. For example, sometimes order changes invoke extra charges. In that case, the trading partners might want three resources: the original order, the change order, and the changed order (after the change was applied).
Good point. It becomes a matter of media type design -- can you create a media type which you can dereference as a 'change order' *and* use as a diff format for PATCH? -Eric Bob Haugen wrote: > On Thu, Feb 18, 2010 at 2:20 AM, Jan Algermissen > <algermissen1971@...> wrote: > > In procurement domains/processes, there usually exist order > > document and order-change documents (see UBL for example). > [...] > > What do you think about viewing an order-change document as a diff > > I can apply to an order: > > > > PATCH /orders/1 > > One problem might be that frequently order changes should be resources > (usually called Change Orders) in their own right. For example, > sometimes order changes invoke extra charges. > > In that case, the trading partners might want three resources: the > original order, the change order, and the changed order (after the > change was applied). >
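To make Eric's question concrete, here is a hedged sketch of what such a dual-purpose media type might look like: a made-up <orderChange> format that could be served from its own URI as a Change Order resource *and* be applied as a PATCH diff. The element and attribute names are invented for illustration; UBL's real documents are far richer.

```python
import xml.etree.ElementTree as ET

# A current order representation and a change-order document.
# Both are hypothetical, minimal stand-ins for the real media type.
ORDER = '<order><line id="1" qty="10"/></order>'
CHANGE = '<orderChange><set line="1" qty="12"/></orderChange>'

def apply_change(order_xml, change_xml):
    """Apply each <set> in an orderChange to the matching order line,
    i.e. use the change-order document as a PATCH diff format."""
    order = ET.fromstring(order_xml)
    for op in ET.fromstring(change_xml).findall("set"):
        for line in order.findall("line"):
            if line.get("id") == op.get("line"):
                line.set("qty", op.get("qty"))
    return ET.tostring(order, encoding="unicode")

print(apply_change(ORDER, CHANGE))
```

The same CHANGE document could also be exposed at its own URI (Bob's "three resources" point), since nothing about being a diff stops it from being dereferenceable.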
rest-wheas

without hypermedia engine of application state

_________________________________________________ Melhores cumprimentos / Beir beannacht / Best regards António Manuel dos Santos Mota http://card.ly/amsmota _________________________________________________ 2010/2/18 Alexander Johannesen <alexander.johannesen@...> > [...]
Or just rest-wh

_________________________________________________ Melhores cumprimentos / Beir beannacht / Best regards António Manuel dos Santos Mota http://card.ly/amsmota _________________________________________________ 2010/2/18 António Mota <amsmota@...> > rest-wheas > > without hypermedia engine of application state > [...]
RESTless
On Feb 18, 2010, at 1:19 PM, Bob Haugen wrote: > RESTless :-) But having REST in the name obfuscates the message (IMHO) that it is something different. Jan ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
Well, with REST-WHEAS that doesn't happen, because:

- the "-" can be understood as "minus"
- WHEAS, for REST-aware people, is similar enough to HATEOAS to be immediately recognizable
- 3/4 of REST is almost REST, so it is convenient that people understand it is a valid approach towards a future full implementation of REST.

So people can say: my app is REST-WHEAS until I have the time to implement the rest... the rest of REST...

_________________________________________________ Melhores cumprimentos / Beir beannacht / Best regards António Manuel dos Santos Mota http://card.ly/amsmota _________________________________________________ 2010/2/18 Jan Algermissen <algermissen1971@...> > > On Feb 18, 2010, at 1:19 PM, Bob Haugen wrote: > > > RESTless > > :-) > > But having REST in the name obfuscates the message (IMHO) that it is something different. > > Jan > [...]
> <http://en.wikipedia.org/wiki/George_Forman>. <http://en.wikipedia.org/wiki/George_Foreman>. Sincerely yours, the editor.
Jan Algermissen (in <http://tech.groups.yahoo.com/group/rest-discuss/message/14786>): > If you have two [children], you do not call one of them ["Jim"] and the other one "Jim with black hair" or "Small Jim" or whatever. <http://en.wikipedia.org/wiki/George_Forman>.
On Feb 18, 2010, at 2:49 PM, Etan Wexler wrote: > Jan Algermissen (in > <http://tech.groups.yahoo.com/group/rest-discuss/message/14786>): > >> If you have two [children], you do not call one of them ["Jim"] and the other one "Jim with black hair" or "Small Jim" or whatever. > > <http://en.wikipedia.org/wiki/George_Forman>. :-) I once played pool with two huge white South Africans who were *identical twins*. One of them said: "Hi, I am Neil and this is my brother Neil" Jan ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
> I think the considerably open semantics of PATCH do at least not forbid that. IMHO, PATCH does not define which media types can be used, so content negotiation should handle it as usual: you can either accept a type=order with a partial representation of your resource, or a type=order-change with some change-specific values. Regards Guilherme
My brother is also called Ian. We differ by middle name. ian On 18 Feb 2010, at 14:49, Etan Wexler <yahoo.com@...> wrote: > Jan Algermissen (in > <http://tech.groups.yahoo.com/group/rest-discuss/message/14786>): > > > If you have two [children], you do not call one of them ["Jim"] > and the other one "Jim with black hair" or "Small Jim" or whatever. > > <http://en.wikipedia.org/wiki/George_Forman>. >
Hello Jan. Let's clarify a little about this:

1. REST is much more than using HTTP and using hypermedia as an engine to control state. That means there are several other constraints. The hypermedia use is just one of the four constraints needed to achieve the use of a universal interface.

2. The hypermedia constraint is there because REST was aimed to be a style for systems in a networked environment whose main interaction is the transfer of large hypermedia documents. I have always thought that, if you are not into that kind of system, you should not use REST as it is, but you may use a derivative.

3. REST is a style, detailed with the reasoning for each constraint, which means you can identify which constraints work for your case and use them. You may then be using a subset of the style, perfectly fine, that fits your application. But your application is then not RESTful.

4. So, what you see is not only people forgetting about the hypermedia constraint, but actually people using the subsets of the style they need. The name you suggest looking for is then not related to the lack of hypermedia thingies, but to an incomplete use of REST.

That brings me to the actual problem facing what you are proposing: it may be too many names! Actually, some may be completely new styles aimed at different things. I can use a representational style of resources without stating the workflow as a state machine. It may be transactional and atomic. It may be the RETA. Or, we can use the state machine for workflow, but using RPC as the step unit. You have SMARPC. What about the ones that use REST as CRUD, but with resources? CRUDRES. If someone is not using hypermedia as the state machine support, it may be they are not using a state machine at all. I have seen APIs that are more like RPC. So, it is Representational, no State, no Transfer, but RPC. RERPC. And what if they have a state-machine workflow on the server, but no hypermedia? That is the tricky one, since the REST name fits, only that it is not intended for hypermedia transfer, but for transactions. RESTX may be the name. Etc.

So, and that is something I was planning to mention in a post when I have some time, we may actually review the needs of our applications and define a style, based on REST, that fits the app's needs, instead of forcing the app to support REST just to have that name in the credits. We may find there are fewer than a dozen totally different styles, all REST children, but each with a name of its own. With all the rationale on how to use them, it may be a great help for all those that are not quite there yet.

William Martinez Pomares.

--- In rest-discuss@yahoogroups.com, Jan Algermissen <algermissen1971@...> wrote: > > Given that the creation of APIs that ignore the hypermedia constraint is undeniably an ongoing activity, wouldn't it be nice to give those a name in order to differentiate them from RESTful APIs? > [...]
On Feb 18, 2010, at 5:59 PM, William Martinez Pomares wrote: > Hello Jan. > Let's clarify a little about this: Yes, those are valid points. But aren't there 'REST abuse patterns' that are sufficiently common to warrant being assigned a name? Or we call them '!REST' (not REST), or just HTTP APIs. Jan > [...] ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
Eric J. Bowman wrote: > I contend that the format of a representation is part of a > system's hierarchical data. The proof is right there in curl, > which sees no hierarchy when it's saving 'resource' and > 'resource.1' to disk, while obviously recognizing .pdf > and .html as part of a hierarchy of representations, whether > by direct dereferencing or by presence in a Content-Location > response header. That's an odd way to look at it. The "format of a representation is part of a system's hierarchical data" if and only if the path portion of the URI contains information to that effect. If not, it doesn't. The fact that some domains specifically do this does not require them all to do so. Curl is suboptimal in this respect since it does not differentiate responses based on the query string component. > But it's still treating URIs opaquely -- links are derived from > the link relations in the source documents, which are read from > @href's. There is no analysis of those @href contents by keyword > or filename extension to determine "resource type". Clients > aren't required to know any specifics of a URI's pattern. That > would be coupling, which is diametrically opposed to the notion > of URI opacity in REST. Absolutely correct. That behavior should be extended to query string components of the URI as well. > Don't get carried away by saying that my treating URIs as > hierarchical or testing whether they have fragments or not, > amounts to a failure to treat URIs as opaque. Can you point > to some constraint in REST that I'm breaking? Can you find any > support from Roy to back up this notion that URI opacity means > that query and filename extension are the same? Sure. I used to take your position but corrected myself [1] back in 2005, complete with Fielding quotes and RFC 3986 references ;) > Are there any > generic URI parsers you can point me to, which treat query > strings opaquely? Or do they all consider '?' to be a reserved > character? The '?' 
character does have a special meaning: it separates the hierarchical portion of the URI from the opaque part. But the opaque part is still part of the identifier. > So the onus is not on me to prove that Roy isn't being clear > as daylight when he says that "query is not a substitute for > identification of resources", but on you who think it could > possibly mean any different to support *your* arguments. :-) When I read that statement from Roy [2], I don't see any indication that it's about "query" as a URI component; instead, it is regarding "query" as a means of fetching a list of resources. Roy's point seems to be that returning 38 items in a list within a single response is no substitute for having distinct URIs for each of those 38 individual resources. Robert Brewer fumanchu@... [1] http://groups.google.com/group/cherrypy-devel/msg/0fcc62df334bc9ed [2] http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven#comment-720
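Robert's distinction shows up in any generic URI parser: the '?' marks where the hierarchy ends, but the query stays part of the single identifier. A quick illustration with Python's standard-library parser (the example URIs are made up):

```python
from urllib.parse import urlsplit

a = urlsplit("http://example.org/orders/1?format=pdf")
b = urlsplit("http://example.org/orders/1")

assert a.path == b.path == "/orders/1"   # same hierarchical portion
assert a.query == "format=pdf"           # the part past '?', opaque to clients
assert a.geturl() != b.geturl()          # ...yet two distinct identifiers
```

So a client (or cache) that drops or ignores the query component, as curl does when naming saved files, is conflating two different resources.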
On Thu, Feb 18, 2010 at 2:38 AM, Jan Algermissen <algermissen1971@...> wrote: > > On Feb 18, 2010, at 4:15 AM, Guilherme Silveira wrote: > [...] > > If we manage to make designers of 3/4 APIs aware of the coupling they > introduce and show them the tradeoffs involved, that would be beneficial in > many ways... I get what you mean. I was just objecting to the term "static hypermedia". To me that's confusing, because the term "hypermedia" implies controls in *run-time* data. We are talking about the system architecture here (which implies run-time design), not the code constructs -- so the fact that a hypermedia document was used to, say, generate code is a different matter. Andrew -- Andrew Wahbe
> > Or we call them '!REST' (not REST) or just HTTP-APIs. > I would suggest, given the volume of email this generates: unREST.
Hi guys,
I'm trying to understand HATEOAS properly and aim to embrace it fully. However, the lack of client-side libraries that embrace this is impeding me for the moment. The way I see it, all clients should only be made aware of one URI, say "/". The client then derives all other URIs from that link. As such, I was hoping to do something roughly like:
from("/").follow("#users_link").fill_form("search_user", {"id": "someid"}).follow("#user_link").follow("#friends_link")
The goal of the previous command is to get the friends of some user having an ID of 'someid'. It starts by going to the given URL; then, from the page it got, it follows a certain link which redirects it to the resource for a list of users. It then searches for the user, probably by filling in a form, submits it, and goes to the user's page. From the user's page, there is a link to the friends page, which is followed and eventually processed.
In reality, of course, this shouldn't necessitate multiple calls to the server when invoked multiple times, since previous results will have been cached and processed on the client side. Only when the cache expires should there be another request. I'm not really sure whether RESTful clients that respect HATEOAS do it this way, or whether they should in the first place. If they do, are there tools that exist for this?
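A minimal sketch of the client-side caching I have in mind (a toy expiry cache with a fixed max age; real HTTP caching would of course honor Cache-Control/Expires headers per response):

```python
# Toy client-side cache: reuse a cached representation until its freshness
# lifetime expires, then re-fetch from the origin server.
import time

class ExpiringCache:
    def __init__(self, max_age):
        self.max_age = max_age          # freshness lifetime in seconds
        self._entries = {}              # url -> (fetched_at, body)

    def get(self, url, fetch):
        entry = self._entries.get(url)
        if entry is not None and time.time() - entry[0] < self.max_age:
            return entry[1]             # still fresh: no request to the server
        body = fetch(url)               # stale or absent: go to the network
        self._entries[url] = (time.time(), body)
        return body
```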
On a side note, content type negotiation should be preconfigured before doing the call I stated above.
Thanks,
Jan Vincent Liwanag
jvliwanag@...
Comments Below...

--- In rest-discuss@yahoogroups.com, Jan Algermissen <algermissen1971@...> wrote:
> On Feb 18, 2010, at 5:59 PM, William Martinez Pomares wrote:
>
> > Hello Jan.
> > Let's clarify a little about this:
>
> Yes, those are valid points.
>
> But aren't there 'REST abuse patterns' that are sufficiently common to warrant being assigned a name?
>
> Or we call them '!REST' (not REST) or just HTTP-APIs.
>
> Jan

Totally agree. But one thing is a REST abuse pattern, and another is a new style that is similar to, but not the same as, REST. It should not reference the REST name, but it should have a name, and not only that, also a definition of constraints, the goal or target, the consequences, etc. -- just like documenting a new style.

BTW, one thing is a style and another is an API. A style is more like a pattern, and thus one name should cover many APIs. REST is an architectural style, created to guide the creation of systems, not APIs. An API is, as the name indicates, an interface to communicate with an application or system behind it. It makes sense if the application is RESTful, but more often than not the application is a SOA or a plain distributed object-oriented one that someone wants to expose to the web. I see no REST anywhere there.

Cheers,
William.
Funny, now that I am understanding what Jan and others are saying about HATEOAS, REST and why so many "rest" APIs are not truly REST, I too think we need some sort of new API name for the "almost REST but not quite" HTTP API format many of us are using. It's basically a better/easier XML-RPC and Web Services, while almost being REST as well.
--- On Thu, 2/18/10, William Martinez Pomares <wmartinez@...> wrote:
From: William Martinez Pomares <wmartinez@...>
Subject: [rest-discuss] Re: A Name for "3 out of 4 REST constraints" APIs?
To: rest-discuss@yahoogroups.com
Date: Thursday, February 18, 2010, 7:14 PM
Comments Below...
--- In rest-discuss@yahoogroups.com, Jan Algermissen <algermissen1971@...> wrote:
>
>
> On Feb 18, 2010, at 5:59 PM, William Martinez Pomares wrote:
>
> > Hello Jan.
> > Let's clarify a little about this:
>
> Yes, those are valid points.
>
> But aren't there 'REST abuse patterns' that are sufficiently common to warrant to be assigned a name?
>
> Or we call them '!REST' (not REST) or just HTTP-APIs.
>
> Jan
>
>
Totally agree.
But one thing is a REST Abuse pattern, and another one is a new style that is so similar, but not the same, as REST. Should not have reference to REST name, but should have a name, and not only that, also a definition of constrains, the goal or target, the consequences, etc. Just like documenting a new style.
BTW, one thing is a style and another one an API. Style is more like a pattern, thus should have a name for many APIs. So, REST is an architectural style, thus created to guide the creation of systems, not APIs. An API is, as the name indicates, an interface to communicate with an application or system behind. It makes sense the application is RESTFull, but more than often, the application is a SOA or a plain distributed object oriented one, that someone wants to expose to the web. I see not REST anywhere there.
Cheers
William.
I think one problem you may run into in trying to be truly RESTful is assuming that the response will contain the link you wish to follow. From your multiple method invocation (assuming I am reading correctly that it is one line of code with multiple method calls), the 2nd method invocation assumes the link is part of the response. What happens if it's not? For whatever reason the server decided not to include it, be it an error, invalid authorization, etc.?
--- On Thu, 2/18/10, Jan Vincent <jvliwanag@...> wrote:
From: Jan Vincent <jvliwanag@...>
Subject: [rest-discuss] HATEOAS and Cache
To: rest-discuss@yahoogroups.com
Date: Thursday, February 18, 2010, 4:56 PM
Hi guys,
I'm trying to understand HATEOAS properly and aim to embrace it fully. However, the lack of client side libraries that embrace this is impeding me for the moment. The way I see it, all clients must be only be made known of one URI, say "/". The client then derives the URIs from that link. As such, I was hoping to do something roughly like:
from("/").follow("#users_link").fill_form("search_user", {"id": "someid"}).follow("#user_link").follow("#friends_link")
The goal of the previous command was to get the friends of some user having an ID of 'someid'. It starts by going to the given URL, then from the page it got, follows a certain link which redirect it to the resource for a list of users. It then searches for the user, probably filling up a form, submits it, and goes to the user's page. From the user's page, there is a link to the friends page, and that is followed and eventually processed.
In reality of course, this shouldn't really necessitate multiple calls to the server if called multiple times since previous results have been cached and processed on the client side. Only when the cache expires, should there be an attempt to request again. I'm not really sure if RESTful clients that respect HATEOAS do it this way, and should they in the first place. If they do, are there tools that exist for this?
On a side note, content type negotiation should be preconfigured before doing the call I stated above.
Thanks,
Jan Vincent Liwanag
jvliwanag@gmail.com
On Feb 19, 2010, at 7:45 AM, Kevin Duffey wrote:
>
>
> I think one problem you may run into in trying to be truly RESTful is assuming that the response will contain the link you wish to follow. From your multiple method invocation (assuming I am reading correctly that it is one line of code with multiple method calls), the 2nd method invocation assumes the link is part of the response. What happens if it's not?
Exactly.
This is why hiding a state traversal behind an OO interface violates the hypermedia constraint[1]. It hides the fact that the client should accept the case of the link not being present as part of normal communication. That is: not throw an exception as the default behaviour.
Jan
[1] Unless you have a media type that mandates the link to be there in valid instances, e.g.
GET /sth
Accept: application/foo
200 OK
Content-Type: application/foo
<foo href=""/>
If valid foo messages MUST contain the href attribute then you could hide this in an OO call. But few media types ever do this, and it can IMHO be considered bad media type design.
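A sketch of the distinction (helper names and the 'friends' link relation are made up for illustration): unless the media type spec mandates the link, the client should treat its absence as a normal outcome, not an exception:

```python
# A missing link is normal communication unless the media type requires it.
import re

def find_link(body, rel):
    """Return the href of the first link with the given rel, or None."""
    m = re.search(r'<[^>]*rel="%s"[^>]*href="([^"]+)"' % rel, body)
    return m.group(1) if m else None

def next_step(body):
    href = find_link(body, "friends")
    if href is None:
        # The server chose not to offer this transition (authorization,
        # resource state, a redesigned state machine, ...). Report it as an
        # outcome rather than raising.
        return ("goal-unreachable-from-here", None)
    return ("follow", href)
```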
> For what ever reason the server decided not to include it, be it an error, invalid authorization, etc?
-----------------------------------
Jan Algermissen, Consultant
NORD Software Consulting
Mail: algermissen@...
Blog: http://www.nordsc.com/blog/
Work: http://www.nordsc.com/
-----------------------------------
On Feb 19, 2010, at 1:56 AM, Jan Vincent wrote:
> Hi guys,
>
> I'm trying to understand HATEOAS properly and aim to embrace it fully. However, the lack of client side libraries that embrace this is impeding me for the moment. The way I see it, all clients must be only be made known of one URI, say "/". The client then derives the URIs from that link. As such, I was hoping to do something roughly like:
>
> from("/").follow("#users_link").fill_form("search_user", {"id": "someid"}).follow("#user_link").follow("#friends_link")
>
> The goal of the previous command was to get the friends of some user having an ID of 'someid'. It starts by going to the given URL, then from the page it got, follows a certain link which redirect it to the resource for a list of users. It then searches for the user, probably filling up a form, submits it, and goes to the user's page. From the user's page, there is a link to the friends page, and that is followed and eventually processed.
This is not a RESTful client implementation, because it is based on expectations about the application's state machine, and the client cannot have such expectations. The server might change the whole state machine at runtime, which is not an error condition but a feature of REST.
Think of it this way instead:
The client has an overall goal (unless it is a robot) and must, for each steady state the server puts it in, understand how to proceed toward that goal.
If the client knows that the server uses media types A, B and C[1] it must handle A, B and C for any steady state it reaches.
Jan
[1] In my opinion, the client must know the types at design time - otherwise, the client could not be coded in the first place
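To sketch what "handle A, B and C for any steady state" could look like in code (the handler table and media types here are illustrative, not a prescription):

```python
# Dispatch every response on its media type, known at design time.
HANDLERS = {
    "application/atom+xml": lambda body: ("parse-as-feed", body),
    "text/html":            lambda body: ("parse-as-html", body),
    "application/foo":      lambda body: ("parse-as-foo", body),
}

def handle_response(content_type, body):
    # Strip parameters such as "; charset=utf-8" before dispatching.
    media_type = content_type.split(";")[0].strip()
    handler = HANDLERS.get(media_type)
    if handler is None:
        # An unknown type is a dead end for this client, but still a valid
        # response -- report it rather than crash.
        return ("unsupported-media-type", media_type)
    return handler(body)
```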
On Feb 19, 2010, at 8:03 AM, Jan Algermissen wrote:
>
> [1] In my opinion, the client must know the types at design time - otherwise, the client could not be coded in the first place

Meant to add this link to some thoughts about this: http://www.nordsc.com/blog/?cat=4

Jan
True enough,
Doing:
> from("/").follow("#users_link").fill_form("search_user", {"id": "someid"}).follow("#user_link").follow("#friends_link")
Doesn't cut it since it assumes specific responses from the server. But the point of HATEOAS I believe is that links are indeed provided for me to traverse through. I may instead represent this as some form of tree, perhaps something like:
from("/").follow("#users_link").handle({
200: fill_form("search_user", ...)...
404: ...
})
On the other hand, I would rather do the previous one and handle exceptions instead. The client should know some information about the server's state machine right? And if it asks right, it should rightfully expect that it gets the content-type it requested for. How else would it be able to do its task?
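If it helps, the handle({...}) idea could be made concrete roughly like this (a sketch with made-up names; the point is the required default handler, so that unlisted status codes are still handled rather than raised):

```python
# A follow step that dispatches on status code, with a mandatory default so
# every valid HTTP response is handled as part of normal communication.
def follow(fetch, url, handlers, default):
    status, body = fetch(url)                  # fetch: url -> (status, body)
    return handlers.get(status, default)(status, body)
```

Usage would look like `follow(fetch, "/users", {200: on_ok, 404: on_missing}, default=on_unexpected)`.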
On Feb 19, 2010, at 3:05 PM, Jan Algermissen wrote:
>
> On Feb 19, 2010, at 8:03 AM, Jan Algermissen wrote:
>
>>
>> [1] In my opinion, the client must know the types at design time - otherwise, the client could not be coded in the first place
>
> Meant to add this link to some thoughts about this: http://www.nordsc.com/blog/?cat=4
>
> Jan
Jan Vincent Liwanag
jvliwanag@...
On Feb 19, 2010, at 8:19 AM, Jan Vincent wrote:
> True enough,
>
> Doing:
>
>> from("/").follow("#users_link").fill_form("search_user", {"id": "someid"}).follow("#user_link").follow("#friends_link")
>
> Doesn't cut it since it assumes specific responses from the server. But the point of HATEOAS I believe is that links are indeed provided for me to traverse through.
Yes, but you can make no assumption
a) about the media type returned
b) about what links, forms etc. you will find in there
> I may instead represent this as some form of tree, perhaps something like:
>
> from("/").follow("#users_link" ).handle({
> 200: fill_form("search_user" , ...)...
> 404: ...
> })
The 200 still does not mean that the form will be there (or that the response will be in a media type that you expect)
Also, what about 201, 202, 303, 204, ...? You have to handle all of those, too. Yes, the REST client side is hard...
>
> On the other hand, I would rather do the previous one and handle exceptions instead.
Well, you can, of course. But you must understand that what you handle as an exception does not mean the server behaves incorrectly, because any valid HTTP response is part of the contract. The exceptions would essentially only handle the client side's 'broken' implementation.
> The client should know some information about the server's state machine right?
No! That is the essence of the hypermedia constraint. It must look at each steady state in isolation and 'make the best of it' in a sense.
Note that this influences the issue of media type design substantially, because you can design types that make this very hard or types that make this easier.
> And if it asks right, it should rightfully expect that it gets the content-type it requested for.
Still, if it gets a 406 it should be able to handle that and not just dump an exception into the logs.
> How else would it be able to do its task?
By knowing the set of media types to expect and by handling every media type for every response.
Jan
On Feb 19, 2010, at 3:36 PM, Jan Algermissen wrote:
>
> On Feb 19, 2010, at 8:19 AM, Jan Vincent wrote:
>
>> True enough,
>>
>> Doing:
>>
>>> from("/").follow("#users_link").fill_form("search_user", {"id": "someid"}).follow("#user_link").follow("#friends_link")
>>
>> Doesn't cut it since it assumes specific responses from the server. But the point of HATEOAS I believe is that links are indeed provided for me to traverse through.
>
> Yes, but you can make no assumption
> a) about the media type returned
> b) about what links, forms etc. you will find in there
Can't I? If the server acts right, it should recognize my Accept header. If it doesn't have the Content-Type that I recognize, then my client gives up. It simply doesn't know how to understand the new Content-Type. Since the client is preprogrammed, it's one limitation I should live with.
>
>> I may instead represent this as some form of tree, perhaps something like:
>>
>> from("/").follow("#users_link").handle({
>> 200: fill_form("search_user", ...)...
>> 404: ...
>> })
>
> The 200 still does not mean that the form will be there (or that the response will be in a media type that you expect)
> Also, what about 201,202,303,204.... you have to handle all of those, too. ... yes, the REST client side is hard ...
>
If I do a GET, I should expect a 200 if the resource exists, and a 404 otherwise. I can choose logical defaults for the 3xx series and possibly make them transparent to the programmer unless he chooses otherwise. The 5xx series, well, that would raise an exception. Same with PUT, POST, etc.: there are certain status codes the client should expect, some for which a default action would be OK, and the rest that simply won't make sense (e.g., a 201 on a GET request).
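Roughly, the classification I have in mind could be sketched like this (the table entries are just examples, not a complete or authoritative list of expected codes):

```python
# Classify each (method, status) pair as expected, defaultable, or nonsense.
EXPECTED = {
    "GET":  {200, 404},
    "PUT":  {200, 201, 204, 404, 409},
    "POST": {200, 201, 202, 204},
}

def classify(method, status):
    if status in EXPECTED.get(method, set()):
        return "expected"            # surfaced to the programmer
    if 300 <= status < 400:
        return "default-redirect"    # followed transparently by the library
    if status >= 500:
        return "server-error"        # raise / report
    return "unexpected"              # e.g. a 201 on a GET request
```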
>
>>
>> On the other hand, I would rather do the previous one and handle exceptions instead.
>
> Well, you can, of course. But you must understand that what you handle as exceptions does not mean the server behaves incorrect because any valid HTTP response is part of the contract. The exceptions would essentially only handle the client side 'broken' implementation.
>
>> The client should know some information about the server's state machine right?
>
> No! That is the essence of the hypermedia constraint. It must look at each steady state in isolation and 'make the best of it' in a sense.
>
> Note that this influences the issue of media type design substantially because you can design types that make this very hard or types that makethis easier.
It doesn't have to learn the entire server's state machine, just enough of it. The client needs some prior knowledge of how to jump from one resource to another, possibly in the form of XPath expressions, JSON traversal rules, or RDF relationships.
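For example, such traversal rules might be shipped keyed by media type rather than by URI, so the server stays free to rearrange its URI space (a sketch; the rule table and types are made up):

```python
# Prior knowledge as per-media-type extraction rules, not hardcoded URIs.
import json

TRAVERSAL_RULES = {
    # media type -> how to extract the 'friends' transition from a document
    "application/json": lambda body: json.loads(body).get("links", {}).get("friends"),
    "text/uri-list":    lambda body: body.splitlines()[0] if body else None,
}

def friends_link(media_type, body):
    rule = TRAVERSAL_RULES.get(media_type)
    return rule(body) if rule else None
```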
>
>> And if it asks right, it should rightfully expect that it gets the content-type it requested for.
>
> Still, if it gets a 406 it should be able to handle that and not just dump an exception into the logs.
What else is there to do? Unless some AI stuff is going on, I don't think it can do much to recover.
>
>> How else would it be able to do its task?
>
> By knowing the set of media types to expect and by handling every media type for every response.
Exactly. There are some assumptions that must be in place -- certain expectations about how the server can act.
I'm curious though. Say for a social network API, how would some client know who the friends are of some user?
Jan Vincent Liwanag
jvliwanag@gmail.com
On Feb 19, 2010, at 9:02 AM, Jan Vincent wrote:
>
> On Feb 19, 2010, at 3:36 PM, Jan Algermissen wrote:
>
>>
>> On Feb 19, 2010, at 8:19 AM, Jan Vincent wrote:
>>
>>> True enough,
>>>
>>> Doing:
>>>
>>>> from("/").follow("#users_link").fill_form("search_user", {"id": "someid"}).follow("#user_link").follow("#friends_link")
>>>
>>> Doesn't cut it since it assumes specific responses from the server. But the point of HATEOAS I believe is that links are indeed provided for me to traverse through.
>>
>> Yes, but you can make no assumption
>> a) about the media type returned
>> b) about what links, forms etc. you will find in there
>
> Can't I? If the server acts right, it should recognize my Accept header.
There might be several types in your Accept header. The client should handle any matching response.
> If it doesn't have the Content-Type that I recognize, then my client gives up. It simply doesn't know how to understand the new Content-Type. Since the client is preprogrammed, it's one limitation I should live with.
Yes. However, I'd like to emphasize that with REST it does not stop there, because you do not just have 'broken communication' but in fact still an ongoing communication (a benefit of uniform status codes), and it makes sense to think about leveraging that situation. You could, for example, have the client open an RFC (request for change) ticket in some helpdesk system to trigger an ASAP update of the client's capabilities. I consider that different from dumping a stack trace in the logs and calling a developer with 'uhh - something is wrong'.
Technically, yes - at some point you just need to give up.
>
>>
>>> I may instead represent this as some form of tree, perhaps something like:
>>>
>>> from("/").follow("#users_link").handle({
>>> 200: fill_form("search_user", ...)...
>>> 404: ...
>>> })
>>
>> The 200 still does not mean that the form will be there (or that the response will be in a media type that you expect)
>> Also, what about 201,202,303,204.... you have to handle all of those, too. ... yes, the REST client side is hard ...
>>
> If I do a GET, I should expect a 200 if the resource exists, and a 404 otherwise. I can choose logical defaults for the 3xx series and possibly make them transparent to the programmer unless he chooses otherwise. The 5xx series, well, that would raise an exception. Same with PUT, POST, etc.: there are certain status codes the client should expect, some for which a default action would be OK, and the rest that simply won't make sense (e.g., a 201 on a GET request).
Yes. I wanted to stress the point that you should handle those codes that make sense (202 on GET also, IMO). But as long as you think along the lines you describe that's ok I guess.
>>
>>>
>>> On the other hand, I would rather do the previous one and handle exceptions instead.
>>
>> Well, you can, of course. But you must understand that what you handle as exceptions does not mean the server behaves incorrect because any valid HTTP response is part of the contract. The exceptions would essentially only handle the client side 'broken' implementation.
>>
>>> The client should know some information about the server's state machine right?
>>
>> No! That is the essence of the hypermedia constraint. It must look at each steady state in isolation and 'make the best of it' in a sense.
>>
>> Note that this influences the issue of media type design substantially because you can design types that make this very hard or types that makethis easier.
>
> It doesn't have to learn of the entire server's state machine, but just enough of it. The clients need some prior knowledge on how to jump from one resource to another,
It must know the media type to understand the meaning of the current transitions. It can then choose the transition that advances its built-in goal, but it cannot have any expectation about what is being returned by the server.
(Note that the client *can* expect for the server to not lie, e.g. <img src=""/> must point to an image, <app:collection href=""/> must point to a collection.)
> possibly in the form of xpath, json traversal rules, or rdf relationships.
What do you mean here? I did not get that.
>
>>
>>> And if it asks right, it should rightfully expect that it gets the content-type it requested for.
>>
>> Still, if it gets a 406 it should be able to handle that and not just dump an exception into the logs.
>
> What else is there to do? Unless some AI stuff is going on, I don't think it can do much to recover.
See above: leverage the still-existing conversation. In addition, think more in terms of 'the client is not able to reach its overall goal from the current steady state' as opposed to 'the server did not send me what it should have - that's an exception'. Even if the end result is about the same, the style of thinking is different.
>
>>
>>> How else would it be able to do its task?
>>
>> By knowing the set of media types to expect and by handling every media type for every response.
>
> Exactly. There are some assumptions that must be in place -- certain expectations about how the server can act.
>
> I'm curious though. Say for a social network API, how would some client know who the friends are of some user?
What do you mean?
Jan
First off,
Thanks for the insights. Perhaps my POV on HATEOAS still needs tweaking. Let's take some theoretical desktop client for a social network such as Facebook. The desktop client, say, simply needs to display the friends of some user. Now, the social network has published its API and the resources that can be accessed. Say there are three of them:
Root Resource ("/") - The only one that has a URL, and contains links to other resources, for simplicity, say this contains all the links to all its users.
User Resource - Gives some info about the user, and more importantly, it has the link to the Friends Resource of the User
Friends Resource - Contains the list of friends, and the links to their respective User Resources
For the desktop client to accomplish what it's supposed to, I aim to expose two methods to be used by the application, say:
get_user_info(User)
get_friends(User)
Internally, it traverses from the root until it ends up getting the information it needs.
If I get what you're saying though, I probably shouldn't even expose those two methods? Am I thinking about this all wrong?
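A rough sketch of that internal traversal (all URLs, link-relation names, and the JSON shape here are invented for illustration; a real client would take them from the service's media type documentation):

```python
import json

# Toy in-memory "server": only the root URL is known in advance; every
# other URL is discovered by following links in the representations.
PAGES = {
    "/": {"links": {"users": "/users"}},
    "/users": {"users": {"alice": "/users/alice", "bob": "/users/bob"}},
    "/users/alice": {"name": "Alice", "links": {"friends": "/users/alice/friends"}},
    "/users/alice/friends": {"friends": ["/users/bob"]},
    "/users/bob": {"name": "Bob", "links": {"friends": "/users/bob/friends"}},
    "/users/bob/friends": {"friends": ["/users/alice"]},
}

def fetch(url):
    """Stand-in for an HTTP GET that returns a parsed JSON body."""
    return json.loads(json.dumps(PAGES[url]))

def get_user_info(user_id):
    root = fetch("/")                      # start at the root resource
    users = fetch(root["links"]["users"])  # follow the 'users' link
    return fetch(users["users"][user_id])  # follow the link to the user

def get_friends(user_id):
    user = get_user_info(user_id)
    friends = fetch(user["links"]["friends"])  # follow the 'friends' link
    return [fetch(url)["name"] for url in friends["friends"]]
```

The two exposed methods stay stable while the server remains free to move every URL except the root.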
On Feb 19, 2010, at 4:18 PM, Jan Algermissen wrote:
>
> On Feb 19, 2010, at 9:02 AM, Jan Vincent wrote:
>
>>
>> On Feb 19, 2010, at 3:36 PM, Jan Algermissen wrote:
>>
>>>
>>> On Feb 19, 2010, at 8:19 AM, Jan Vincent wrote:
>>>
>>>> True enough,
>>>>
>>>> Doing:
>>>>
>>>>> from("/").follow( "#users_link" ).fill_form( "search_user" , {"id": "someid"}).follow( "#user_link" ).follow( "#friends_ link")
>>>>
>>>> Doesn't cut it since it assumes specific responses from the server. But the point of HATEOAS I believe is that links are indeed provided for me to traverse through.
>>>
>>> Yes, but you can make no assumption
>>> a) about the media type returned
>>> b) about what links, forms etc. you will find in there
>>
>> Can't I? If the server acts right, it should recognize my Accept header.
>
> There might be several types you put in the accept header. The client should handle any matching response.
>
>> If it doesn't have the Content-Type that I recognize, then my client gives up. It simply doesn't know how to understand the new Content-Type. Since the client is preprogrammed, it's one limitation I should live with.
>
> Yes. However, I'd like to emphasize that with REST it does not stop there, because you do not just have 'broken communication' but in fact a still ongoing communication (a benefit of uniform status codes), and it makes sense to think about leveraging that situation. You could, for example, have the client open an RFC ticket in some helpdesk system to trigger an ASAP update of the client capabilities. I consider that different from dumping a stack trace in the logs and calling a developer with 'uhh - something is wrong'.
>
> Technically, yes - at some point you just need to give up.
>
>>
>>>
>>>> I may instead represent this as some form of tree, perhaps something like:
>>>>
>>>> from("/").follow("#users_link" ).handle({
>>>> 200: fill_form("search_user" , ...)...
>>>> 404: ...
>>>> })
>>>
>>> The 200 still does not mean that the form will be there (or that the response will be in a media type that you expect)
>>> Also, what about 201,202,303,204.... you have to handle all of those, too. ... yes, the REST client side is hard ...
>>>
>> If I do a GET, I should expect a 200 if it exists, 404 otherwise. I can choose logical defaults for the 3xx series and possibly make them transparent to the programmer unless he chooses otherwise. The 5xx series, well, that would raise an exception. Same with PUT, POST, etc.: there are certain status codes the client should expect, some status codes where a default action would be OK, and the rest simply won't make sense (e.g., a 201 on a GET request).
>
> Yes. I wanted to stress the point that you should handle those codes that make sense (202 on GET also, IMO). But as long as you think along the lines you describe that's ok I guess.
>
>
>>>
>>>>
>>>> On the other hand, I would rather do the previous one and handle exceptions instead.
>>>
>>> Well, you can, of course. But you must understand that what you handle as exceptions does not mean the server behaves incorrectly, because any valid HTTP response is part of the contract. The exceptions would essentially only handle the client-side 'broken' implementation.
>>>
>>>> The client should know some information about the server's state machine right?
>>>
>>> No! That is the essence of the hypermedia constraint. It must look at each steady state in isolation and 'make the best of it' in a sense.
>>>
>>> Note that this influences the issue of media type design substantially, because you can design types that make this very hard or types that make this easier.
>>
>> It doesn't have to learn of the entire server's state machine, but just enough of it. The clients need some prior knowledge on how to jump from one resource to another,
>
> It must know the media type to understand the meaning of the current transitions. It can then choose the transition that advances its built-in goal, but it cannot have any expectation about what is being returned by the server.
>
> (Note that the client *can* expect for the server to not lie, e.g. <img src=""/> must point to an image, <app:collection href=""/> must point to a collection.)
>
>
>> possibly in the form of xpath, json traversal rules, or rdf relationships.
>
> What do you mean here? I did not get that.
>
>
>
>>
>>>
>>>> And if it asks right, it should rightfully expect that it gets the content-type it requested for.
>>>
>>> Still, if it gets a 406 it should be able to handle that and not just dump an exception into the logs.
>>
>> What else is there to do? Unless some AI stuff is going on, I don't think it can do much to recover.
>
> See above: leverage the still existing conversation. In addition, think more in terms of 'the client is not able to reach its overall goal' from the current steady state, as opposed to 'the server did not send me what it should have - that's an exception'. Even if the end result is about the same, the style of thinking is different.
>
>
>>
>>>
>>>> How else would it be able to do its task?
>>>
>>> By knowing the set of media types to expect and by handling every media type for every response.
>>
>>> Exactly. There are some assumptions that must be in place -- certain expectations of how the server can act.
>>
>> I'm curious though. Say for a social network API, how would some client know who the friends are of some user?
>
> What do you mean?
>
>
> Jan
Jan Vincent Liwanag
jvliwanag@...
On Feb 19, 2010, at 9:38 AM, Jan Vincent wrote:
> If I get what you're saying though, I probably shouldn't even expose those two methods? Am I thinking about this all wrong?
Hmm, no, I don't think so. The API design is correct because you discover the resources at runtime through typed links.
Regarding the use case, I'd probably leverage the fact that the application you are building seems to be passive in nature, driven by a human end user. If so, you can think more in terms of a Web browser style. Web browsers typically use the processing rules of the media types to turn the steady state they are in into some GUI for the user.
So I'd probably let the server drive the state of the GUI (or that part of the GUI) completely. E.g. turn the service document into a pane, make every user link you find into a GUI control (an item in a scrollable list), and turn the user's reference to its info into a clickable element. Once clicked, the GUI would hand that click to the user agent to have it traverse to the next steady state. This state could be handed back to the GUI for display in another pane. And so on.
The key thing here is really to make the GUI just respond to the steady state, like a browser does. IOW, have the GUI reflect exactly what the server sent.
I am sure you should be able to leverage HTML for that quite a bit, no?
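As a sketch of that browser-style approach (the document shape and relation names are made up; the point is only that the GUI is computed from whatever the steady state contains):

```python
def render_pane(doc):
    """Turn a steady-state representation into a flat list of GUI items.

    Plain fields become labels; every link becomes a clickable control
    whose activation asks the user agent to traverse to the next state.
    The GUI makes no assumption about which fields or links are present.
    """
    items = []
    for key, value in doc.items():
        if key == "links":
            for rel, href in value.items():
                items.append(("control", rel, href))
        else:
            items.append(("label", key, value))
    return items

# Whatever the server sent is exactly what gets rendered:
state = {"name": "Alice", "links": {"friends": "/users/alice/friends"}}
```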
HTH,
Jan
REST-minus

or

URI-oriented

I would not consider anything an abuse of REST unless they are making false claims to be REST-full.
I wrote an article along these lines a while back:
http://codeartisan.blogspot.com/2009/01/websites-are-also-restful-web-services.html
The basic idea is something very similar to what you're describing
below, Jan; implement your API using HTML as the media type, which lets
you navigate it in a browser while you're prototyping/implementing. This
is powerful for a couple of reasons: (1) playing with the API in a web
browser is much more convenient and natural for a programmer than trying
to poke at endpoints with curl or other client programs; (2) it pushes
you in a HATEOAS direction. For example, if I'm processing a conditional
PUT, and I want to return a 412 (Precondition Failed), it leads me to
put *something* in the response body, because I want to show that
programmer something in the browser. Probably I would put the current
version of the resource with another <form> for editing it.
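A sketch of that server-side behaviour (the handler signature, resource dict, and ETag scheme are invented for illustration; the point is the "put something useful in the 412 body" idea):

```python
def handle_conditional_put(resource, if_match, new_body):
    """Apply a conditional PUT; on a stale ETag, answer 412 *with* the
    current representation plus an edit form, instead of an empty body."""
    if if_match != resource["etag"]:
        body = (
            "<p>Your edit was based on a stale version.</p>"
            "<form method='post' action='%s'>"
            "<textarea name='body'>%s</textarea>"
            "<input type='submit' value='Save'/>"
            "</form>" % (resource["url"], resource["body"])
        )
        return 412, body  # Precondition Failed, but with context
    resource["body"] = new_body
    resource["etag"] = "v%d" % (int(resource["etag"][1:]) + 1)
    return 200, new_body
```

The browser user gets a page they can act on; a programmatic client gets the current state for free in the same response.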
But now, if I also add program-friendly representations for the
resources (Atom, JSON, etc.), then I've now set up a programmatic client
with enough context to continue along with whatever it's trying to do,
and probably without requiring extra round-trips to the server. i.e. my
client doesn't have to have logic like "if I get a 412, do another GET,
then try to do your PUT again". Instead, it can say "ah, I got a 412 in
response to my PUT; does the representation I got back match my desired
state or not?" followed either by moving on to the next goal or doing a
new PUT, as desired.
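That client-side logic might be sketched like this (the `send_put` callable is a stand-in for the HTTP layer, not any particular library):

```python
def put_with_goal(send_put, desired_body):
    """PUT a desired state and treat a 412 as information, not failure.

    `send_put` returns (status, representation_in_response_body). On a
    412, the representation that came back is compared with the desired
    state: if they already match, the goal is met with no extra GET.
    """
    status, current = send_put(desired_body)
    if status in (200, 204):
        return True                     # the PUT was applied
    if status == 412:
        return current == desired_body  # someone else got there first?
    return False                        # anything else: goal not reached

# A fake transport where the resource already holds the desired state:
def fake_send_put(body):
    return 412, "hello"
```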
Jon
________________________________
From: rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com]
On Behalf Of Jan Algermissen
Sent: Friday, February 19, 2010 4:25 AM
To: Jan Vincent
Cc: rest-discuss@yahoogroups.com
Subject: Re: [rest-discuss] HATEOAS and Cache
On Feb 19, 2010, at 9:38 AM, Jan Vincent wrote:
>
> First off,
>
> Thanks for the insights. Perhaps my POV on HATEOAS still needs
tweaking. Let's have some theoretical desktop client for a social
network such as facebook. The desktop client say, simply needs to
display the friends of some user. Now, the social network published it's
API and the resources that can be accessed. Say, there are three of them
>
> Root Resource ("/") - The only one that has a URL, and contains links
to other resources, for simplicity, say this contains all the links to
all its users.
> User Resource - Give some info about the user, and more importantly,
it has the link to the Friends Resource of the User
> Friends Resource - Contains the list of friends, and the links to
their respective User Resources
>
> If the desktop client aims to accomplish what it's supposed to, I aim
to expose two methods to be used by the application, say:
>
> get_user_info(User)
> get_friends(User)
>
> Internally, it traverses from the root until it ends up getting the
information it needs.
>
> If I get what you're saying though, I probably shouldn't even expose
those two methods? Am I thinking about this all wrong?
Hmm, no, I don't think so. The API design is correct becasue you
discover the resources at runtime through typed links.
Regarding the use case, I'd probably leverage the fact that the
application you are building seems to be passive in nature, driven by a
human end user. If so, you can think more in terms of a Web browser
style. Web browsers typically use the processing rules of the media
types to turn the steady state they are in into some GUI for the user.
So I'd probably let the server! drive the state of the GUI (or that part
of the GUI) completely. E.g. turn the service document into a pane and
just make every user link you find into a GUI control (item in a
scrollable list) and turn the ueser's ref to it's info into a clickable
element. Once clicked on, the GUI would hand that click to the user
agent to have it traverse to the next steady state. this state could be
handed back to the GUI fro display in another pane. And so on.
Key think here is really to make the GUI just respond to the steady
state, ike a browser does. IOW, have the GUI reflect exactly what the
server sent.
I am sure you shoul dbe able to leverage HTML for that quite a bit, or?
HTH,
Jan
>
> On Feb 19, 2010, at 4:18 PM, Jan Algermissen wrote:
>
>>
>> On Feb 19, 2010, at 9:02 AM, Jan Vincent wrote:
>>
>>>
>>> On Feb 19, 2010, at 3:36 PM, Jan Algermissen wrote:
>>>
>>>>
>>>> On Feb 19, 2010, at 8:19 AM, Jan Vincent wrote:
>>>>
>>>>> True enough,
>>>>>
>>>>> Doing:
>>>>>
>>>>>> from("/").follow( "#users_link" ).fill_form( "search_user" ,
{"id": "someid"}).follow( "#user_link" ).follow( "#friends_ link")
>>>>>
>>>>> Doesn't cut it since it assumes specific responses from the
server. But the point of HATEOAS I believe is that links are indeed
provided for me to traverse through.
>>>>
>>>> Yes, but you can make no assumption
>>>> a) about the media type returned
>>>> b) about what links, forms etc. you will find in there
>>>
>>> Can't I? If the server acts right, it should recognize my Accept
header.
>>
>> There might be several types you put in the accept header. The client
should handle any matching response.
>>
>>> If it doesn't have the Content-Type that I recognize, then my client
gives up. It simply doesn't know how to understand the new Content-Type.
Since the client is preprogrammed, it's one limitation I should live
with.
>>
>> Yes. However, I like to emphasize that with REST it des not stop
there because you do not just have 'broken communication' but in fact
still an ongoing communication (a benefit of uniform status codes) and
it makes sense to think about leveraging that situation. You could, for
example, have the client open an RFC ticket in some helpdesk system to
trigger an ASAP update of the client capabilities. I consider that
different from dumping a stack trace in the logs and calling a developer
with 'uhh - something is wrong'.
>>
>> Technically, yes - at some point you just need to give up.
>>
>>>
>>>>
>>>>> I may instead represent this as some form of tree, perhaps
something like:
>>>>>
>>>>> from("/").follow("#users_link" ).handle({
>>>>> 200: fill_form("search_user" , ...)...
>>>>> 404: ...
>>>>> })
>>>>
>>>> The 200 still does not mean that the form will be there (or that
the response will be in a media type that you expect)
>>>> Also, what about 201,202,303,204.... you have to handle all of
those, too. ... yes, the REST client side is hard ...
>>>>
>>> If I do a GET, i should expect a 200 if it exists, 404 otherwise. I
can choose logical defaults for 3xx series possibly make it transparent
to the programmer unless he chooses to do so. 5xx series, well, that
would raise exception. Same with PUT, POST, etc, there are certain
status codes the client should expect, some status codes wherein default
action would be ok, and the rest, that simply won't make sense (i.e., a
201 on a GET request).
>>
>> Yes. I wanted to stress the point that you should handle those codes
that make sense (202 on GET also, IMO). But as long as you think along
the lines you describe that's ok I guess.
>>
>>
>>>>
>>>>>
>>>>> On the other hand, I would rather do the previous one and handle
exceptions instead.
>>>>
>>>> Well, you can, of course. But you must understand that what you
handle as exceptions does not mean the server behaves incorrect because
any valid HTTP response is part of the contract. The exceptions would
essentially only handle the client side 'broken' implementation.
>>>>
>>>>> The client should know some information about the server's state
machine right?
>>>>
>>>> No! That is the essence of the hypermedia constraint. It must look
at each steady state in isolation and 'make the best of it' in a sense.
>>>>
>>>> Note that this influences the issue of media type design
substantially because you can design types that make this very hard or
types that makethis easier.
>>>
>>> It doesn't have to learn of the entire server's state machine, but
just enough of it. The clients need some prior knowledge on how to jump
from one resource to another,
>>
>> It must know the media type to understand the meaning of the current
transitions. It can then coose the transition that advances its built-in
goal but it cannot have any expectation about what is being returned by
the server.
>>
>> (Note that the client *can* expect for the server to not lie, e.g.
<img src=""/> must point to an image, <app:collection href=""/> must
point to a collection.)
>>
>>
>>> possibly in the form of xpath, json traversal rules, or rdf
relationships.
>>
>> What do you mean here? I did not get that.
>>
>>
>>
>>>
>>>>
>>>>> And if it asks right, it should rightfully expect that it gets the
content-type it requested for.
>>>>
>>>> Still, if it gets a 406 it should be able to handle that and not
just dump an exception into the logs.
>>>
>>> What else is there to do? Unless some AI stuff is going on, I don't
think it can do much to recover.
>>
>> See above: leverage the still existing conversation. In addition,
think more in terms of 'client is not able to reach its overall goal'
from the current steady state as opposed to 'the server did not send me
what it should have - that's an exception'. Even if the end result is
about the same - the style inthinging is different.
>>
>>
>>>
>>>>
>>>>> How else would it be able to do its task?
>>>>
>>>> By knowing the set of media types to expect and by handling every
media type for every response.
>>>
>>> Exactly. There are some assumptions that must be in place -- certain
expectations how the server can act.
>>>
>>> I'm curious though. Say for a social network API, how would some
client know who the friends are of some user?
>>
>> What do you mean?
>>
>>
>> Jan
>>
>>
>>
>>>
>>>>
>>>>
>>>> Jan
>>>>
>>>>>
>>>>> On Feb 19, 2010, at 3:05 PM, Jan Algermissen wrote:
>>>>>
>>>>>>
>>>>>> On Feb 19, 2010, at 8:03 AM, Jan Algermissen wrote:
>>>>>>
>>>>>>>
>>>>>>> [1] In my opinion, the client must know the types at design time
- otherwise, the client could not be coded in the first place
>>>>>>
>>>>>> Meant to add this link to some thoughts about this:
http://www.nordsc.com/blog/?cat=4 <http://www.nordsc.com/blog/?cat=4>
>>>>>>
>>>>>> Jan
>>>>>>
>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>>
>>>>>>>> In reality of course, this shouldn't really necessitate
multiple calls to the server if called multiple times since previous
results have been cached and processed on the client side. Only when the
cache expires, should there be an attempt to request again. I'm not
really sure if RESTful clients that respect HATEOAS do it this way, and
should they in the first place. If they do, are there tools that exist
for this?
>>>>>>>>
>>>>>>>> On a side note, content type negotiation should be
preconfigured before doing the call I stated above.
>>>>>>>>
>>>>>>>> Thanks,
>>>>>>>>
>>>>>>>> Jan Vincent Liwanag
>>>>>>>> jvliwanag@... <mailto:jvliwanag%40gmail.com>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> ------------------------------------
>>>>>>>>
>>>>>>>> Yahoo! Groups Links
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>> -----------------------------------
>>>>>>> Jan Algermissen, Consultant
>>>>>>> NORD Software Consulting
>>>>>>>
>>>>>>> Mail: algermissen@... <mailto:algermissen%40acm.org>
>>>>>>> Blog: http://www.nordsc.com/blog/ <http://www.nordsc.com/blog/>
>>>>>>> Work: http://www.nordsc.com/ <http://www.nordsc.com/>
>>>>>>> -----------------------------------
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>> Jan Vincent Liwanag
>>>>> jvliwanag@... <mailto:jvliwanag%40gmail.com>
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>
>>> Jan Vincent Liwanag
>>> jvliwanag@... <mailto:jvliwanag%40gmail.com>
>>>
>>>
>>>
>>
>>
>>
>>
>>
>
> Jan Vincent Liwanag
> jvliwanag@... <mailto:jvliwanag%40gmail.com>
>
>
>
-----------------------------------
Jan Algermissen, Consultant
NORD Software Consulting
Mail: algermissen@... <mailto:algermissen%40acm.org>
Blog: http://www.nordsc.com/blog/ <http://www.nordsc.com/blog/>
Work: http://www.nordsc.com/ <http://www.nordsc.com/>
-----------------------------------
On Tue, Feb 16, 2010 at 3:39 PM, Jan Algermissen <algermissen1971@...> wrote:

> Any ideas?

What about RES' - that's 3/4s.

-Randy Fischer
I like Restless. Easy to roll off the tongue, like Restful. "Is your API RESTful or RESTless?"
--- On Fri, 2/19/10, Jan Algermissen <algermissen1971@mac.com> wrote:
From: Jan Algermissen <algermissen1971@...>
Subject: Re: [rest-discuss] Re: A Name for "3 out of 4 REST constraints" APIs?
To: "Bob Haugen" <bob.haugen@...>
Cc: "REST Discuss" <rest-discuss@yahoogroups.com>
Date: Friday, February 19, 2010, 6:40 AM
- resent, now CCed list -
On Feb 19, 2010, at 1:26 PM, Bob Haugen wrote:
> REST-minus
I actually like that.
Gee - I should have asked on StackOverflow so people could vote for the best name :)
>
> or
>
> URI-oriented
>
> I would not consider anything an abuse of REST unless they are making
> false claims to be REST-full.
Yes.
Jan
>
>
Quite some time ago I wrote a little sample app, based mostly on an article by Paul James [1] which was a simple digest auth implementation. Now perhaps my memory has gone soggy, but I'm sure that there was a workaround to avoid the annoying pop-up dialog being displayed (something like you had to make sure that the response length was at least 420 characters).

Well, I just ran the sample again and for the life of me I can't suppress the dialog (in order to replace it with something prettier that uses AJAX to carry out the actual auth handshake). Has there been fix work done on recent browser versions to stop this end-around? If so (or even if not, for that matter) does anyone have a way of doing it?

Thanks,
Alan

[1] http://www.peej.co.uk/articles/http-auth-with-html-forms.html
Hi Alan,

At the time Paul wrote this article I tried implementing his approach in an application I was working on. Unfortunately I was unable to use it due to a bug in Safari. The relevant bug report is here: https://bugs.webkit.org/show_bug.cgi?id=8291

Cheers
Adam

On 20/02/2010, at 12:17 PM, Alan Dean wrote:

> Quite some time ago I wrote a little sample app, based mostly on an article by Paul James [1] which was a simple digest auth implementation. [...]
>
> [1] http://www.peej.co.uk/articles/http-auth-with-html-forms.html
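For what it's worth, the arithmetic an AJAX-driven handshake like the one Alan describes would have to reproduce is just the RFC 2617 Digest computation. A minimal sketch in Python (qop=auth case only; this is generic RFC arithmetic, not code from the article):

```python
import hashlib

def _md5(s: str) -> str:
    return hashlib.md5(s.encode("utf-8")).hexdigest()

def digest_response(username, realm, password, method, uri,
                    nonce, nc, cnonce, qop="auth"):
    """RFC 2617 Digest 'response' value for qop=auth."""
    ha1 = _md5(f"{username}:{realm}:{password}")   # H(A1)
    ha2 = _md5(f"{method}:{uri}")                  # H(A2)
    return _md5(f"{ha1}:{nonce}:{nc}:{cnonce}:{qop}:{ha2}")
```

Fed the example values from RFC 2617 (user "Mufasa", realm "testrealm@host.com", etc.) this reproduces the response digest given in the RFC; a script-based login would compute the same value and set the Authorization header itself instead of letting the browser pop up its dialog.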
Speaking of HTML, how do you assert something (PUT) via HATEOAS? I was wondering why the URI templates in the forms didn't go through. I believe it still counts as hypermedia.
Here's what I mean. Say I have a resource consisting of, say, a list of rabbits. I wanna assert that some rabbit exists by hopefully PUTting to the URL it's assigned to.
/rabbits -> the link I'm provided with
/rabbits/brown_one -> the one I need to PUT to
Let's consider Case A:
The brown_one resource exists. Its link is listed on the rabbits resource. I can simply follow the link to /rabbits/brown_one, and I might expect a form there to modify the resource. That's great.
Case B:
The brown_one resource doesn't exist yet. It's not listed on the rabbits resource. The standard approach, perhaps, is to fill in a form on /rabbits to create a new one. If it's done right, it's gonna be a POST to the current resource (/rabbits).
A better form, however, would have allowed me to assert that some brown_one exists, creating it if necessary. But that would use some URI template in the form, something like:
<form method="PUT" action="/rabbits/{type}">
  <input name="type" .../>
  ...
</form>
The first solution for Case B isn't so convenient, since I accomplish 'assertion' by checking some resource and then POST-ing to it, rather than PUT-ing. The second one, however, is non-standard but more uniform (it's supposed to work whether or not a brown_one exists in the first place). A third but not so great solution is to have some resource that can offer up a link to something that may not exist yet -- there's gonna be a /rabbits/brown_one link somehow, but GET-ting it is gonna be a 404, while a form is made available if I want to fill it in.
What do you guys think?
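The 'uniform' Case B boils down to the client expanding a URI template taken from the form and PUTting to the result. A rough sketch of just the expansion step (a naive subset of the URI Templates spec; the template and field name are the ones from the example above):

```python
import re
from urllib.parse import quote

def expand(template: str, values: dict) -> str:
    # Replace each {name} with the percent-encoded form-field value.
    return re.sub(r"\{(\w+)\}",
                  lambda m: quote(values[m.group(1)], safe=""),
                  template)

# Filling in the 'type' field of the hypothetical form:
target = expand("/rabbits/{type}", {"type": "brown_one"})
# The client would then PUT its representation of the rabbit to
# `target`; since PUT is idempotent, the same request both creates
# and updates the resource, which is exactly the 'assert it exists'
# semantics described above.
```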
On Feb 19, 2010, at 9:04 PM, Moore, Jonathan wrote:
>
>
> I wrote an article along these lines a while back:
> http://codeartisan.blogspot.com/2009/01/websites-are-also-restful-web-services.html
>
> The basic idea is something very similar to what you're describing below, Jan; implement your API using HTML as the media type, which lets you navigate it in a browser while you're prototyping/implementing. This is powerful for a couple of reasons: (1) playing with the API in a web browser is much more convenient and natural for a programmer than trying to poke at endpoints with curl or other client programs; (2) it pushes you in a HATEOAS direction. For example, if I'm processing a conditional PUT, and I want to return a 412 (Precondition Failed), it leads me to put *something* in the response body, because I want to show that programmer something in the browser. Probably I would put the current version of the resource with another <form> for editing it.
>
> But now, if I also add program-friendly representations for the resources (Atom, JSON, etc.), then I've set up a programmatic client with enough context to continue along with whatever it's trying to do, and probably without requiring extra round-trips to the server. I.e. my client doesn't have to have logic like "if I get a 412, do another GET, then try to do your PUT again". Instead, it can say "ah, I got a 412 in response to my PUT; does the representation I got back match my desired state or not?", followed either by moving on to the next goal or doing a new PUT, as desired.
>
> Jon
>
> From: rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com] On Behalf Of Jan Algermissen
> Sent: Friday, February 19, 2010 4:25 AM
> To: Jan Vincent
> Cc: rest-discuss@yahoogroups.com
> Subject: Re: [rest-discuss] HATEOAS and Cache
>
>
>
> On Feb 19, 2010, at 9:38 AM, Jan Vincent wrote:
>
> >
> > First off,
> >
> > Thanks for the insights. Perhaps my POV on HATEOAS still needs tweaking. Let's have some theoretical desktop client for a social network such as Facebook. The desktop client, say, simply needs to display the friends of some user. Now, the social network published its API and the resources that can be accessed. Say, there are three of them:
> >
> > Root Resource ("/") - The only one that has a URL, and contains links to other resources, for simplicity, say this contains all the links to all its users.
> > User Resource - Gives some info about the user and, more importantly, has the link to the Friends Resource of the User
> > Friends Resource - Contains the list of friends, and the links to their respective User Resources
> >
> > If the desktop client aims to accomplish what it's supposed to, I aim to expose two methods to be used by the application, say:
> >
> > get_user_info(User)
> > get_friends(User)
> >
> > Internally, it traverses from the root until it ends up getting the information it needs.
> >
> > If I get what you're saying though, I probably shouldn't even expose those two methods? Am I thinking about this all wrong?
>
> Hmm, no, I don't think so. The API design is correct because you discover the resources at runtime through typed links.
>
> Regarding the use case, I'd probably leverage the fact that the application you are building seems to be passive in nature, driven by a human end user. If so, you can think more in terms of a Web browser style. Web browsers typically use the processing rules of the media types to turn the steady state they are in into some GUI for the user.
>
> So I'd probably let the server(!) drive the state of the GUI (or that part of the GUI) completely. E.g. turn the service document into a pane, just make every user link you find into a GUI control (an item in a scrollable list), and turn the user's ref to its info into a clickable element. Once clicked on, the GUI would hand that click to the user agent to have it traverse to the next steady state. This state could be handed back to the GUI for display in another pane. And so on.
>
> Key thing here is really to make the GUI just respond to the steady state, like a browser does. IOW, have the GUI reflect exactly what the server sent.
>
> I am sure you should be able to leverage HTML for that quite a bit, or?
>
> HTH,
>
> Jan
>
> >
> > On Feb 19, 2010, at 4:18 PM, Jan Algermissen wrote:
> >
> >>
> >> On Feb 19, 2010, at 9:02 AM, Jan Vincent wrote:
> >>
> >>>
> >>> On Feb 19, 2010, at 3:36 PM, Jan Algermissen wrote:
> >>>
> >>>>
> >>>> On Feb 19, 2010, at 8:19 AM, Jan Vincent wrote:
> >>>>
> >>>>> True enough,
> >>>>>
> >>>>> Doing:
> >>>>>
> >>>>>> from("/").follow("#users_link").fill_form("search_user", {"id": "someid"}).follow("#user_link").follow("#friends_link")
> >>>>>
> >>>>> Doesn't cut it since it assumes specific responses from the server. But the point of HATEOAS I believe is that links are indeed provided for me to traverse through.
> >>>>
> >>>> Yes, but you can make no assumption
> >>>> a) about the media type returned
> >>>> b) about what links, forms etc. you will find in there
> >>>
> >>> Can't I? If the server acts right, it should recognize my Accept header.
> >>
> >> There might be several types you put in the accept header. The client should handle any matching response.
> >>
> >>> If it doesn't have the Content-Type that I recognize, then my client gives up. It simply doesn't know how to understand the new Content-Type. Since the client is preprogrammed, it's one limitation I should live with.
> >>
> >> Yes. However, I like to emphasize that with REST it does not stop there, because you do not just have 'broken communication' but in fact still an ongoing communication (a benefit of uniform status codes), and it makes sense to think about leveraging that situation. You could, for example, have the client open an RFC ticket in some helpdesk system to trigger an ASAP update of the client capabilities. I consider that different from dumping a stack trace in the logs and calling a developer with 'uhh - something is wrong'.
> >>
> >> Technically, yes - at some point you just need to give up.
> >>
> >>>
> >>>>
> >>>>> I may instead represent this as some form of tree, perhaps something like:
> >>>>>
> >>>>> from("/").follow("#users_link").handle({
> >>>>>   200: fill_form("search_user", ...)...
> >>>>>   404: ...
> >>>>> })
> >>>>
> >>>> The 200 still does not mean that the form will be there (or that the response will be in a media type that you expect)
> >>>> Also, what about 201,202,303,204.... you have to handle all of those, too. ... yes, the REST client side is hard ...
> >>>>
> >>> If I do a GET, I should expect a 200 if it exists, 404 otherwise. I can choose logical defaults for the 3xx series and possibly make them transparent to the programmer unless he chooses otherwise. The 5xx series, well, that would raise an exception. Same with PUT, POST, etc.: there are certain status codes the client should expect, some status codes where a default action would be OK, and the rest that simply won't make sense (e.g., a 201 on a GET request).
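The per-status defaults just described could be captured in a small dispatch table; the action labels here are invented for illustration:

```python
def dispatch(status: int, handlers: dict):
    """Pick an action for a status code, with per-class defaults:
    unhandled 3xx falls back to following the redirect, 5xx raises,
    and anything else unexpected makes the client give up."""
    if status in handlers:
        return handlers[status]
    if 300 <= status < 400:
        return "follow-redirect"
    if 500 <= status < 600:
        raise RuntimeError(f"server error {status}")
    return "give-up"

# Handlers the client programmer actually cares about:
actions = {200: "fill-search-form", 404: "not-found"}
```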
> >>
> >> Yes. I wanted to stress the point that you should handle those codes that make sense (202 on GET also, IMO). But as long as you think along the lines you describe that's ok I guess.
> >>
> >>
> >>>>
> >>>>>
> >>>>> On the other hand, I would rather do the previous one and handle exceptions instead.
> >>>>
> >>>> Well, you can, of course. But you must understand that what you handle as exceptions does not mean the server behaves incorrectly, because any valid HTTP response is part of the contract. The exceptions would essentially only handle the client side's 'broken' implementation.
> >>>>
> >>>>> The client should know some information about the server's state machine right?
> >>>>
> >>>> No! That is the essence of the hypermedia constraint. It must look at each steady state in isolation and 'make the best of it' in a sense.
> >>>>
> >>>> Note that this influences the issue of media type design substantially, because you can design types that make this very hard or types that make this easier.
> >>>
> >>> It doesn't have to learn of the entire server's state machine, but just enough of it. The clients need some prior knowledge on how to jump from one resource to another,
> >>
> >> It must know the media type to understand the meaning of the current transitions. It can then choose the transition that advances its built-in goal, but it cannot have any expectation about what is being returned by the server.
> >>
> >> (Note that the client *can* expect for the server to not lie, e.g. <img src=""/> must point to an image, <app:collection href=""/> must point to a collection.)
> >>
> >>
> >>> possibly in the form of xpath, json traversal rules, or rdf relationships.
> >>
> >> What do you mean here? I did not get that.
> >>
> >>
> >>
> >>>
> >>>>
> >>>>> And if it asks right, it should rightfully expect that it gets the content-type it requested for.
> >>>>
> >>>> Still, if it gets a 406 it should be able to handle that and not just dump an exception into the logs.
> >>>
> >>> What else is there to do? Unless some AI stuff is going on, I don't think it can do much to recover.
> >>
> >> See above: leverage the still existing conversation. In addition, think more in terms of 'client is not able to reach its overall goal' from the current steady state, as opposed to 'the server did not send me what it should have - that's an exception'. Even if the end result is about the same, the style in thinking is different.
> >>
> >>
> >>>
> >>>>
> >>>>> How else would it be able to do its task?
> >>>>
> >>>> By knowing the set of media types to expect and by handling every media type for every response.
> >>>
> >>> Exactly. There are some assumptions that must be in place -- certain expectations how the server can act.
> >>>
> >>> I'm curious though. Say for a social network API, how would some client know who the friends are of some user?
> >>
> >> What do you mean?
> >>
> >>
> >> Jan
> >>
> >>
> >>
> >>>
> >>>>
> >>>>
> >>>> Jan
> >>>>
> >>>>>
> >>>>> On Feb 19, 2010, at 3:05 PM, Jan Algermissen wrote:
> >>>>>
> >>>>>>
> >>>>>> On Feb 19, 2010, at 8:03 AM, Jan Algermissen wrote:
> >>>>>>
> >>>>>>>
> >>>>>>> [1] In my opinion, the client must know the types at design time - otherwise, the client could not be coded in the first place
> >>>>>>
> >>>>>> Meant to add this link to some thoughts about this: http://www.nordsc.com/blog/?cat=4
> >>>>>>
> >>>>>> Jan
> >>>>>>
> >>>>>>
> >>>>>>>
> >>>>>>>
> >>>>>>>>
> >>>>>>>> In reality of course, this shouldn't really necessitate multiple calls to the server if called multiple times since previous results have been cached and processed on the client side. Only when the cache expires, should there be an attempt to request again. I'm not really sure if RESTful clients that respect HATEOAS do it this way, and should they in the first place. If they do, are there tools that exist for this?
> >>>>>>>>
> >>>>>>>> On a side note, content type negotiation should be preconfigured before doing the call I stated above.
> >>>>>>>>
> >>>>>>>> Thanks,
> >>>>>>>>
> >>>>>>>> Jan Vincent Liwanag
> >>>>>>>> jvliwanag@...
> >>>>>>>>
> >>>>>>>>
> >>>>>>>>
> >>>>>>>>
> >>>>>>>>
> >>>>>>>
> >>>>>>>
> >>>>>>>
> >>>>>>>
> >>>>>>>
> >>>>>>>
> >>>>>>>
> >>>>>>
> >>>>>>
> >>>>>>
> >>>>>>
> >>>>>>
> >>>>>
> >>>>> Jan Vincent Liwanag
> >>>>> jvliwanag@gmail.com
> >>>>>
> >>>>>
> >>>>>
> >>>>
> >>>>
> >>>>
> >>>>
> >>>>
> >>>
> >>> Jan Vincent Liwanag
> >>> jvliwanag@...
> >>>
> >>>
> >>>
> >>
> >>
> >>
> >>
> >>
> >
> > Jan Vincent Liwanag
> > jvliwanag@gmail.com
> >
> >
> >
>
>
>
>
>
Jan Vincent Liwanag
jvliwanag@...
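Jon Moore's 412 strategy quoted above ("does the representation I got back match my desired state or not?") fits in a few lines as a pure decision step; the return labels are invented for illustration:

```python
def after_conditional_put(status, desired, returned):
    """Decide the next client step after a conditional PUT, in the
    style Jon describes: a 412 whose body already matches the
    desired state needs no retry."""
    if status in (200, 204):
        return "done"
    if status == 412:
        # Precondition failed, but the response body carries the
        # current representation: compare it instead of blindly
        # re-GETting and re-PUTting.
        return "done" if returned == desired else "retry-with-fresh-etag"
    return "give-up"
```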
Markus,

[let me in general suggest again to use the correct term 'hypermedia constraint' instead of the silly acronym...]

On Feb 20, 2010, at 3:29 PM, Markus Karg wrote:

> This discussion started on users@... and was moved to rest-discuss as it is a general REST topic and less a particular Jersey topic. Please send comments only to rest-discuss@yahoogroups.com but not to users@.... Thanks. :-)

Good!

> As Roy Fielding pointed out several times, an API must not call itself RESTful as long as it is not applying HATEOAS. I want to support this constraint by adding HATEOASfulness to my future applications. One thing I just do not understand so far about HATEOAS-via-HTTP is (and what other people asked me when citing Fielding on this issue): How shall a client actually know which HTTP method to use to follow a link received with the previous request to a RESTful server? Roy answered in his blog that the method could be read out of the last result. But actually how?

This information is part of the hypermedia semantics specified (media type specification or link relation specification etc.). Such a specification can either explicitly state the method to use (see RFC 5023 for example) or specify a hypermedia element that tells the client at runtime what method to use (e.g. HTML forms).

> I understand that typically one would use e.g. HTTP "Link" headers or e.g. XLink in XML content to model HATEOAS transformation URIs. But the problem is that in reality neither the "Link" header nor XLink provides the possibility to declare the HTTP method used on that URI.

Right. The specification of the link relation would have to do that.

> So my client can learn from the previous HTTP request what the URI is to place an order, but it does not see which HTTP method must be used to place it (whether it is a PUT or a POST for example, since both could be valid in theory).

Right.
Suppose you'd use an AtomPub collection to tell the client about the order processor:

  <app:collection href="/orders/">
    <app:accept>application/procurement+xml;type=order</app:accept>
  </app:collection>

Then you would know from RFC 5023 that the method to use is POST. There is really no technical magic going on - it all must be specified and implemented by the client (or by the client plugin loaded for processing the primary media type of the current steady state).

Jan

> I understand that in a perfect world, I could apply the common sense of the CRUD-via-HTTP pattern on that resource (PUT, GET, POST, DELETE), but it might not be so obvious in all cases which of those methods is actually right to achieve a particular business task, since the business use case is beyond simple CRUD, as there might be more than one way to RESTfully model a complex business case.
>
> Fielding wrote in his blog that the client will learn about the HTTP methods by inspecting the last response, just as it learned about the possible URIs. But how to actually achieve this in reality, aside from the purely theoretical idea? Is the actual solution really to break down all business cases to CRUD atoms (and thus to unambiguity of HTTP methods)?
>
> I already searched the archive for this but could not find a working real-world solution (maybe I used the wrong keyword when searching?).
>
> Thanks
> Markus
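To make the 'no magic' point concrete: the POST is implied by RFC 5023, not stated in the markup, so client code hard-wires that rule when it recognizes the element. A sketch using the snippet above:

```python
import xml.etree.ElementTree as ET

APP = "{http://www.w3.org/2007/app}"  # AtomPub namespace

doc = """<app:collection xmlns:app="http://www.w3.org/2007/app"
                         href="/orders/">
  <app:accept>application/procurement+xml;type=order</app:accept>
</app:collection>"""

coll = ET.fromstring(doc)
# RFC 5023: members are created by POSTing a representation in an
# accepted media type to the collection's href.
transition = {
    "method": "POST",  # implied by the spec, not present in the markup
    "uri": coll.get("href"),
    "media_type": coll.findtext(APP + "accept"),
}
```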
Considering four constraints, I don't really like the RESTless term, because RESTful obviously means fulfilling all four constraints, while RESTless could also mean fulfilling none of them - but we want to express "all but one", I think. So, reading the nice suggestions, I would prefer "REST-wheas application" (without hypermedia engine of application state) to stress that there is something missing, or, to say it with a more positive tone: HTTP-based application.

Regards,
Daniel

On 19.02.2010 19:11, Kevin Duffey wrote:

> I like Restless. Easy to roll off the tongue, like Restful. "Is your API RESTful or RESTless?"

-- Daniel "Oscar" Schulte
This discussion started on users@... and was moved to rest-discuss as it is a general REST topic and less a particular Jersey topic. Please send comments only to rest-discuss@yahoogroups.com but not to users@.... Thanks. :-)

As Roy Fielding pointed out several times, an API must not call itself RESTful as long as it is not applying HATEOAS. I want to support this constraint by adding HATEOASfulness to my future applications. One thing I just do not understand so far about HATEOAS-via-HTTP is (and what other people asked me when citing Fielding on this issue): How shall a client actually know which HTTP method to use to follow a link received with the previous request to a RESTful server? Roy answered in his blog that the method could be read out of the last result. But actually how?

I understand that typically one would use e.g. HTTP "Link" headers or e.g. XLink in XML content to model HATEOAS transformation URIs. But the problem is that in reality neither the "Link" header nor XLink provides the possibility to declare the HTTP method used on that URI. So my client can learn from the previous HTTP request what the URI is to place an order, but it does not see which HTTP method must be used to place it (whether it is a PUT or a POST for example, since both could be valid in theory).

I understand that in a perfect world, I could apply the common sense of the CRUD-via-HTTP pattern on that resource (PUT, GET, POST, DELETE), but it might not be so obvious in all cases which of those methods is actually right to achieve a particular business task, since the business use case is beyond simple CRUD, as there might be more than one way to RESTfully model a complex business case.

Fielding wrote in his blog that the client will learn about the HTTP methods by inspecting the last response, just as it learned about the possible URIs. But how to actually achieve this in reality, aside from the purely theoretical idea? Is the actual solution really to break down all business cases to CRUD atoms (and thus to unambiguity of HTTP methods)?

I already searched the archive for this but could not find a working real-world solution (maybe I used the wrong keyword when searching?).

Thanks
Markus
> As Roy Fielding pointed out several times, an API must not call itself RESTful as long as it is not applying HATEOAS. I want to support this constraint by adding HATEOASfulness to my future applications. One thing I just do not understand so far about HATEOAS-via-HTTP is (and what other people asked me when citing Fielding on this issue): How shall a client actually know which HTTP method to use to follow a link received with the previous request to a RESTful server? Roy answered in his blog that the method could be read out of the last result. But actually how?
>
> This information is part of the hypermedia semantics specified (media type specification or link relation specification etc.). Such a specification can either explicitly state the method to use (see RFC 5023 for example) or specify a hypermedia element that tells the client at runtime what method to use (e.g. HTML forms).

I understand that with AtomPub, RFC 5023 specifies that. Call me dumb, but what to do if I am not using AtomPub but a self-made service (like a web shop application)? How to do it then? For example, if I am writing a web shop, and that one allows placing an order using a POST: how to tell a client that it shall use that POST? I mean, *where* to put that information in a technical sense?
On Feb 20, 2010, at 7:16 PM, Markus KARG wrote:

>> This information is part of the hypermedia semantics specified (media type specification or link relation specification etc.). Such a specification can either explicitly state the method to use (see RFC 5023 for example) or specify a hypermedia element that tells the client at runtime what method to use (e.g. HTML forms).
>
> I understand that with AtomPub, RFC 5023 specifies that. Call me dumb, but what to do if I am not using AtomPub but a self-made service (like a web shop application)? How to do it then? For example, if I am writing a web shop, and that one allows placing an order using a POST: how to tell a client that it shall use that POST? I mean, *where* to put that information in a technical sense?

In the specification of the media type (or link relation etc.) that you are using. If you do not have something like that - something that specifies the semantics of your application[1] - nobody can code a client that can interact with the application, unless the client is ultimately driven by a human.

AtomPub usually takes you quite far, though in my work I use a somewhat extended version that provides for the description of 'single' resources and for search forms[2]. I actually cannot see anything I would like to describe about the start state of a service that cannot be expressed with that extended form of AtomPub.

The client developer needs to read the media type or link relation spec and simply put that knowledge into the client code (be it directly or as a plugin).

Jan

[1] Usually, we would want such specs for kinds of applications, of course, not just a single one.
[2] http://www.nordsc.com/blog/?p=80
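The other option Jan mentions, a hypermedia element in the style of HTML forms, would put the method into the representation itself. A sketch for Markus's web shop - the markup, the `rel` name, and the element are all invented for illustration, standing in for whatever the media type spec would define:

```python
import xml.etree.ElementTree as ET

# The representation itself says which method to use, the way an
# HTML <form method="..."> does. The client only needs the media
# type spec that defines this element, not per-service knowledge.
snippet = """<form rel="place-order" method="POST" action="/orders/"
                   enctype="application/xml"/>"""

form = ET.fromstring(snippet)
transition = (form.get("rel"), form.get("method"), form.get("action"))
```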
And this is related to REST because?

_________________________________________________
Melhores cumprimentos / Beir beannacht / Best regards
António Manuel dos Santos Mota
http://card.ly/amsmota
_________________________________________________

On 20 February 2010 04:24, Adam Ratcliffe <adam@...> wrote:

> Hi Alan,
>
> At the time Paul wrote this article I tried implementing his approach in an application I was working on. Unfortunately I was unable to use it due to a bug in Safari. The relevant bug report is here: https://bugs.webkit.org/show_bug.cgi?id=8291
>
> Cheers
> Adam
António, Because I need to carry out auth on a REST service with resources which are surfaced to humans as (x)html as well as to machine-to-machine interactions. At the moment, I've inherited a cookie-based system but wish to refactor that away (I don't think that I need to convince this audience why) but, as you can imagine, the butt-ugly dialog gives my product owners an excellent reason to override the preferred technical choice. Auth is a frequent bugbear to us RESTafarians and I figured that someone else on the list has probably seen this change and (possibly) has an answer that allows use of a standardized auth mechanism (HTTP Digest) rather than yet-another-custom-token scheme. Regards, Alan Dean 2010/2/20 António Mota <amsmota@...> > And this is related to REST because? > > _________________________________________________ > > Melhores cumprimentos / Beir beannacht / Best regards > > António Manuel dos Santos Mota > > http://card.ly/amsmota > _________________________________________________ > > On 20 February 2010 04:24, Adam Ratcliffe <adam@...> wrote: > >> Hi Alan, >> >> At the time Paul wrote this article I tried implementing his approach in >> an application I was working on. Unfortunately I was unable to use it due >> to a bug in Safari. The relevant bug report is here: >> https://bugs.webkit.org/show_bug.cgi?id=8291 >> >> Cheers >> Adam >> >> On 20/02/2010, at 12:17 PM, Alan Dean wrote: >> >> Quite some time ago I wrote a little sample app, based mostly on an >> article by Paul James [1] which was a simple digest auth implementation. Now >> perhaps my memory has gone soggy, but I'm sure that there was a workaround >> to avoid the annoying pop-up dialog being displayed (something like you had >> to make sure that the response length was at least 420 characters). 
>> >> Well, I just ran the sample again and for the life of me I can't suppress >> the dialog (in order to replace it with something prettier that uses AJAX to >> carry out the actual auth handshake). Has there been fix work done on recent >> browser versions to stop this end-around? If so (or even if not, for that >> matter) does anyone have a way of doing it? >> >> Thanks, >> Alan >> >> [1] http://www.peej.co.uk/articles/http-auth-with-html-forms.html >> >> >> >> > >
Hello guys, Jan Vincent, answering your question about the cache: Restfulie and Exylus are the only two client APIs that I am aware of that support caching (according to the restwiki). You can see how it works here: http://guilhermesilveira.wordpress.com/2010/01/26/scaling-through-rest-why-rest-clients-require-cache-support/ > Also, what about 201, 202, 303, 204... you have to handle all of those, too. ... yes, the REST client side is hard ... Jan, since it seems impossible to write code handling every response code at every request, Restfulie, for example, allows you to register default handlers, and already provides a set of defaults (e.g. a 201 will be followed to retrieve the resource). Regards Guilherme Silveira Caelum | Ensino e Inovação http://www.caelum.com.br/
Hey all,
How are you authenticating users with REST calls, other than Basic Auth? I am looking into OAuth; setting it up is a bit of a chore. I'm looking to use an example I found with OpenSSO.
I am basically looking to do something like how Google Maps and other REST APIs require a developer to get a token to be able to make calls to the API. I am fine if it's not OAuth, but I would like to better understand and/or get some help from one of you security experts out there on how to get this working.
So in a nutshell, I'll have a "dev" test box, and a production setup. I want a way for a developer to get some sort of token for either/both, but am finding myself getting a little lost with all the key/secret stuff. For now I basically want to be able to authenticate all API calls coming in with some way to make sure it's something I gave the user to allow them access. I am not sure if this is referred to as 2-legged or 3-legged in OAuth terms. I think it's two-legged.. and is that good enough?
Any pointers to examples that explain how I can deploy my own token generator and how a client would make a call to get a token, then make a call to any API with the token (or token/key?), and so on would be much appreciated.
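OAuth machinery aside, the minimum viable version of "issue a developer a key, then verify every call was made by a key holder" is an HMAC over the request essentials, which is roughly what two-legged signing buys you. This is an illustrative sketch, not any particular library's API; the key store, base-string format, and skew window are all invented for the example:

```python
import hashlib
import hmac
import time

# Hypothetical key store: consumer key -> shared secret, issued to each
# developer out of band (e.g. from a signup page on the dev box).
KEYS = {"dev-abc123": b"s3cret"}

def sign_request(consumer_key, secret, method, uri, timestamp):
    """Client side: HMAC-SHA256 over the request essentials."""
    base = "&".join([consumer_key, method.upper(), uri, str(timestamp)])
    return hmac.new(secret, base.encode(), hashlib.sha256).hexdigest()

def verify_request(consumer_key, method, uri, timestamp, signature, max_skew=300):
    """Server side: recompute the signature and compare in constant time."""
    secret = KEYS.get(consumer_key)
    if secret is None:
        return False  # unknown developer key
    if abs(time.time() - timestamp) > max_skew:
        return False  # stale timestamp; crude replay protection
    expected = sign_request(consumer_key, secret, method, uri, timestamp)
    return hmac.compare_digest(expected, signature)
```

The client sends its consumer key, timestamp, and signature along with the request (e.g. in an Authorization header); the server never sees the secret on the wire, which is the property that distinguishes this from sending a bare token.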
On Feb 18, 2010, at 10:21 AM, Robert Brewer wrote: > Eric J. Bowman wrote: > > I contend that the format of a representation is part of a > > system's hierarchical data. The proof is right there in curl, > > which sees no hierarchy when it's saving 'resource' and > > 'resource.1' to disk, while obviously recognizing .pdf > > and .html as part of a hierarchy of representations, whether > > by direct dereferencing or by presence in a Content-Location > > response header. > > That's an odd way to look at it. The "format of a representation > is part of a system's hierarchical data" if and only if the path > portion of the URI contains information to that effect. If not, > it doesn't. The fact that some domains specifically do this does > not require them all to do so. Curl is suboptimal in this respect > since it does not differentiate responses based on the query > string component. More to the point, one implementation doesn't "prove" anything. > > But it's still treating URIs opaquely -- links are derived from > > the link relations in the source documents, which are read from > > @href's. There is no analysis of those @href contents by keyword > > or filename extension to determine "resource type". Clients > > aren't required to know any specifics of a URI's pattern. That > > would be coupling, which is diametrically opposed to the notion > > of URI opacity in REST. > > Absolutely correct. That behavior should be extended to query string > components of the URI as well. > > > Don't get carried away by saying that my treating URIs as > > hierarchical or testing whether they have fragments or not, > > amounts to a failure to treat URIs as opaque. Can you point > > to some constraint in REST that I'm breaking? Can you find any > > support from Roy to back up this notion that URI opacity means > > that query and filename extension are the same? > > Sure. 
I used to take your position but corrected myself [1] back > in 2005, complete with Fielding quotes and RFC 3986 references ;) > > > Are there any > > generic URI parsers you can point me to, which treat query > > strings opaquely? Or do they all consider '?' to be a reserved > > character? > > The '?' character does have a special meaning: it separates the > hierarchical portion of the URI from the opaque part. But the > opaque part is still part of the identifier. > > > So the onus is not on me to prove that Roy isn't being clear > > as daylight when he says that "query is not a substitute for > > identification of resources", but on you who think it could > > possibly mean any different to support *your* arguments. :-) > > When I read that statement from Roy [2], I don't see any indication > that it's about "query" as a URI component; instead, it is regarding > "query" as a means of fetching a list of resources. Roy's point > seems to be that returning 38 items in a list within a single > response is no substitute for having distinct URI's for each of > those 38 individual resources. > > > Robert Brewer > fumanchu@... > > [1] http://groups.google.com/group/cherrypy-devel/msg/0fcc62df334bc9ed > [2] http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven#comment-720 Right, I wasn't talking about URI syntax at all. Any server developer would know that the URI syntax of the request-target (URI path and query) has absolutely no correlation to any specific "kind" of resource mapping or implementation. Things like mod_rewrite make that impossible. ...Roy
> > I understand that with AtomPub, RFC 5023 specifies that. Call me dumb, but
> > what to do if I am not using AtomPub but a self-made service (like a web shop
> > application)? How to do it then? For example, if I am writing a web shop, and
> > that one allows placing an order using a POST. How to tell a client that it
> > shall use that POST? I mean, *where* to put that information in a technical
> > sense?
> >
> In the specification of the media type (or link relation etc.) you are using.
>
> If you do not have something like that, something that specifies the semantics of
> your application[1], nobody can code a client that can interact with the
> application. Unless the client is ultimately driven by a human. Ok, so let me provide an example: The media type used is based on XML, and I am using XSL to define its structure (a special, proprietary MIME type, e.g. "application/inspectionplan+xml"). Maybe my XSD knowledge is not good enough, but how do I define the http method now to be used for a particular operation? I mean, actually, not theoretically, in machine-readable form, not in human-readable specs?
On Feb 21, 2010, at 10:09 AM, Markus KARG wrote:
>>> I understand that with AtomPub RFC5023 specifies that. Call me dumb,
>> but
>>> what to do if I am not using AtomPub but self-made service (like a
>> web shop
>>> application)? How to do it then? For example, if I am writing a web
>> shop, an
>>> that one allows to place an order using a POST. How to tell a client
>> that it
>>> shall use that POST? I mean, *where* to put that information in a
>> technical
>>> sense?
>>>
>>
>> In the specification of the media type (or link relation etc.) you are
>> using.
>>
>> If you do not have something like that, that specifies the semantics of
>> your application[1] nobody can code a client that can interact with the
>> application. Unless the client is ultimately driven by a human.
>
> Ok, so let me provide an example: The media type used is based on XML, and I
> am using XSL to define its structure
What do you mean by that? Ah, you meant XSD, yes?
> (a special, proprietary MIME type, e.g.
> "application/inspectionplan+xml"). Maybe my XSD knowledge is not good enough,
> but how do I define the http method now to be used for a particular
> operation? I mean, actually, not theoretically, in machine-readable form,
> not in human-readable specs?
You do not need a machine readable form.
You'd write something like:
"The link relation "order-processor" is used to refer to resources that accept order submissions and you pace orders by sending the order in a POST request to to that resource"
The server would send data such as
200 OK
Content-type: application/inspectionplan+xml
<foo>
  <bar>
  </bar>
  <link rel="order-processor" href="/orders"/>
</foo>
And in your client you would code:
if (media type == application/inspectionplan+xml) {
    linkElem = extract from body the link element that has rel="order-processor"
    if (linkElem) {
        orderProcessorUri = linkElem.getAttr("href")
        // Now, this is the hard-coded knowledge that POST is the method to use.
        // It comes from the spec directly.
        request = new Request("POST", orderProcessorUri, orderDocument)
        response = client.perform(request)
    }
}
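Jan's pseudocode translates fairly directly into a runnable sketch; this Python version is illustrative only (the `perform` callback stands in for whatever real HTTP client the developer uses, and the function names are invented):

```python
import xml.etree.ElementTree as ET

# Example representation, as in the 200 OK response above.
BODY = """<foo>
  <bar></bar>
  <link rel="order-processor" href="/orders"/>
</foo>"""

def find_link(xml_text, rel):
    """Return the href of the first link element with the given relation."""
    root = ET.fromstring(xml_text)
    for link in root.iter("link"):
        if link.get("rel") == rel:
            return link.get("href")
    return None

def place_order(content_type, body, order_document, perform):
    """Submit an order, using POST because the media type spec says so."""
    if content_type != "application/inspectionplan+xml":
        return None  # we only know the semantics of this one media type
    uri = find_link(body, "order-processor")
    if uri is None:
        return None  # server did not advertise an order processor
    # The POST here is hard-coded knowledge from the spec, not the message.
    return perform("POST", uri, order_document)
```

Note that the method never appears in the representation; it comes from the client developer having read the media type specification, which is exactly the point being made.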
This is no different from HTML specifying that the method for <a href=""/> is GET.
If you want a variation of the method at runtime, use a form mechanism such as HTML does, just with a specific form. You can look at the OpenSearch parameters extension as an example (see the method attribute):
<http://www.opensearch.org/Specifications/OpenSearch/Extensions/Parameter/1.0>
Jan
>
>
>
> ------------------------------------
>
> Yahoo! Groups Links
>
>
>
-----------------------------------
Jan Algermissen, Consultant
NORD Software Consulting
Mail: algermissen@...
Blog: http://www.nordsc.com/blog/
Work: http://www.nordsc.com/
-----------------------------------
---------- Forwarded message ---------- Date: 2010/2/21 To: users@...v.java.net That's my PUJ example :) I have this academic competition, with "competition phases"... So, I have professors submitting students' homeworks for a competition, and each competition has the following phases: 1) call for papers 2) evaluation 3) history So: 1) During the call for papers, a professor can submit a homework, or patch the homework, delete it, etc. (homework CRUD). When the competition changes to the evaluation phase, no more changes are accepted in the homeworks, so the PUT and POST methods should be "disabled" on the homework resources. 2) During the evaluation, the "evaluators" can apply grades to the homeworks. They can also review their grades. The PUT, GET, DELETE and POST methods are available on the homework grades resource. 3) When the competition finishes, it goes to the history phase, and no more changes are accepted on the homeworks (evaluation, patches, etc.). The resources support only the GET method in this phase. Notice that I have two distinct workflows here: 1) the resource workflow, where a change in the resource produces its feasible next states as side effects. 2) the application phases, where an external action (usually the competition owner presses a button) changes the state of all resources belonging to a competition. Considering the competition also as a resource, I have a change in the state of a competition with side effects on other resources. So it is not only a matter of changing the resource representation, but also of managing the impact of that change on other resources. I have a slide representing this problem in this presentation: http://www.jfokus.se/jfokus/preso/jf-10_DomainDrivenRESTWeb-Services.pdf Please let me know if this problem is suitable for your discussion. 
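The three phases described here amount to a per-phase method whitelist that a dispatcher can consult before touching the resource. A minimal sketch, assuming the phase rules as stated; the phase names, table, and 405/Allow handling are illustrative, not taken from the actual arena-http code:

```python
# Hypothetical mapping of competition phase to the methods a homework
# resource accepts, following the three phases described above.
ALLOWED = {
    "call-for-papers": {"GET", "POST", "PUT", "DELETE"},
    "evaluation": {"GET"},  # homeworks frozen; grades live on another resource
    "history": {"GET"},
}

def dispatch(phase, method):
    """Return (status, headers): 405 plus an Allow header when the
    current phase disallows the method, 200 otherwise."""
    allowed = ALLOWED.get(phase, {"GET"})
    if method not in allowed:
        return 405, {"Allow": ", ".join(sorted(allowed))}
    return 200, {}
```

The nice property is that the "button press" that moves the competition between phases only has to update one piece of state; every homework resource's behavior changes as a side effect, which matches the second workflow Felipe describes.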
I have also some code implemented for that, like: ------------- load competitions: XML/JSON: http://fgaucho.dyndns.org:8080/arena-http/competition JSONP: http://fgaucho.dyndns.org:8080/arena-http/competition/jsonp ------------- load homeworks: XML/JSON: http://fgaucho.dyndns.org:8080/arena-http/homework?competition=PUJCE-09 JSONP: http://fgaucho.dyndns.org:8080/arena-http/homework/jsonp?competition=PUJCE-09 * if you don't include the 'competition' parameter, it will load all homeworks. and I have also some write operations implemented. It is consumed by clients like: http://fgaucho.dyndns.org:8080/arena-dwr/ http://fgaucho.dyndns.org:8080/arena-jsf20 the code is fully available from here: http://kenai.com/projects/puj/pages/Arena-dev but it is still a work in progress... so be nice and tell me what I've missed :) regards, Felipe Gacho On Sun, Feb 21, 2010 at 10:14 AM, Markus Karg <markus.karg@gmx.net> wrote: > Tatu, > > no doubt about that. If your business demands storing a resource, REST will > not prevent you from doing so. But as you said, then this is because of its > resource state, not of its conversational state. The conversational state > still must not get stored on the server. So for the sake of clarity, let's not > further use the carts example, as it would mislead readers. We should find > an example where, undoubtedly, the conversational state will never become part of > the resource state just by redefinition of the business scenario. ;-) > > Regards > > Markus > > From: Tatu Saloranta [mailto:tsaloranta@...] > Sent: Samstag, 20. Februar 2010 20:07 > To: users@... > Subject: Re: [Jersey] RESTful Ordering (was: JAX-RS == REST? Or not? (was > Re: [Jersey] What HATEOAS actually means)) > > On Sat, Feb 20, 2010 at 12:09 AM, Markus Karg <markus.karg@...> wrote: > > Anyways, their storing of carts content on their servers was a business > decision, not a technical one. ;-) > > Agreed. 
But I think it is an interesting case to consider -- I am sure that > persistency (and thus statefulness) of resources is a must for many systems, > and that it is necessary to separate resource state from session/transaction > state. I don't think REST would preclude the former, but at this point I would > not be entirely surprised to be convinced otherwise. :-) > > Actually, come to think of it, I can see why someone might think of a > shopping cart as conversation state... and others as more of a resource. I > guess that is a modeling choice, really. > And also a practical thing: if it is part of the conversation, you can have > multiple concurrent sessions (open a new browser window, get a different > cart); or just a single shared one per account. > > FWIW, for retailers like Amazon this is obviously a must-have feature; not just > the shopping cart but wish lists and such. Although it is good to get immediate > sales, the probability of a later purchase is probably high (I don't know the > ratio, nor could I divulge it if I did -- this is just inferred from public > information :) ). > > -+ Tatu +- > -- ------------------------------------------ Felipe Gacho 10+ Java Programmer CEJUG Senior Advisor
"Roy T. Fielding" wrote: > On Feb 18, 2010, at 10:21 AM, Robert Brewer wrote: > > Eric J. Bowman wrote: > > > I contend that the format of a representation is part of a > > > system's hierarchical data. The proof is right there in curl, > > > which sees no hierarchy when it's saving 'resource' and > > > 'resource.1' to disk, while obviously recognizing .pdf > > > and .html as part of a hierarchy of representations, whether > > > by direct dereferencing or by presence in a Content-Location > > > response header. > > > > That's an odd way to look at it. The "format of a representation > > is part of a system's hierarchical data" if and only if the path > > portion of the URI contains information to that effect. If not, > > it doesn't. The fact that some domains specifically do this does > > not require them all to do so. Curl is suboptimal in this respect > > since it does not differentiate responses based on the query > > string component. > > More to the point, one implementation doesn't "prove" anything. > Let's not miss the forest for the trees, here. When I use curl as an example, it's because what I'm pointing out is accepted behavior that's uniform amongst many clients (in this case, I've never heard anybody say libcurl's URI parser is bugged or broken). Try saving this URI's content to disk using a variety of clients: http://en.wiski.org/date?iso=2010-02-21 WebKit and Opera save based on the <title> content, so they aren't useful examples of generic-client URI-parsing behavior (like curl). IE wants to save 'date' just like curl does, while Firefox wants to save 'date.xhtml' due to some sort of internal Content-Type mapping. No client saving based on request URI (as opposed to <title> content) asks to save 'date?iso=2010-02-21' or in any way preserves the query. All such clients will agree what to do in the presence of a filename extension. So curl's behavior here is not aberrant, but represents the norm. 
Given this norm, my advice about not using query as a substitute for filename extensions stands. One implementation might "disprove" this point, but I am not aware of a single generic URI parser which will "differentiate responses based on the query string component". Arguing that curl is a bad example here is missing the forest for the trees, unless a counter-example is provided which proves the common understanding differs from curl's. -Eric
> >>> I understand that with AtomPub, RFC 5023 specifies that. Call me dumb, but
> >>> what to do if I am not using AtomPub but a self-made service (like a web shop
> >>> application)? How to do it then? For example, if I am writing a web shop, and
> >>> that one allows placing an order using a POST. How to tell a client that it
> >>> shall use that POST? I mean, *where* to put that information in a technical
> >>> sense?
> >>>
> >> In the specification of the media type (or link relation etc.) you are using.
> >>
> >> If you do not have something like that, something that specifies the semantics of
> >> your application[1], nobody can code a client that can interact with the
> >> application. Unless the client is ultimately driven by a human.
> >
> > Ok, so let me provide an example: The media type used is based on XML, and I
> > am using XSL to define its structure
>
> What do you mean by that? Ah, you meant XSD, yes? Typo, sorry. Yes, obviously I meant XSD (I am using XSL a lot so my fingers type "L" automatically). > > (a special, proprietary MIME type, e.g. > > "application/inspectionplan+xml"). Maybe my XSD knowledge is not good enough, > > but how do I define the http method now to be used for a particular > > operation? I mean, actually, not theoretically, in machine-readable form, > > not in human-readable specs? > > You do not need a machine readable form. If the specification is only human readable, I do not see much of a benefit in HATEOAS at all: If the client will only be able to communicate with my particular implementation of a web shop (for example), in contrast to working with *all* kinds of shops on the basis of some generic machine-readable standard business vocabulary, then HATEOAS will not work. Why? Because HATEOAS implies that one (even a machine) can browse to *any* URI provided in the response of the previous call. 
So that is not guaranteed now, as it will be impossible to redirect the client to a foreign site if I (the server vendor providing the URI) cannot guarantee that every client knows about the particular use of http methods by that referenced site. So to make the complete system of a "machine web" work, I would have to provide information on virtually every external service to virtually every client. How should I know of either? This is impossible. What would solve the problem would be a machine-readable form that defines, in a unique syntax, which methods have which meaning. Such descriptions exist, like WADL. So what is missing is two things: (a) WADL must be a standard, and (b) it must be standardized where and how to get the WADL. If both are provided, it won't be any problem anymore for *any* client to know how to interpret the workflow's instruction "order this car": if it understands the MIME type "application/car", it gets the HATEOAS links out of its entity, gets the URI of the "order"-titled link, and gets (from the WADL) the mapping of "order" to "POST", for example. No human would be involved in this process, and no human-readable specification is needed. The only thing a human must tell the client is that it is part of a supply chain management solution and shall issue "order" on a provided URI (here: the car). If you just rely on defining human-readable specifications, you just move the human to an early stage, but you don't get rid of him. But actually, the difference between the WWW and REST (shortly spoken) is that the former is about humans, while the latter, in the sense we discuss it here, is about machines. So to make it work completely (not just in part) and globally, we need to get rid of the human and find ways to make machines understand machines without the need to manually define each single interconnection type, and step forward to global standards. 
AtomPub is not a solution, as it is only defined for publishing, while we need something on a higher level, describing *any* kind of business interaction. Regards Markus
On Feb 21, 2010, at 2:23 PM, Markus KARG wrote: > > If the specification is only human readable, I do not see much of a benefit > in HATEOAS at all: If the client will only be able to communicate with my > particular implementation of a web shop (for example), Right - that is why the goal should obviously be to share a 'shopping' media type across many shopping services, just as AtomPub provides media types for implementing publishing services. > in contrast to working > with *all* kinds of shops on the basis of some generic machine-readable > standard business vocabulary, then HATEOAS will not work. > Why? Because > HATEOAS implies that one (even a machine) can browse to *any* URI provided > in the response of the previous call. So that is not guaranteed now, as it > will be impossible to redirect the client to a foreign site if I (the > server vendor providing the URI) cannot guarantee that every client knows > about the particular use of http methods by that referenced site. So to make > the complete system of a "machine web" work, I would have to provide > information on virtually every external service to virtually every client. > How should I know of either? This is impossible. > > What would solve the problem There is actually no problem - you just need an "evolving set of standard types" (to paraphrase Roy). > [..] > > If you just rely on defining human-readable specifications, you just move the > human to an early stage, but you don't get rid of him. It is impossible to get rid of the human, otherwise we would be talking AI. > But actually, the > difference between WWW and REST (shortly spoken) is that the former is about > humans, while the latter -in the sense we discuss it here- is about > machines. So to make it work completely (not just in part) and globally, we > need to get rid of the human and find ways to make machines understand machines > without the need to manually define each single interconnection type, but > step forward to global standards. 
> AtomPub is not a solution, as it is only defined for publishing, while we need something on a higher level, > describing *any* kind of business interaction. At some point you need to bind your implementation to domain semantics, no matter how many general layers you use. For example, the only difference between <person> <father href=""/> </person> and <person> <link rel="father" href=""/> </person> is the layer at which the 'father' semantic is defined (XML semantics vs. generic link relation semantics). You cannot make software figure out what 'father' means - you MUST hardcode that at some point. Jan > > Regards > Markus > ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
> > If the specification is only human readable, I do not see much of a benefit
> > in HATEOAS at all: If the client will only be able to communicate with my
> > particular implementation of a web shop (for example),
>
> Right - that is why the goal should obviously be to share a 'shopping'
> media type across many shopping services, just as AtomPub provides
> media types for implementing publishing services. In a pragmatic sense I agree, but in the sense of why the WWW is so great I disagree: The long-term solution should be a machine-readable declaration, so nobody would need to agree upon which types of businesses there will be. I mean, actually I don't provide a shop but a very specialized application. I don't see that my few competitors would like to find a common API. But I do see that they would agree upon something like WADL. > > in contrast to working with *all* kinds of shops on the basis of some generic machine-readable > > standard business vocabulary, then HATEOAS will not work. > > Why? Because HATEOAS implies that one (even a machine) can browse to *any* URI provided > > in the response of the previous call. So that is not guaranteed now, as it > > will be impossible to redirect the client to a foreign site if I (the > > server vendor providing the URI) cannot guarantee that every client knows > > about the particular use of http methods by that referenced site. So to make > > the complete system of a "machine web" work, I would have to provide > > information on virtually every external service to virtually every client. > > How should I know of either? This is impossible. > > > > What would solve the problem > > There is actually no problem - you just need an "evolving set of > standard types" (to paraphrase Roy). Well, actually there IS a problem, as for many businesses it is impossible to define a common methodology (even a common vocabulary). Also, this leads to a plethora of business objects. 
I think what Roy actually meant was not the idea that everybody should do his own type and declare it as a standard, but more that the world negotiates upon *standard* types like Dublin Core. Yes, that would definitively be a solution, but I do not see that it actually is happening. > > If you just rely on defining human-readable specifications, you just > move the human to an early stage, but you don't get rid of him. > > It is impossible to get rid of the human otherwise we would be talking > AI. No, you misunderstood. I am not talking about replacing the human in the workflow sense, but only that I want to replace him at the point where he must read a specification, as this thread is solely about http methods! So if there were a machine-readable specification of how to find out about http methods, that would not imply any artificial intelligence, but solely solve one particular technical problem -- the one this thread is about. > > But actually, the > difference between WWW and REST (shortly spoken) is that the former is > about humans, while the latter -in the sense we discuss it here- is about > machines. So to make it work completely (not just in part) and > globally, we need to get rid of the human and find ways to make machines understand > machines without the need to manually define each single interconnection type, > but step forward to global standards. AtomPub is not a solution, as it > is only defined for publishing, while we need something on a higher level, > describing *any* kind of business interaction. > > At some point you need to bind your implementation to domain semantics, > no matter how many general layers you use. For example, the only > difference between I do not see that the selection of one http method is part of the domain. 
"buy", "sell", "publish", "payoff" certainly are domain verbs, but the binding of "sell" to "POST" is not, at least in my understanding, as exactly that can be solved simply by some kind of machine-readable contract. > <person> > <father href=""/> > </person> > > and > > <person> > <link rel="father" href=""/> > </person> > > is the layer at which the 'father' semantic is defined (XML semantics > vs. generic link relation semantics. You cannot make software figure > out what 'father' means - you MUST hardcode that at some point. You don't get the point of this discussion. I did nither ask nor say that a machine shall find out what "father" means, but that I like the machine to find out which http method to use to follow the link. Please don't imply more general ideas into my question than I actually asked. ;-) In fact, you need to hardcode what "father" means, but that is part of the business domain model, while the binding of "father --> GET" is part of the technical realization. And I am talking solely of the latter. I don't want to write a software that can do anything. I just want to write one that ONCE I MADE IT UNDERSTOOD WHAT A FATHER IS is able to learn about the ways to find the father on a fully automated way. And with WADL for example that would be possible.
On Feb 21, 2010, at 3:08 AM, Eric J. Bowman wrote: > "Roy T. Fielding" wrote: > >> On Feb 18, 2010, at 10:21 AM, Robert Brewer wrote: >>> Eric J. Bowman wrote: >>>> I contend that the format of a representation is part of a >>>> system's hierarchical data. The proof is right there in curl, >>>> which sees no hierarchy when it's saving 'resource' and >>>> 'resource.1' to disk, while obviously recognizing .pdf >>>> and .html as part of a hierarchy of representations, whether >>>> by direct dereferencing or by presence in a Content-Location >>>> response header. >>> >>> That's an odd way to look at it. The "format of a representation >>> is part of a system's hierarchical data" if and only if the path >>> portion of the URI contains information to that effect. If not, >>> it doesn't. The fact that some domains specifically do this does >>> not require them all to do so. Curl is suboptimal in this respect >>> since it does not differentiate responses based on the query >>> string component. >> >> More to the point, one implementation doesn't "prove" anything. > > Let's not miss the forest for the trees, here. When I use curl as an > example, it's because what I'm pointing out is accepted behavior that's > uniform amongst many clients (in this case, I've never heard anybody say > libcurl's URI parser is bugged or broken). Does it work for your use case? No. Then it is bugged or broken. It is accepted behavior that many clients ignore the media type when handling text/plain content. Is that bugged or broken? Yes. Browsers also typically respect content-disposition, if provided, since their save operation occurs after reading the message, whereas command-line tools do not. There are many known problems in existing tools that the authors refuse to fix because the fix would be incompatible with existing output. If you don't like curl's behavior, use wget (which does save the query info as the filename). 
Regardless, it is a fact that the distinction between path and query in the URI syntax has nothing to do with how a service is implemented or whether a given representation of a resource can be saved as a file. Likewise, the only relevant distinction between file extensions and query-based media-type indicators is that the latter are too long and far more likely to cause undesirable cache impacts (many general-purpose caches refuse to store responses that look like queries because, historically, they tend to have lower hit rates than simple paths). That's why I always prefer extensions. Neither choice has anything to do with REST. ....Roy
Hi all,
I am looking to start messing around with HATEOAS a bit more with my
java/jersey stuff. I am not quite sure I understand the flow of things
tho. If I publish a URI to my service I
have something like http://myservice.com/webcart as the URI. What is
the first call done.. a GET or an OPTION in order to get the possible
URIs to call next? I am trying to figure out how I start up my service
SDK that I'll publish on my site to help developers get started using
my public service.
Regardless.. I assume the response would be some sort of list of links
that can be called? I was originally turned on to this notion of using
something like as a response body:
<links>
<link rel="self" href="http://myservice.com"/>
<link rel="edit" href="http://myservice.com"/>
</links>
I am not quite sure tho how to specify this in the response. I know
some are talking about using the Links header..but from what I gather
it's not going to be standard until HTML5, which who knows when that
will come out.
As well, I am not quite sure how you specify the methods that can be
used.. GET, POST, PUT and DELETE. Does my SDK doc say "if rel is EDIT
then you can PUT to that URI. If it is SELF then you can only GET to
that URI"? or is there another attribute that lists the allowed methods on the URI of each link? I guess I am looking for a little help in understanding how
best to describe this HATEOAS API to other developers that will consume
it.. how to get them started, and how to explain each URI they can use
at any given point and what methods they can use on the URI, what
params, what body, etc.
In the case of a DELETE, if it succeeds.. it returns a 204 No Content.
How then do you respond with any sort of links that it can do at that
point? If it can't return any body, I can't return any <links>
with it. I think we "can" fudge it.. but then I would guess it wouldn't
be RESTful to provide any sort of body with a No Content response.
Maybe Delete doesn't have to return 204 to indicate a success?
So what are you all that have provided a HATEOAS API been doing?
Thank you.
On Feb 21, 2010, at 11:32 PM, Kevin Duffey wrote:

> Hi all,
>
> I am looking to start messing around with HATEOAS a bit more with my java/jersey stuff.

Funny, I am just reading this[1] post about the hypermedia constraint, maybe that helps.

> I am not quite sure I understand the flow of things tho. If I publish a URI to my service I have something like http://myservice.com/webcart as the URI. What is the first call done..

It is always a GET on the URI of the entry URI the client has. This will bring the client into the corresponding entry state. Note that there might be more entry states, there is no need to limit that to a single one. In fact, any bookmarkable state can be considered an entry state. For example, when you are being sent a link to a book on Amazon, you GET it with your browser and you are right in an entry state of the Amazon shopping application.

> a GET or an OPTION in order to get the possible URIs to call next?

The next possible states will always be in your current state in the form of hypermedia controls (links, forms). When the client understands (== implements) the media type, it will understand the meaning of these transitions. (If it doesn't, it'll have to give up or try something else.)

> I am trying to figure out how I start up my service SDK that I'll publish on my site to help developers get started using my public service.

Do not hide the HTTP interface inside an SDK. This will only obfuscate the essence of the hypermedia constraint. Specify your media type or media type extensions (reuse standard types as much as possible, of course) and that is all the client developer needs to know.

> Regardless.. I assume the response would be some sort of list of links that can be called?

Do not approach this too 'technically'. The links can be anywhere in your hypermedia. They could be plain text, XML elements, generic link elements or forms (or take some exotic form [2]). Have a look at OpenSearch[3] or AtomPub (RFC5023) for 'learning'.

> I was originally turned on to this notion of using something like as a response body:
>
> <links>
> <link rel="self" href="http://myservice.com"/>
> <link rel="edit" href="http://myservice.com"/>
> </links>

Can be like this, but absolutely need not be.

> I am not quite sure tho how to specify this in the response. I know some are talking about using the Links header..but from what I gather it's not going to be standard until HTML5, which who knows when that will come out.

(The Link header is orthogonal to HTML5. Consider it standard right now. It has already been in an earlier version of HTTP, too.)

> As well, I am not quite sure how you specify the methods that can be used..

Write it up in the specification of your hypermedia controls (see my last posting on that, too).

> GET, POST, PUT and DELETE. Does my SDK doc say "if rel is EDIT then you can PUT to that URI.

Yes, but not your SDK doc but the media type (or link relation) specification. Just like HTML does for <a>, <img> or <form>.

> If it is SELF then you can only GET to that URI"? or is there another attribute that lists the allowed methods on the URI of each link? I guess I am looking for a little help in understanding how best to describe this HATEOAS API to other developers that will consume it.. how to get them started, and how to explain each URI they can use at any given point and what methods they can use on the URI, what params, what body, etc.

Just do the equivalent of AtomPub or OpenSearch for what you need your service to enable.

> In the case of a DELETE, if it succeeds.. it returns a 204 No Content.

Maybe. Or a 303 See Other, or a 202 Accepted ...

> How then do you respond with any sort of links that it can do at that point? If it can't return any body, I can't return any <links> with it.

Right. The point you are (correctly) making here is what to consider the steady state after a 204. My personal opinion is that the steady state remains the one you were in before. If that had changed, the server should have sent you a 303 See Other to reload that state. (There is a lot to be leveraged from the HTTP status codes themselves!)

> I think we "can" fudge it.. but then I would guess it wouldn't be RESTful to provide any sort of body with a No Content response. Maybe Delete doesn't have to return 204 to indicate a success?
>
> So what are you all that have provided a HATEOAS API been doing?

Enjoy your travel towards the next hypermedia constraint[4] 'aha-moment'. It will surely come.

Jan

> Thank you.

[1] http://tech.groups.yahoo.com/group/rest-discuss/message/8377
[2] http://www.nordsc.com/blog/?p=293#cbcID-tweak
[3] http://www.opensearch.org
[4] You see me hammering on the correct term as opposed to the acronym, don't you :-)

-----------------------------------
Jan Algermissen, Consultant
NORD Software Consulting
Mail: algermissen@...
Blog: http://www.nordsc.com/blog/
Work: http://www.nordsc.com/
-----------------------------------
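The 204-versus-303 distinction for DELETE can be sketched on the wire; the URIs below are hypothetical. With a 303, the server explicitly tells the client which state to reload after the delete, instead of leaving it in its previous steady state:

```
DELETE /webcart/items/42 HTTP/1.1
Host: myservice.com

HTTP/1.1 303 See Other
Location: http://myservice.com/webcart
```

The client then GETs the Location URI and receives a fresh representation of the cart, hypermedia controls included; with a plain 204 it would simply stay in the state it was in before the delete.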
On Feb 22, 2010, at 12:06 AM, Jan Algermissen wrote:

> a GET on the URI of the entry URI

Doh - too late :-)

"a GET on the entry URI"

....
This information may give you some ideas on how to approach creating and documenting your REST-ful service API.

http://code.google.com/p/implementing-rest/wiki/RESTAPIRules

mca
http://amundsen.com/blog/
Ok..looking at the OpenSearch stuff.. interesting. So they basically describe the document that can be returned, and each element within it.
Maybe you can shed some light on the media type thing. That still confuses me a bit. I read somewhere that for my java/jersey rest services, I should support a media type like vnd+com.mypackage.myclass+xml and so on. Should each and every method I support for each URL have a specific media type? I've seen a few posts now saying it's all about the media types. Ok.. what does that mean exactly? From the media type can you determine what to call on it? In other words, should I have a specific media type specified for a given resource that handles a POST for a single item, and a different media type for a POST that handles multiple items (for example, posting a new item to a web cart, or posting the entire web cart at one shot to "check out")?
Along the same lines, the OpenSearch seems to be a single resource. So I suppose I would have a document for each resource that indicates the elements that can come back in a response.. and in some cases some elements may not be part of the response. In the case of one resource call that responds with a URI to another resource, that would be documented in some manner as well... not sure how exactly.
Hi everyone,

I have a functionality that I would like to describe in a REST fashion. Because it is effectively an idempotent transform of input to output, I would usually describe it as Resources with addresses like http://example.dom/transformed_result?input1=1&input2=y or some other equivalent URL scheme, and use GET.

Except in this case, my input is a large text (csv) that can be passed as POST input rather than GET. What are my options?

The first thing that comes to mind is uploading the csv creating a new resource in the process and GETting another resource that holds the result of the transform. Only that I never need the created csv resource again and that it will be multiple HTTP requests for what warrants in my opinion a single one.

Any help is appreciated.

Regards,

Muhammad Alkarouri
>>>>> "malkarouri" == malkarouri <malkarouri@...> writes:
malkarouri> Only that I never need the created csv resource again
malkarouri> and that it will be multiple HTTP requests for what
malkarouri> warrants in my opinion a single one.
So that is clearly useless. Just use a POST.
--
Cheers,
Berend de Boer
On Feb 22, 2010, at 1:45 AM, Kevin Duffey wrote:

> Ok..looking at the OpenSearch stuff.. interesting. So they basically describe the document that can be returned, and each element within it.
>
> Maybe you can shed some light on the media type thing. That stills confuses me a bit. I read somewhere that for my java/jersey rest services, I should support a media type like vnd+com.mypackage.myclass+xml and so on. Should each and every method I support for each URL have a specific media type?

Huh? No, why do you think so? Just ask yourself: When I receive a response of type application/opensearchdescription+xml, what possibilities are there for next state transitions? Hint: look at the spec of that media type.

> I've seen a few posts now saying it's all about the media types. Ok.. what does that mean exactly? From the media type can you determine what to call on it?

Exactly. How does your browser implementor know what to do and what the method is if a user clicks on an <a> link? How does she know what to do and what the method is in the case of an HTML <form> submission? Check the section "Protocol Operations" in RFC5023 (AtomPub).

> In other words, should I have a specific media type specified for a given resource that handles a POST for a single item, and a different media type for a POST that handles multiple items (for example, posting a new item to a web cart, or posting the entire web cart at one shot to "check out")?

The media type does not describe the resource of the next request. It just describes how to interpret the representation that the client holds *now*. The meaning of the links is what is being described.

> Along the same lines, the OpenSearch seems to be a single resource.

??? That sentence does not make sense.

> So I suppose I would have a document for each resource that indicates the elements that can come back in a response..
> and in some cases some elements may not be part of the response.
> In the case of one resource call that responds with a URI to another resource, that would be documented in some manner as well... not sure how exactly.

Again, look at AtomPub <service> documents and OpenSearch description (not the search result) documents. What do they tell you?

Jan

-----------------------------------
Jan Algermissen, Consultant
NORD Software Consulting
Mail: algermissen@...
Blog: http://www.nordsc.com/blog/
Work: http://www.nordsc.com/
-----------------------------------
On Feb 21, 2010, at 1:15 PM, malkarouri wrote:

> Hi everyone,
>
> I have a functionality that I would like to describe in a REST fashion. Because it is effectively an idempotent transform of input to output, I would usually describe it as Resources with addresses like http://example.dom/transformed_result?input1=1&input2=y or some other equivalent URL scheme, and use GET.
> Except in this case, my input is a large text (csv) that can be passed as POST input rather than GET. What are my options?

Use POST. It is the more natural solution.

POST /transformer
Content-Type: text/csv

[csv input]

200 OK
Content-Type: text/csv

[transformed output]

If you want to keep the output around on the server, create a new resource and return 201 instead.

Jan

> The first thing that comes to mind is uploading the csv creating a new resource in the process and GETting another resource that holds the result of the transform. Only that I never need the created csv resource again and that it will be multiple HTTP requests for what warrants in my opinion a single one.
>
> Any help is appreciated.
>
> Regards,
>
> Muhammad Alkarouri

-----------------------------------
Jan Algermissen, Consultant
NORD Software Consulting
Mail: algermissen@...
Blog: http://www.nordsc.com/blog/
Work: http://www.nordsc.com/
-----------------------------------
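The POST exchange Jan describes can be sketched as a minimal WSGI app. The /transformer path is his, but the transform body here (uppercasing every field) is a hypothetical placeholder standing in for the real one:

```python
# Minimal WSGI sketch of a POST-based transform endpoint.
# transform_csv is a hypothetical placeholder for the real transform.
import io

def transform_csv(text):
    # Placeholder transform: uppercase every field of every line.
    return "\n".join(",".join(field.upper() for field in line.split(","))
                     for line in text.splitlines())

def app(environ, start_response):
    if environ["REQUEST_METHOD"] == "POST" and environ["PATH_INFO"] == "/transformer":
        length = int(environ.get("CONTENT_LENGTH") or 0)
        body = environ["wsgi.input"].read(length).decode("utf-8")
        start_response("200 OK", [("Content-Type", "text/csv")])
        return [transform_csv(body).encode("utf-8")]
    start_response("404 Not Found", [("Content-Type", "text/plain")])
    return [b"not found"]
```

The response carries the transformed output directly, so one HTTP request suffices; returning 201 with a Location header would be the variant where the server keeps the result around as a new resource.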
Volume 4 of This week in REST is up on the REST wiki - http://rest.blueoxen.net/cgi-bin/wiki.pl?RESTWeekly_Feb_15_2010 and the blog - http://bit.ly/axW0Gn For contributing links this week visit http://rest.blueoxen.net/cgi-bin/wiki.pl?RESTWeekly_Feb_22_2010 Enjoy! Ivan
On Feb 22, 2010, at 10:03 AM, izuzak wrote:

> Volume 4 of This week in REST is up on the REST wiki -
> http://rest.blueoxen.net/cgi-bin/wiki.pl?RESTWeekly_Feb_15_2010
> and the blog - http://bit.ly/axW0Gn
>
> For contributing links this week visit
> http://rest.blueoxen.net/cgi-bin/wiki.pl?RESTWeekly_Feb_22_2010
>
> Enjoy!

Thanks! Keep up the good work.

Jan

> Ivan

-----------------------------------
Jan Algermissen, Consultant
NORD Software Consulting
Mail: algermissen@...
Blog: http://www.nordsc.com/blog/
Work: http://www.nordsc.com/
-----------------------------------
Hello,

Alan Dean wrote:
> Quite some time ago I wrote a little sample app, based mostly on an
> article by Paul James [1] which was a simple digest auth implementation.
> Now perhaps my memory has gone soggy, but I'm sure that there was a
> workaround to avoid the annoying pop-up dialog being displayed
> (something like you had to make sure that the response length was at
> least 420 characters).
>
> Well, I just ran the sample again and for the life of me I can't
> suppress the dialog (in order to replace it with something prettier that
> uses AJAX to carry out the actual auth handshake). Has there been fix
> work done on recent browser versions to stop this end-around? If so (or
> even if not, for that matter) does anyone have a way of doing it?
>
> Thanks,
> Alan
>
> [1] http://www.peej.co.uk/articles/http-auth-with-html-forms.html

Are you trying to access the protected resource directly first? I suppose in this case, it will return a 401 status with a WWW-Authenticate header (basic or digest) which will trigger the browser popup anyway, unless the initial request was made with XMLHttpRequest AND with username/password (at least according to the draft spec [1], depending on its implementation status).

There's a point about this trick I'm not clear about: how to get the AJAX request to use Digest and not Basic auth? The XMLHttpRequest draft spec is silent on this issue. If the authentication is pre-emptive (that is, if it's sent with the first request, not as a response to a 401/WWW-Authenticate challenge), which one is it meant to choose? It looks like it's Basic by default.

I guess a workaround for this would be to make two requests:

C -> S: Request with something like Basic null:null (like the logout function in Paul James's example service [2])
S -> C: 401 with WWW-Authenticate Digest
C -> S: another Ajax request with username:password, assuming that the browser's mechanism has retained the digest parameters sent by the previous response.

I can't really see how it's possible without an initial dummy request. You could try sending the Authorization header via XHR manually, but (a) your server would need to support pre-emptive Digest authentication and (b) that username/password would need to integrate with the browser's Digest auth mechanism for subsequent requests. I reckon the only way of doing (b) is to let the browser know the username and password the user typed in (perhaps via XHR, but again, XHR doesn't seem to let you specify Basic or Digest, pre-emptively at least).

(As a side-note, Paul James's example service returns a 401 status code without a WWW-Authenticate header when replying to a "logout" (via null:null) request, which isn't compliant with the HTTP spec; I guess it's just a small bug, since in principle, doing so shouldn't trigger a new popup box.)

Best wishes,

Bruno.

[1] http://www.w3.org/TR/XMLHttpRequest/#the-send-method
[2] http://www.peej.co.uk/sandbox/htmlhttpauth/index.html
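For reference, the Digest response value that the browser (or a hand-built Authorization header sent via XHR) would have to produce is the RFC 2617 qop=auth computation. A sketch, which can be checked against the worked example in RFC 2617 section 3.5:

```python
# RFC 2617 Digest response with qop=auth:
# response = MD5( MD5(user:realm:password) : nonce : nc : cnonce : qop : MD5(method:uri) )
import hashlib

def _md5(s):
    return hashlib.md5(s.encode("iso-8859-1")).hexdigest()

def digest_response(user, realm, password, method, uri, nonce, nc, cnonce, qop="auth"):
    ha1 = _md5(f"{user}:{realm}:{password}")   # HA1: credentials hash
    ha2 = _md5(f"{method}:{uri}")              # HA2: request hash
    return _md5(f"{ha1}:{nonce}:{nc}:{cnonce}:{qop}:{ha2}")
```

Feeding in the sample values from RFC 2617 (user "Mufasa", realm "testrealm@host.com", password "Circle Of Life", etc.) reproduces the spec's expected response digest, which is one way to sanity-check a server-side implementation of the challenge handling discussed above.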
Hi,
I have the following questions on the implementation of a REST API:
1.
I have a resource which can be used like:
(a) GET resource/{id}
but it would be useful, to save round trips, to be able to invoke it with
several ids at once:
(b) GET resource?id={id1}&id={id2}... or
GET resource/ids;id1;id2... or
GET resource/id1/id2... (BTW is this URL template acceptable? The
ids are not in a hierarchy as the slashes may suggest, but I've seen it used
elsewhere).
This falls in the realm of REST + batch operations but I don't know what is
the best solution according to the REST architectural style.
From what I've read some people are of the opinion that (a) should be used
with HTTP pipelining for batch operations while others think that the
solution (b) is
equally acceptable since this is a resource that happens to take multiple
query/path/matrix parameters.
2.
I have another resource which is an algorithm that takes as input:
- a list of parameters [x1, x2, ...]
- another list of parameters [y1, y2, ...]
- some other optional arguments
My initial attempt is very RPC-style since I just implemented a resource
that takes the input as:
GET resource?x=x1&x=x2&...&y=y1&y=y2...&{rest of the optional arguments}
One possible solution is to define a resource that accepts a POST with a
representation of the input of the algorithm (say in JSON) and returns
the HTTP status code 201 and another resource which can be invoked with GET.
However this may complicate the server-side implementation since I have to
maintain state and decide how long the resource created by the POST will
be available, etc...
In fact, if all the client wants is to invoke an algorithm, why would he do
it in 2 steps instead of just one? So this solution doesn't appeal to me
very much.
How can I make the resources 1. and 2. above RESTful?
Thanks,
Dário
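The repeated-parameter form in option (b) is easy to build portably with a standard URI library. Here is a minimal Python sketch; `resource`, `id`, `x` and `y` are just the placeholder names from the question, not a real API:

```python
from urllib.parse import urlencode

# Option (b): repeat the query parameter once per id.
ids = ["id1", "id2", "id3"]
query = urlencode({"id": ids}, doseq=True)
print("resource?" + query)  # resource?id=id1&id=id2&id=id3

# The same mechanism covers question 2, which takes two lists of
# parameters plus optional arguments.
query2 = urlencode({"x": ["x1", "x2"], "y": ["y1", "y2"], "opt": "v"},
                   doseq=True)
print("resource?" + query2)  # resource?x=x1&x=x2&y=y1&y=y2&opt=v
```

The `doseq=True` flag makes `urlencode` emit one `key=value` pair per list element, which is exactly the repeated-parameter URL form.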
I'm trying to implement a RESTful API. What happens if I have the following URLs:
1 - http://localhost/api/Product/1/Category
2 - http://localhost/api/Category/8
and this is the response in both cases:
<category>
<category-name>Category1</category-name>
<created-at type="datetime">2010-02-15T15:38:30Z</created-at>
<description>Category Description</description>
<id type="integer">1</id>
<updated-at type="datetime">2010-02-15T15:38:30Z</updated-at>
</category>
Is it valid? In that case, should I use Content-Location in order to show the alternative URLs?
Thanks in advance.
Regards.
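One hedged sketch of what a Content-Location answer to the question above could look like, as a minimal WSGI handler. The two paths mirror the question; everything else (the XML payload, the choice of canonical URI) is illustrative:

```python
# Minimal WSGI sketch: both URI paths from the question serve the same
# category representation, and Content-Location names the canonical URI.
CATEGORY_XML = b'<category><id type="integer">8</id></category>'

def app(environ, start_response):
    path = environ.get("PATH_INFO", "")
    if path in ("/api/Product/1/Category", "/api/Category/8"):
        start_response("200 OK", [
            ("Content-Type", "application/xml"),
            # Hint to clients which URI this representation canonically lives at.
            ("Content-Location", "/api/Category/8"),
        ])
        return [CATEGORY_XML]
    start_response("404 Not Found", [("Content-Type", "text/plain")])
    return [b"not found"]
```

Either path returns the same body, but the header tells the client where the "original" resource is.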
>>>>> "Dário" == Dário Abdulrehman <dario.rehman@...> writes:
Dário> but it would be useful, to save round trips, to be able to
Dário> invoke it with several ids at once:
I think what you should become clear about is: is this a resource?
It probably is. I.e. a subset of the results. So what you pass in is a
filter that identifies the subset.
So how you solve that is perhaps not so important. But it helps if you
come up with some URL for a subset/filter. Perhaps:
/resource/subset/?id=1&id=2
So I think that fits in well.
Don't think that REST architectural style forces you to do weird
things like using POSTs to create such resources.
--
Cheers,
Berend de Boer
Hello Dário,

Just be careful with URI templates... remember to stick to as few entry points as possible.

The POST/201+GET solution makes sense but, as Berend mentioned, is not required.

Regards

Guilherme Silveira
Caelum | Ensino e Inovação
http://www.caelum.com.br/

On Mon, Feb 22, 2010 at 3:32 PM, <berend@...> wrote:

> >>>>> "Dário" == Dário Abdulrehman <dario.rehman@...> writes:
>
> Dário> but it would be useful, to save round trips, to be able to
> Dário> invoke it with several ids at once:
>
> I think what you should become clear about is: is this a resource?
>
> It probably is. I.e. a subset of the results. So what you pass in is a
> filter that identifies the subset.
>
> So how you solve that is perhaps not so important. But it helps if you
> come up with some URL for a subset/filter. Perhaps:
>
> /resource/subset/?id=1&id=2
>
> So I think that fits in well.
>
> Don't think that REST architectural style forces you to do weird
> things like using POSTs to create such resources.
>
> --
> Cheers,
>
> Berend de Boer
Hello there,

You are right, a resource representation does not need to be uniquely mapped to a URI. The response header can be used to either redirect or give the client a hint where to go for the "original" one.

Regards

Guilherme Silveira
Caelum | Ensino e Inovação
http://www.caelum.com.br/

On Mon, Feb 22, 2010 at 3:10 PM, cosme.perez82 <cosme.perez82@yahoo.com> wrote:

> I'm trying to implement a RESTful API. What happened if I have these
> following URLs:
>
> 1 - http://localhost/api/Product/1/Category
> 2 - http://localhost/api/Category/8
>
> and this is the response in both cases:
>
> <category>
> <category-name>Category1</category-name>
> <created-at type="datetime">2010-02-15T15:38:30Z</created-at>
> <description>Category Description</description>
> <id type="integer">1</id>
> <updated-at type="datetime">2010-02-15T15:38:30Z</updated-at>
> </category>
>
> Is it valid? If I have that case, should I use Content-Location?, in order
> to show the alternatives URLs.
>
> Thanks in advance.
> Regards.
One more question:
I understand that one of the principles of RESTful design is to return links
for other resources in the responses.
Let's say I have a resource A that returns a response which contains an idA.
I know that probably some users will be satisfied with idA but others will
want to get further information on idA by invoking another resource B.
From what I've read I have no question in my mind that the URI for resource
B/idA should be returned in the response for resource A.
However, should I also return idA in that response to save a round trip for
the users that are satisfied with idA, like so (assuming a JSON media type):
{"name": "idA", "link": "GET /B/idA"}
or should I just return
{"link": "GET /B/idA"} ?
Thanks.
On Mon, Feb 22, 2010 at 6:41 PM, Guilherme Silveira <
guilherme.silveira@....br> wrote:
> Hello Dario,
>
> Just be careful with URI templates... remember to stick to as few entry
> points as possible
>
> The POST/201+GET solution makes sense but as Berend mentioned, not
> required.
>
>
> Regards
>
> Guilherme Silveira
> Caelum | Ensino e Inovação
> http://www.caelum.com.br/
>
>
> On Mon, Feb 22, 2010 at 3:32 PM, <berend@...> wrote:
>
>>
>>
>> >>>>> "Dário" == Dário Abdulrehman <dario.rehman@...> writes:
>>
>> Dário> but it would be useful, to save round trips, to be able to
>> Dário> invoke it with several ids at once:
>>
>> I think what you should become clear about is: is this a resource?
>>
>> It probably is. I.e. a subset of the results. So what you pass in is a
>> filter that identifies the subset.
>>
>> So how you solve that is perhaps not so important. But it helps if you
>> come up with some URL for a subset/filter. Perhaps:
>>
>> /resource/subset/?id=1&id=2
>>
>> So I think that fits in well.
>>
>> Don't think that REST architectural style forces you to do weird
>> things like using POSTs to create such resources.
>>
>> --
>> Cheers,
>>
>> Berend de Boer
>>
>>
>
>
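The question above about returning both idA and a link can be sketched concretely. Field names here are the poster's own and purely illustrative; note also that a link in a representation is usually just a URI, since the method (GET) comes from HTTP's uniform interface rather than from the link itself:

```python
import json

# Both the value and the link: clients satisfied with idA stop here,
# others follow the link for more detail. Field names are illustrative.
with_value = {"name": "idA", "link": "/B/idA"}

# Link only: forces a second request even on clients that only
# wanted idA.
link_only = {"link": "/B/idA"}

print(json.dumps(with_value))  # {"name": "idA", "link": "/B/idA"}
print(json.dumps(link_only))   # {"link": "/B/idA"}
```

Including the value alongside the link costs a few bytes and serves both kinds of client without an extra round trip.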
> In reality of course, this shouldn't really necessitate multiple calls to
> the server if called multiple times since previous results have been cached
> and processed on the client side. Only when the cache expires, should there
> be an attempt to request again. I'm not really sure if RESTful clients that
> respect HATEOAS do it this way, and should they in the first place. If they
> do, are there tools that exist for this?

It is quite unnecessary to treat this as a client problem, whether the system is hypertext based or not. If there is a cache in the reverse proxy on the server side, all the interactions will be seamless to the client. If there is a forward proxy cache on the client network, it will cut down the requests based on the expiry/invalidation policies set by the server or configured in the cache.

BUT, please note that, chaining calls like this is not a good idea since it performs poorly when the caches are cold.

Subbu
Dário,
On Feb 22, 2010, at 6:13 PM, Dário Abdulrehman wrote:
>
>
> Hi,
>
> I have the following questions on the implementation of a REST API:
>
> 1.
> I have a resource which can be used like:
>
> (a) GET resource/{id}
>
> but it would be useful, to save round trips,
Can you explain what the scenario is? It is not a primary goal to save round trips. IOW, just because you can does not mean you should (save round trips).
> to be able to invoke it with several ids at once:
>
> (b) GET resource?id={id1}&id={id2}... or
> GET resource/ids;id1;id2... or
> GET resource/id1/id2... (BTW is this URL template acceptable? The ids are not in a hierarchy as the slashes may suggest, but I've seen it used elsewhere).
>
Use the first one: GET resource?id={id1}&id={id2}.
> This falls in the realm of REST + batch operations but I don't know what is the best solution according to the REST architectural style.
If you really have to do this, the RESTful way is to make a resource that has the semantics of the "bag" (like you did). But beware that the response is a representation of *that* resource.
You might want to look at multipart messages for this. Here is a related experimental I-D:
http://tools.ietf.org/html/draft-snell-http-batch-01
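As a rough illustration of the multipart idea only (the I-D's exact framing and media types may differ; check the draft itself), the Python standard library's email package can package several requests into one multipart/mixed body:

```python
from email.message import EmailMessage

# Sketch: wrap two GET requests as application/http parts of a single
# multipart/mixed message. This only shows the multipart mechanics,
# not the precise wire format of draft-snell-http-batch.
batch = EmailMessage()
for i in (1, 2):
    request = f"GET /resource/id{i} HTTP/1.1\r\nHost: example.org\r\n\r\n"
    batch.add_attachment(request.encode("ascii"),
                         maintype="application", subtype="http")

print(batch.get_content_type())  # multipart/mixed
print(len(batch.get_payload()))  # 2
```

The server would peel off each part, handle it as an ordinary request, and return a matching multipart response, which is where the multi-status complexity discussed later in the thread comes in.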
> From what I've read some people are of the opinion that (a) should be used with HTTP pipelining for batch operations while others think that the solution (b) is
> equally acceptable since this is a resource that happens to take multiple query/path/matrix parameters.
HTTP pipelining is not yet a reality, but theoretically it would be the preferable solution as opposed to a batch retrieval. Your batch would not be visible to caches, for example.
>
> 2.
> I have another resource which is an algorithm that takes as input:
> - a list of parameters [x1, x2, ...]
> - another list of parameters [y1, y2, ...]
> - some other optional arguments
>
> My initial attempt is very RPC-style since I just implemented a resource that takes the input as:
>
> GET resource?x=x1&x=x2&...&y=y1&y=y2...&{rest of the optional arguments}
That is not RPC-style. I see nothing unRESTful in it.
>
> One possible solution is to define a resource that accepts a POST with a representation of the input of the algorithm (say in JSON) and returns
> the HTTP status code 201 and another resource which can be invoked with GET.
This only makes sense if you want to persist the result as a resource. And if you do, use 201, Location and Content-Location so the client does not need the extra GET.
> However this may complicate the server side implementation since I have to maintain state and decide for how long will the resource created by the POST be available, etc...
Yes. If you do not need it, don't do it. The GET is fine.
Jan
> In fact, if all the client wants is to invoke an algorithm, why would he do it in 2 steps instead of just one? So this solution doesn't appeal to me very much.
>
> How can I make the resources 1. and 2. above RESTful?
>
> Thanks,
> Dário
>
>
>
>
-----------------------------------
Jan Algermissen, Consultant
NORD Software Consulting
Mail: algermissen@...
Blog: http://www.nordsc.com/blog/
Work: http://www.nordsc.com/
-----------------------------------
Jan Algermissen wrote:
>
> Use the first one: GET resource?id={id1}&id={id2}.
>
At the risk of being smacked down by Roy again on URI opacity, I point
out that generic URI parsers will return 'id=id2' when presented with
two separate values for the 'id' parameter...
>
> > GET resource?x=x1&x=x2&...&y=y1&y=y2...&{rest of the optional
> > arguments}
>
> That is not RPC-style. I see nothing unRESTful in it.
>
It's a "Matrix URI", as per:
http://www.w3.org/DesignIssues/MatrixURIs.html
I've experimented with this notion, there are plenty of drawbacks to
such a URI allocation scheme, however. The question for Dário is
whether his system can use fragment URIs to achieve the same result.
TBL's design note suggests using Matrix URIs for latitude, longitude
and scale for a map, as an example. SVG implements coordinates and
scale using a fragment URI syntax. Something to consider.
-Eric
On Feb 23, 2010, at 8:49 AM, Eric J. Bowman wrote:
> Jan Algermissen wrote:
>>
>> Use the first one: GET resource?id={id1}&id={id2}.
>>
>
> At the risk of being smacked down by Roy again on URI opacity, I point
> out that generic URI parsers will return 'id=id2' when presented with
> two separate values for the 'id' parameter...
No, there can be multiple parameters with the same name. The value is being returned as a list.
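Jan's point is easy to check against a generic parser. Python's standard library, for instance, keeps every value of a repeated parameter rather than only the last one:

```python
from urllib.parse import parse_qs

# A repeated query parameter comes back as a list of all its values.
params = parse_qs("id=id1&id=id2")
print(params)  # {'id': ['id1', 'id2']}
```

So `resource?id={id1}&id={id2}` is unambiguous to a conforming parser, even though some frameworks expose only the last value by default.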
>
>>
>>> GET resource?x=x1&x=x2&...&y=y1&y=y2...&{rest of the optional
>>> arguments}
>>
>> That is not RPC-style. I see nothing unRESTful in it.
>>
>
> It's a "Matrix URI", as per:
>
> http://www.w3.org/DesignIssues/MatrixURIs.html
>
> I've experimented with this notion, there are plenty of drawbacks to
> such a URI allocation scheme, however. The question for Dario is
> whether his system can use fragment URIs to achieve the same result.
Do you mean URIs with fragment identifiers?
If so: no because the fragment is not being sent to the server.
Jan
>
> TBL's design note suggests using Matrix URIs for latitude, longitude
> and scale for a map, as an example. SVG implements coordinates and
> scale using a fragment URI syntax. Something to consider.
>
> -Eric
-----------------------------------
Jan Algermissen, Consultant
NORD Software Consulting
Mail: algermissen@...
Blog: http://www.nordsc.com/blog/
Work: http://www.nordsc.com/
-----------------------------------
On Mon, Feb 22, 2010 at 11:15 PM, Jan Algermissen
<algermissen1971@...>wrote:
> On Feb 22, 2010, at 6:13 PM, Dário Abdulrehman wrote:
> ...
>
> > to be able to invoke it with several ids at once:
> >
> > (b) GET resource?id={id1}&id={id2}... or
> > GET resource/ids;id1;id2... or
> > GET resource/id1/id2... (BTW is this URL template acceptable? The ids are
> not in a hierarchy as the slashes may suggest, but I've seen it used
> elsewhere).
> >
>
> Use the first one: GET resource?id={id1}&id={id2}.
>
>
> > This falls in the realm of REST + batch operations but I don't know what
> is the best solution according to the REST architectural style.
>
> If you really have to do this, the RESTful way is to make a resource that
> has the semantics of the "bag" (like you did). But beware that the response
> is a representation of *that* resource.
>
> You might want to look at multipart messages for this. Here is a related
> experimental I-D:
> http://tools.ietf.org/html/draft-snell-http-batch-01
>
>
>
Another complexity to consider is, what happens if id1 is a valid identifier
and id2 is not? If you were just GETting id2, the answer would be obvious
... return a 404. But now I need to return multiple statuses (200 for id1
and 404 for id2). Hmm ...
WebDAV deals with this by returning a "multi-status response" and making the
client go through the contortions of interpreting all the response statuses
and matching them up to the original requests. It is technically feasible,
but this is one of the reasons you don't see a very large number of people
writing WebDAV clients :-).
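A hedged sketch of what a multi-status style body for the id1/id2 case could look like in JSON. The shape here is made up purely for illustration; WebDAV itself uses an XML multistatus document and the 207 status code (RFC 4918):

```python
import json

# Toy data store standing in for the real resources.
store = {"id1": {"name": "resource one"}}

def batch_get(ids):
    """Build a per-id status list: 200 with a body if found, else 404."""
    responses = []
    for rid in ids:
        if rid in store:
            responses.append({"id": rid, "status": 200, "body": store[rid]})
        else:
            responses.append({"id": rid, "status": 404})
    return {"responses": responses}

print(json.dumps(batch_get(["id1", "id2"])))
```

The client-side contortions Craig mentions fall out of exactly this: every consumer now has to walk the list and match statuses back to its original ids.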
I would tend to think of your use case more as a "search" rather than a
"batch GET". Think about defining a resource representing the collection of
all your resources, and use query parameters as filter expressions to limit
the results. One advantage of this approach is you are no longer limited to
just filtering based on the identifier ... you could select on other values
as well. And, maybe even throw in support for interpreting an "order by"
parameter for sorting, and maybe even "offset" and "limit" for pagination.
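The filter/sort/paginate reading of the collection can be sketched in a few lines. The parameter names (`order_by`, `offset`, `limit`) and the toy collection are illustrative, not a standard:

```python
from urllib.parse import parse_qs

# Toy collection standing in for the real resources.
COLLECTION = [{"id": f"id{n}", "rank": n} for n in range(1, 6)]

def search(query_string):
    """Filter by id, optionally sort by rank, then paginate."""
    q = parse_qs(query_string)
    wanted = set(q.get("id", []))
    rows = [r for r in COLLECTION if not wanted or r["id"] in wanted]
    if q.get("order_by") == ["rank"]:
        rows.sort(key=lambda r: r["rank"])
    offset = int(q.get("offset", ["0"])[0])
    limit = int(q.get("limit", [str(len(rows))])[0])
    return rows[offset:offset + limit]

print(search("id=id1&id=id3&limit=1"))  # [{'id': 'id1', 'rank': 1}]
```

Because the id filter is just one filter expression among several, the same resource naturally grows to support other selection criteria later.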
If you like the search paradigm, I would also suggest considering the Open
Search API (http://opensearch.org) as an alternative to rolling your own
approach.
Craig McClanahan
Jan Algermissen wrote:
>
> Do you mean URIs with fragment identifiers?
>
> If so: no because the fragment is not being sent to the server.
>

How do we know that the query needs to go to the server? We have no
notion of "resource" going here, media types haven't been considered,
so how can we jump right into designing a URI allocation scheme...?

(More to follow shortly, this thread needs hijacking.)

-Eric
On Feb 23, 2010, at 9:28 AM, Eric J. Bowman wrote:

> Jan Algermissen wrote:
>>
>> Do you mean URIs with fragment identifiers?
>>
>> If so: no because the fragment is not being sent to the server.
>>
>
> How do we know that the query needs to go to the server?

What do you mean? Should the whole collection go to the client before making the selection?

Jan

> We have no
> notion of "resource" going here, media types haven't been considered,
> so how can we jump right into designing a URI allocation scheme...?
>
> (More to follow shortly, this thread needs hijacking.)
>
> -Eric

-----------------------------------
Jan Algermissen, Consultant
NORD Software Consulting
Mail: algermissen@...
Blog: http://www.nordsc.com/blog/
Work: http://www.nordsc.com/
-----------------------------------
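Jan's point about fragments is visible in any URI library: the fragment is split off on the client and never reaches the server (RFC 3986, section 3.5), whereas a query string is part of what gets sent:

```python
from urllib.parse import urldefrag, urlsplit

# The fragment stays on the client; only the part before '#' is sent.
url, fragment = urldefrag("http://example.org/collection#member")
print(url)       # http://example.org/collection
print(fragment)  # member

# A query string, by contrast, does travel to the server.
print(urlsplit("http://example.org/collection?id=1").query)  # id=1
```

This is exactly the dividing line Eric is probing: `?` puts the selection work on the server, `#` leaves it to the client once it holds a representation.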
Dário Abdulrehman wrote:
>
> Hi,
>
> I have the following questions on the implementation of a REST API:
>
Hi Dário. Please don't take my hijacking your thread personally, this
goes to a larger point I've been trying to make of late, that the
starting place is resource modeling, not implementation (URI design).
I'm not criticizing you (or anyone else) or singling you (or anyone
else) out, just tilting at the windmill of changing how REST is taught.
Today, I'm trying to illustrate my point by considering fragment vs.
query string.
>
> 1.
> I have a resource which can be used like:
>
> (a) GET resource/{id}
>
Let's try to model that, first of all, by calling it a collection of
resources with unique IDs, '/collection/{member}' is widely understood.
But at this point I know of no reason it can't be /collection#member.
>
> but it would be useful, to save round trips, to be able to invoke it
> with several ids at once:
>
> (b) GET resource?id={id1}&id={id2}... or
> GET resource/ids;id1;id2... or
> GET resource/id1/id2... (BTW is this URL template acceptable?
> The ids are not in a hierarchy as the slashes may suggest, but I've
> seen it used elsewhere).
>
This is where I get worried every time someone new posts exactly this
sort of question here. We all know how pervasive misconceptions of
REST are. We shouldn't play along with questions regarding URI design
without having any notion of what the resource is.
I'd love to go into resources/subresources and URI syntax, Xpointer,
and other solutions to this -- except I think this would push the thread
starter further along in the wrong direction. I can't tell, because I
don't understand his notion of this resource.
>
> This falls in the realm of REST + batch operations...
>
I don't have enough information about the nature of the resource in
question, to know if that's an accurate assessment. As Jan pointed
out, saving round trips isn't a goal of REST API design. A RESTful API
will be conducive to caching, the side effect of which is saving round
trips.
Without some notion of the nature of the resource, it's impossible to
proceed in a disciplined fashion. Can the data be modeled as SVG, and
if so, can SVG's fragment syntax be utilized to extract subresources
for display? If not, is this a "lightbulb" for developing a new media
type with its own fragment syntax?
>
> From what I've read some people are of the opinion that (a) should be
> used with HTTP pipelining for batch operations while others think
> that the solution (b) is
> equally acceptable since this is a resource that happens to take
> multiple query/path/matrix parameters.
>
The opinions of others are a bad place to learn REST. Roy's thesis has
some nuggets of wisdom in it, though...
"
A distributed hypermedia architect has only three fundamental options:
1) render the data where it is located and send a fixed-format image to
the recipient; 2) encapsulate the data with a rendering engine and send
both to the recipient; or, 3) send the raw data to the recipient along
with metadata that describes the data type, so that the recipient can
choose their own rendering engine.
"
Since we have no notion of "resource" to ground us here, we lack the
ability to choose one of these options. If SVG or some SVG-like
fragment implementation within a media type isn't the solution, then
perhaps approximating the mobile object style by applying the optional
Code on Demand constraint (2) is called for. In which case, parameters
may not show up in the URI at all, query or fragment (still REST).
Resources must be conceptually modeled before any decisions about
system architecture may be effectively made. Craig's response was
sound advice, but rests on an assumption that the resource models out
as something that can be represented as OpenSearch. But, the resource
may not fit that model -- who's to say, from the information we've been
given?
I would suggest that if the purpose is to submit some JSON code to a
resource, then make that code a query string and GET the response
(Craig's right, POST isn't correct), saving a round-trip and enabling
caching. But I have no idea if this is appropriate, vs. using some URI
fragment syntax, since we haven't modeled the resource or considered
any media types, or discussed architectural options like CoD.
I'm up on my high horse on this, because I think when we REST sempai
start winging it by throwing URI patterns around, we only add to the
confusion of the REST kohai by reinforcing whatever misconceptions led
them to design an API by starting with URI allocation scheme in the
first place. Meaning no offense to anyone, I'm guilty of same.
>
> 2.
> I have another resource which is an algorithm that takes as input:
> - a list of parameters [x1, x2, ...]
> - another list of parameters [y1, y2, ...]
> - some other optional arguments
>
Let's try to model that resource, first of all, by calling it a service
endpoint. Then we can get down to the brass tacks of implementation.
>
> How can I make the resources 1. and 2. above RESTful?
>
I have no idea. Even if I did have a better idea of the nature of
these resources, REST is concerned with the interaction between
components in a system via Uniform Connector Interfaces. Analyzing one
resource in isolation won't usually tell you anything about whether the
system is RESTful or not, any more than looking at its URI can.
I can help you model your resources, determine their relations to one
another, and express that using hypertext link relations. I can help
with the selection and/or design of media types. I can help develop an
architecture around those media types and link relations. I can help
design a hypertext engine to drive application state.
What I can't do, is wave my magic guru wand and come up with a URI
allocation scheme describing what methods to use on what resources of
interest, and call it a REST API. I can only develop a REST API *after*
I know the nature of the resources involved, otherwise I'd just be
guessing.
REST development is all about tradeoffs. Disciplined REST development
is based on weighing the pros and cons of architectural choices.
Coming up with a RESTful solution without considering choices and
weighing tradeoffs is possible, but how is the architect then to
evaluate his or her creation against the goals for the system?
Starting with URI design locks the architect into a set of fundamental
design constraints which may be inappropriate for the system. The
choice between query string and URI fragment places a rigid dividing
line between client and server that's difficult, if not impossible, to
change further on down the road.
-Eric
Jan Algermissen wrote:
>
> On Feb 23, 2010, at 9:28 AM, Eric J. Bowman wrote:
>
> > Jan Algermissen wrote:
> >>
> >> Do you mean URIs with fragment identifiers?
> >>
> >> If so: no because the fragment is not being sent to the server.
> >>
> >
> > How do we know that the query needs to go to the server?
>
> What do you mean? Should the whole collection go to the client before
> making the selection?
>

If that architectural choice best fits the goals of the system, then
yes. Either choice may be RESTful, one is not more RESTful than the
other.

The point of applied software architecture, following any given style,
is to provide a context for making implementation decisions. Media type
design impacts URI design in REST. Media type selection/creation does
not follow from URI design -- the cart must come *before* the horse.

Restricting media type choice based on what works with the URI design,
instead of choosing/creating the media type which best fits the data
model, breaks with REST discipline. The resulting system can only be
compared against its own narrow interpretation of REST, which defeats
the purpose of applied software architecture.

Ideally, the architect can evaluate the implementation's design choices
against the entire REST style, as opposed to a preconceived notion of
the style -- where queries must be made against the server, and the pros
and cons of constraining query to the client through use of URI
fragments are not considered. (No offense intended, just illustrating
my point.)

To follow REST, is to use it as your guide when considering tradeoffs,
like where to draw the line between client (#) and server (?) when
modeling a new API. Being RESTful is not the end-all, be-all of API
design -- meeting the goals of the system, is (RESTful or not). One API
may not be more RESTful than another, but only one likely fits best with
the goals of the system. REST is a process, not an outcome.

-Eric
"Eric J. Bowman" wrote:
>
> the cart must come *before* the horse.
>

Wait... that isn't right! Crap.

-Eric
On Feb 23, 2010, at 1:38 PM, Subbu Allamaraju wrote:

> > In reality of course, this shouldn't really necessitate multiple calls to the server if called multiple times since previous results have been cached and processed on the client side. Only when the cache expires, should there be an attempt to request again. I'm not really sure if RESTful clients that respect HATEOAS do it this way, and should they in the first place. If they do, are there tools that exist for this?
>
> It is quite unnecessary to treat this as a client problem, whether the system is hypertext based or not. If there is a cache in the reverse proxy on the server side, all the interactions will be seamless to the client. If there is a forward proxy cache on the client network, it will cut down the requests based on the expiry/invalidation policies set by the server or configured in the cache.
>
> BUT, please note that, chaining calls like this is not a good idea since it performs poorly when the caches are cold.
>

Hm, but the cache expiry states up to when the resource is valid. Sure, chaining calls would perform poorly when the server doesn't instruct the client on how long the results are valid. But, for well-designed REST services, I believe it should be alright. Please let me know if I should think otherwise.

How else would one go about accessing resources using HATEOAS though? Assuming moreover that the end application can't provide a web-like interface.

> Subbu

Jan Vincent Liwanag
jvliwanag@...
On Tue, Feb 23, 2010 at 10:37 AM, Eric J. Bowman <eric@...>wrote:
> Dário Abdulrehman wrote:
> >
> > Hi,
> >
> > I have the following questions on the implementation of a REST API:
> >
>
> Hi Dário. Please don't take my hijacking your thread personally, this
> goes to a larger point I've been trying to make of late, that the
> starting place is resource modeling, not implementation (URI design).
> I'm not criticizing you (or anyone else) or singling you (or anyone
> else) out, just tilting at the windmill of changing how REST is taught.
>
> Today, I'm trying to illustrate my point by considering fragment vs.
> query string.
>
> >
> > 1.
> > I have a resource which can be used like:
> >
> > (a) GET resource/{id}
> >
>
> Let's try to model that, first of all, by calling it a collection of
> resources with unique IDs, '/collection/{member}' is widely understood.
> But at this point I know of no reason it can't be /collection#member.
>
I will provide details about the domain I'm working on:
I have a database with biological data: proteins, genes, regulations, etc.,
and the REST API I would like to design gives access to those resources for
querying.
Since the types of queries I want to provide are very restricted I don't
think it fits the OpenSearch model suggested by Craig.
So, the example (a) I gave previously could be instantiated for the case of
proteins and genes.
A protein/gene is identified by its name but it has other interesting data
associated (description, amino acid sequence, etc.), so I would like to model
it as a resource that responds to GET, returning a media type with that
information.
GET /protein/{id} => Returns a media type (JSON for example) with
description, amino acid sequence, etc.
Mutatis mutandis for gene.
> >
> > but it would be useful, to save round trips, to be able to invoke it
> > with several ids at once:
> >
> > (b) GET resource?id={id1}&id={id2}... or
> > GET resource/ids;id1;id2... or
> > GET resource/id1/id2... (BTW is this URL template acceptable?
> > The ids are not in a hierarchy as the slashes may suggest, but I've
> > seen it used elsewhere).
> >
>
> This is where I get worried every time someone new posts exactly this
> sort of question here. We all know how pervasive misconceptions of
> REST are. We shouldn't play along with questions regarding URI design
> without having any notion of what the resource is.
>
> I'd love to go into resources/subresources and URI syntax, Xpointer,
> and other solutions to this -- except I think this would push the thread
> starter further along in the wrong direction. I can't tell, because I
> don't understand his notion of this resource.
>
> >
> > This falls in the realm of REST + batch operations...
> >
>
> I don't have enough information about the nature of the resource in
> question, to know if that's an accurate assessment. As Jan pointed
> out, saving round trips isn't a goal of REST API design. A RESTful API
> will be conducive to caching, the side effect of which is saving round
> trips.
>
> Without some notion of the nature of the resource, it's impossible to
> proceed in a disciplined fashion. Can the data be modeled as SVG, and
> if so, can SVG's fragment syntax be utilized to extract subresources
> for display? If not, is this a "lightbulb" for developing a new media
> type with its own fragment syntax?
>
Given the above description of the resources does it still make sense to
provide the batch version of the resource?
I see the users wanting to GET information about a list of proteins/genes
and it would certainly be useful to do it batch style.
>
> >
> > From what I've read some people are of the opinion that (a) should be
> > used with HTTP pipelining for batch operations while others think
> > that the solution (b) is
> > equally acceptable since this is a resource that happens to take
> > multiple query/path/matrix parameters.
> >
>
> The opinions of others is a bad place to learn REST. Roy's thesis has
> some nuggets of wisdom in it, though...
>
> "
> A distributed hypermedia architect has only three fundamental options:
> 1) render the data where it is located and send a fixed-format image to
> the recipient; 2) encapsulate the data with a rendering engine and send
> both to the recipient; or, 3) send the raw data to the recipient along
> with metadata that describes the data type, so that the recipient can
> choose their own rendering engine.
> "
>
> Since we have no notion of "resource" to ground us here, we lack the
> ability to choose one of these options. If SVG or some SVG-like
> fragment implementation within a media type isn't the solution, then
> perhaps approximating the mobile object style by applying the optional
> Code on Demand constraint (2) is called for. In which case, parameters
> may not show up in the URI at all, query or fragment (still REST).
>
> Resources must be conceptually modeled before any decisions about
> system architecture may be effectively made. Craig's response was
> sound advice, but rests on an assumption that the resource models out
> as something that can be represented as OpenSearch. But, the resource
> may not fit that model -- who's to say, from the information we've been
> given?
>
> I would suggest that if the purpose is to submit some JSON code to a
> resource, then make that code a query string and GET the response
> (Craig's right, POST isn't correct), saving a round-trip and enabling
> caching. But I have no idea if this is appropriate, vs. using some URI
> fragment syntax, since we haven't modeled the resource or considered
> any media types, or discussed architectural options like CoD.
>
> I'm up on my high horse on this, because I think when we REST sempai
> start winging it by throwing URI patterns around, we only add to the
> confusion of the REST kohai by reinforcing whatever misconceptions led
> them to design an API by starting with URI allocation scheme in the
> first place. Meaning no offense to anyone, I'm guilty of same.
>
> >
> > 2.
> > I have another resource which is an algorithm that takes as input:
> > - a list of parameters [x1, x2, ...]
> > - another list of parameters [y1, y2, ...]
> > - some other optional arguments
> >
>
> Let's try to model that resource, first of all, by calling it a service
> endpoint. Then we can get down to the brass tacks of implementation.
>
This resource is an algorithm that takes as input lists of proteins, genes
and some other parameters and outputs the results.
>
> >
> > How can I make the resources 1. and 2. above RESTful?
> >
>
> I have no idea. Even if I did have a better idea of the nature of
> these resources, REST is concerned with the interaction between
> components in a system via Uniform Connector Interfaces. Analyzing one
> resource in isolation won't usually tell you anything about whether the
> system is RESTful or not, any more than looking at its URI can.
>
> I can help you model your resources, determine their relations to one
> another, and express that using hypertext link relations. I can help
> with the selection and/or design of media types. I can help develop an
> architecture around those media types and link relations. I can help
> design a hypertext engine to drive application state.
>
> What I can't do, is wave my magic guru wand and come up with a URI
> allocation scheme describing what methods to use on what resources of
> interest, and call it a REST API. I can only develop a REST API *after*
> I know the nature of the resources involved, otherwise I'd just be
> guessing.
>
> REST development is all about tradeoffs. Disciplined REST development
> is based on weighing the pros and cons of architectural choices.
> Coming up with a RESTful solution without considering choices and
> weighing tradeoffs is possible, but how is the architect then to
> evaluate his or her creation against the goals for the system?
>
> Starting with URI design locks the architect into a set of fundamental
> design constraints which may be inappropriate for the system. The
> choice between query string and URI fragment places a rigid dividing
> line between client and server that's difficult, if not impossible, to
> change further on down the road.
>
Given the nature of the resources, I hope it is now easier to design a URI scheme.
Thanks.
>
> -Eric
>
When designing media type(s) for a domain that includes a family of "business document types" (such as Atom does with feed and entry, or such as UBL does with catalogue, order, invoice, ...), what are the pros and cons of

1. Defining one 'big' media type encompassing all documents

2. Defining many media types that correspond to the individual "business document types"

Personally, I favor the 'one big media type' because I like the type to in a sense identify the domain and to subsume all the processing rules involved. OTOH, it causes real dispatching pains because you do not know what you have before you poke into the body.

I usually address that with the use of a profile parameter in conneg, for example: Accept: application/mydomain;profile=doctypeA [1]. Servers then know the client's preferences and can send doctypeA, without the need to put the profile parameter on the Content-Type header (where it might get stripped by intermediaries anyhow).

However, frameworks seem to be bad at that kind of conneg at the moment, hence this posting :-)

Any thoughts or insights?

Jan

[1] This also works well with client-driven conneg, as in <link href="" rel="" type="application/mydomain;profile=doctypeA"/>
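To make the dispatch concrete, here is a minimal server-side sketch of that profile-based conneg. The media type and profile names are taken from the example above; the helper itself is hypothetical, and it deliberately ignores q-values and other Accept-header subtleties:

```python
def pick_profile(accept_header, available, default):
    """Very rough conneg helper: scan an Accept header for
    'application/mydomain;profile=...' entries and return the
    first profile the server can actually produce; otherwise
    fall back to a default profile."""
    for part in accept_header.split(","):
        media, _, params = part.strip().partition(";")
        if media.strip() != "application/mydomain":
            continue  # not our domain media type at all
        for param in params.split(";"):
            name, _, value = param.strip().partition("=")
            if name == "profile" and value in available:
                return value
    return default

accept = "application/mydomain;profile=doctypeA, application/xml;q=0.5"
print(pick_profile(accept, {"doctypeA", "doctypeB"}, "doctypeB"))
# -> doctypeA
```

A real implementation would of course honor q-values and multiple profile candidates, but the point is only that dispatch can happen before poking into the body.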
Jan:

I, too, prefer a large-grained media type (application level and higher).

When you say "...It causes real dispatching pains because you do not know what you have before you poke into the body." are you referring to dispatch issues on the client? server?

There are a number of factors in designing a media type. For example, I think over-specifying the document structure can make implementing state-machines against the media type difficult.

Recently, I've been copying the HTML document structure (html = head + body) when creating my application-level media types. For example, one large app I'm working on has the following media-type structure:

<root>
  <system /> <!-- system-level meta data and other control values including general links -->
  <data /> <!-- request-specific data including any lists, item details, etc. -->
</root>

It's then up to the client and server to understand the details of the <system> and <data> sections such as <user-list /> or <user-details /> or <list class="users" /> depending on your approach.

mca
http://amundsen.com/blog/

On Tue, Feb 23, 2010 at 12:40, Jan Algermissen <algermissen1971@...> wrote:
> When designing media type(s) for a domain that includes a family of "business document types" [snip]
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
On 2/20/2010 11:16 AM, Markus KARG wrote:
>
>
> > > As Roy Fielding pointed out several times, an API must not call
> > itself RESTful as long as it is not applying HATEOAS. I want to
> support
> > this constraint by adding HATEOASfulness to my future
> applications. One
> > thing I just do not understand so far about HATEOAS-via-HTTP is (and
> > what other people asked me when citing Fielding in this issue): How a
> > client shall actually know which http method to use to follow a link
> > received with the previous request to a RESTful server? Roy
> answered in
> > his blog that the method could be read out of the last result. But
> > actually how?
> >
> > This information is part of the hypermedia semantics specified (media
> > type specification or link relation specification etc). Such a
> > specification can either explicitly state the method to use (see RFC
> > 5023 for example) or specify a hypermedia element that tells the
> client
> > at runtime what method to use (e.g. HTML forms).
>
> I understand that with AtomPub, RFC 5023 specifies that. Call me dumb, but
> what to do if I am not using AtomPub but a self-made service (like a
> web shop application)? How to do it then? For example, if I am writing a
> web shop, and that one allows placing an order using a POST. How to tell
> a client that it shall use that POST? I mean, *where* to put that
> information in a technical sense?
>
As Jan pointed it out, understanding navigation is media driven. In
the JSON media realm, you can use JSON Schema to generically instruct
a user agent how to navigate different JSON data structures (which may
represent different sub-media types). You can write a JSON Schema that
describes your data:
{
  "name": "Order",
  "links": [
    {
      "rel": "create",
      "href": "/create_order",
      "method": "POST"
    }
  ],
  "properties": {
    ...
  }
}
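As an illustrative sketch of how a generic user agent would consume such a schema (the helper function is an assumption for illustration, not part of the JSON Schema draft; the draft does make GET the default when no method is given):

```python
import json

def resolve_link(schema, rel):
    """Find the link description with the given 'rel' in a JSON
    Schema 'links' array and return its (method, href) pair.
    A missing 'method' defaults to GET."""
    for link in schema.get("links", []):
        if link.get("rel") == rel:
            return link.get("method", "GET"), link["href"]
    raise KeyError("no link with rel=%r" % rel)

# The schema from above, as valid JSON:
schema = json.loads("""
{
  "name": "Order",
  "links": [
    {"rel": "create", "href": "/create_order", "method": "POST"}
  ]
}
""")

method, href = resolve_link(schema, "create")
# The client learns at runtime that creating an order means
# POSTing to /create_order -- nothing is hard-coded against
# this particular data structure.
print(method, href)  # -> POST /create_order
```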
Thanks,
- --
Kris Zyp
SitePen
(503) 806-1841
http://sitepen.com
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.9 (MingW32)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/
iEYEARECAAYFAkuEHHAACgkQ9VpNnHc4zAxU/gCglAiCEHugG0VNZWl1HfRlnpzj
vz0An3k8UcK4f4oivLTjKEt49Imn0p3A
=S5pq
-----END PGP SIGNATURE-----
Personally, I think there should not only be one big domain-specific media type; actually, I would love to see a global standard for any type of machine-readable business document, just like HTML for humans. Dublin Core is going in the right direction by standardizing things globally, but it is not global enough. I also see interesting relations to semantic solutions like RDF. But in the short term, I do not see such a global solution, so there might be a need to live with "small" media types meanwhile.

> -----Original Message-----
> From: rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com] On Behalf Of Jan Algermissen
> Sent: Tuesday, 23 February 2010 18:40
> To: REST Discuss
> Subject: [rest-discuss] Media Types: application/mydomain vs application/mydomain.doctypeA, application/mydomain.doctypeB, application/mydomain.doctypeC
>
> When designing media type(s) for a domain that includes a family of "business document types" [snip]
Kris,
this is interesting! As I am working with XSL / XML a lot, I did not take
such a deep look at JSON. Is that type of link support native to JSON or is
that just a specific use of JSON?
Thanks
Markus
From: Kris Zyp [mailto:kris@...]
Sent: Tuesday, 23 February 2010 19:21
To: Markus KARG
Cc: 'Jan Algermissen'; 'REST Discuss'
Subject: Re: [rest-discuss] Re: [Jersey] Moved thread to rest-discuss /
HATEOAS-via-HTTP: Which HTTP Method to use to follow link?
[Full quote of Kris Zyp's message snipped; see above.]
I wonder whether the outlined type of link description is actually RESTful: I mean, "create order" is clearly a command, and as such is not document driven but method driven, which in turn looks like RPC to me?
From: rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com] On
Behalf Of Kris Zyp
Sent: Tuesday, 23 February 2010 19:21
To: Markus KARG
Cc: 'Jan Algermissen'; 'REST Discuss'
Subject: Re: [rest-discuss] Re: [Jersey] Moved thread to rest-discuss /
HATEOAS-via-HTTP: Which HTTP Method to use to follow link?
[Full quote of Kris Zyp's message snipped; see above.]
<snip>
Yes, that is usually good. However, I also think it is sometimes confusing because you have the HTTP header and the document header and entity meta data makes sense in both places.
</snip>

In the project to which I am referring, the <system /> element contains data not currently defined in headers and/or items (sometimes object graphs) that the stakeholders wanted to be sure was "easily available" to a wide range of clients within the self-descriptive message.

There are a number of items in HTML <head /> that are also "duplicates" of HTTP headers. Yes, it's a bit muddled; but it _is_ consistent and well-defined, which was a big win at the time this media type was designed.

mca
http://amundsen.com/blog/

On Tue, Feb 23, 2010 at 13:32, Jan Algermissen <algermissen1971@...> wrote:
> [full quote snipped]
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

On 2/23/2010 11:58 AM, Markus KARG wrote:
> this is interesting! As I am working with XSL / XML a lot, I did
> not take such a deep look at JSON. Is that type of link support
> native to JSON or is that just a specific use of JSON?

It is not "native" to application/json, it is part of JSON Schema
(application/schema+json) [1], and is therefore a meta-description of
the links that can be understood from the data/documents.

[1] http://tools.ietf.org/html/draft-zyp-json-schema

> I wonder whether the outline type of link description is actually RESTful:
> I mean, "create order" clearly is a command, and such is not
> document driven but method driven, which in turn looks like RPC to me?

"create_order" was just what I used to make it clear, since I thought
you were asking for a way to indicate to a client how to navigate
to/submit a request to create an order (using a POST).

- --
Kris Zyp
SitePen
(503) 806-1841
http://sitepen.com
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.9 (MingW32)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/

iEYEARECAAYFAkuELQsACgkQ9VpNnHc4zAx+/ACgl79LdqrDrkUII9Uz2tSXJkiz
EqcAn0Lx28cSF8A6A5S+il74gAE24ye7
=SW2g
-----END PGP SIGNATURE-----
On Feb 23, 2010, at 7:02 PM, mike amundsen wrote:
> Jan:
>
> I, too, prefer a large-grained media-type (application level and higher).
>
> When you say "...It causes real dispatching pains because you do not
> know what you have before you poke into the body." are you referring
> to dispatch issues on the client? server?

Both, unfortunately. But then - the approach taken cannot rely on certain clients having special, improved features.

> There are a number of factors in designing a media-type. For example,
> I think over-specifying the document structure can make implementing
> state-machines against the media-type difficult.

Yes.

> Recently, I've been copying the HTML document structure (html = head
> + body) when creating my application-level media types. For example,
> one large app I'm working on has the following media-type structure:

Yes, that is usually good. However, I also think it is sometimes confusing because you have the HTTP header and the document header and entity meta data makes sense in both places.

jan

> <root>
> <system /> <!-- system-level meta data and other control values
> including general links -->
> <data /> <!-- request-specific data including any lists, item details, etc. -->
> </root>
>
> [rest of quote snipped]

-----------------------------------
Jan Algermissen, Consultant
NORD Software Consulting

Mail: algermissen@...
Blog: http://www.nordsc.com/blog/
Work: http://www.nordsc.com/
-----------------------------------
Can one of you guys/gals explain to me how you determine the media type for each URI returned? This is the one thing that still perplexes me. In your example, you return the name, rel, and URI. But from those three items, how do I set the media type for Content-Type when I want to make a request?
On the server side, I might support different media types for each of my methods, so I would assume I need to return that info as well so that a developer can set the right media type for the request.
Or perhaps I've completely confused the importance and use of media types with regards to HATEOAS responses that provide potentially multiple links for the activities that can be performed for a given state?
Thank you.
--- On Tue, 2/23/10, Kris Zyp <kris@...> wrote:
From: Kris Zyp <kris@...>
Subject: Re: [rest-discuss] Re: [Jersey] Moved thread to rest-discuss / HATEOAS-via-HTTP: Which HTTP Method to use to follow link?
To: "Markus KARG" <markus@...>
Cc: "'Jan Algermissen'" <algermissen1971@...>, "'REST Discuss'" <rest-discuss@yahoogroups.com>
Date: Tuesday, February 23, 2010, 11:31 AM
[Full quote of Kris Zyp's message snipped; see above.]
On Sat, Feb 20, 2010 at 6:38 PM, Guilherme Silveira <guilherme.silveira@...> wrote:
> Hello guys,
>
> Jan Vincent, answering your question about the cache, Restfulie and
> Exylus are the only two client APIs that I am aware of supporting
> cache (according to the restwiki). You can see how it works here:

If you are using Ruby, Resourceful (<http://rdoc.info/projects/paul/resourceful>) also has cache support.

Peter
On Tue, Feb 23, 2010 at 9:40 AM, Jan Algermissen <algermissen1971@...> wrote:
> Personally, I favor the 'one big media type' because I like the type to in a sense
> identify the domain and to subsume all the processing rules involved. OTOH, it causes
> real dispatching pains because you do not know what you have before you poke
> into the body.

Obviously there's a balance.

But if you're using an encompassing "one big media type", then don't you inevitably have "large", internal elements with different semantics?

Contrived example of, say, a "financial document" containing "invoice" and "payment". Two different, sizable, media types embedded in the generic "financial document".

So, as a consumer, my "invoice processor" has to basically accept "any" "financial document" and then check it to see if it actually IS an invoice. I certainly can't ASSUME it's an invoice; it can be anything.

But, overall, it doesn't change the semantic load or burden on consumers trying to leverage the document. The "work" is essentially the same whether the invoice is embedded in the larger payload or if it's its own document. The documentation of the semantics is the same (largely); the code to leverage that data etc. is the same. The large document doesn't really "gain" anything save a vague promise and a check to ensure I'm actually working with the right data.

Clearly you don't want ultra fine-grained media types (tho, arguably, that's what a microformat is).

Anyway, that's just the other side of the fence in my opinion of this discussion.

Regards,

Will Hartung
(willh@...)
<snip>
> So, as a consumer, my "invoice processor" has to basically accept
> "any" "financial document" and then check it to see if it actually IS
> an invoice. I certainly can't ASSUME it's an invoice, it can be
> anything.
</snip>

Yep, this is the trade-off.

I've only recently started creating true state-machine clients, but so far limiting the number of media types supported by my clients has been less cumbersome than "teaching" my clients to understand a larger set of variations (invoice, bill of lading, customer, catalog, etc.) for a single media type. Time will tell.

mca
http://amundsen.com/blog/

On Tue, Feb 23, 2010 at 21:43, Will Hartung <willh@mirthcorp.com> wrote:
> [full quote snipped]
On Feb 24, 2010, at 3:43 AM, Will Hartung wrote:
> So, as a consumer, my "invoice processor" has to basically accept
> "any" "financial document" and then check it to see if it actually IS
> an invoice. I certainly can't ASSUME it's an invoice, it can be
> anything.

Yes. That is what the type or profile parameters and conneg should be helping with. If I specifically *ask* for application/finance?type=invoice then I should receive an invoice (but still cannot be sure, of course).

Jan

-----------------------------------
Jan Algermissen, Consultant
NORD Software Consulting

Mail: algermissen@...
Blog: http://www.nordsc.com/blog/
Work: http://www.nordsc.com/
-----------------------------------
Drio Abdulrehman wrote:
>
> I will provide details about the domain I'm working on:
> I have a database with biological data: proteins, genes, regulations,
> etc., and the REST API I would like to design gives access to those
> resources for querying.
>
Interesting. A REST API may or may not work the same way for each of
those resource types. I was contacted by another project recently, for a
similar purpose, so I'm actually somewhat familiar with part of what
you're trying to do.
>
> Since the types of queries I want to provide are very restricted I
> don't think it fits the OpenSearch model suggested by Craig.
>
You may be surprised. The solution I proposed to the other project was
to use Atom as a wrapper for XHTML marked up with RDFa, and OpenSearch
(for the same reasons Craig mentioned), with eXist XMLDB (as they
already used Java, it was an easy choice) as an 'Atom Store' layer in
front of the back-end DBs, with the system actually coded using XQuery.
Doing this exposes subresources at their own URIs by XPath (without
creating separate DB cells), for example you may have a resource:
/protein/{id} which has subresource: /protein/{id}//sequence. Or, you
may have a resource: /protein/{id}#sequence, we aren't far enough
along to tell whether we're after <sequence> or <div class='sequence'>.
>
> A protein/gene is identified by its name but it has other interesting
> data associated (description, aminoacid sequence, etc.), so I would
> like to model it as a resource that responds to GET, returning a
> media type with that information.
>
Right. But, I would bet dollars to donuts that your {id} syntax is
specific to your project, rather than being based on some standard for
naming proteins or genes (those don't translate well into any URI
allocation scheme). This sort of thing maps directly to atom:id, while
the names of the folks who, say, sequenced a gene map nicely to
atom:author constructs. Publication date is atom:published, atom:updated
maps to the date of the most recent research referred to.
I would also be willing to bet that you have some data hierarchies in
there, too. So, continuing with the "identification of resources"
constraint, you need to figure out those hierarchies before you can
design your URIs -- you won't have /protein or /gene, but rather,
/typeof/protein or /typeof/gene, and other type/subtype relationships.
Do you really want to have a resource, /gene, which can return a list of
every named gene ever discovered in both the Plant and Animal Kingdoms?
Craig referred to this as "filters" in his response. Human genetics
would be /animal/human/typeof/gene or somesuch.
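The hierarchy-as-filter idea above can be sketched in a few lines (a toy model: the records, taxonomy, and the /{kingdom}/{species}/typeof/{kind} path layout are all hypothetical, following the /animal/human/typeof/gene example):

```python
# Toy records standing in for the biological DB; fields are invented.
RECORDS = [
    {"kingdom": "animal", "species": "human", "kind": "gene", "name": "BRCA1"},
    {"kingdom": "plant",  "species": "maize", "kind": "gene", "name": "tb1"},
]

def resolve(path: str):
    """Interpret /{kingdom}/{species}/typeof/{kind} as a filtered listing,
    so the hierarchy narrows the collection before anything is returned."""
    kingdom, species, _typeof, kind = path.strip("/").split("/")
    return [r["name"] for r in RECORDS
            if r["kingdom"] == kingdom
            and r["species"] == species
            and r["kind"] == kind]
```

The point is that a bare /gene would have to list everything; pushing the type/subtype relationships into the path gives each URI a bounded, meaningful result.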
>
> GET /protein/{id} => Returns media type (JSON for example) with
> description, aminoacid sequence, etc.
>
> Mutatis mutandis for gene.
>
Actually, JSON is about the worst choice you can make for this sort of
project. It's too general a media type, doesn't define links, is not
extensible, has no semantics, can't be validated against a schema, and
anything you return as JSON is completely inaccessible to boot. Your
system is really a searchable collection of documents. JSON isn't a
document markup language, so it's out of place for your purposes.
What's called for is a structured markup language. The data your
service returns will be organized into lists and tables. JSON has no
semantics for expressing tabular or listed data. XHTML does, with the
benefit that XHTML tables and lists may be marked up for accessibility.
The fact is that an XHTML data table is human- and machine-readable,
and accessible -- and that no such capability exists in JSON. So instead
of reinventing common hypertext data structures like lists and tables
in JSON, focus your efforts on defining a domain-specific vocabulary
for these data structures, expressed inside a standard (X)HTML media
type.
This promotes serendipitous re-use. You could also come up with your
own XML language, but as with JSON you'd have to reinvent several
wheels to make that work. Whereas XHTML tables make the generic
"tabular data" structure self-evident. Embedding metadata in (X)HTML
using RDFa is standard, so no special parser is required to read it,
and it would be simple to re-use across projects.
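To make that concrete, here is a small sketch of a protein record as an XHTML table carrying RDFa. The `bio:` vocabulary, the property names, and the sequence value are invented for illustration; the RDFa attributes themselves (`about`, `property`) are standard:

```xml
<!-- Hypothetical vocabulary (prefix "bio:"). For a consumer that ignores
     RDFa, this is still an ordinary, accessible XHTML table. -->
<table about="/protein/P12345">
  <tr>
    <th scope="row">Description</th>
    <td property="bio:description">Hypothetical serine protease</td>
  </tr>
  <tr>
    <th scope="row">Sequence</th>
    <td property="bio:sequence">MKTAYIAKQR...</td>
  </tr>
</table>
```

A plain HTML client renders this as a table; an RDFa-aware client extracts statements about /protein/P12345. A JSON representation offers no such dual reading without out-of-band documentation.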
The easier you make it, the more likely it will be adopted by others.
Using RDFa to add semantics to XHTML tables is simple, and easily
understood. Not so for metadata in JSON, where I have to refer to some
sort of external documentation just to decipher your data structures,
before I can begin to figure out what your vocabulary is.
Instead, you should be relying on my common knowledge of hypertext data
structures, to impart "tabular data" to me through your API. Since I
already understand hypertext tables, your specific vocabulary sticks
out, and is thus easy for me to understand, since you're establishing
your definitions as an extension of my working knowledge of their
underlying data structure.
As opposed to making me start from scratch, requiring me to learn your
interpretation of tabular data expressed as JSON before I can recognize
your domain-specific vocabulary embedded within. I use tables as an
example, but I mean any data structure that can be implemented within
the basic semantics of HTML.
If the other project took my advice, which I don't think they did, then
there may already exist an RDFa ontology that partly covers what you're
doing, and you'd just be extending that. There is nothing inherent to
your resources that would require nonstandard media types to express.
Publishing scientific data on the Web is exactly the problem HTML was
originally conceived to solve. It did spectacularly well. Use it.
Constrain it to your ontology using RELAX NG and Schematron. Nowadays,
wrap it in Atom. If you need a new element, like say <sequence>, both
XHTML and Atom are extensible to allow for it. PCR imaging is saved as
what, PNG files? Make Atom Media Entries for them, properly linked all
around.
If you're generating images from DB queries, save the image as a file
with a name and link to that, instead of generating the same image from
the DB with every request by calling some image-generation endpoint.
Don't expose the image-generating endpoint to the public, it's a DDoS
magnet. Every online genetic DB I've looked at, clearly uses image
generation, which is obvious by looking at the URLs. It shouldn't be.
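The save-once-and-link advice might look like the following sketch. Here `render_image()` is a stand-in for the real DB-backed generation step, and the cache location and naming scheme are hypothetical:

```python
import hashlib
import os
import tempfile

# Hypothetical cache location; in practice this would sit under the
# web server's static document root so the files are directly linkable.
CACHE_DIR = os.path.join(tempfile.gettempdir(), "img-cache")

def render_image(query: str) -> bytes:
    """Placeholder for the expensive DB query + drawing step."""
    return b"PNG-bytes-for:" + query.encode()

def image_path(query: str) -> str:
    """Map a query to a stable file; generate only on the first request,
    so repeat requests hit the filesystem instead of the DB."""
    os.makedirs(CACHE_DIR, exist_ok=True)
    name = hashlib.sha256(query.encode()).hexdigest() + ".png"
    path = os.path.join(CACHE_DIR, name)
    if not os.path.exists(path):  # cache miss: pay the generation cost once
        with open(path, "wb") as f:
            f.write(render_image(query))
    return path
```

Linking to the resulting file keeps the generation endpoint private and lets ordinary HTTP caching do its job.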
>
> Given the above description of the resources does it still make sense
> to provide the batch version of the resource?
> I see the users wanting to GET information about a list of
> proteins/genes and it would certainly be useful to do it batch style.
>
If you follow my advice, you'd have an eXist XMLDB and a whole bunch of
stored procedures (resources). Then you could simply turn on Atom
Protocol. A researcher could log in, create a workspace, populate it
with some search results, write some custom XQuery code to create their
own search results, then query against their own contrived collection
of, essentially, stored procedures.
This, to me, is the most important aspect of my solution. There's no
way of telling how someone may want to access your data -- ways that
haven't occurred to you. By virtue of publishing your data in an XMLDB
that allows direct XQuery access, you avoid this problem (by basically
making the number of "resources" in your system infinite).
If someone doesn't like your REST API, they could just write their own.
Those without login privileges to the XMLDB are stuck using the REST
API you provide for them, restricted to using your system only in ways
you have specifically anticipated they'll need.
I don't see an overwhelming need, in light of this, to provide some way
to submit multiple searches in one request. Even if you could make it
work in browsers. I suppose it could be done even if _I_ don't see why,
and it could probably be made RESTful, but there would be a cost in
reduced visibility.
While that isn't a REST constraint, it's my process to weigh the pros
and cons. The con of reduced visibility is a stopper for me, unless
I'm gaining the benefit of some pro in return. I don't see the pro
here, so I don't see any benefit to reducing visibility to support a
batch-GET feature.
>
> > >
> > > 2.
> > > I have another resource which is an algorithm that takes as input:
> > > - a list of parameters [x1, x2, ...]
> > > - another list of parameters [y1, y2, ...]
> > > - some other optional arguments
> > >
> >
> > Let's try to model that resource, first of all, by calling it a
> > service endpoint. Then we can get down to the brass tacks of
> > implementation.
> >
>
> This resource is an algorithm that takes as input lists of proteins,
> genes and some other parameters and outputs the results.
>
OK, *that* sounds like an RPC endpoint. You don't want an endpoint
that takes some query syntax, you just want some query syntax. I'll
elaborate on this later, I have some example URIs I can adapt from that
other project (I've already solved this problem for someone else, so I
can skip a few steps here and start throwing hypothetical URIs around).
>
> Given the nature of the resources I hope it should now be easier to
> design a URI scheme.
>
Nope, not yet. The URI allocation scheme will consist of hierarchical
resources and a query syntax. We haven't identified the hierarchy, or
worked out the query syntax, yet. That's assuming you even agree to
using Atom, which I usually recommend for the purpose of prototyping,
even when I don't know whether it's a good fit.
In this case, I'm pretty sure it's a good fit, and I have an idea of
where to separate client logic from server logic. The only thing I can
say about URI design, is that the query syntax might go in a query, or
it might go in a fragment, or it might wind up in both. IOW, nothing's
settled.
-Eric
On Wed, Feb 24, 2010 at 1:49 AM, Jan Algermissen <algermissen1971@...> wrote: > That is what the type or profile parameters and conneg should be > helping with. If I specifically *ask* for > application/finance;type=invoice then I should receive an invoice > (but still cannot be sure of course). How is 'application/finance;type=invoice' different from 'application/invoice'? Using a parameter on the mime type has the practical disadvantage that very few, if any, existing tools handle conneg using media type parameters smartly. With a discrete media type you have a fighting chance of finding a tool that will support conneg. To your original question, I prefer to construct media types that are larger than a single document type. Media types should be cohesive, though. To use some of the examples from earlier in the thread, a media type including invoice and payment document types seems pretty reasonable but I would probably put a catalog document type in a different media type. Peter http://barelyenough.org
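Peter's practical objection can be illustrated with a toy matcher. (Note that media type parameters are formally semicolon-delimited, i.e. application/finance;type=invoice; the type names below are the hypothetical ones from this thread.) Most conneg code matches on the bare type/subtype, so parameter-based variants collapse into one type, while discrete subtypes stay distinguishable:

```python
def essence(media_type: str) -> str:
    """Strip parameters, keeping only type/subtype -- which is all that
    naive content-negotiation implementations tend to compare."""
    return media_type.split(";", 1)[0].strip().lower()
```

From such a matcher's point of view, `application/finance;type=invoice` and `application/finance;type=payment` are the same thing, whereas `application/invoice` and `application/payment` are not.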
Jan, Have you tried to communicate with the UBL gang to see if they are interested in defining media types? That would be the best route: getting some semi-popular semi-standard organization to define some standard media types for business. Then, whatever variation they define would probably be better for REST than whatever you and I define, regardless of how much I might prefer my own format.
What I wonder about is whether we actually need a definition of the methods: If we were to "normalize" all RESTful documents down to atomic CRUD operations, then it would be clear what GET / PUT / POST / DELETE are to be used for. I mean, we all can use any database table just by SELECT / UPDATE / INSERT / DELETE without any documentation about what the actual use of the command is, and we can normalize a database to hold any type of business data. So why do we need to agree upon document types and link rels at all? Why don't we just normalize our apps? From: Kris Zyp [mailto:kris@...] Sent: Dienstag, 23. Februar 2010 20:31 To: Markus KARG Cc: 'Jan Algermissen'; 'REST Discuss' Subject: Re: [rest-discuss] Re: [Jersey] Moved thread to rest-discuss / HATEOAS-via-HTTP: Which HTTP Method to use to follow link? -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 2/23/2010 11:58 AM, Markus KARG wrote: > this is interesting! As I am working with XSL / XML a lot, I did > not take such a deep look at JSON. Is that type of link support > native to JSON or is that just a specific use of JSON? It is not "native" to application/json, it is part of JSON Schema (application/schema+json) [1], and is therefore a meta-description of the links that can be understood from the data/documents. [1] http://tools.ietf.org/html/draft-zyp-json-schema > I wonder whether the outline type of link description is actually RESTful: > I mean, "create order" clearly is a command, and such is not > document driven but method driven, which in turn looks like RPC to > me? "create_order" was just what I used to make it clear, since I thought you were asking for a way to indicate to a client how to navigate to/submit a request to create an order (using a POST). 
- -- Kris Zyp SitePen (503) 806-1841 http://sitepen.com -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.9 (MingW32) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ iEYEARECAAYFAkuELQsACgkQ9VpNnHc4zAx+/ACgl79LdqrDrkUII9Uz2tSXJkiz EqcAn0Lx28cSF8A6A5S+il74gAE24ye7 =SW2g -----END PGP SIGNATURE-----
If we stick to HTTP in the original definition's sense, you don't need to define the mime type at all: The client will supply a list of accepted media type preferences in a request, and will get one of those back. The actual returned type is found in the header. If you want ONLY the header, don't use GET but HEAD. Rather simple, isn't it? From: rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com] On Behalf Of Kevin Duffey Sent: Dienstag, 23. Februar 2010 21:34 To: rest-discuss@yahoogroups.com Subject: Re: [rest-discuss] Re: [Jersey] Moved thread to rest-discuss / HATEOAS-via-HTTP: Which HTTP Method to use to follow link? Can one of you guys/gals explain to me how you determine the media type for each URI returned? This is the one thing that still perplexes me. In your example, you return the name, rel, and URI. But from those three items, how do I set the media type for Content-Type when I want to make a request? On the server side, I might support different media types for each of my methods, somehow I would assume I need to return that info as well so that a developer can set the right media type for the request. Or perhaps I've completely confused the importance and use of media types with regards to HATEOAS responses that provide potentially multiple links for the activities that can be performed for a given state? Thank you. --- On Tue, 2/23/10, Kris Zyp <kris@sitepen.com> wrote: From: Kris Zyp <kris@...> Subject: Re: [rest-discuss] Re: [Jersey] Moved thread to rest-discuss / HATEOAS-via-HTTP: Which HTTP Method to use to follow link? To: "Markus KARG" <markus@headcrashing.eu> Cc: "'Jan Algermissen'" <algermissen1971@...>, "'REST Discuss'" <rest-discuss@yahoogroups.com> Date: Tuesday, February 23, 2010, 11:31 AM -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 2/23/2010 11:58 AM, Markus KARG wrote: > this is interesting! As I am working with XSL / XML a lot, I did > not take such a deep look at JSON. 
Is that type of link support > native to JSON or is that just a specific use of JSON? It is not "native" to application/json, it is part of JSON Schema (application/schema+json) [1], and is therefore a meta-description of the links that can be understood from the data/documents. [1] http://tools.ietf.org/html/draft-zyp-json-schema > I wonder whether the outline type of link description is actually RESTful: > I mean, "create order" clearly is a command, and such is not > document driven but method driven, which in turn looks like RPC to > me? "create_order" was just what I used to make it clear, since I thought you were asking for a way to indicate to a client how to navigate to/submit a request to create an order (using a POST). - -- Kris Zyp SitePen (503) 806-1841 http://sitepen.com -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.9 (MingW32) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ iEYEARECAAYFAkuELQsACgkQ9VpNnHc4zAx+/ACgl79LdqrDrkUII9Uz2tSXJkiz EqcAn0Lx28cSF8A6A5S+il74gAE24ye7 =SW2g -----END PGP SIGNATURE-----
-----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 I think that forcing REST to strictly conform to CRUD is considered a REST anti-pattern. Applications can have actions outside of pure CRUD actions, and servers should be allowed to provide navigation to non-safe, non-idempotent resource-oriented actions through POSTs (like a form that triggers the sending of an email from the server, for example). Kris On 2/24/2010 11:16 AM, Markus KARG wrote: > > What I wonder about is whether we actually need a definition of the > methods: If we would "normalize" all RESTful documents down to > atomic CRUD operations, then it would be clear what GET / PUT / > POST / DELETE are to be used for. I mean, we all can use any > database table just by SELECT / UPDATE / INSERT / DELETE without > any documentation about what the actual use the command is, and we > can normalize a database to hold any type of business data. So why > do we need to agree upon documents types and link rels at all? Why > don't we just normalize our apps? > > > > *From:* Kris Zyp [mailto:kris@...] *Sent:* Dienstag, 23. > Februar 2010 20:31 *To:* Markus KARG *Cc:* 'Jan Algermissen'; 'REST > Discuss' *Subject:* Re: [rest-discuss] Re: [Jersey] Moved thread to > rest-discuss / HATEOAS-via-HTTP: Which HTTP Method to use to follow > link? > > > > > > On 2/23/2010 11:58 AM, Markus KARG wrote: >> this is interesting! As I am > > working with XSL / XML a lot, I did > >> not take such a deep look at JSON. Is that type of link support > >> native to JSON or is that just a specific use of JSON? > > It is not "native" to application/json, it is part of JSON Schema > (application/schema+json) [1], and is therefore a meta-description > of the links that can understood from the data/documents. 
> > [1] http://tools.ietf.org/html/draft-zyp-json-schema > >> I wonder whether the outline > > type of link description is actually RESTful: >> I mean, "create order" clearly > > is a command, and such is not > >> document driven but method driven, which in turn looks like RPC > > to > >> me? > > "create_order" was just what I used to make it clear, since I > thought you were asking for a way to indicate to a client how to > navigate to/submit a request to create an order (using a POST). > - -- Kris Zyp SitePen (503) 806-1841 http://sitepen.com -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.9 (MingW32) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ iEYEARECAAYFAkuFceoACgkQ9VpNnHc4zAxJCgCghuhhnEinnB8yfrhU5VlexchS dpgAnjUgssc0Y8zO29HXHEKwhSsDBPcl =XHCs -----END PGP SIGNATURE-----
I think we must differentiate here between CRUD GUIs (which I dislike) and CRUD APIs: As SQL works so well, I don't actually see a real need to invent more commands. I mean, what benefit should it actually bring? If I want my server to send an email, what would be wrong with saying that I must do a POST to the "http://.../mail-outbox/" URI, which in turn will make the server send away the received entity as an email, and which is a RESTful operation (nobody says that CRUD means that a resource must be persistent *forever*, so the server is free to DELETE it on its own once the mail is sent). I just don't see a need to do anything besides CRUD, but maybe I just didn't find the ultimate example? From: Kris Zyp [mailto:kris@...] Sent: Mittwoch, 24. Februar 2010 19:38 To: Markus KARG Cc: 'Jan Algermissen'; 'REST Discuss' Subject: Re: [rest-discuss] Re: [Jersey] Moved thread to rest-discuss / HATEOAS-via-HTTP: Which HTTP Method to use to follow link? -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 I think that forcing REST to strictly conform to CRUD is considered a REST anti-pattern. Applications can have actions outside of pure CRUD actions, and servers should be allowed to provide navigation to non-safe, non-idempotent resource-oriented actions through POSTs (like a form that triggers the sending of an email from the server, for example). Kris On 2/24/2010 11:16 AM, Markus KARG wrote: > > What I wonder about is whether we actually need a definition of the > methods: If we would "normalize" all RESTful documents down to > atomic CRUD operations, then it would be clear what GET / PUT / > POST / DELETE are to be used for. I mean, we all can use any > database table just by SELECT / UPDATE / INSERT / DELETE without > any documentation about what the actual use of the command is, and we > can normalize a database to hold any type of business data. So why > do we need to agree upon document types and link rels at all? Why > don't we just normalize our apps? 
> > > > *From:* Kris Zyp [mailto:kris@...] *Sent:* Dienstag, 23. > Februar 2010 20:31 *To:* Markus KARG *Cc:* 'Jan Algermissen'; 'REST > Discuss' *Subject:* Re: [rest-discuss] Re: [Jersey] Moved thread to > rest-discuss / HATEOAS-via-HTTP: Which HTTP Method to use to follow > link? > > > > > > On 2/23/2010 11:58 AM, Markus KARG wrote: >> this is interesting! As I am > > working with XSL / XML a lot, I did > >> not take such a deep look at JSON. Is that type of link support > >> native to JSON or is that just a specific use of JSON? > > It is not "native" to application/json, it is part of JSON Schema > (application/schema+json) [1], and is therefore a meta-description > of the links that can understood from the data/documents. > > [1] http://tools.ietf.org/html/draft-zyp-json-schema > >> I wonder whether the outline > > type of link description is actually RESTful: >> I mean, "create order" clearly > > is a command, and such is not > >> document driven but method driven, which in turn looks like RPC > > to > >> me? > > "create_order" was just what I used to make it clear, since I > thought you were asking for a way to indicate to a client how to > navigate to/submit a request to create an order (using a POST). > - -- Kris Zyp SitePen (503) 806-1841 http://sitepen.com -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.9 (MingW32) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ iEYEARECAAYFAkuFceoACgkQ9VpNnHc4zAxJCgCghuhhnEinnB8yfrhU5VlexchS dpgAnjUgssc0Y8zO29HXHEKwhSsDBPcl =XHCs -----END PGP SIGNATURE-----
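Markus's mail-outbox idea can be sketched as a toy in-memory model (all names are hypothetical and the SMTP hand-off is a stub): POST creates a transient resource, and the server deletes it on its own once the mail is sent, staying within plain CRUD semantics:

```python
import itertools

class MailOutbox:
    """Toy model of POSTing to /mail-outbox/ as a way to 'send email'
    via CRUD: the created resource is transient by design."""

    def __init__(self):
        self._ids = itertools.count(1)
        self.pending = {}   # resource URI -> message entity
        self.sent = []      # what actually went "over the wire"

    def post(self, entity: str) -> str:
        """POST /mail-outbox/ -- create a transient mail resource."""
        rid = f"/mail-outbox/{next(self._ids)}"
        self.pending[rid] = entity
        return rid

    def deliver(self):
        """Server-side worker: hand each message off, then DELETE its
        resource -- nothing says the resource must persist forever."""
        for rid, entity in list(self.pending.items()):
            self.sent.append(entity)   # stand-in for the real SMTP hand-off
            del self.pending[rid]
```

Kris's counterpoint still applies: the POST here is exactly the kind of non-safe, non-idempotent action that doesn't reduce cleanly to a CRUD verb on a persistent record.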
Hi Bob, On Feb 24, 2010, at 5:11 PM, Bob Haugen wrote: > Jan, > > Have you tried to communicate with the UBL gang to see if they are > interested in defining media types? > Yes, I asked a while ago (can't find the link right now). They were not really interested because they said UBL was intended as message payload (paraphrasing) and should not contain 'processing intention'. I'll try to find the answer later. Have you seen my 'SCM quest'? See http://www.nordsc.com/blog/?cat=13 The media type is discussed here: http://www.nordsc.com/blog/?p=293 I take an experimental shot at a UBL media type - but really only to show how it could be done. > That would be the best route: getting some semi-popular semi-standard > organization to define some standard media types for business. Yes - Web UBL.... WUBL :-) > > Then, whatever variation they define would probably be better for REST > than whatever you and I define, regardless of how much I might prefer my > own format. Yes, they certainly know the nuts and bolts - which is probably why UBL enables almost anything :-) However, stripping it down works quite nicely and it is understandable then. Thanks, Jan ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
Markus,
What I meant was... what does the response look like? Is it something like:
<links>
<link rel="create" uri="...." method="post" media-type="application/xml, application/json"/>
<link rel="self" uri="..." method="get, put" media-type="application/xml"/>
</links>
That is... if I do some sort of GET or OPTIONS or what have you on a published URI, and two URIs come back indicating I can GET the self and POST a create to another URI, given that we can specify different media types on each method (at least in Jersey), I am curious what lets the client side know WHAT to set the Content-Type header to, in order to get it to the right method on the server side for handling?
With JAX-RS, we can specify two methods from above:
@POST
@Consumes({"application/xml", "application/json"})
public Response create(...){}
@PUT
@Consumes({"application/xml"})
public Response update(...){}
@GET
@Produces({"application/xml"})
public Response get(...){}
I am curious, as I try to build out a HATEOAS-based response system, how I send back multiple links that can be called by the client at that point, and how I tell the client the specific media type each URI can handle? Or is that in a document instead that says "for this URI, you must use the media type..."?
--- On Wed, 2/24/10, Markus KARG <markus@...> wrote:
From: Markus KARG <markus@...>
Subject: RE: [rest-discuss] Re: [Jersey] Moved thread to rest-discuss / HATEOAS-via-HTTP: Which HTTP Method to use to follow link?
To: "'Kevin Duffey'" <andjarnic@...>, rest-discuss@yahoogroups.com
Date: Wednesday, February 24, 2010, 10:19 AM
If we stick to http in the original definition's sense, you
don't need to define the mime type at all: The client will supply a list of
accepted media type preferences in a request, and will get one of those back.
The actual returned type is found in the header. If you want ONLY the header,
don't use GET but HEAD. Rather simple, isn't it?
From: rest-discuss@yahoogroups.com
[mailto:rest-discuss@yahoogroups.com] On Behalf Of Kevin Duffey
Sent: Dienstag, 23. Februar 2010 21:34
To: rest-discuss@yahoogroups.com
Subject: Re: [rest-discuss] Re: [Jersey] Moved thread to rest-discuss /
HATEOAS-via-HTTP: Which HTTP Method to use to follow link?
Can one of you guys/gals explain to me how you determine
the media type for each URI returned? This is the one thing that still
perplexes me. In your example, you return the name, rel, and URI. But from
those three items, how do I set the media type for Content-Type when I want
to make a request?
On the server side, I might support different media types for each of my
methods, somehow I would assume I need to return that info as well so that a
developer can set the right media type for the request.
Or perhaps I've completely confused the importance and use of media types
with regards to HATEOAS responses that provide potentially multiple links for
the activities that can be performed for a given state?
Thank you.
--- On Tue, 2/23/10, Kris Zyp <kris@...> wrote:
From: Kris Zyp <kris@sitepen.com>
Subject: Re: [rest-discuss] Re: [Jersey] Moved thread to rest-discuss /
HATEOAS-via-HTTP: Which HTTP Method to use to follow link?
To: "Markus KARG" <markus@...>
Cc: "'Jan Algermissen'" <algermissen1971@...>,
"'REST Discuss'" <rest-discuss@yahoogroups.com>
Date: Tuesday, February 23, 2010, 11:31 AM
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
On 2/23/2010 11:58 AM, Markus KARG wrote:
> this is interesting! As I am
working with XSL / XML a lot, I did
> not take such a deep look at JSON. Is that type of link support
> native to JSON or is that just a specific use of JSON?
It is not "native" to application/json, it is part of JSON Schema
(application/schema+json) [1], and is therefore a meta-description of
the links that can be understood from the data/documents.
[1] http://tools.ietf.org/html/draft-zyp-json-schema
> I wonder whether the outline
type of link description is actually
RESTful:
> I mean, "create order" clearly
is a command, and such is not
> document driven but method driven, which in turn looks like RPC
to
> me?
"create_order" was just what I used to make it clear, since I thought
you were asking for a way to indicate to a client how to navigate
to/submit a request to create an order (using a POST).
- --
Kris Zyp
SitePen
(503) 806-1841
http://sitepen.com
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.9 (MingW32)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/
iEYEARECAAYFAkuELQsACgkQ9VpNnHc4zAx+/ACgl79LdqrDrkUII9Uz2tSXJkiz
EqcAn0Lx28cSF8A6A5S+il74gAE24ye7
=SW2g
-----END PGP SIGNATURE-----
On Wed, Feb 24, 2010 at 2:28 PM, Jan Algermissen <algermissen1971@...> wrote: > On Feb 24, 2010, at 5:11 PM, Bob Haugen wrote: >> Have you tried to communicate with the UBL gang to see if they are >> interested in defining media types? >> > > Yes, I asked a while ago (can't find the link right now). They were not really interested because they said UBL was intended as message payload (paraphrasing) and should not contain 'processing intention'. I'll try to find the answer later. > So how does media type contain processing intention? (Unless they mean "web" vs "SOAP"? I think they started assuming either typical EDI transport mechanisms or SOAP.) >> That would be the best route: getting some semi-popular semi-standard >> organization to define some standard media types for business. > > Yes - Web UBL.... WUBL :-) > >> >> Then, whatever variation they define would probably be better for REST >> than whatever you and I define, regardless of much I might prefer my >> own format. > > Yes, they certainly know the nuts and bolts - which is probably why UBL enables almost anything :-) However, stripping it down works quite nicely and it is understandable then. > Enabling almost anything is the curse of EDI standards (of which UBL is a bastard child), which makes them fiendishly complex to work with. Back in the day, companies used to define subsets.
On Feb 24, 2010, at 10:56 PM, Bob Haugen wrote: > On Wed, Feb 24, 2010 at 2:28 PM, Jan Algermissen > <algermissen1971@...> wrote: >> On Feb 24, 2010, at 5:11 PM, Bob Haugen wrote: >>> Have you tried to communicate with the UBL gang to see if they are >>> interested in defining media types? >>> >> >> Yes, I asked a while ago (can't find the link right now). They were not really interested because they said UBL was intended as message payload (paraphrasing) and should not contain 'processing intention'. I'll try to find the answer later. Here is the thread: <http://lists.oasis-open.org/archives/ubl-dev/200802/msg00050.html> (Re media type: see at bottom and read response ref-ed below) >> > > So how does media type contain processing intention? (Unless they > mean "web" vs "SOAP"? I think they started assuming either typical EDI > transport mechanisms or SOAP.) This is the answer mail <http://lists.oasis-open.org/archives/ubl-dev/200802/msg00052.html> > >>> That would be the best route: getting some semi-popular semi-standard >>> organization to define some standard media types for business. >> >> Yes - Web UBL.... WUBL :-) >> >>> >>> Then, whatever variation they define would probably be better for REST >>> than whatever you and I define, regardless of much I might prefer my >>> own format. >> >> Yes, they certainly know the nuts and bolts - which is probably why UBL enables almost anything :-) However, stripping it down works quite nicely and it is understandable then. >> > > Enabling almost anything is the curse of EDI standards (of which UBL > is a bastard child), which makes them fiendishly complex to work with. > Back in the day, companies used to define subsets. The question would be I guess if a common 80/20[1] subset could be defined or if it is the nature of EDI that the parties must define an agreed subtype by themselves. 
[1] IIRC UBL is already aimed to be an 80/20 EDI :-) Jan ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
On Wed, Feb 24, 2010 at 4:09 PM, Jan Algermissen <algermissen1971@...> wrote: > On Feb 24, 2010, at 10:56 PM, Bob Haugen wrote: >> So how does media type contain processing intention? (Unless they >> mean "web" vs "SOAP"? I think they started assuming either typical EDI >> transport mechanisms or SOAP.) > > This is the answer mail <http://lists.oasis-open.org/archives/ubl-dev/200802/msg00052.html> I would not read that answer, and especially the followup, the way you did in your summary. The responders do not seem to have thought of the issue very deeply, but appear to be sympathetic. They appear to have assumed a message-passing architecture (that is, they do assume a processing intention in that sense), not a REST or resource-oriented architecture. I also got the idea that if somebody knew how to define a UBL media type or two, they might like it. Where did you get "They were not really interested because they said UBL was intended as message payload (paraphrasing) and should not contain 'processing intention'"?
On Feb 24, 2010, at 11:47 PM, Bob Haugen wrote: > On Wed, Feb 24, 2010 at 4:09 PM, Jan Algermissen > <algermissen1971@...> wrote: >> On Feb 24, 2010, at 10:56 PM, Bob Haugen wrote: >>> So how does media type contain processing intention? (Unless they >>> mean "web" vs "SOAP"? I think they started assuming either typical EDI >>> transport mechanisms or SOAP.) >> >> This is the answer mail <http://lists.oasis-open.org/archives/ubl-dev/200802/msg00052.html> > > I would not read that answer, and especially the followup, the way you > did in your summary. The responders do not seem to have thought about > the issue very deeply, but appear to be sympathetic. They > appear to have assumed a message-passing architecture (that is, they > do assume a processing intention in that sense), not a REST or > resource-oriented architecture. Well, yes :-) What I wanted back then was to see if there was interest in the UBL community to bring UBL 'on the Web'. I did not have a 'mission' to actually do that. > > I also got the idea that if somebody knew how to define a UBL media > type or two, they might like it. Maybe - OTOH, I understood from the response that UBL is not meant to be used without an accompanying agreement tailored towards the communicating partners. I think the position is that UBL itself is not contractually binding enough to work without such side agreements. But of course I might have completely misunderstood that. > > Where did you get "They were not really interested because they said > UBL was intended as message payload (paraphrasing) and should not > contain 'processing intention'"? It was the memory of the mail before I had found it in the archive. Not very precise :-) Are you interested in pursuing a UBL (procurement) media type? I lack the knowledge of real-world supply chains to judge whether a general format is suitable for a reasonable percentage of actual requirements. Jan
----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
On Wed, Feb 24, 2010 at 10:50 AM, Markus KARG <markus@...> wrote: > > > > I think we must differentiate here between CRUD GUIs (which I dislike) and CRUD APIs: > As SQL works so well, I don't actually see a real need to invent more commands. I mean, > what benefit should it actually bring? You gain an API that has better granularity than simply CRUD. Yes, inevitably, you end up doing pure "atomic" CRUD against, say, a SQL database. But there's no reason you should be forced to expose that limited an API to your clients. You should be able to have higher-level interactions with your server than just raw CRUD, actions that themselves may manifest as several primitive hits against your data store. The point of the Common Interface isn't so much to hamstring applications into a pure CRUD HTTP view of the world, but rather to help ensure you don't simply run off hog wild creating eleventy zillion new verbs. Taking a constrained, conservative view of the verb layer and how the verbs are used can make the API easier to use and more approachable, rather than a wiener dog nest of exceptions and special cases and what-ifs. Consistent application of a consistent, common interface. But that doesn't mean you're limited solely to CRUD. Regards, Will Hartung (willh@...)
On Feb 24, 2010, at 7:50 PM, Markus KARG wrote:
>
>
> I think we must differentiate here between CRUD GUIs (which I dislike) and CRUD APIs: As SQL works so well, I don't actually see a real need to invent more commands. I mean, what benefit should it actually bring? If I want my server to send an email, what would be wrong with saying that I must do a POST to the "http://.../mail-outbox/" URI, which in turn will make the server send away the received entity as an email, and which is a RESTful operation
Nothing is wrong with that. That is how domain specific operations ('goals' if you want) are achieved: tell a resource *with the appropriate semantics* to process-this (POST). So, if you learned from some hypermedia that http://.../mail-outbox/ has the semantics of sending a mail when you POST something to it then that is how you achieve that domain goal.
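The "tell a resource with the appropriate semantics to process-this" pattern can be sketched roughly as follows. This is a minimal, hypothetical illustration (the `Outbox` class and `handle_request` dispatcher are invented for this sketch, not from the thread): the domain action "send a mail" is not a new verb but the POST semantics of a particular resource, and the client dispatches purely through the uniform interface.

```python
class Outbox:
    """Resource whose POST semantics are: send the enclosed entity as an email."""

    def __init__(self):
        self.sent = []  # stands in for a real mail transport

    def post(self, entity):
        self.sent.append(entity)  # "send" the mail server-side
        # 202 Accepted: the processing was triggered by the POST
        return 202, {"status": "queued", "count": len(self.sent)}


def handle_request(resources, method, uri, body=None):
    """Dispatch purely on the uniform interface: method + resource, no custom verbs."""
    resource = resources[uri]
    handler = getattr(resource, method.lower())
    return handler(body) if body is not None else handler()


resources = {"/mail-outbox/": Outbox()}
status, rep = handle_request(resources, "POST", "/mail-outbox/",
                             {"to": "jw@example.org", "subject": "hi"})
```

The client never calls a `send_mail` operation; it only needs to have learned (from hypermedia) that POSTing to `/mail-outbox/` has these semantics.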
The need for the hypermedia is there because that is how the client learns at runtime that http://.../mail-outbox/ has these semantics.
Note: if the general semantics of the domain operation map to PUT or DELETE or PATCH, these specific methods should be used, because you get more visibility compared to POST (which has visibility zero). For example, a PUT on /orders/2 allows caches to flush what they have for /orders/2 and store the response to the PUT. If you POST to /orders/2, the caches just flush; they do this not because they know what is going on, but because that is the (necessary) default behavior for POST.
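The visibility difference described in the note above can be sketched with a toy in-memory cache. This is a hypothetical illustration of the thread's point, not a real HTTP cache (real caches follow stricter reuse rules): a method with visible update semantics lets the cache keep a fresh representation, while POST only allows eviction.

```python
class InvalidatingCache:
    """Toy cache illustrating method visibility: PUT updates, POST only evicts."""

    def __init__(self):
        self.store = {}

    def on_response(self, method, uri, representation):
        if method == "GET":
            self.store[uri] = representation          # cacheable read
        elif method == "PUT":
            # PUT's semantics are visible: the enclosed representation now
            # lives at the request URI, so the cache may keep it fresh.
            self.store[uri] = representation
        else:
            # POST has zero visibility: the only safe move is to evict.
            self.store.pop(uri, None)

    def get(self, uri):
        return self.store.get(uri)


cache = InvalidatingCache()
cache.on_response("GET", "/orders/2", {"status": "open"})
cache.on_response("PUT", "/orders/2", {"status": "shipped"})
after_put = cache.get("/orders/2")                    # still cached, updated
cache.on_response("POST", "/orders/2", {"note": "opaque"})
after_post = cache.get("/orders/2")                   # gone: cache could only flush
```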
Jan
> (nobody says that CRUD means that a resource must be persistent *forever*, so the server is free to DELETE it on its own once the mail is sent). I just don't see a need to do anything besides CRUD, but maybe I just didn't find the ultimate example?
>
> From: Kris Zyp [mailto:kris@...]
> Sent: Mittwoch, 24. Februar 2010 19:38
> To: Markus KARG
> Cc: 'Jan Algermissen'; 'REST Discuss'
> Subject: Re: [rest-discuss] Re: [Jersey] Moved thread to rest-discuss / HATEOAS-via-HTTP: Which HTTP Method to use to follow link?
>
> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA1
>
> I think that forcing REST to strictly conform to CRUD is considered a
> REST anti-pattern. Applications can have actions outside of pure CRUD
> actions, and servers should be allowed to provide navigation to
> non-safe, non-idempotent resource-oriented actions through POSTs (like
> a form that triggers the sending of an email from the server, for
> example).
> Kris
>
> On 2/24/2010 11:16 AM, Markus KARG wrote:
> > What I wonder about is whether we actually need a definition of the
> > methods: If we would "normalize" all RESTful documents down to
> > atomic CRUD operations, then it would be clear what GET / PUT /
> > POST / DELETE are to be used for. I mean, we all can use any
> > database table just by SELECT / UPDATE / INSERT / DELETE without
> > any documentation about what the actual use of the command is, and we
> > can normalize a database to hold any type of business data. So why
> > do we need to agree upon document types and link rels at all? Why
> > don't we just normalize our apps?
> >
> > *From:* Kris Zyp [mailto:kris@...] *Sent:* Dienstag, 23. Februar 2010 20:31
> > *To:* Markus KARG *Cc:* 'Jan Algermissen'; 'REST Discuss'
> > *Subject:* Re: [rest-discuss] Re: [Jersey] Moved thread to rest-discuss / HATEOAS-via-HTTP: Which HTTP Method to use to follow link?
> >
> > On 2/23/2010 11:58 AM, Markus KARG wrote:
> >> this is interesting! As I am working with XSL / XML a lot, I did
> >> not take such a deep look at JSON. Is that type of link support
> >> native to JSON or is that just a specific use of JSON?
> >
> > It is not "native" to application/json, it is part of JSON Schema
> > (application/schema+json) [1], and is therefore a meta-description
> > of the links that can be understood from the data/documents.
> >
> > [1] http://tools.ietf.org/html/draft-zyp-json-schema
> >
> >> I wonder whether the outlined type of link description is actually RESTful:
> >> I mean, "create order" clearly is a command, and such is not
> >> document driven but method driven, which in turn looks like RPC to
> >> me?
> >
> > "create_order" was just what I used to make it clear, since I
> > thought you were asking for a way to indicate to a client how to
> > navigate to/submit a request to create an order (using a POST).
> >
> - --
> Kris Zyp
> SitePen
> (503) 806-1841
> http://sitepen.com
> -----BEGIN PGP SIGNATURE-----
> Version: GnuPG v1.4.9 (MingW32)
> Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/
>
> iEYEARECAAYFAkuFceoACgkQ9VpNnHc4zAxJCgCghuhhnEinnB8yfrhU5VlexchS
> dpgAnjUgssc0Y8zO29HXHEKwhSsDBPcl
> =XHCs
> -----END PGP SIGNATURE-----
>
>
>
>
-----------------------------------
Jan Algermissen, Consultant
NORD Software Consulting
Mail: algermissen@...
Blog: http://www.nordsc.com/blog/
Work: http://www.nordsc.com/
-----------------------------------
Hi, I have put together a table classifying HTTP-based API-types according to the REST constraints they adhere to: <http://nordsc.com/ext/classification_of_http_based_apis.html> Hope this is useful. Jan ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
On Wed, Feb 24, 2010 at 1:31 PM, Jan Algermissen <algermissen1971@...> wrote: >> To your original question, i prefer to construct media types that are >> larger than a single document type. Media types should be >> cohesive, though. To use some of the examples from earlier in the >> thread, a media type including invoice and payment document types >> seems pretty reasonable but i would probably put a catalog document >> type in a different media type. > > I put catalog in because it is a common doc type from the procurement domain. I wonder: why would you put it into a separate type? It is purely a matter of taste. Basically, i don't see much cohesion between a catalog and an invoice or payment. (I have not thought deeply about this particular domain so i am probably missing a lot of things.) I can imagine clients that are not doing procurement needing access to a catalog. For example, an inventory system updating the "in-stock"ness of items in the catalog. A single document type could easily be used in multiple media types. Perhaps it would be best to have a "procurement" media type and a separate "inventory management" media type which both include a "catalog" document type. Peter <http://barelyenough.org>
Hi Jan,
Good info. Wonder if Flickr is going to give you a call? :D
Of all the types, the Type II you list gives me the hardest time accepting. Your red box indicates that the client side is aware of the state ahead of time, and thus it fails the REST test. I don't quite get this, though. I understand that if I call a single URI and it returns me some dynamic list of links I can call, based on the representation at that time, then I can only use those links. That should remain REST-based, correct?
I don't understand why, ahead of time, a client might not know the possible outcomes. Let's say you call a GET on /orders. It returns a list of orders. For each order you can call GET on it, PUT to update it, or DELETE. As well, you can call POST on /orders to create a new order. I am confused about why it is un-RESTful if a client knows ahead of time that these are ALL the actions that can happen at this point. By this I mean, let's say the GET /orders did NOT return a URI to allow a user to create a new order. OK, that's fine; the client wrapper that says order.create() simply won't work. If I am a client wrapping a RESTful API into a couple of classes to provide a Java-based API, so that other client users can simply use my classes and not try to figure out REST, I would check the URIs, and if a user of my wrapper classes tried to call order.create(), I would simply fail immediately because the URI did not come back from the first GET /orders.
I can't imagine Amazon, Flickr or anyone would simply publish a single URI and say "Hey world, here is the published URI, go have fun" and then make clients guess what is possible within the realms of the API. There has got to be some sort of knowledge about what this API will allow. I think as long as the documentation stresses that a client must never assume all possible calls are available, there should be no reason why this wouldn't work.
For example, if I go to a certain web site, I can easily jot down the list of href links I can click on. Most of the time, those are going to be the same based on a given state. If I log in, a web site lists some menus/links I can click. They are pretty much always the same thing. I could write a client side test system to automate testing of this. I might have to do something like "get me all links".. and then a "if this link.. click it.. I know it does this..". sort of thing.. so that in case I log in and a new News link is available, I can handle it, possibly based on the media type.
I guess my point is, I don't understand how a client is supposed to go into a RESTful API completely blind. There has to be some idea of what is possible with the API, and that if you create an order, the possible links coming back might be these: they may or may not all be present, but at most it's these, so you can at least know ahead of time what you might have and can do.
thanks.
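The wrapper behavior Kevin describes, fail fast when a link the client hoped for was not advertised, can be sketched as a discovery-first client. This is a hypothetical illustration: the `OrdersClient` class, the `links`/`rel`/`href` representation shape, and the `create` relation name are all invented for the sketch, standing in for whatever the service's media type actually defines.

```python
class OrdersClient:
    """Client wrapper that only offers operations whose links were discovered."""

    def __init__(self, fetch):
        self.fetch = fetch  # callable standing in for an HTTP GET
        self.links = {}

    def refresh(self):
        rep = self.fetch("/orders")
        # keep only what the server advertised *this time*
        self.links = {link["rel"]: link["href"] for link in rep.get("links", [])}
        return rep

    def create(self, order):
        if "create" not in self.links:
            raise RuntimeError("server did not advertise a 'create' link")
        return ("POST", self.links["create"], order)


def server_v1(uri):
    # this version of the service does not allow order creation
    return {"orders": [], "links": [{"rel": "self", "href": "/orders"}]}


def server_v2(uri):
    # a later version advertises the 'create' transition
    return {"orders": [], "links": [{"rel": "self", "href": "/orders"},
                                    {"rel": "create", "href": "/orders"}]}


client = OrdersClient(server_v1)
client.refresh()
try:
    client.create({"item": "book"})
    created = True
except RuntimeError:
    created = False          # link absent, so the wrapper fails fast

client.fetch = server_v2     # server evolved; client code is unchanged
client.refresh()
request = client.create({"item": "book"})
```

The design point both sides of the thread circle around: the client may know the *vocabulary* (media type, link relations) ahead of time, but which transitions are available is discovered per representation, so the server can evolve without breaking the client.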
--- On Thu, 2/25/10, Jan Algermissen <algermissen1971@...> wrote:
From: Jan Algermissen <algermissen1971@...>
Subject: [rest-discuss] Differentiating HTTP-based APIs
To: "REST Discuss" <rest-discuss@yahoogroups.com>
Date: Thursday, February 25, 2010, 5:19 AM
Hi,
I have put together a table classifying HTTP-based API-types according to the REST constraints they adhere to:
<http://nordsc.com/ext/classification_of_http_based_apis.html>
Hope this is useful.
Jan
-----------------------------------
Jan Algermissen, Consultant
NORD Software Consulting
Mail: algermissen@acm.org
Blog: http://www.nordsc.com/blog/
Work: http://www.nordsc.com/
-----------------------------------
Very good indeed. Simple, concise and to the point, things that, to be frank, are not very frequent in the REST world. I just wish, for the sake of preserving the notion that REST is (or can be) a multi-protocol architecture style, that in some place you mentioned that that analysis is in respect to the HTTP uniform interface and not to HTTP itself. If I understood correctly, I mean. Besides that, I was expecting some mention of POX or POX-over-HTTP kind of stuff; maybe it is worth mentioning in what category it falls. Nevertheless, very nice work. Cheers. _________________________________________________ Melhores cumprimentos / Beir beannacht / Best regards António Manuel dos Santos Mota http://card.ly/amsmota _________________________________________________ 2010/2/25 Jan Algermissen <algermissen1971@mac.com> > > > Hi, > > I have put together a table classifying HTTP-based API-types according to > the REST constraints they adhere to: > > <http://nordsc.com/ext/classification_of_http_based_apis.html> > > Hope this is useful. > > Jan > > ----------------------------------- > Jan Algermissen, Consultant > NORD Software Consulting > > Mail: algermissen@... <algermissen%40acm.org> > Blog: http://www.nordsc.com/blog/ > Work: http://www.nordsc.com/ > ----------------------------------- > > >
On Feb 25, 2010, at 5:25 PM, António Mota wrote: > > > Very good indeed. Simple, concise and to the point, things that, to be frank, are not very frequent in the REST world. Thanks António! > > I just wish, for the sake of preserving the notion that REST is (or can be) a multi-protocol architecture style, that in some place you mentioned that that analysis is in respect to the HTTP uniform interface and not to HTTP itself. If I understood correctly, I mean. Are you saying I should not name the 4th style 'REST' but rather 'RESTful use of HTTP', because otherwise I am mixing style (REST) and actual architecture (HTTP)? > > Besides that, I was expecting some mention of POX or POX-over-HTTP kind of stuff; maybe it is worth mentioning in what category it falls. I do not think it relates to the classification, but I agree that it makes sense to tell people something like "if you do POX+JAXB over HTTP you'll likely end up with a Type I API" > > Nevertheless, very nice work. > Thanks. The point was to put some posts in the ground to ease discussion. Jan > Cheers. > > _________________________________________________ > > Melhores cumprimentos / Beir beannacht / Best regards > > António Manuel dos Santos Mota > > http://card.ly/amsmota > _________________________________________________ > > > > 2010/2/25 Jan Algermissen <algermissen1971@...> > > Hi, > > I have put together a table classifying HTTP-based API-types according to the REST constraints they adhere to: > > <http://nordsc.com/ext/classification_of_http_based_apis.html> > > Hope this is useful. > > Jan > > ----------------------------------- > Jan Algermissen, Consultant > NORD Software Consulting > > Mail: algermissen@... > Blog: http://www.nordsc.com/blog/ > Work: http://www.nordsc.com/ > ----------------------------------- > > > > > > ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@...
Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
Actually I am doing exactly that with JAXB + XML/JSON/JSONP formats. I modeled the links in a JPA table and established a relationship between the entities via this table, so the links are in the domain model itself... for example, when you call this URI: http://fgaucho.dyndns.org:8080/arena-http/competition the links element comes from the database, serialized purely with JAXB. For me, HATEOAS is more about the self-descriptiveness and links to the next feasible states than about the technology behind that... * and my project is a work in progress, it is not mature yet. I am learning a lot here in this list and this new classification sheet will help me as a benchmark :) thanks............ On Thu, Feb 25, 2010 at 6:12 PM, Jan Algermissen <algermissen1971@...> wrote: > > > On Feb 25, 2010, at 5:57 PM, Felipe Gaucho wrote: > >> Other point: in your list of examples you included a set of service implementations and the atom pub spec... >> >> Please change to an atom pub implementation.. If any :) > > Yes, you are right - I am mixing type and instance there. > > Thanks (pls wait for next version) > > Jan -- ------------------------------------------ Felipe Gaucho 10+ Java Programmer CEJUG Senior Advisor
On Feb 25, 2010, at 5:57 PM, Felipe Gaucho wrote: > Other point: in your list of examples you included a set of service implementations and the atom pub spec... > > Please change to an atom pub implementation.. If any :) Yes, you are right - I am mixing type and instance there. Thanks (pls wait for next version) Jan ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
On Feb 25, 2010, at 4:57 PM, Kevin Duffey wrote: > Hi Jan, > > Good info. Wonder if Flickr is going to give you a call? :D Well....it had to come down on *somebody* :-) > > Of all the types, the Type II you list gives me the hardest time accepting. Your red box indicates that the client side is aware of the state ahead of time, thus it fails the REST test. Yes, correct. > I don't quite get this tho. I understand that if I call a single URI, and it returns me with some dynamic list of links I can call based on the representation at that time.. that I can only use those links. That should remain REST based correct? Yes, as long as the representation that contains the links has a specific media type (that defines the meaning of the links). > > I don't understand why, ahead of time a client might not know the possible outcomes. Because the hypermedia constraint constrains that :-) The client MUST NOT assume what is coming back - only discover it. That is REST's essence, if you want, because it enables the server to evolve without having to ask the client what it assumes. > Let's say you call a get on /orders. It returns a list of orders. .. maybe ... > For each order you can call GET on it, .. maybe ... > PUT to update it, or DELETE. ... maybe ... The point being: if you *find* a link to an order that has the semantics (from a media type or link rel spec) that these methods can be called, *then* you know you can do that with an order resource. (The 'edit' link of AtomPub does something like that.) As long as the server does not tell you, you cannot assume that you can do what you describe. > As well, you can call POST on /orders to create a new order. If some hypermedia tells you that about /orders - then you can go ahead and POST. You need to discover that information. > I am confused on why it is UN-RESTful if a client knows ahead of time that these are ALL the actions that can happen at this point.
It is un-RESTful because the client would break if the server chooses to change. > By this I mean, let's say the GET /orders did NOT return a URI to allow a user to create a new order. Ok..that's fine, the client wrapper that says order.create() simply wont work. Right. The current state machine you are 'walking through' does not allow you to do that. Might be different next time. [Side note: the essential thing to understand here is that in a decentralized environment there can never be a guarantee that something works at some point, because you do not control the server. Hiding the decentralization behind an OO API does not change that - it just creates the false impression, because the IDL specifies that method on class orders. IDLs are for APIs of components that are inside the same 'piece of software'. REST only emphasizes that the client should plan for the create link not to be there.] > If I am a client wrapping a RESTful api into an couple of classes to provide a java based api, That is generally a bad idea - do not hide the network! > so that other client users can simply use my classes and not try to figure out REST... I would check the URIs and if a user of my wrapper classes tried to call order.create(), I would simply fail immediately because the URI did not come back from the first get /orders. This would again just lead your developer to think that being able to create would be the normal case. > > I can't imagine Amazon, Flickr or anyone would simply publish a single URI and say "Hey world.. here is the published URI.. go have fun" and then make clients guess what is possible within the realms of the API. The clients need to know up front what the media types are that the service uses. You need *some* information to code the client. See http://www.nordsc.com/blog/?p=382 (and previous blogs on that issue) > There has got to be some sort of knowledge about what this API will allow.
> I think as long as it stresses in the document that a client must never assume all possible calls are available.. there should be no reason why this wouldn't work. Right. See above. > > For example, if I go to a certain web site, I can easily jot down the list of href links I can click on. Most of the time, those are going to be the same based on a given state. If I log in, a web site lists some menus/links I can click. They are pretty much always the same thing. I could write a client side test system to automate testing of this. I might have to do something like "get me all links".. and then a "if this link.. click it.. I know it does this..". sort of thing.. so that in case I log in and a new News link is available, I can handle it, possibly based on the media type. This assumes you have the service instance before you code the client. That model would work on the Web (learning about the service by inspection) but it does not work inside the enterprise because those people usually want to develop in parallel :-) > > I guess my point is, I don't understand how a client is supposed to go into a RESTful API completely blind. See above - you need to know the media types. > There has to be some idea of what is possible with the API and that if you create an order, the possible links coming back might be these.. they may or may not all be present, but at most it's these so you can at least know ahead of time what you might have and can do. HTH, Jan > > thanks. ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
2010/2/25 Jan Algermissen <algermissen1971@...>: >> >> I just wish, for the sake of preserving the notion that REST is a (or can be a) multi protocol architecture style, that in some place you mentioned that that analysis is in respect to the HTTP Uniform Interface and not to HTTP itself. If I understood correctly, I mean. > > Are you saying I should not name the 4th style 'REST' but sort of 'RESTful use of HTTP' because otherwise I am mixing style (REST) and actual architectur (HTTP)? > Well, not quite, I think it is REST and you do well in calling it that without ambiguity. What I was saying is that if you constrain other protocols to use the same interface as HTTP, then your analysis applies to those other protocol-based architectures as well. So it was worth mentioning that... >> >> Besides that, I was expecting some mention to POX or POX Over HTTP kind of stuff, maybe is worth to mention in what category it falls. > > I do not think it relates to the classification, but I agree that it makes sense to tell people sth like "if you do POX+JAXB over HTTP you'll likely end up with a Type I API" > Yes, it was in that sense, of pointing to an example, that I was referring to it. Cheers.
On Feb 25, 2010, at 5:50 PM, Felipe Gaucho wrote: > You can use jaxb and use xml and get a restful service... > There is no mandatory link between these technologies and "non-rest" style... Right, sorry to imply that. OTH, there will often be no 1:1 mapping between domain object (that's how I understood POJO) so if you use JAXB on your POJO you'll rather have a serialized domain object than 'resource representation' But I now saw my error: I thought of POJO, not POX. Sorry. > > Excellent sumary ... Congrats... Thanks, Jan > > On 25.02.2010, at 17:39, Jan Algermissen <algermissen1971@...> wrote: > >> >> On Feb 25, 2010, at 5:25 PM, Antnio Mota wrote: >> >>> >>> >>> Very good indeed. Simple, concise and to the point, things that to be frankly are not very frequent in REST world. >> >> Thanks Antnio! >> >>> >>> I just wish, for the sake of preserving the notion that REST is a (or can be a) multi protocol architecture style, that in some place you mentioned that that analysis is in respect to the HTTP Uniform Interface and not to HTTP itself. If I understood correctly, I mean. >> >> Are you saying I should not name the 4th style 'REST' but sort of 'RESTful use of HTTP' because otherwise I am mixing style (REST) and actual architectur (HTTP)? >> >>> >>> Besides that, I was expecting some mention to POX or POX Over HTTP kind of stuff, maybe is worth to mention in what category it falls. >> >> I do not think it relates to the classification, but I agree that it makes sense to tell people sth like "if you do POX+JAXB over HTTP you'll likely end up with a Type I API" >> >>> >>> Nevertheless, very nice work. >>> >> >> Thanks. The point was to put some posts in the ground to ease discussion. >> >> Jan >> >>> Cheers. 
>>> _________________________________________________
>>>
>>> Melhores cumprimentos / Beir beannacht / Best regards
>>>
>>> António Manuel dos Santos Mota
>>>
>>> http://card.ly/amsmota
>>> _________________________________________________
>>>
>>> 2010/2/25 Jan Algermissen <algermissen1971@...>
>>>
>>> Hi,
>>>
>>> I have put together a table classifying HTTP-based API types according to the REST constraints they adhere to:
>>>
>>> <http://nordsc.com/ext/classification_of_http_based_apis.html>
>>>
>>> Hope this is useful.
>>>
>>> Jan
>>>
>>> -----------------------------------
>>> Jan Algermissen, Consultant
>>> NORD Software Consulting
>>>
>>> Mail: algermissen@...
>>> Blog: http://www.nordsc.com/blog/
>>> Work: http://www.nordsc.com/
>>> -----------------------------------
On Thu, Feb 25, 2010 at 9:12 AM, Jan Algermissen <algermissen1971@...> wrote:
>
> On Feb 25, 2010, at 5:50 PM, Felipe Gaucho wrote:
>
>> You can use jaxb and use xml and get a restful service...
>> There is no mandatory link between these technologies and "non-rest" style...
>
> Right, sorry to imply that. OTOH, there will often be no 1:1 mapping between domain objects (that's how I understood POJO) and resource representations, so if you use JAXB on your POJOs you'll rather have a serialized domain object than a 'resource representation'.
>

When using JAX-RS, I'm finding myself more and more often building a set of JAXB-annotated classes that directly represent my resources, separate from the classes that might represent my domain tier (with, perhaps, JPA or Hibernate annotations on them). Besides the fact that this means I don't have to write all of the boring serialization code, it has some other benefits:

- Both XML and JSON serialization, nearly for free.

- Ability to include properties for however I'm going to represent links (which don't belong in the domain model at all).

- Ability to include properties for related resources (either individual child beans or collections of them), which JAXB does a slick job of including as nested sub-elements, versus entity beans that are typically associated with only one table.

- Ability to write business logic that is natural to Java developers used to beans-oriented development, independent of the fact that this resource was received (or will be sent) across HTTP or some other transport.

- Ability to write much better unit and functional tests that can reason about the resource model (independent of how the resources got received from a client or synthesized from my database domain objects), with all the usual benefits of a strongly typed language (versus using XPath or poking through some JSON data structure with string-based keys and hoping I spelled the keys right).

It's good stuff for Java developers.

Craig
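[Editor's note: a minimal, dependency-free sketch of the separation Craig describes, with the JAXB/JPA annotations shown only as comments so it compiles with the JDK alone. All class, field, and URI names here are illustrative, not taken from any actual project.]

```java
// Domain tier: what JPA/Hibernate would persist. Knows nothing about URIs.
class Customer {                        // would carry @Entity
    long id;                            // would carry @Id
    String name;
    Customer(long id, String name) { this.id = id; this.name = name; }
}

// Resource tier: what JAX-RS would serialize. Carries the hypermedia links
// that have no place in the domain model.
class CustomerRepresentation {          // would carry @XmlRootElement
    String name;
    String selfLink;                    // link property, view-tier only
    String ordersLink;                  // related-resource link
}

public class ResourceMapper {
    // The resource method (or a mapper) populates the representation,
    // computing links from domain values at request time.
    static CustomerRepresentation toRepresentation(Customer c, String baseUri) {
        CustomerRepresentation r = new CustomerRepresentation();
        r.name = c.name;
        r.selfLink = baseUri + "/customers/" + c.id;
        r.ordersLink = r.selfLink + "/orders";
        return r;
    }

    public static void main(String[] args) {
        Customer c = new Customer(42, "Ada");
        CustomerRepresentation r = toRepresentation(c, "http://api.example.com");
        System.out.println(r.selfLink);    // http://api.example.com/customers/42
        System.out.println(r.ordersLink);  // http://api.example.com/customers/42/orders
    }
}
```

Because the two classes are distinct, the domain class stays serenely free of link fields, and tests can assert on the representation directly, as Craig's last bullet suggests.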
Hello, Bruno Harbulot recently mentioned this thread, so I thought I'd join the mailing list and contribute a tidbit to it. I recently published a proof of concept tool which implements form-based HTTP authentication using a combination of XMLHttpRequest objects and response code tricks. It seems to work reliably in the four most popular browsers. Code is here: http://www.vsecurity.com/download/tools/fbha-poc_0.1.zip This work was inspired by the work I did previously on the difficulties of securing cookies and some of the benefits of HTTP digest authentication: http://www.vsecurity.com/download/papers/WeaningTheWebOffOfSessionCookies.pdf http://www.vsecurity.com/download/papers/HTTPDigestIntegrity.pdf Thanks, tim
On Feb 25, 2010, at 8:37 PM, Craig McClanahan wrote:

> On Thu, Feb 25, 2010 at 9:12 AM, Jan Algermissen
> <algermissen1971@...> wrote:
>>
>> On Feb 25, 2010, at 5:50 PM, Felipe Gaucho wrote:
>>
>>> You can use jaxb and use xml and get a restful service...
>>> There is no mandatory link between these technologies and "non-rest" style...
>>
>> Right, sorry to imply that. OTOH, there will often be no 1:1 mapping between domain objects (that's how I understood POJO) and resource representations, so if you use JAXB on your POJOs you'll rather have a serialized domain object than a 'resource representation'.
>>
>
> When using JAX-RS, I'm finding myself more and more often building a
> set of JAXB annotated classes that directly represent my resources,

That's a good approach. If you are using specific media types, that means the annotated classes map to media types or individual doctypes of a media type (Apache Abdera, for example).

> separate from the classes that might represent my domain tier (with,
> perhaps, JPA or Hibernate annotations on them). Besides the fact that
> this means I don't have to write all of the boring serialization code,
> it has some other benefits:
>
> - Both XML and JSON serialization, nearly for free.
>
> - Ability to include properties for however I'm going to represent
> links (which don't belong in the domain model at all).
>
> - Ability to include properties for related resources (either individual
> child beans or collections of them), for which JAXB does a
> slick job of including as nested sub-elements, versus
> entity beans that are typically associated with only one table.
>
> - Ability to write business logic that is natural to Java
> developers used to beans oriented development,
> independent of the fact that this resource was received
> (or will be sent) across some HTTP or other transport.
>
> - Ability to write much better unit and functional tests that can
> reason about the resource model (independent of how the
> resources got received from a client or synthesized from my
> database domain objects), with all the usual
> benefits of a strongly typed language (versus using
> XPath or poking through some JSON data structure
> with string based keys and hoping I spelled the keys right).
>
> It's good stuff for Java developers.

Good points, thanks.

Jan

> Craig

-----------------------------------
Jan Algermissen, Consultant
NORD Software Consulting

Mail: algermissen@...
Blog: http://www.nordsc.com/blog/
Work: http://www.nordsc.com/
-----------------------------------
Maybe I am doing this wrong, but what I usually do is put my JAXB-generated XSD model objects in a separate project. I then build this jar and copy it to the WEB-INF/lib of my REST/EJB-based application. I use the JAXB model objects in my Jersey resources, as well as in my session beans, to convert between the JAXB beans and the entity beans. There is probably an easier way to do this, but presently I simply do an eb.setXxx(jb.getXxx()) call for each item I wish to copy to the entity, or from it when going back the other way. JEE6 makes this a breeze to work with.
I've not yet looked into the ramifications of scaling, caching, etc. I am hoping that the EE6 containers do a lot of this for me for free these days. Being stateless in the session bean and the REST service, and persisting entities only when required, I would hope that I should be able to scale quite rapidly and easily, with the back-end DB probably being the most difficult part to deal with.
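[Editor's note: a dependency-free sketch of the field-by-field copy Kevin describes, one eb.setXxx(jb.getXxx()) call per property in each direction. The Order classes and their fields are hypothetical stand-ins for the JAXB-generated bean and the JPA entity.]

```java
class OrderXml {                    // stands in for a JAXB-generated bean
    private String sku;
    private int quantity;
    public String getSku() { return sku; }
    public void setSku(String s) { sku = s; }
    public int getQuantity() { return quantity; }
    public void setQuantity(int q) { quantity = q; }
}

class OrderEntity {                 // stands in for a JPA entity bean
    private String sku;
    private int quantity;
    public String getSku() { return sku; }
    public void setSku(String s) { sku = s; }
    public int getQuantity() { return quantity; }
    public void setQuantity(int q) { quantity = q; }
}

public class OrderCopier {
    // JAXB bean -> entity bean, as done in the session bean before persisting.
    static OrderEntity toEntity(OrderXml jb) {
        OrderEntity eb = new OrderEntity();
        eb.setSku(jb.getSku());
        eb.setQuantity(jb.getQuantity());
        return eb;
    }

    // Entity bean -> JAXB bean, for the trip back out through Jersey.
    static OrderXml toXml(OrderEntity eb) {
        OrderXml jb = new OrderXml();
        jb.setSku(eb.getSku());
        jb.setQuantity(eb.getQuantity());
        return jb;
    }

    public static void main(String[] args) {
        OrderXml jb = new OrderXml();
        jb.setSku("ABC-123");
        jb.setQuantity(2);
        OrderEntity eb = toEntity(jb);
        System.out.println(eb.getSku() + " x" + eb.getQuantity());
    }
}
```

Tedious, as Kevin says, but the explicit copy is also the seam where the two models are allowed to diverge (links in one, persistence concerns in the other).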
--- On Thu, 2/25/10, Craig McClanahan <craigmcc@...> wrote:
From: Craig McClanahan <craigmcc@...>
Subject: Re: [rest-discuss] Differentiating HTTP-based APIs
To: "Jan Algermissen" <algermissen1971@...>
Cc: "Felipe Gaucho" <fgaucho@gmail.com>, "REST Discuss" <rest-discuss@yahoogroups.com>
Date: Thursday, February 25, 2010, 11:37 AM
There is only one drawback in your solution: JAXB unfortunately doesn't allow you to add extra annotations in the generated files, so you need to manually manage the ORM. Despite that, there is a large community that believes that isolating the domain model from the business model is a better approach than mixing them together (a long Java design patterns discussion I will avoid here).

One alternative is to use the package-info file to manage the namespaces and annotate the model classes directly. It is a more programmatic approach, but you preserve the namespaces as well... and then you can annotate the classes also as entities and you get the DB round trip for free... and Jersey + CDI can inject the EntityManager in your service layer if you want... and it scales like hell :) eheh

The problem still remains: how to control the hypermedia workflow. I've seen people adopting custom annotations and several other tricks, but so far a stable solution hasn't emerged... we are waiting for your best thoughts :)

On Fri, Feb 26, 2010 at 3:33 AM, Kevin Duffey <andjarnic@...> wrote:
>
> Maybe I am doing this wrong, but what I do is usually I put my JAXB
> generated XSD model objects in a separate project. I then build this jar and
> copy it to my web-inf/lib of my REST/ejb based application. I use the JAXB
> model objects in my Jersey resources, as well as in my session beans to
> convert from the JAXB bean to the entity bean. There is probably an easier
> way to do this, but presently I simply do a eb.setXxx(jb.getXxx()) method
> for each item I wish to copy to the entity, or from it when going back the
> other way. JEE6 makes this a breeze to work with.
>
> I've not yet looked into the ramifications of scaling, caching, etc.. I am
> hoping that the EE6 containers do a lot of this for me for free these days.
> Being stateless in the session bean and REST service, and only the entity
> persists when required, I would hope that I should be able to scale quite
> rapidly and easily, the back end DB being the most difficult probably to
> deal with.

--
------------------------------------------
Felipe Gaucho
10+ Java Programmer
CEJUG Senior Advisor
2010/2/25 Felipe Gaucho <fgaucho@...>
>
>
> There is only 1 drawback in your solution: JAXB unfortunately doesn't allow
> you to add extra annotations in the generated files, so you need to manually
> manage the ORM.. Despite that there is a large community that believes that
> isolating domain model from business model is a better approach than mix
> them together.. (a long Java Design Patterns discussion I will avoid here)
>
You're missing a key feature of what both Kevin and I said ... use
*different* classes for the *resource* domain model and the *business*
domain model. Then, each kind of class can have the annotations that are
relevant for that tier. If you think about an MVC style architecture, and
take the point of view of the server developer, the classes representing
your resources are part of the view tier, while the classes representing the
business domain are part of the model tier.
The value of putting the resource model classes in a separate library is
that, if you use one of several JAX-RS implementations like Jersey, you can
leverage the same JAX-RS infrastructure for serialization on the client side
(when you have a Java based client) as on the server side, thus reducing
development effort and potential for impedance mismatches. But that is
orthogonal to the idea that the two models are often different.
> One alternative is to use the package-info file to manage the namespaces
> and do annotate the model classes directly.. it is more programmatic
> approach but you preserve the namespaces as well... and then you can
> annotate the classes also as Entities and you get the DB roundtrip for
> free.. and Jersey + CDI com inject the EntityManager in your service layer
> if you want... and it scales like hell :) eheh
>
>
> the problem still remains: how to control the hypermedia workflow.. I've
> seen people adopting custom annotations and several other tricks but so far
> a stable solution didn't emerged ... we are waiting your best thoughts :)
>
Managing workflow, like obeying the hypermedia constraint, is a view tier
concern (from the point of view of a server side developer). All you need
to do is define the appropriate hypermedia links into your representations
(and therefore into your resource model classes if you're using Java and
JAX-RS), and populate them appropriately when a JAX-RS resource method is
called. In the mean time, your JPA or Hibernate classes representing the
business domain model can remain serenely oblivious to things like URIs.
That's a good thing, because a REST API is typically not the only way that
such business model objects get manipulated.
Self-test time: are you storing URIs in your database? If so, think
again. Are you using values (like primary keys) from your business domain
model classes in order to calculate the URIs that show up in your resource
model classes? That's OK, although you should not feel constrained to build
URIs in the fashion you typically see
(".../customers/{customerID}/orders/{orderID}"). The only part of the
server side logic that cares about the generated URIs should be the logic
that processes incoming requests from those generated URIs. In the case of
a JAX-RS based server application, that "care" is expressed by @Path
annotations. Fortunately, JAX-RS also has APIs to "reverse engineer" an
appropriate resource URI from the @Path annotations that are present in many
cases, so if you leverage this functionality when generating URIs in your
resource representations, you still have to change only one thing -- the
@Path annotations -- to change the URIs included in your representations.
In the shopping cart use case (no, I don't care if some people don't like it
as an example :-), a well designed REST API should *not* define a URI
template like "/checkout" that accepts a POST with a shopping cart
representation. Instead, the representation of the cart last sent by the
server should include a URI you can POST to to initiate checkout on *this*
shopping cart instance (and it should be included only if the cart is in a
state that is appropriate for checkout) ... for security, you're much better
off if the server generates a random string based URI for this, which
expires after a short amount of time, to reduce the opportunity for client
initiated mischief.
A properly coded client, then, will need to understand how to extract this
URI from the representation, and understand that it needs to do a POST with
the current cart contents as its request body. But, as far as the client is
concerned, the URI itself is an opaque string.
Craig
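[Editor's note: Craig's point about deriving links from the @Path templates can be sketched without any JAX-RS dependency (in JAX-RS itself, `javax.ws.rs.core.UriBuilder` plays this role, e.g. `UriBuilder.fromResource(...)`). The idea: keep each URI template in one place, so changing it changes both routing and the links you generate. The template, variable names, and token scheme below are illustrative only.]

```java
import java.util.Map;
import java.util.UUID;

public class UriTemplates {
    // Expand a URI template of the form "/a/{x}/b/{y}" against a variable map,
    // mimicking what UriBuilder does from the @Path annotations.
    static String expand(String template, Map<String, String> vars) {
        String uri = template;
        for (Map.Entry<String, String> e : vars.entrySet()) {
            uri = uri.replace("{" + e.getKey() + "}", e.getValue());
        }
        return uri;
    }

    // Per Craig's checkout example: mint a random URI for *this* cart,
    // to be included in the representation only when checkout is legal,
    // and expired server-side after a short time.
    static String mintCheckoutUri(String baseUri) {
        return baseUri + "/checkout/" + UUID.randomUUID();
    }

    public static void main(String[] args) {
        // Single source of truth: edit this template and every generated
        // link changes with it.
        String orderTemplate = "/customers/{customerId}/orders/{orderId}";
        String uri = expand(orderTemplate,
                Map.of("customerId", "7", "orderId", "99"));
        System.out.println(uri);   // /customers/7/orders/99
        System.out.println(mintCheckoutUri("http://shop.example.com"));
    }
}
```

The client, of course, never expands these templates itself; it simply extracts the opaque URI from the representation, as the thread insists.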
> You're missing a key feature of what both Kevin and I said ... use
> *different* classes for the *resource* domain model and the *business*
> domain model.

Yes, I am aware of that... but the effort to maintain the mapping between those models generates extra and unnecessary development effort... I also noticed some people having a third model when the data will be rendered in the presentation layer (a managed bean or other component)... My experiment is looking to reduce the amount of effort required to synchronize all those models, having only one model... and it works fine, and the performance is better and the maintenance effort much less than in the traditional MVC design...

> Then, each kind of class can have the annotations that are relevant for that tier.

Usually the classes are identical and you are just including a copy layer to transfer data between them :)

> Managing workflow, like obeying the hypermedia constraint, is a view tier concern (from the point of view of a server side developer).

Maybe, maybe not... in my case the model contains the state of the application, so it is a business concern :)

> That's a good thing, because a REST API is typically not the only way that
> such business model objects get manipulated.

Yes, but the effort to manage the hypermedia just in time makes the whole application much slower... and it is a repetitive task... that is one of the reasons that motivated me to push the links into the database...

> Self-test time: are you storing URIs in your database?

Yes. The state of the application is persistent :) and it scales quite well, since I don't need to recalculate the state on every request... (OK, memcache can help a bit, but anyway... the hypermedia engine is the heavy part of the request; eliminating that, I have much faster services)

> a JAX-RS based server application, that "care" is expressed by @Path

I am experimenting with that idea... in order to use regular expressions or another DSL facility to manipulate the final URI dynamically... perhaps storing just the tail of the URI in the database, or another format of information that gives me a chance to produce the external representation on demand... this part is my current research... (I prefer the word "experimentation", since I am not a formal researcher...)

> @Path annotations -- to change the URIs included in your representations.

Yes, but you need to iterate over the collections and change each URI manually, which can lead you to manipulate thousands of strings before responding to the client... this part scares me... (think about a collection with 100 elements, each with 20 URIs)

> In the shopping cart use case (no, I don't care if some people don't like it
> as an example :-), a well designed REST API should *not* define a URI
> template like "/checkout" that accepts a POST with a shopping cart

That is a basic REST principle... and it is not related to the way you store or calculate the URIs...

> concerned, the URI itself is an opaque string.

Yes... thanks for your feedback... interesting... I will keep thinking about your thoughts...

Felipe Gaucho
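[Editor's note: a minimal sketch of the compromise Felipe describes, persisting only the tail of each link and prepending the deployment's base URI when rendering, so stored links survive a change of host or context path. All names and URIs are illustrative.]

```java
import java.util.List;
import java.util.stream.Collectors;

public class LinkExpander {
    // Expand every stored tail (e.g. "/products/1") into an absolute URI
    // at representation-building time. This is the per-request cost Felipe
    // worries about: one string concatenation per link.
    static List<String> expandAll(String baseUri, List<String> storedTails) {
        return storedTails.stream()
                .map(tail -> baseUri + tail)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> tails = List.of("/products/1", "/products/1/reviews");
        System.out.println(expandAll("http://api.example.com", tails));
    }
}
```

For a collection of 100 elements with 20 links each, this is 2,000 concatenations per response; cheap in CPU terms, which is the trade-off Craig raises in his reply about dynamic URI calculation versus storing full URIs.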
excellent points.. I will consider that... 2010/2/26 Craig McClanahan <craigmcc@...>: > 2010/2/25 Felipe Gaucho <fgaucho@...>: >>> You're missing a key feature of what both Kevin and I said ... use >>> *different* classes for the *resource* domain model and the *business* >>> domain model. >> >> yes, I am aware about that... but the effort to maintain the mapping >> between those models generates an extra and unnecessary development >> effort... I also noticed some people having a third model when the >> data will be rendered in the presentation layer.. (a managed bean or >> other component..)... My experiment is looking to reduce the amount of >> effort required to synchronize all those models, having only one >> model.. and it works fine and the performance is better and >> maintenance effort much less than the traditional MVC design... > > In your business domain objects, do you store the CSS style class that > should be used to display this object in a browser? Of course not: > > * The *name* of the style class is totally up to the designer. > > * Do *you* know any designers that care about backwards compatibility > of style names? I don't either ... > but I don't care, as long as the designer updates all the > corresponding HTML pages when the style name changes. > > * Style names can be changed at any time (for example, when you refresh > the look and feel characteristics of your website). > > * And, of course, there is normally more than one webapp > that needs to be able to render this business domain object, > so having only one style name would not be useful. > > The same principles apply to designing resource representations -- we > should assume that there will be more than one representation that > includes this particular object, and more than one RESTful application > that needs to provide access to it, so assuming any *single* approach > is not likely to help all our users.
> >> >> >>> Then, each kind of class can have the annotations that are relevant for that tier. >> >> Usually the classes are identical and you are just including a copy >> layer to transfer data between them :) >> >>> Managing workflow, like obeying the hypermedia constraint, is a view tier concern (from the point of view of a server side developer). >> >> Maybe, maybe not.. in my case the model contains the state of the >> application, so it is a business concern :) >> > > There is pretty much always more than one application needing your > data, so it's more than one business concern :-). Ideally, you can > share the business domain objects across these applications, but the > reality is you'll need multiple RESTful resource representations that > include data from these business domain objects, for the same reason > that you will need multiple HTML representations (even within the same > webapp, how many different pages include information from particular > domain objects?). One size does not fit all. > >>> That's a good thing, because a REST API is typically not the only way that >>> such business model objects get manipulated. >> >> yes, but the effort to manage the hypermedia just in time makes the >> whole application much slower.. and it is a repetitive task.. that is >> one of the reasons that motivated me to push the links in the >> database.... >> >>> Self-test time: are you storing URIs in your database? >> >> yes. The state of the application is persistent :) and scales quite >> well since I don't need to recalculate the state on every request.. >> (ok, memcache can help a bit, but anyway... the hypermedia engine >> is the heavy part of the request - eliminating that I have much faster >> services) > > Scale for one application is nice (although I'm waiting for your > benchmarks to show that the overhead of calculating URIs dynamically > is crushing, given how cheap CPU time is versus other server side > resources).
How about scale for multiple applications? > > Oh, you only have one? Good for you! But that's not a particularly > common problem domain. > >> >>> a JAX-RS based server application, that "care" is expressed by @Path >> >> I am experimenting with that idea.. in order to use regular >> expressions or other DSL facility to manipulate the final URI >> dynamically.. perhaps storing just the tail of the URI in the database >> or other format of information that gives me a chance to produce the >> external representation on demand.. this part is my current >> research...... (I prefer the "experimentation" word since I am not a >> formal researcher..) >> > > Please think about the idea that the same business domain resources > may need to be exposed by different applications, using different > resource representations, and different URI schemes, all at the same > time. Or, even within the same application, exposed in different > pages (web app) or resources (RESTful web service) at the same time. > > Personal history lesson -- when I was first learning web development > (mid-late 1990s), I figured "why not have a toHTML() method on all my > Java business domain model classes"? It quickly became clear that > different pages within the same app, as well as different apps, needed > different HTML representations of the same business objects. Indeed, > this realization was one of the motivating factors that led to the > creation of Struts. > > The same is true for RESTful resource representations. There will be > more than one representation that requires information from a > particular business domain model object, as well as more than one > application (each with their own resource and representation > requirements). > > In MVC terms: Model classes != View classes. > >>> @Path annotations -- to change the URIs included in your representations.
>> >> yes, but you need to iterate over the collections and change manually >> each URI, what can lead you to manipulate thousands of strings before >> responding to the client.. this part scares me... (think about a >> collection with 100 elements, each with 20 URIs) > > For amusement, you should go grab a raw Atom or RSS feed from a busy > feed source (I use Google Reader for my feed aggregator, but the same > principle applies to anyone who provides feeds) and see how many URIs > are included. And, funny thing, none of the apps that *created* that > content had any idea that *I* would be aggregating their feeds, via > Google Reader, for my own use. > > URIs in RESTful web services are a view tier concern, just like CSS style names. > >> >>> In the shopping cart use case (no, I don't care if some people don't like it >>> as an example :-), a well designed REST API should *not* define a URI >>> template like "/checkout" that accepts a POST with a shopping cart >> >> That is a basic REST principle .. and it is not related to the way you >> store or calculate the URIs... >> > > Unfortunately, *lots* of theoretically "RESTful API" specifications > include instructions on how to calculate the URI for a particular > operation (versus telling the client "get the URI you need from the > current representation of the resource"). And I'm as guilty as anyone > else at violating the hypermedia constraint this way, in my earlier > work. But, my point in this particular scenario was, assuming that > there was a single URI for "checkout", for *all* shopping carts, is > technically feasible, but not a good idea. > >>> concerned, the URI itself is an opaque string. >> >> yes...... >> >> thanks for your feedback... interesting.. I will keep thinking about >> your thoughts.... >> >> >> Felipe Gaucho >> > > Craig > -- ------------------------------------------ Felipe Gaucho 10+ Java Programmer CEJUG Senior Advisor
other point: if two distinct applications have different machine states, then the state of the resources will be different, and they cannot share the same database, right? Because eventually the attributes of the resources will be in different states, won't they? The analogy with MVC and web application views doesn't apply here, as far as I can see......... 2010/2/26 Felipe Gaucho <fgaucho@...>: > just a question about your domain model sharing: > > and what about the code? > > If you code the whole engine logic in code, and you need a second > application to access the same data, will you need to recode the > engine logic again? > > ok, we all think about "reuse of components", but so far I've seen > ugly hacks in the code to simulate the hypermedia engine and that's > what I am trying to do better.. but your points are good.. I will > digest them and try to incorporate the best I can in my design > here..... > > 2010/2/26 Felipe Gaucho <fgaucho@...>: >> excellent points.. I will consider that... >> >> 2010/2/26 Craig McClanahan <craigmcc@...>: >>> 2010/2/25 Felipe Gaucho <fgaucho@...>: >>>>> You're missing a key feature of what both Kevin and I said ... use >>>>> *different* classes for the *resource* domain model and the *business* >>>>> domain model. >>>> >>>> yes, I am aware about that... but the effort to maintain the mapping >>>> between those models generates an extra and unnecessary development >>>> effort... I also noticed some people having a third model when the >>>> data will be rendered in the presentation layer.. (a managed bean or >>>> other component..)... My experiment is looking to reduce the amount of >>>> effort required to synchronize all those models, having only one >>>> model.. and it works fine and the performance is better and >>>> maintenance effort much less than the traditional MVC design... >>> >>> In your business domain objects, do you store the CSS style class that
should be used to display this object in a browser? Of course not: >>> >>> * The *name* of the style class is totally up to the designer. >>> >>> * Do *you* know any designers that care about backwards compatibility >>> of style names? I don't either ... >>> but I don't care, as long as the designer updates all the >>> corresponding HTML pages when the style name changes. >>> >>> * Style names can be changed at any time (for example, when you refresh >>> the look and feel characteristics of your website). >>> >>> * And, of course, there is normally more than one webapp >>> that needs to be able to render this business domain object, >>> so having only one style name would not be useful. >>> >>> The same principles apply to designing resource representations -- we >>> should assume that there will be more than one representation that >>> includes this particular object, and more than one RESTful application >>> that needs to provide access to it, so assuming any *single* approach >>> is not likely to help all our users. >>> >>>> >>>> >>>>> Then, each kind of class can have the annotations that are relevant for that tier. >>>> >>>> Usually the classes are identical and you are just including a copy >>>> layer to transfer data between them :) >>>> >>>>> Managing workflow, like obeying the hypermedia constraint, is a view tier concern (from the point of view of a server side developer). >>>> >>>> Maybe, maybe not.. in my case the model contains the state of the >>>> application, so it is a business concern :) >>>> >>> >>> There is pretty much always more than one application needing your >>> data, so it's more than one business concern :-).
Ideally, you can >>> share the business domain objects across these applications, but the >>> reality is you'll need multiple RESTful resource representations that >>> include data from these business domain objects, for the same reason >>> that you will need multiple HTML representations (even within the same >>> webapp, how many different pages include information from particular >>> domain objects?). One size does not fit all. >>> >>>>> That's a good thing, because a REST API is typically not the only way that >>>>> such business model objects get manipulated. >>>> >>>> yes, but the effort to manage the hypermedia just in time makes the >>>> whole application much slower.. and it is a repettive task.. that is >>>> one of the reasons that motivated me to push the links in the >>>> database.... >>>> >>>>> Self-test time: are you storing URIs in your database? >>>> >>>> yes. The sate of the application is persistent :) and scales quite >>>> well since I don't need to recalculate the sate on every request.. >>>> (ok, memcache can help a bit, but anyway... the hypermedia engine it >>>> is the heavy part of the request - eliminating that I have much faster >>>> services) >>> >>> Scale for one application is nice (although I'm waiting for your >>> benchmarks to show that the overhead of calculating URIs dynamically >>> is crushing, given how cheap CPU time is versus other server side >>> resources). How about scale for multiple applications? >>> >>> Oh, you only have one? Good for you! But that's not a particularly >>> common problem domain. >>> >>>> >>>>> a JAX-RS based server application, that "care" is expressed by @Path >>>> >>>> I am experimenting with that idea.. in order to use regular >>>> expressions or other DSL facilty to manipulate the final URI >>>> dinamically.. perhaps storing just the tail of the URI in the databse >>>> or other format of information that lead me a chance to produce the >>>> external representation by demand.. 
this part is my current >>>> research...... (I prefer the "experimentation" word since I am not a >>>> formal researcher..) >>>> >>> >>> Please think about the idea that the same business domain resources >>> may need to be exposed by different applications, using different >>> resource representations, and different URI schemes, all at the same >>> time. Or, even within the same application, exposed in different >>> pages (web app) or resources (RESTful web service) at the same time. >>> >>> Personal history lesson -- when I was first learning web development >>> (mid-late 1990s), I figured "why not have a toHTML() method on all my >>> Java business domain model classes"? It quickly became clear that >>> different pages within the same app, as well as different apps, needed >>> different HTML representations of the same business objects. Indeed, >>> this realization was one of the motivating factors that led to the >>> creation of Struts. >>> >>> The same is true for RESTful resource representations. There will be >>> more than one representation that requires information from a >>> particular business domain model object, as well as more than one >>> application (each with their own resource and representation >>> requirements). >>> >>> In MVC terms: Model classes != View classes. >>> >>>>> @Path annotations -- to change the URIs included in your representations. >>>> >>>> yes, but you need to iterate over the collections and change manually >>>> each URI, what can lead you to manipulate thousand of strings before >>>> to respond to the client.. this part scares me... (think about a >>>> collection with 100 elements, each with 20 URIs) >>> >>> For amusement, you should go grab a raw Atom or RSS feed from a busy >>> feed source (I use Google Reader for my feed aggregator, but the same >>> principle applies to anyone who provides feeds) and see how many URIs >>> are included. 
And, funny thing, none of the apps that *created* that >>> content had any idea that *I* would be aggregating their feeds, via >>> Google Reader, for my own use. >>> >>> URIs in RESTful web services are a view tier concern, just like CSS style names. >>> >>>> >>>>> In the shopping cart use case (no, I don't care if some people don't like it >>>>> as an example :-), a well designed REST API should *not* define a URI >>>>> template like "/checkout" that accepts a POST with a shopping cartURI >>>> >>>> That is a basic REST principle .. and it is not related to the way you >>>> store or calculate the URIs... >>>> >>> >>> Unfortunately, *lots* of theoretically "RESTful API" specifications >>> include instructions on how to calculate the URI for a particular >>> operation (versus telling the client "get the URI you need from the >>> current representation of the resource.). And I'm as guilty as anyone >>> else at violating the hypermedia constraint this way, in my earlier >>> work. But, my point in this particular scenario was, assuming that >>> there was a single URI for "checkout", for *all* shopping carts, is >>> technically feasible, but not a good idea. >>> >>>>> concerned, the URI itself is an opaque string. >>>> >>>> yes...... >>>> >>>> thanks for your feedback... interesting.. I will keep thinking about >>>> your thoughts.... >>>> >>>> >>>> Felipe Gacho >>>> >>> >>> Craig >>> >> >> >> >> -- >> ------------------------------------------ >> Felipe Gacho >> 10+ Java Programmer >> CEJUG Senior Advisor >> > > > > -- > ------------------------------------------ > Felipe Gacho > 10+ Java Programmer > CEJUG Senior Advisor > -- ------------------------------------------ Felipe Gacho 10+ Java Programmer CEJUG Senior Advisor
just a question about your domain model sharing: and what about the code ? If you code the whole engine logic in a code, and you need a second application to access the same data, you will need to recode the engine logic again ? ok, we all think about "reuse of components", but so far I've seen ugly hack in the code to simulate the hypermedia engine and that's what I am trying to do better.. but your points are good.. I will digest them and try to incorporate the best I can in my design here..... 2010/2/26 Felipe Gacho <fgaucho@...>: > excellent points.. I will consider that... > > 2010/2/26 Craig McClanahan <craigmcc@...>: >> 2010/2/25 Felipe Gacho <fgaucho@...>: >>>> You're missing a key feature of what both Kevin and I said ... use >>>> *different* classes for the *resource* domain model and the *business* >>>> domain model. >>> >>> yes, I am aware about that... but the effort to maintain the mapping >>> between those models generates an extra and unnecessary development >>> effort ,.... I also noticed some people having a third model when the >>> data will be rendered in the presentation layer.. (a managedbean or >>> other component..)... My experiment is looking to reduce the amount of >>> effort required to synchronize all those models, having only one >>> model.. and it works fine and the performance is better and >>> maintenance effort much less than the traditional MVC design... >> >> In your business domain objects, do you store the CSS style class that >> should be used to display this object in a browser? Of course not: >> >> * The *name* of the style class is totally up to the designer. >> >> * Do *you* know any designers that care about backwards compatibility >> of style names? I don't either ... >> but I don't care, as long as the designer updates all the >> corresponding HTML pages when the style name changes. >> >> * Style names can be changed at any time (for example, when you refresh >> the look and feel characteristics of your website). 
>> >> * And, of course, there is normally more than one webapp >> that needs to be able to render this business domain object, >> so having only one style name would not be useful. >> >> The same principles apply to designing resource representations -- we >> should assume that there will be more than one representation that >> includes this particular object, and more than one RESTful application >> that needs to provide access to it, so assuming any *single* approach >> is not likely to help all our users. >> >>> >>> >>>> Then, each kind of class can have the annotations that are relevant for that tier. >>> >>> Usually the classes are identical and you are just including a copy >>> layer to transfer data between them :) >>> >>>> Managing workflow, like obeying the hypermedia constraint, is a view tier concern (from the point of view of a server side developer). >>> >>> May be, may be not.. in my case the model contains the sate of the >>> application, so it is a business concern :) >>> >> >> There is pretty much always more than one application needing your >> data, so it's more than one business concern :-). Ideally, you can >> share the business domain objects across these applications, but the >> reality is you'll need multiple RESTful resource representations that >> include data from these business domain objects, for the same reason >> that you will need multiple HTML representations (even within the same >> webapp, how many different pages include information from particular >> domain objects?). One size does not fit all. >> >>>> That's a good thing, because a REST API is typically not the only way that >>>> such business model objects get manipulated. >>> >>> yes, but the effort to manage the hypermedia just in time makes the >>> whole application much slower.. and it is a repettive task.. that is >>> one of the reasons that motivated me to push the links in the >>> database.... >>> >>>> Self-test time: are you storing URIs in your database? >>> >>> yes. 
The sate of the application is persistent :) and scales quite >>> well since I don't need to recalculate the sate on every request.. >>> (ok, memcache can help a bit, but anyway... the hypermedia engine it >>> is the heavy part of the request - eliminating that I have much faster >>> services) >> >> Scale for one application is nice (although I'm waiting for your >> benchmarks to show that the overhead of calculating URIs dynamically >> is crushing, given how cheap CPU time is versus other server side >> resources). How about scale for multiple applications? >> >> Oh, you only have one? Good for you! But that's not a particularly >> common problem domain. >> >>> >>>> a JAX-RS based server application, that "care" is expressed by @Path >>> >>> I am experimenting with that idea.. in order to use regular >>> expressions or other DSL facilty to manipulate the final URI >>> dinamically.. perhaps storing just the tail of the URI in the databse >>> or other format of information that lead me a chance to produce the >>> external representation by demand.. this part is my current >>> research...... (I prefer the "experimentation" word since I am not a >>> formal researcher..) >>> >> >> Please think about the idea that the same business domain resources >> may need to be exposed by different applications, using different >> resource representations, and different URI schemes, all at the same >> time. Or, even within the same application, exposed in different >> pages (web app) or resources (RESTful web service) at the same time. >> >> Personal history lesson -- when I was first learning web development >> (mid-late 1990s), I figured "why not have a toHTML() method on all my >> Java business domain model classes"? It quickly became clear that >> different pages within the same app, as well as different apps, needed >> different HTML representations of the same business objects. 
Indeed, >> this realization was one of the motivating factors that led to the >> creation of Struts. >> >> The same is true for RESTful resource representations. There will be >> more than one representation that requires information from a >> particular business domain model object, as well as more than one >> application (each with their own resource and representation >> requirements). >> >> In MVC terms: Model classes != View classes. >> >>>> @Path annotations -- to change the URIs included in your representations. >>> >>> yes, but you need to iterate over the collections and change manually >>> each URI, what can lead you to manipulate thousand of strings before >>> to respond to the client.. this part scares me... (think about a >>> collection with 100 elements, each with 20 URIs) >> >> For amusement, you should go grab a raw Atom or RSS feed from a busy >> feed source (I use Google Reader for my feed aggregator, but the same >> principle applies to anyone who provides feeds) and see how many URIs >> are included. And, funny thing, none of the apps that *created* that >> content had any idea that *I* would be aggregating their feeds, via >> Google Reader, for my own use. >> >> URIs in RESTful web services are a view tier concern, just like CSS style names. >> >>> >>>> In the shopping cart use case (no, I don't care if some people don't like it >>>> as an example :-), a well designed REST API should *not* define a URI >>>> template like "/checkout" that accepts a POST with a shopping cartURI >>> >>> That is a basic REST principle .. and it is not related to the way you >>> store or calculate the URIs... >>> >> >> Unfortunately, *lots* of theoretically "RESTful API" specifications >> include instructions on how to calculate the URI for a particular >> operation (versus telling the client "get the URI you need from the >> current representation of the resource.). 
And I'm as guilty as anyone >> else at violating the hypermedia constraint this way, in my earlier >> work. But, my point in this particular scenario was, assuming that >> there was a single URI for "checkout", for *all* shopping carts, is >> technically feasible, but not a good idea. >> >>>> concerned, the URI itself is an opaque string. >>> >>> yes...... >>> >>> thanks for your feedback... interesting.. I will keep thinking about >>> your thoughts.... >>> >>> >>> Felipe Gacho >>> >> >> Craig >> > > > > -- > ------------------------------------------ > Felipe Gacho > 10+ Java Programmer > CEJUG Senior Advisor > -- ------------------------------------------ Felipe Gacho 10+ Java Programmer CEJUG Senior Advisor
2010/2/25 Felipe Gacho <fgaucho@...>: >> You're missing a key feature of what both Kevin and I said ... use >> *different* classes for the *resource* domain model and the *business* >> domain model. > > yes, I am aware about that... but the effort to maintain the mapping > between those models generates an extra and unnecessary development > effort ,.... I also noticed some people having a third model when the > data will be rendered in the presentation layer.. (a managedbean or > other component..)... My experiment is looking to reduce the amount of > effort required to synchronize all those models, having only one > model.. and it works fine and the performance is better and > maintenance effort much less than the traditional MVC design... In your business domain objects, do you store the CSS style class that should be used to display this object in a browser? Of course not: * The *name* of the style class is totally up to the designer. * Do *you* know any designers that care about backwards compatibility of style names? I don't either ... but I don't care, as long as the designer updates all the corresponding HTML pages when the style name changes. * Style names can be changed at any time (for example, when you refresh the look and feel characteristics of your website). * And, of course, there is normally more than one webapp that needs to be able to render this business domain object, so having only one style name would not be useful. The same principles apply to designing resource representations -- we should assume that there will be more than one representation that includes this particular object, and more than one RESTful application that needs to provide access to it, so assuming any *single* approach is not likely to help all our users. > > >> Then, each kind of class can have the annotations that are relevant for that tier. 
> > Usually the classes are identical and you are just including a copy > layer to transfer data between them :) > >> Managing workflow, like obeying the hypermedia constraint, is a view tier concern (from the point of view of a server side developer). > > May be, may be not.. in my case the model contains the sate of the > application, so it is a business concern :) > There is pretty much always more than one application needing your data, so it's more than one business concern :-). Ideally, you can share the business domain objects across these applications, but the reality is you'll need multiple RESTful resource representations that include data from these business domain objects, for the same reason that you will need multiple HTML representations (even within the same webapp, how many different pages include information from particular domain objects?). One size does not fit all. >> That's a good thing, because a REST API is typically not the only way that >> such business model objects get manipulated. > > yes, but the effort to manage the hypermedia just in time makes the > whole application much slower.. and it is a repettive task.. that is > one of the reasons that motivated me to push the links in the > database.... > >> Self-test time: are you storing URIs in your database? > > yes. The sate of the application is persistent :) and scales quite > well since I don't need to recalculate the sate on every request.. > (ok, memcache can help a bit, but anyway... the hypermedia engine it > is the heavy part of the request - eliminating that I have much faster > services) Scale for one application is nice (although I'm waiting for your benchmarks to show that the overhead of calculating URIs dynamically is crushing, given how cheap CPU time is versus other server side resources). How about scale for multiple applications? Oh, you only have one? Good for you! But that's not a particularly common problem domain. 
> >> a JAX-RS based server application, that "care" is expressed by @Path > > I am experimenting with that idea.. in order to use regular > expressions or other DSL facilty to manipulate the final URI > dinamically.. perhaps storing just the tail of the URI in the databse > or other format of information that lead me a chance to produce the > external representation by demand.. this part is my current > research...... (I prefer the "experimentation" word since I am not a > formal researcher..) > Please think about the idea that the same business domain resources may need to be exposed by different applications, using different resource representations, and different URI schemes, all at the same time. Or, even within the same application, exposed in different pages (web app) or resources (RESTful web service) at the same time. Personal history lesson -- when I was first learning web development (mid-late 1990s), I figured "why not have a toHTML() method on all my Java business domain model classes"? It quickly became clear that different pages within the same app, as well as different apps, needed different HTML representations of the same business objects. Indeed, this realization was one of the motivating factors that led to the creation of Struts. The same is true for RESTful resource representations. There will be more than one representation that requires information from a particular business domain model object, as well as more than one application (each with their own resource and representation requirements). In MVC terms: Model classes != View classes. >> @Path annotations -- to change the URIs included in your representations. > > yes, but you need to iterate over the collections and change manually > each URI, what can lead you to manipulate thousand of strings before > to respond to the client.. this part scares me... 
(think about a > collection with 100 elements, each with 20 URIs) For amusement, you should go grab a raw Atom or RSS feed from a busy feed source (I use Google Reader for my feed aggregator, but the same principle applies to anyone who provides feeds) and see how many URIs are included. And, funny thing, none of the apps that *created* that content had any idea that *I* would be aggregating their feeds, via Google Reader, for my own use. URIs in RESTful web services are a view tier concern, just like CSS style names. > >> In the shopping cart use case (no, I don't care if some people don't like it >> as an example :-), a well designed REST API should *not* define a URI >> template like "/checkout" that accepts a POST with a shopping cartURI > > That is a basic REST principle .. and it is not related to the way you > store or calculate the URIs... > Unfortunately, *lots* of theoretically "RESTful API" specifications include instructions on how to calculate the URI for a particular operation (versus telling the client "get the URI you need from the current representation of the resource.). And I'm as guilty as anyone else at violating the hypermedia constraint this way, in my earlier work. But, my point in this particular scenario was, assuming that there was a single URI for "checkout", for *all* shopping carts, is technically feasible, but not a good idea. >> concerned, the URI itself is an opaque string. > > yes...... > > thanks for your feedback... interesting.. I will keep thinking about > your thoughts.... > > > Felipe Gacho > Craig
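Craig's distinction between business model classes and resource (view) classes can be sketched in plain Java. All names here (`Order`, `OrderMapper`, the example base URIs) are hypothetical, not taken from the thread; the only point being illustrated is that the self URI is computed at render time from each application's own base URI, never stored with the business data.

```java
// Business domain object: no URIs, no view-tier concerns.
class Order {
    final long id;
    final String status;
    Order(long id, String status) { this.id = id; this.status = status; }
}

// Resource representation: built per application, per representation.
class OrderRepresentation {
    final String selfUri;
    final String status;
    OrderRepresentation(String selfUri, String status) {
        this.selfUri = selfUri;
        this.status = status;
    }
}

// View-tier mapper: URIs are derived at render time from the
// application's base URI, so each app can use its own URI scheme.
class OrderMapper {
    private final String baseUri;
    OrderMapper(String baseUri) { this.baseUri = baseUri; }
    OrderRepresentation toRepresentation(Order order) {
        return new OrderRepresentation(baseUri + "/orders/" + order.id, order.status);
    }
}

public class ModelVsView {
    public static void main(String[] args) {
        Order order = new Order(42, "open");
        // Two applications expose the same business object under different URI schemes.
        OrderRepresentation a = new OrderMapper("https://shop.example.com/api").toRepresentation(order);
        OrderRepresentation b = new OrderMapper("https://admin.example.com/v2").toRepresentation(order);
        System.out.println(a.selfUri); // https://shop.example.com/api/orders/42
        System.out.println(b.selfUri); // https://admin.example.com/v2/orders/42
    }
}
```

Nothing in the database changes when either application revises its URI scheme, which is exactly why Craig compares URIs to CSS style names.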
Hi, as an update to yesterday's API classification I've added an impact analysis that shows what effect the violation of the various constraints has on the overall system properties. > <http://nordsc.com/ext/classification_of_http_based_apis.html> Jan ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
> > I think we must differentiate here between CRUD GUIs (which I
> dislike) and CRUD APIs:
> > As SQL works so well, I don't actually see a real need to invent more
> commands. I mean,
> > what benefit would it actually bring?
>
> You gain an API that has better granularity than simply CRUD. Yes,
> inevitably, you end up doing pure "atomic" CRUD against, say, a SQL
> database. But there's no reason you should be forced to expose that
> limited of an API to your clients. You should be able to have higher
> level interactions with your server than just raw CRUD, actions that
> themselves may manifest into several primitive hits against your data
> store.
>
> The point of the Common Interface isn't so much to hamstring
> applications in to a pure CRUD HTTP view of the world, but rather to
> help ensure you don't simply run off hog wild creating eleventy
> zillion new verbs. Taking a constrained, conservative view at the verb
> layer and how they're used can make the API easier to use, and more
> approachable, rather than a wiener dog nest of exceptions and special
> cases and what ifs. Consistent application of a consistent, common
> interface.
>
> But that doesn't mean you're limited solely to CRUD.
While our application is highly complex and in pre-REST times used lots
of very special RPC methods, I actually do not feel "constrained" by CRUD;
also, CRUD does not mean that you must have a 1:1 mapping of each request to
exactly one database operation.
Example:
Our software allows creating "inspection orders" from "inspection order
templates". Such a template or instance contains lots of very deep children
and thousands of data fields. While one is free to do nearly anything in a
template, if you want to make an instance from a template, you have to pass
a very complex validation which includes hundreds of rules on a lot of the
fields in all of the child objects. It is one of the most complex algorithms
I've ever seen, with thousands of LoC. It involves so many database
requests that it must run completely on the server to be fast enough.
In RPC times, this was done as one big stored procedure, reading lots of
rows and writing lots of rows. All the caller had to do was "CALL
CreateInspectionFromTemplate(ID)". What a simple API.
Now we turn it into REST. The server is unchanged: it still calls the same
SQL. We just added a RESTful layer on top of it. To execute the (nearly) same
procedure, you will now just do:
GET /templates/{id}
POST /orders/
This triggers the stored procedure, runs all the thousands of lines of code,
and creates the object in the database. What you'll get back is a URI of
the form /order/{id} for the newly created instance (the root of the tree of
rows created in the database forming the instance created from the
template), and a validation result (XML, either being empty or containing
a lot of validation faults).
So CRUD does not mean creating exactly one row, or not being able to
execute highly complex transactions. As you can see, I can execute a complex
algorithm running on thousands of database rows, creating even more
thousands, and thus perform a very complex business task, with a simple
CRUD operation, without the need for "RPC-like" names or URIs.
THAT's why I said: why not do CRUD? :-)
Regards
Markus
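Markus's point, that a single CRUD POST can trigger an arbitrarily complex server-side transaction, might be sketched as follows. The resource and result types are hypothetical, and the stored-procedure call is stubbed out here (in the real system it would be a JDBC `CallableStatement` for "CALL CreateInspectionFromTemplate(ID)"); the shape of the interaction is what matters.

```java
import java.util.List;

// Result of creating an order from a template: the new order's id
// plus any validation faults raised by the server-side rules.
class CreationResult {
    final long orderId;
    final List<String> faults;
    CreationResult(long orderId, List<String> faults) {
        this.orderId = orderId;
        this.faults = faults;
    }
}

public class OrdersResource {
    // Stand-in for the stored procedure. In the real system this one
    // call runs thousands of lines of validation and writes thousands
    // of rows, entirely inside the database.
    static CreationResult createInspectionFromTemplate(long templateId) {
        if (templateId <= 0) {
            return new CreationResult(-1, List.of("unknown template " + templateId));
        }
        long newOrderId = 1000 + templateId; // pretend the procedure allocated an id
        return new CreationResult(newOrderId, List.of());
    }

    // Handler for "POST /orders/": one CRUD operation, arbitrarily much
    // work behind it. Returns the Location header value of the new resource.
    static String post(long templateId) {
        CreationResult result = createInspectionFromTemplate(templateId);
        if (!result.faults.isEmpty()) {
            throw new IllegalArgumentException(String.join("; ", result.faults));
        }
        return "/orders/" + result.orderId;
    }

    public static void main(String[] args) {
        System.out.println(OrdersResource.post(7)); // /orders/1007
    }
}
```

The client still sees only GET and POST on opaque URIs; how many rows move inside the transaction is invisible to it.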
Kevin,
as with anything in REST, the answer is, as always: how would the solution
look if your client were a browser? ;-)
In fact, I still do not understand your issue: just like your browser
doesn't know what it will receive at the end of an <A> link that you
clicked, your REST client does not need to know the type prior to invoking
the HTTP method, and it will see the actually received content type as soon
as it receives the content, just as a browser sees the content of a
referenced resource after following an <A> link in an HTML file. I don't see
why you need to know the type *before* receiving it. At request time,
the client should just provide a list of what types *it can process* and of
its preferences. Your browser always sends "I like HTML most", independent of
what the server can actually send.
Also, why do you want to use different types at all? Why not just
restrict your service to always work with XML and/or JSON instead of
switching types all the time? (I just don't see the use of this, though
certainly it may be a valid business requirement.)
Regards
Markus
From: Kevin Duffey [mailto:andjarnic@...]
Sent: Mittwoch, 24. Februar 2010 21:59
To: rest-discuss@yahoogroups.com; Markus KARG
Subject: RE: [rest-discuss] Re: [Jersey] Moved thread to rest-discuss /
HATEOAS-via-HTTP: Which HTTP Method to use to follow link?
Markus,
What I meant was... what does the response look like? Is it something like:
<links>
<link rel="create" uri="...." method="post" media-type="application/xml,
application/json"/>
<link rel="self" uri="..." method="get, put"
media-type="application/xml"/>
</links>
That is, if I do some sort of GET or OPTIONS or what have you on a published
URI, and two URIs come back indicating I can GET the self link and POST a create
to another URI, given that we can specify different media types on each
method (at least in Jersey), I am curious what lets the client side know WHAT
to set the Content-Type header to, in order to get it to the right method on
the server side for handling?
With JAX-RS, we can specify two methods from above:
@POST
@Consumes({"application/xml", "application/json"})
public Response create(...){}
@PUT
@Consumes({"application/xml"})
public Response update(...){}
@GET
@Produces({"application/xml"})
public Response get(...){}
I am curious, as I try to build out a HATEOAS-based response system, how I
send back multiple links that can be followed by the client at that point, and
how I tell the client the specific media type each URI can handle. Or is
that in a document instead that says "for this URI, you must use the media
type..."?
--- On Wed, 2/24/10, Markus KARG <markus@...> wrote:
From: Markus KARG <markus@...>
Subject: RE: [rest-discuss] Re: [Jersey] Moved thread to rest-discuss /
HATEOAS-via-HTTP: Which HTTP Method to use to follow link?
To: "'Kevin Duffey'" <andjarnic@...>, rest-discuss@yahoogroups.com
Date: Wednesday, February 24, 2010, 10:19 AM
If we stick to http in the original definition's sense, you don't need to
define the mime type at all: The client will supply a list of accepted media
type preferences in a request, and will get one of those back. The actual
returned type is found in the header. If you want ONLY the header, don't use
GET but HEAD. Rather simple, isn't it?
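Markus's description of the negotiation, where the client lists the media types it can process with preferences and the server picks the best one it supports, can be sketched without any framework. This is a deliberately simplified q-value parser for illustration (no wildcards, no specificity rules), not a full implementation of the HTTP Accept grammar:

```java
import java.util.List;

public class Negotiation {
    // Pick the server-supported type with the highest q-value in the
    // client's Accept header. Returns null when nothing matches,
    // which a server would map to 406 Not Acceptable.
    static String pick(String acceptHeader, List<String> serverTypes) {
        String best = null;
        double bestQ = -1;
        for (String part : acceptHeader.split(",")) {
            String[] pieces = part.trim().split(";");
            String type = pieces[0].trim();
            double q = 1.0; // default quality per HTTP
            for (int i = 1; i < pieces.length; i++) {
                String p = pieces[i].trim();
                if (p.startsWith("q=")) q = Double.parseDouble(p.substring(2));
            }
            if (serverTypes.contains(type) && q > bestQ) {
                best = type;
                bestQ = q;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        String accept = "application/json;q=0.8, application/xml";
        System.out.println(pick(accept, List.of("application/xml", "application/json")));
        // application/xml  (implicit q=1.0 beats json's q=0.8)
    }
}
```

The client never needed to know in advance which type it would get; it only declared what it can process, exactly as a browser does.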
From: rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com] On
Behalf Of Kevin Duffey
Sent: Dienstag, 23. Februar 2010 21:34
To: rest-discuss@yahoogroups.com
Subject: Re: [rest-discuss] Re: [Jersey] Moved thread to rest-discuss /
HATEOAS-via-HTTP: Which HTTP Method to use to follow link?
Can one of you guys/gals explain to me how you determine the media type for
each URI returned? This is the one thing that still perplexes me. In your
example, you return the name, rel, and URI. But from those three items, how
do I set the media type for Content-Type when I want to make a request?
On the server side, I might support different media types for each of my
methods; somehow I would assume I need to return that info as well so that
a developer can set the right media type for the request.
Or perhaps I've completely confused the importance and use of media types
with regards to HATEOAS responses that provide potentially multiple links
for the activities that can be performed for a given state?
Thank you.
--- On Tue, 2/23/10, Kris Zyp <kris@...> wrote:
From: Kris Zyp <kris@...>
Subject: Re: [rest-discuss] Re: [Jersey] Moved thread to rest-discuss /
HATEOAS-via-HTTP: Which HTTP Method to use to follow link?
To: "Markus KARG" <markus@...>
Cc: "'Jan Algermissen'" <algermissen1971@...>, "'REST Discuss'"
<rest-discuss@yahoogroups.com>
Date: Tuesday, February 23, 2010, 11:31 AM
On 2/23/2010 11:58 AM, Markus KARG wrote:
> this is interesting! As I am working with XSL / XML a lot, I did
> not take such a deep look at JSON. Is that type of link support
> native to JSON or is that just a specific use of JSON?
It is not "native" to application/json; it is part of JSON Schema
(application/schema+json) [1], and is therefore a meta-description of
the links that can be understood from the data/documents.
[1] http://tools.ietf.org/html/draft-zyp-json-schema
> I wonder whether the outlined type of link description is actually
> RESTful: I mean, "create order" clearly is a command, and such is not
> document driven but method driven, which in turn looks like RPC to
> me?
"create_order" was just what I used to make it clear, since I thought
you were asking for a way to indicate to a client how to navigate
to/submit a request to create an order (using a POST).
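For readers unfamiliar with that draft, a link description in the style of its "links" section might look like the fragment below. The relation names and paths are illustrative, not taken from the draft or from this thread:

```json
{
  "links": [
    { "rel": "self",         "href": "/orders/{id}" },
    { "rel": "create_order", "href": "/orders",
      "method": "POST", "enctype": "application/json" }
  ]
}
```

Because the schema (not each instance document) carries the link metadata, the instance stays plain application/json while a schema-aware client can still discover where and how to submit a create request.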
--
Kris Zyp
SitePen
(503) 806-1841
http://sitepen.com
Good work, but actually one thing needs to be added: REST has nothing to do with http. It is an architectural style. Certainly most REST applications are using http currently. But for correctness, the table should not say "REST" but "REST over http". ;-) > -----Original Message----- > From: rest-discuss@yahoogroups.com [mailto:rest- > discuss@yahoogroups.com] On Behalf Of Jan Algermissen > Sent: Donnerstag, 25. Februar 2010 14:19 > To: REST Discuss > Subject: [rest-discuss] Differentiating HTTP-based APIs > > Hi, > > I have put together a table classifying HTTP-based API-types according > to the REST constraints they adhere to: > > <http://nordsc.com/ext/classification_of_http_based_apis.html> > > Hope this is useful. > > Jan > > ----------------------------------- > Jan Algermissen, Consultant > NORD Software Consulting > > Mail: algermissen@... > Blog: http://www.nordsc.com/blog/ > Work: http://www.nordsc.com/ > ----------------------------------- > > > > > > > ------------------------------------ > > Yahoo! Groups Links > > >
On Feb 26, 2010, at 9:13 PM, Markus KARG wrote:

> Good work, but actually one thing needs to be added: REST has nothing to do
> with http. It is an architectural style. Certainly most REST applications
> are using http currently. But for correctness, the table should not say
> "REST" but "REST over http". ;-)

Yep, that's correct. I am mixing levels there. OTOH, read the others as names for 'styles' (e.g. 'RPC Resource Identifier Tunneling' as opposed to 'RPC URI-Tunneling'). Then the levels fit again :-)

Jan

-----------------------------------
Jan Algermissen, Consultant
NORD Software Consulting

Mail: algermissen@...
Blog: http://www.nordsc.com/blog/
Work: http://www.nordsc.com/
-----------------------------------
> > I think we must differentiate here between CRUD GUIs (which I
> > dislike) and CRUD APIs: As SQL works so well, I don't actually see a
> > real need to invent more commands. I mean, what benefit should it
> > actually bring? If I want my server to send an email, what would be
> > wrong to say that I must do a POST to the "http://.../mail-outbox/"
> > URI, which in turn will make the server send away the received entity
> > as an email, and which is a RESTful operation?
>
> Nothing is wrong with that. That is how domain specific operations
> ('goals' if you want) are achieved: tell a resource *with the
> appropriate semantics* to process-this (POST). So, if you learned from
> some hypermedia that http://.../mail-outbox/ has the semantics of
> sending a mail when you POST something to it then that is how you
> achieve that domain goal.
>
> The need for the hypermedia is there because that is how the client
> learns at runtime that http://.../mail-outbox/ has these semantics.
>
I need to disagree here: The WWW is working without anybody having to learn
about the semantics of a particular method called upon a particular URI.
Why? Because it is defined by http itself! I think it is just a wrong idea
to impose any other semantics than those defined by http.
Example:
A POST on a URI implies that a new resource is created (just like an INSERT
defines that in SQL).
The fact that the created object actually is an email but not a row in a
database or a file on disk plays no role. The semantics are the same. So
there actually is nothing to learn. It is just clear that if you want to
send an email, it obviously must be POST, since a PUT would imply an
update of an existing object (just like UPDATE defines that in SQL) - but
you cannot update an email, as it certainly is already sent!
Just keep things simple and drill down to the bottom, and you'll see that
there is no need for "learning" about semantics.
More complex example:
Suppose you'd like to instruct your car to drive either left or right, etc.: You know
your car reacts if you send "application/instruction+xml" as that might be
the mime found on Wikipedia for car steering systems (containing primitives
like "left turn", "right turn", etc. or complex instructions like "drive to
los angeles") to the URI "http://mycar/steering/" (as this might be printed
on your car's key). What would PUT / POST / DELETE / GET do? Is there a need
to define the actual semantics? (I suppose: no.) While driving a car is
actually so complex that there still (sigh!) is no self-driving car to buy
even in 2010, the answer to this question is rather simple:
POST ("INSERT"): Uploads a new instruction, so obviously the car will finish
the existing instructions and after that follow the new one. It doesn't
matter that the result might not be cached, as you will GET it
later using the new instruction's server-generated URI returned by POST. The
uploaded instruction is kept in some form of a queue, as you didn't say PUT
to replace it. It's just obvious, just as a human driver also would
understand a sequence of instructions as a queue.
DELETE (on sub-resource) ("DELETE"): Deletes the specified instruction, i.
e. will stop executing it at once if already running, otherwise just drops
it from the queue of queued instructions.
GET (on root) ("SELECT"): Returns a list of all uploaded instructions ("What
will the car do in sequence?")
GET (on sub-resource) ("SELECT"): Returns the specified instruction,
possibly to learn about its execution status ("Did the car already follow
this instruction, or is it still in the queue?")
PUT (on sub-resource) ("UPDATE"): Overwrites the specified instruction by
this new one, i. e. will stop executing it and instead execute the new one
when already running (otherwise just replaces it in the queue). Just as a
human driver would understand if you say: "I said turn left next, but I
actually meant turn right!".
Even in this complex example, the semantics of PUT / POST / DELETE / GET are
really obvious and follow exactly the WWW semantics. No need to define
semantics. Just expect anything on earth to be a document that can be
created, read, updated, deleted. Nobody says that CRUD must deal with
databases or single rows in a table. The CRUD's "C" or "U" can be
implemented as highly complex algorithms, as the car example shows.
So I just wait for a business use case (not a synthetic example) that will
not be that simple in the end. ;-)
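The queue semantics described above can be sketched in plain Java, independent of any HTTP stack. The class and method names here are invented for the sketch, and a real service would hand out stable URIs rather than list indices:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the car's instruction queue with CRUD-style operations:
// POST appends, GET reads, PUT replaces in place, DELETE removes.
class InstructionQueue {
    private final List<String> queue = new ArrayList<>();

    // POST ("INSERT"): append a new instruction; the returned index stands in
    // for the server-generated URI of the new sub-resource.
    int post(String instruction) {
        queue.add(instruction);
        return queue.size() - 1;
    }

    // GET on root ("SELECT"): the full list of pending instructions.
    List<String> getAll() {
        return List.copyOf(queue);
    }

    // GET on sub-resource ("SELECT"): one instruction by its id.
    String get(int id) {
        return queue.get(id);
    }

    // PUT on sub-resource ("UPDATE"): replace an instruction in place
    // ("I said turn left, but I actually meant turn right!").
    void put(int id, String instruction) {
        queue.set(id, instruction);
    }

    // DELETE on sub-resource ("DELETE"): drop an instruction from the queue.
    void delete(int id) {
        queue.remove(id);
    }
}
```

The point of the sketch is only that each HTTP method maps onto one obvious queue operation; no extra semantics need to be declared.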
> Note: if the general semantics of the domain operation map to PUT or
> DELETE or PATCH these specific methods should be used because you get
> more visibility compared to POST (which has visibility zero). For
> example, a PUT on /orders/2 allows caches to flush what they have for
> /orders/2 and store the response to the PUT. If you POST /orders/2 the
> caches just flush - and they only do this not because they know what is
> going on but because that is the (necessary) default behavior for POST.
In this particular example, actually I do not see any problem in the fact
that a cache is flushing as the target was to create an email: There is no
result to flush, actually. ;-) But I understand what you mean.
Hey Markus...
I think we're missing each other on this conversation. First, like you say, a browser would understand ahead of time what an <A> element is and what to do with it. As a client consuming my service API, they *should* know that whatever way I return the various links that they can make use of at some given state, they will know what to do. I believe this is what you are saying. However, being that my service can evolve (I can add new URI links for, say, new features later on), there has to be a way to describe each link's method to use (get/post/put/delete), correct? I mean, I can't just return something like:
<links>
<link href="http://myservice.com/create"/>
<link href="http://myservice.com/orders"/>
<link href="http://myservice.com/history"/>
</links>
and expect a client to know that they can post to the /create, get from /orders and get a /history/{id}, can I? If instead I returned:
<links>
<link href="http://myservice.com/create" method="post"/>
<link href="http://myservice.com/orders" method="get"/>
<link href="http://myservice.com/history" method="get,put,delete"/>
</links>
Is that wrong? That would allow a client, as well as a bot to be able to determine what it can do with each resource link.
Furthermore, I thought the use of media types was important? I am still a bit rusty with this, hence my questions, but I thought there was some push to use media types like "application/vnd.com.myservice.create+xml" for the /create resource and that that would be set as the Content-Type on the request... in Jersey, I'd have a method like this set up:
@POST
@Consumes("application/vnd.com.myservice.create+xml")
public Response create(String entity) {
    // handle the entity, then respond with e.g. 201 Created
    return Response.status(Response.Status.CREATED).build();
}
But, in order for a client to know WHAT media type to set the Content-Type to when sending the request, I'd have to return that media type string in my links:
<links>
<link href="http://myservice.com/create" method="post" type="application/vnd.com.myservice.create+xml"/>
</links>
So that a client, be it a developer consuming my API or a bot scanning the response, could look at each <link> element, and know that it can post/put/delete/get and what to set the Content-Type header to so that it goes to the right method. Naturally the client doesn't know that it will go to the right method, the point here is that the media type is specific for each resource.
So my question is really.. the type="..." portion.. is that what would or should be done.. passed back as part of a response so that client consumers can figure out what method(s), URI and media type to set when making a request? I do NOT mean to say that a client should know BEFORE that this is what is coming back. Simply that WHEN it comes back, what attributes to look for in order to make the next request.
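A minimal sketch of the client side being described here, i.e. building the next request purely from the method and type attributes advertised in a link. The `Link` record and the helper are hypothetical, not part of any framework; only `java.net.http` from the JDK is used:

```java
import java.net.URI;
import java.net.http.HttpRequest;

// A link as advertised by the service: href plus method and media-type hints.
record Link(String href, String method, String type) {}

class RequestFromLink {
    // Build the request purely from what the link advertises, so the client
    // hard-codes neither the URI, nor the method, nor the Content-Type.
    static HttpRequest build(Link link, String body) {
        HttpRequest.Builder b = HttpRequest.newBuilder(URI.create(link.href()));
        if (link.method().equalsIgnoreCase("post")) {
            b.header("Content-Type", link.type())
             .POST(HttpRequest.BodyPublishers.ofString(body));
        } else {
            b.GET();
        }
        return b.build();
    }
}
```

With this shape, adding a new link to a response is enough for a generic client (or bot) to know how to call it, which is exactly the evolvability argument above.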
If that does not explain it well enough, I don't know what else to say.
Hi Felipe,
So, it seems you are really having a hard time with this issue of reducing how much code it takes to do what you would like. As Craig said, you're allowing your "view" tier to mix in with your "model" tier. Neither should know about the other. There is a reason, in my opinion, MVC has stuck around so long and is so well known and implemented. To break this well-sought-after development paradigm in order to reduce some code is, to me, going backwards.
Now, there are some things you could do that could keep the concerns separated, but at the same time reduce code... but usually getting something for free means giving something up. I'll fill you in on something I've done recently that works, although I have not tested it, I imagine I may be giving up a little performance for a lot of convenience. In the end, there are things I can do to increase performance... but if I mix my model with my view tiers, and some other business need comes up that requires a different view with the same model.. I am going to have to write a lot of new code or spend a lot of time rewriting it the way it should be, which is separated.
So, what I did was to use XSD with JAXB for my "view" model, which, thanks to the wonders of JAXB and Jersey's ability to convert incoming xml/json requests into objects (and vice versa), allows me to automatically send xml/json responses built from my XSD objects. Now, as you say, and as I too have done, my XSD/JAXB classes have pretty much identical method names and properties. For example, my user entity is basically a name, street, city, state, zip, area, country, phone and email. Likewise, my XSD mimics those fields, and in my case they are basically identical. Enough so that the generated XSD/JAXB object properties match the entity bean property names.
What I decided to do was, in my ejb session beans (which are stateless using @Stateless), to pass in the JAXB model object as a parameter to the method that would utilize the same entity bean on the back end. I also utilized the apache BeanUtils class to copy properties for me. This is where it might cause a slight performance hiccup... I am not entirely sure how the copy method does what it does, but I am guessing with reflection. Still, reflection is much faster now, so I am far less worried about a fraction more time lost given the benefits I get of the separation and cleaner code. And meanwhile, as you can see, it's not that much code. This particular implementation works with all objects, so I can quickly copy/paste this method for whatever my needs are. I am not having to call tons of user.setXxx(u.getXxx()) methods in code.
import java.lang.reflect.InvocationTargetException;
import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import org.apache.commons.beanutils.BeanUtils;

@Stateless
public class UserEJBSession {

    @PersistenceContext(unitName = "myPersistenceUnit")
    EntityManager em;

    public long createUser(com.myjaxbgeneratedpackage.models.User user) {
        User u = new User();
        try {
            BeanUtils.copyProperties(u, user);
        } catch (InvocationTargetException ite) {
            ite.printStackTrace(System.out);
        } catch (IllegalAccessException e) {
            e.printStackTrace(System.out);
        }
        em.persist(u);
        em.flush();
        em.refresh(u);
        return u.getId().longValue();
    }
}
The above, to me, is not much code, it works, and it allows me to keep my domains and views separated. It allows me to make use of my domains for other purposes. I generally put my JAXB/XSD into a separate project, build it and put the .jar into this project's /lib directory. Thus, it is perfectly fine for my ejb session to "see" the JAXB model classes. I am NOT allowing my entity classes to use the JAXB objects. The session bean also does not dictate the "view" going back. The resource class (which would call this createUser method, passing in to it the JAXB user object that it created when the request came in with the xml or json of a user) gets back the response from this method and then figures out what to respond with. If this were a web site, it would forward to some JSP page, perhaps setting some request-scoped properties that the JSP page could use for its response. As my service is a REST service, it would return a response with probably a 201
Created status and a URI linking to the newly created resource. The point being, the service does not see or know about the back-end entity model, and the back-end entity model knows nothing of the view or the JAXB classes.
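For what it's worth, BeanUtils.copyProperties is indeed reflection-based. A toy version of the same idea, with hypothetical stand-ins for the generated JAXB class and the JPA entity, might look like this:

```java
import java.lang.reflect.Method;

// A much less robust sketch of what commons-beanutils' copyProperties
// automates: copy every getX()/setX(...) pair whose name and type match.
class PropertyCopier {
    static void copyMatching(Object target, Object source) {
        for (Method getter : source.getClass().getMethods()) {
            if (getter.getName().startsWith("get")
                    && getter.getParameterCount() == 0
                    && !getter.getName().equals("getClass")) {
                String setterName = "set" + getter.getName().substring(3);
                try {
                    Method setter = target.getClass()
                            .getMethod(setterName, getter.getReturnType());
                    setter.invoke(target, getter.invoke(source));
                } catch (NoSuchMethodException e) {
                    // No matching property on the target; skip it.
                } catch (ReflectiveOperationException e) {
                    throw new RuntimeException(e);
                }
            }
        }
    }
}

// Hypothetical stand-in for the generated JAXB "view" class.
class JaxbUser {
    private String name;
    public String getName() { return name; }
    public void setName(String n) { name = n; }
}

// Hypothetical stand-in for the JPA entity.
class UserEntity {
    private String name;
    public String getName() { return name; }
    public void setName(String n) { name = n; }
}
```

The reflective lookup is the "slight performance hiccup" mentioned above; in exchange, no per-property setXxx(getXxx()) code has to be written.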
Another thing to think about. Part of JEE6 is this new profile setup, whereby you can install a subset of the JEE6 spec as part of the container. For example, I think the Web Profile provides the servlet/jsp, some ejb but not all of it, no JMS, etc. I don't know for sure, but it may be that if you were to try to deploy your app with your mixed models as you have them now, they may not deploy into this Web Profile container. Don't quote me on this, but it is something to think about.
Another thing to consider is scalability. Given that the ejb/entity stuff is going to be working with the database, it's likely that it will require more time to work with back end resources. As such, you may need to scale the server farm (for larger sites) more so in that tier than say, the front end rest/jsp/servlet view tier. If you deploy everything completely mixed up, that is fine, but often times many of the front end requests can be handled via cache, or non-back end needs and responded to quickly. By separating it, you could provide fewer servers to handle the majority of front end requests, while scaling up the back end as needed.
HTH
just a question about your domain model sharing:
and what about the code ?
If you code the whole engine logic in one application, and you need a second
application to access the same data, will you need to recode the
engine logic again?
ok, we all think about "reuse of components", but so far I've seen
ugly hacks in the code to simulate the hypermedia engine and that's
what I am trying to do better.. but your points are good.. I will
digest them and try to incorporate the best I can in my design
here.....
2010/2/26 Felipe Gaúcho <fgaucho@gmail.com>:
> excellent points.. I will consider that...
>
> 2010/2/26 Craig McClanahan <craigmcc@gmail.com>:
>> 2010/2/25 Felipe Gaúcho <fgaucho@gmail.com>:
>>>> You're missing a key feature of what both Kevin and I said ... use
>>>> *different* classes for the *resource* domain model and the *business*
>>>> domain model.
>>>
>>> yes, I am aware about that... but the effort to maintain the mapping
>>> between those models generates an extra and unnecessary development
>>> effort ,.... I also noticed some people having a third model when the
>>> data will be rendered in the presentation layer.. (a managedbean or
>>> other component..) ... My experiment is looking to reduce the amount of
>>> effort required to synchronize all those models, having only one
>>> model.. and it works fine and the performance is better and
>>> maintenance effort much less than the traditional MVC design...
>>
>> In your business domain objects, do you store the CSS style class that
>> should be used to display this object in a browser? Of course not:
>>
>> * The *name* of the style class is totally up to the designer.
>>
>> * Do *you* know any designers that care about backwards compatibility
>> of style names? I don't either ...
>> but I don't care, as long as the designer updates all the
>> corresponding HTML pages when the style name changes.
>>
>> * Style names can be changed at any time (for example, when you refresh
>> the look and feel characteristics of your website).
>>
>> * And, of course, there is normally more than one webapp
>> that needs to be able to render this business domain object,
>> so having only one style name would not be useful.
>>
>> The same principles apply to designing resource representations -- we
>> should assume that there will be more than one representation that
>> includes this particular object, and more than one RESTful application
>> that needs to provide access to it, so assuming any *single* approach
>> is not likely to help all our users.
>>
>>>
>>>
>>>> Then, each kind of class can have the annotations that are relevant for that tier.
>>>
>>> Usually the classes are identical and you are just including a copy
>>> layer to transfer data between them :)
>>>
>>>> Managing workflow, like obeying the hypermedia constraint, is a view tier concern (from the point of view of a server side developer).
>>>
>>> Maybe, maybe not.. in my case the model contains the state of the
>>> application, so it is a business concern :)
>>>
>>
>> There is pretty much always more than one application needing your
>> data, so it's more than one business concern :-). Ideally, you can
>> share the business domain objects across these applications, but the
>> reality is you'll need multiple RESTful resource representations that
>> include data from these business domain objects, for the same reason
>> that you will need multiple HTML representations (even within the same
>> webapp, how many different pages include information from particular
>> domain objects?). One size does not fit all.
>>
>>>> That's a good thing, because a REST API is typically not the only way that
>>>> such business model objects get manipulated.
>>>
>>> yes, but the effort to manage the hypermedia just in time makes the
>>> whole application much slower.. and it is a repetitive task.. that is
>>> one of the reasons that motivated me to push the links in the
>>> database....
>>>
>>>> Self-test time: are you storing URIs in your database?
>>>
>>> yes. The state of the application is persistent :) and scales quite
>>> well since I don't need to recalculate the state on every request..
>>> (ok, memcache can help a bit, but anyway... the hypermedia engine
>>> is the heavy part of the request - eliminating that I have much faster
>>> services)
>>
>> Scale for one application is nice (although I'm waiting for your
>> benchmarks to show that the overhead of calculating URIs dynamically
>> is crushing, given how cheap CPU time is versus other server side
>> resources). How about scale for multiple applications?
>>
>> Oh, you only have one? Good for you! But that's not a particularly
>> common problem domain.
>>
>>>
>>>> a JAX-RS based server application, that "care" is expressed by @Path
>>>
>>> I am experimenting with that idea.. in order to use regular
>>> expressions or other DSL facility to manipulate the final URI
>>> dynamically.. perhaps storing just the tail of the URI in the database
>>> or other format of information that gives me a chance to produce the
>>> external representation on demand.. this part is my current
>>> research...... (I prefer the "experimentation" word since I am not a
>>> formal researcher.. )
>>>
>>
>> Please think about the idea that the same business domain resources
>> may need to be exposed by different applications, using different
>> resource representations, and different URI schemes, all at the same
>> time. Or, even within the same application, exposed in different
>> pages (web app) or resources (RESTful web service) at the same time.
>>
>> Personal history lesson -- when I was first learning web development
>> (mid-late 1990s), I figured "why not have a toHTML() method on all my
>> Java business domain model classes"? It quickly became clear that
>> different pages within the same app, as well as different apps, needed
>> different HTML representations of the same business objects. Indeed,
>> this realization was one of the motivating factors that led to the
>> creation of Struts.
>>
>> The same is true for RESTful resource representations. There will be
>> more than one representation that requires information from a
>> particular business domain model object, as well as more than one
>> application (each with their own resource and representation
>> requirements).
>>
>> In MVC terms: Model classes != View classes.
>>
>>>> @Path annotations -- to change the URIs included in your representations.
>>>
>>> yes, but you need to iterate over the collections and change manually
>>> each URI, what can lead you to manipulate thousand of strings before
>>> to respond to the client.. this part scares me... (think about a
>>> collection with 100 elements, each with 20 URIs)
>>
>> For amusement, you should go grab a raw Atom or RSS feed from a busy
>> feed source (I use Google Reader for my feed aggregator, but the same
>> principle applies to anyone who provides feeds) and see how many URIs
>> are included. And, funny thing, none of the apps that *created* that
>> content had any idea that *I* would be aggregating their feeds, via
>> Google Reader, for my own use.
>>
>> URIs in RESTful web services are a view tier concern, just like CSS style names.
>>
>>>
>>>> In the shopping cart use case (no, I don't care if some people don't like it
>>>> as an example :-), a well designed REST API should *not* define a URI
>>>> template like "/checkout" that accepts a POST with a shopping cartURI
>>>
>>> That is a basic REST principle .. and it is not related to the way you
>>> store or calculate the URIs...
>>>
>>
>> Unfortunately, *lots* of theoretically "RESTful API" specifications
>> include instructions on how to calculate the URI for a particular
>> operation (versus telling the client "get the URI you need from the
>> current representation of the resource"). And I'm as guilty as anyone
>> else at violating the hypermedia constraint this way, in my earlier
>> work. But, my point in this particular scenario was, assuming that
>> there was a single URI for "checkout", for *all* shopping carts, is
>> technically feasible, but not a good idea.
>>
>>>> concerned, the URI itself is an opaque string.
>>>
>>> yes......
>>>
>>> thanks for your feedback... interesting. . I will keep thinking about
>>> your thoughts....
>>>
>>>
>>> Felipe Gaúcho
>>>
>>
>> Craig
>>
--
---------------------------------------------
Felipe Gaúcho
10+ Java Programmer
CEJUG Senior Advisor
Kevin,
if I were you, I would not do this:
<links>
<link href="http://myservice.com/create"/>
<link href="http://myservice.com/orders"/>
<link href="http://myservice.com/history"/>
</links>
as "create" is a command verb, not a resource name.
If you instead think solely of documents, then you should be able to create
things by a POST of a document into the right folder. For example, if you'd
like to order something, you don't do a PUT on create but a POST on orders.
As the meaning of http methods is semantically clear, there wouldn't be a
need to declare what a method is good for. The need to define what a method
semantically means arises solely for two reasons: not thinking in
documents, and using command verbs. As soon as you start modelling your
application in terms of documents, you will notice that it is clear what a
http method will do.
It seems you have misunderstood a bit what the media type header is good for
when invoking a http method. It tells the receiving party not which method
to select on the server side (this is just a technical side effect), but
what the type of the actually uploaded document is. So there is no need to
know what the server can process; instead there is a need for the server to
understand the mime types that your client would like to send (see that
this is a different approach?).
Typically, an HTML server ("Web Server") will be able to process HTML. It
won't make much sense to have an API that says "This web server supports
HTML only at GET, while at PUT it wants to receive a PDF instead.". The
same goes for business applications. If you are developing a web shop, then
it makes sense that the *complete* shop is able to deal with
"application/product+xml" for example. It makes no sense to have a
different MIME type for each method. If the server doesn't accept a
particular mime type at one particular URI, it can just reject it (this is
part of http already). You don't need to know it earlier. I mean, what
shall it be good for to know? If your client knew and were able to serve
it, it could have served it without knowing, just by trial. If the client
knew and were NOT able to serve it, it won't be any better off. So what
have you won? Nothing! As in REST the driver is always the client, the
client can just do what it wants to do, and the server must be able to do
that or reject the request. This is how the web works, and I don't see why
it should be changed for REST.
It seems you would like to give the server control over what the client has
to do. This is wrong. In REST, still (as in every C/S architecture), it is
the client that tells the server what to do. The server can just give
advice on what it thinks would be useful (like presenting some links), and
it can
support some MIME types while it does not support others. But the boss is
the client. The server can only serve it, or reject it.
Regards
Markus
From: rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com] On
Behalf Of Kevin Duffey
Sent: Samstag, 27. Februar 2010 07:00
To: rest-discuss@yahoogroups.com
Subject: RE: [rest-discuss] Re: [Jersey] Moved thread to rest-discuss /
HATEOAS-via-HTTP: Which HTTP Method to use to follow link?
[...]
Hi guys,
I wish to create a framework for accessing REST resources over HTTP. I wish to focus on xhtml Content-Type in particular. The idea is that the developer would provide instructions on how to get to the resource from a single URL.
Implementation-wise however, the framework would provide all the necessary plumbing to take care of caching and what not.
Consider three resources:
Root Resource - primary URL ("/"), entry point for the service, has a link to the User List
User List - lists all users, on GET, may accept a query string "email" to search for a specific user, contains link to the users' respective profiles
User Profile - the profile of a user
In order to implement something like get_user_by_email, the developer would have to describe how to get from the Root Resource to the User Profile. In code, a developer using the framework would do something like:
get_user_by_email(email) {
from("/")
.on(200) { |Root|
Root.follow("#users_link")
.on(200) { |Users|
Users.fill_in("#search_form", {"email": email})
.on(200) { |SearchResult|
SearchResult ...get_first_result...
.on(200) { |Profile|
return profile_to_some_struct(Profile)
}
}
}
}
}
I'm still working on how to best express this intent as code, and it's pretty ugly now I must admit.
However, the framework doesn't really execute the instructions by the developer directly. Instead, it uses its built in cache to get the result. From the example above, the framework would do things in reverse:
1. Is there a cache* of the result to a call to get_user_by_email(email)? If YES, return prior result, If NO, go to step 2
2. Is there a cache of the result to a call getting the search matches of a user given a specified email? If YES, using that result, go down the code -- following the link to the user profile, then returning the result. If NO, go to step 3.
3. Is there a cache of the list of users? If YES, go on and fill in the search form, etc. If NO, go to step 4
4. Is there a cache of the root resource? If YES, go back up through steps 3, 2, 1. If NO, get the root resource, and then work back up through the steps.
* When I say cached, I generally mean that there has been a prior call, and the result was cached AND the cache hasn't expired yet based on the server cache instructions
The framework forms a tree of possible scenarios. It starts from the most optimistic test (step 1) on the leaf, and if it fails, goes back to its parent.
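A rough sketch of that fall-back-to-the-parent cache walk, under the simplifying assumption that each step is a pure function of the previous result and is cached by an invented string key (real keys would have to include the request parameters, e.g. the email):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.UnaryOperator;

// Each step takes the previous step's result and produces the next one
// (in the real framework: follow a link, fill in a search form, ...).
class CachedTraversal {
    final Map<String, String> cache = new HashMap<>();

    // Start from the deepest cached result (most optimistic test) and
    // replay only the steps below it; fall back toward the root otherwise.
    String resolve(List<String> stepKeys, String root,
                   Map<String, UnaryOperator<String>> steps) {
        int start = stepKeys.size();
        while (start > 0 && !cache.containsKey(stepKeys.get(start - 1))) {
            start--;                              // parent scenario
        }
        String current = (start > 0) ? cache.get(stepKeys.get(start - 1)) : root;
        for (int i = start; i < stepKeys.size(); i++) {
            current = steps.get(stepKeys.get(i)).apply(current);
            cache.put(stepKeys.get(i), current);  // remember for next time
        }
        return current;
    }
}
```

On the first call everything is fetched from the root; on a repeat call the deepest cache entry short-circuits the whole chain, which is the behavior the four steps above describe.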
I believe this would be useful especially if the applications that are going to be built don't follow the UI style of web pages following linked documents. Is this a HATEOAS-respecting client? I'd truly appreciate some input.
FYI, I'll start development of an Erlang version at http://bitbucket.org/jvliwanag/restr/ . There's nothing there yet, though. Hehe.
Jan Vincent Liwanag
jvliwanag@...
Jan,
On Feb 27, 2010, at 10:15 AM, Jan Vincent wrote:
> [...]
The problem (from a RESTfulness POV) with this is that the code assumes a certain state machine of the application. If the server decides to change that state machine, the code will break.
If the service publishes information that allows the client to make such assumptions as manifested by the code above, the service is not RESTful but is an "HTTP-based Type I" <http://nordsc.com/ext/classification_of_http_based_apis.html#http-type-one> (or "HTTP-based Type II") API.
If the server does not publish such information, the code above just represents guesswork, which would be worse because the coupling would actually be hidden inside the code.
When you think about such a framework approach, keep in mind that it will lead to tightly coupled systems no matter how "Webby" the system looks. If the service evolves, the client will break.
Whether this is actually a bad thing depends on the requirements - maybe long-term evolvability has been traded for getting something started fast, and maybe the expected system lifetime is so short that evolvability does not matter - but you need to be aware of this to make an informed decision.
Jan
-----------------------------------
Jan Algermissen, Consultant
NORD Software Consulting
Mail: algermissen@...
Blog: http://www.nordsc.com/blog/
Work: http://www.nordsc.com/
-----------------------------------
Markus,
I gave you a bad example - yes, create is a command and not a resource. My bad.
So summing up what you said, you did clear up the media type use. For a resource /orders, I would have one media type, say, application/vnd.com.mypackage.orders+xml (perhaps two, with a +json on the end). That would work for ALL resource calls, be it to create, get, put a single item or a list of them. The same one XSD segment would work in all cases. I see that now. Thank you.
I was confused, I think, partly because of Jan's and some other responses on the Jersey forum regarding using media types for everything. With how Jersey handles methods for specific media types (even though you can use the same custom one on multiple methods), I thought perhaps what was meant was to use a specific media type for each method (POST/PUT/GET/DELETE) so that you did NOT need to specify in a response link the method to use.
As for the right method to use... you're saying that, just like on a web page, if I were to copy a link off of a web site and try to PUT that link to the server, I'd probably get a 405 Method Not Allowed or some error response back. So basically, a list of links that comes back to the client in response to a resource request would simply be a bunch of URIs, that is it. No media type specification, no methods allowed to be called on it. The client consumer simply has to trial-and-error against the server... if a client calls GET on /orders, and the authentication for the user making that request deems the user can NOT delete/update individual orders, the client UI still shows DELETE/UPDATE links. When the user, who's not allowed to delete/update, clicks one of those links, the server basically responds saying you're not allowed to do this. Is that what I am understanding you to say?
If so, I don't agree with that, because I want to build a dynamic UI (if I am a consumer of this API) that gives "hints", so to speak, on what a user can do at a given point. I expect my reply to give me the bare minimum essential information so I can represent the user's next set of actions correctly. I surely wouldn't want to provide links to delete/update if the user can't do this. There has to be some correct/common/agreed-upon/best-practices manner in which REST developers are returning these URIs with enough information to allow a client consumer to render a UI that lets its users make the choices they're allowed to make, and not present every possibility.
So, would you give me an example of how you might do something like this? Show me two XML snippets, using the same resource (let's use /orders or /cart, your choice): one user is only allowed to GET resource items, the other can do the full gamut: update/delete/get. Show me how you would respond in both scenarios so that I can see how the response URIs are formatted to allow the client consumer to know what can be done and present it as such in a UI to end users.
Thank you.
Jan,
On Feb 28, 2010, at 1:17 AM, Jan Vincent wrote:
> I'm not sure again, why some knowledge of the state machine on the server would be a bad thing.
It is bad because it couples client and server. This has the effect that the server owner needs to be aware of its clients to anticipate the impact of change. REST aims to eliminate that coupling.
> The idea I have is something like that of a user browsing through different pages of a website, the difference being is that it is based on some script. Should the server decide to deviate from that script, then yes, things would be screwed up.
Yep - and REST focusses on the server being able to change without things screwing up. This can only be achieved if the client adheres to the hypermedia constraint. Meaning that the client must determine from any steady state (that it is put into by the server) how to proceed to achieve its overall goal. The client must decide this based only on the current steady state, after having reached it.
>
> As such, the content-types provide some form of contract that some elements would need to exist on the representations the restful service serves.
But the client must not make any design-time assumptions about the content type it will actually receive.
> In the example provided below, I assume the presence of certain links, and some forms I could fill out.
The hypermedia constraint forbids such assumptions.
This is of course not to say that clients that make such assumptions cannot be appropriate for a given set of requirements. But it is important to understand that the system you end up with is not RESTful because client and server are coupled around these assumptions.
In my opinion, RESTful systems have two essential benefits: Simplicity and eliminating the need for service owners to communicate with client owners when they intend to change the service to support some previously unanticipated requirement (think "business agility").
Simplicity is a huge benefit in itself and achieving it does not depend on adhering to the hypermedia constraint (see my HTTP-based Type I/II). However, being able to evolve the components of a complex system (think "The Web" or "enterprise integration") at an independent pace easily justifies the effort of building a truly RESTful system.
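To make the contrast concrete, here is a small sketch of my own (the link relations and markup are invented, not from this thread): a hypermedia-driven client picks its next transition from the relations actually present in the current representation, so the server may add, remove, or relocate transitions without breaking it.

```python
# Illustrative only: a client that decides its next step from link
# relations found in the current steady state, rather than from a
# precompiled script of URIs.
import re

def links(doc):
    # extract rel -> href pairs from an XHTML-ish representation
    return dict(re.findall(r'rel="([^"]+)" href="([^"]+)"', doc))

def next_step(doc, goal_rels):
    # follow the first advertised relation that serves the client's goal;
    # if the server no longer offers it, stop cleanly instead of
    # requesting a guessed URI
    available = links(doc)
    for rel in goal_rels:
        if rel in available:
            return available[rel]
    return None

doc = '<a rel="search" href="/users?q="></a><a rel="self" href="/"></a>'
print(next_step(doc, ["search", "index"]))  # follows the "search" relation
```

The coupling left over is only to the media type and the relation vocabulary, not to any particular URI layout or state machine.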
Jan
> I don't really care about the format of the URL, and to some extent, even the methods (since I simply fill out forms on the xhtml representation).
>
> Moreover, I liken what I have described below as something like tabbed browsing by some user. The user, goes on to the main site, clicks on the lists of users, fills in a form to search for some user and then clicks on the result. If another search is needed, a new 'tab' is opened to save the old resource (say, the setting on the browser is to open the same page on the former tab), hit 'back' to the search users form, and search again.
>
-----------------------------------
Jan Algermissen, Consultant
NORD Software Consulting
Mail: algermissen@...
Blog: http://www.nordsc.com/blog/
Work: http://www.nordsc.com/
-----------------------------------
After careful thought, I believe I understand HATEOAS better now. My question, however, is: since the current resource dictates where I can go, does this mean that in an application, the UI is highly dependent on which state I am in right now? For web applications, this is understandable. But for desktop applications, this may not be so. Say I create a rich address book app. Assume further that I have a simple feature wherein, if I hover over a contact's entry, I display that contact's brief profile information. How do I accomplish this? Which resource should I be at?
In addition to this, is it feasible to access multiple REST web services, thereby maintaining more than one current 'state'?
Jan Vincent Liwanag
jvliwanag@...
Kevin, there are several solutions to that. I'll outline three here; there might be more. The third is the one we recently discussed regarding HATEOAS.

First of all, your client could invoke the OPTIONS method and inspect the Allow header (http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.7) to learn about the currently allowed (i.e. possible) HTTP methods on one particular URI. The server is free to decide at that point how it responds, so it could use the authentication information, for example, to decide with respect to the user's access rights, or it could check the resource's state to decide with respect to the business status. Your client can do that check for each URI right at the time of receiving it, before the GUI renders the received entity on the screen. This is a rather good method to use if the media type (see below) is not able to define the link type.

Second, if your client just wants to know whether GET is allowed, it could at that time do a HEAD or GET and cache the result. As HEAD and GET are idempotent, this is no risk. If HEAD or GET is not allowed, the server would respond with an Allow header, so you would know what is possible instead. I see this solution only in special cases.

Third, and possibly best (and "the" RESTful solution), would be to not use general link syntax but specific business information. In HTML it is defined that <A> is a general link, which can be followed solely by HEAD or GET. It is not defined to use any other HTTP method. So a browser knows a set of possible and impossible methods just due to the fact that the link is there and is <A> (in contrast to the link not being there, or the tag not being <A>). So an HTML-aware client can learn from that to either render a link or not, and to use GET.
This is what we currently discuss as "learn from the media type": the browser knows the media type (here: HTML), so by inspecting the actual content it will know from the content what method to use (here: GET, since it is <A>).

Let's assume your home-brewed media type. Its schema could contain the definition of different types of links - not just <A> but, let's say, <X> and <Y>. Just as the HTML specification says that <A> results in HEAD or GET, your home-brewed specification would say that <X> can only be GET or HEAD, while <Y> could be POST (since it serves as an inbound channel for the creation of new stuff, for example). Since your client is aware of your home-brewed media type, it will know your specification. As a result, it will know what the associated HTTP methods are. And your server will either contain the particular link in the document, or it will abstain, and thus has control to tell the client what is currently possible to do. That's HATEOAS.

Fourth, if your business model is rather simple (or it makes sense to turn it into a simple one), you can just rely on the HTTP specification, which defines what the methods are good for. In this scenario your resources are so atomic that it is rather clear what POST / GET / PUT / DELETE will be good for. That's CRUD. Whether or not a method is currently usable on a resource might be clear from the resource itself or can be learned from the first solution.

Regards

Markus

From: rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com] On Behalf Of Kevin Duffey
Sent: Saturday, February 27, 2010 19:23
To: rest-discuss@yahoogroups.com
Subject: RE: [rest-discuss] Re: [Jersey] Moved thread to rest-discuss / HATEOAS-via-HTTP: Which HTTP Method to use to follow link?
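Two of these solutions can be sketched concretely. This is my own illustration, not code from the thread: the element names <view> and <cancel>, the helper names, and the header values are all invented. The first function checks an Allow header as returned by OPTIONS (first solution); the second derives the permitted actions from which media-type-defined link elements the server chose to include (third solution).

```python
# Illustrative sketch: two ways a client learns what it may do next.
import re

# First solution: inspect the Allow header of an OPTIONS response
# (header values here are canned examples, not from a real server).
def allowed_methods(allow_header):
    return {m.strip().upper() for m in allow_header.split(",") if m.strip()}

# Third solution: a hypothetical media-type spec fixes the method
# implied by each link element, as HTML fixes GET for <A>.
LINK_METHODS = {"view": "GET", "cancel": "POST"}

def actions(doc):
    # every link the server included is an action the client may render;
    # a read-only user's representation simply omits <cancel>
    return {tag: (LINK_METHODS[tag], href)
            for tag, href in re.findall(r'<(view|cancel) href="([^"]+)"', doc)}

read_only   = '<order id="1"><view href="/orders/1"/></order>'
full_access = ('<order id="1"><view href="/orders/1"/>'
               '<cancel href="/orders/1/cancel"/></order>')

print("DELETE" in allowed_methods("GET, HEAD"))  # a read-only Allow header
print(sorted(actions(read_only)))                # only "view" is offered
print(sorted(actions(full_access)))              # "view" and "cancel"
```

In the second style the server controls the UI hints Kevin asked about simply by including or omitting link elements per user; the client never needs method metadata in the response beyond the media-type spec it already knows.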
On Feb 28, 2010, at 5:30 AM, Jan Vincent wrote:
> After careful thought, I believe I understand HATEOAS better now. My question however is, since the current resource dictates where I can go, does this mean that in an application, the UI is highly dependent on which state I am in right now?
Yes, exactly.
> For web applications, this is understandable. But for desktop applications, this may not be so.
> Say, I create a rich address book app. Assume further that I have a simple feature wherein if I hover on a contact's entry, I display that contact's brief profile information. How do I accomplish this? Which resource should I be at?
I'd do this:
Either the data is included in the current state already or the current state includes a link to the data which the client downloads when the user hovers over the contact.
The hypermedia constraint issue here is that the hovering will only display data if the data is there or if that link is available.
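A tiny sketch of these two options (entirely my own; the field names and URIs are invented): a contact entry either embeds the brief profile in the current state, or advertises a link that the client dereferences lazily on hover and may cache.

```python
# Illustrative only: the two hover options from the reply above.
entries = [
    {"name": "Ada", "profile": {"phone": "555-0100"}},     # embedded in current state
    {"name": "Bob", "profile_href": "/contacts/2/brief"},  # linked
]
server = {"/contacts/2/brief": {"phone": "555-0199"}}      # stands in for the service
hover_cache = {}

def on_hover(entry):
    if "profile" in entry:            # option 1: data already in the representation
        return entry["profile"]
    href = entry["profile_href"]      # option 2: follow the advertised link
    if href not in hover_cache:
        hover_cache[href] = server[href]   # stands in for a cacheable GET
    return hover_cache[href]
```

Either way the desktop app stays in whatever state produced the contact list; the hover only uses data or links that state already carries.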
> In addition to this, is it feasible to access multiple REST web services, thereby maintaining more than one current 'state'?
Personally, I have not made up my mind on this. I guess that an application is limited to a single service unless the service itself points to another service.
Jan
>
> On Feb 28, 2010, at 9:20 AM, Jan Algermissen wrote:
>
>> Jan,
>>
>> On Feb 28, 2010, at 1:17 AM, Jan Vincent wrote:
>>
>>> I'm not sure again, why some knowledge of the state machine on the server would be a bad thing.
>>
>> It is bad because it couples client and server. This has the effect that the server owner needs to be aware of its clients to anticipate the impact of change. REST aims to eliminate that coupling.
>>
>>> The idea I have is something like that of a user browsing through different pages of a website, the difference being that it is based on some script. Should the server decide to deviate from that script, then yes, things would be screwed up.
>>
>> Yep - and REST focusses on the server being able to change without things screwing up. This can only be achieved if the client adheres to the hypermedia constraint: the client must determine, from each steady state (that the server has put it into), how to proceed to achieve its overall goal, and it must make that decision based only on the current steady state, after having reached it.
>>
>>>
>>> As such, the content-types provide some form of contract that some elements would need to exist on the representations the restful service serves.
>>
>> But the client must not make any design time assumptions about the content type it will actually receive.
>>
>>> In the example provided below, I assume the presence of certain links, and some forms I could fill out.
>>
>> The hypermedia constraint forbids such assumptions.
>>
>> This is of course not to say that clients that make such assumptions cannot be appropriate for a given set of requirements. But it is important to understand that the system you end up with is not RESTful because client and server are coupled around these assumptions.
>>
>> In my opinion, RESTful systems have two essential benefits: Simplicity and eliminating the need for service owners to communicate with client owners when they intend to change the service to support some previously unanticipated requirement (think "business agility").
>>
>> Simplicity is a huge benefit in itself and achieving it does not depend on adhering to the hypermedia constraint (see my HTTP-based Type I/II). However, being able to evolve the components of a complex system (think "The Web" or "enterprise integration") at an independent pace easily justifies the effort of building a truly RESTful system.
>>
>> Jan
>>
>>
>>> I don't really care about the format of the URL, and to some extent, even the methods (since I simply fill out forms on the xhtml representation).
>>>
>>> Moreover, I liken what I have described below to something like tabbed browsing by some user. The user goes to the main site, clicks on the list of users, fills in a form to search for some user and then clicks on the result. If another search is needed, a new 'tab' is opened to save the old resource (say, the setting on the browser is to open the same page on the former tab), hit 'back' to the search-users form, and search again.
>>>
>>> On Feb 27, 2010, at 9:21 PM, Jan Algermissen wrote:
>>>
>>>> Jan
>>>>
>>>> On Feb 27, 2010, at 10:15 AM, Jan Vincent wrote:
>>>>
>>>>>
>>>>>
>>>>> Hi guys,
>>>>>
>>>>> I wish to create a framework for accessing REST resources over HTTP. I wish to focus on xhtml Content-Type in particular. The idea is that the developer would provide instructions on how to get to the resource from a single URL.
>>>>>
>>>>> Implementation-wise however, the framework would provide all the necessary plumbing to take care of caching and what not.
>>>>>
>>>>> Consider three resources:
>>>>>
>>>>> Root Resource - primary URL ("/"), entry point for the service, has a link to the User List
>>>>> User List - lists all users, on GET, may accept a query string "email" to search for a specific user, contains link to the users' respective profiles
>>>>> User Profile - the profile of a user
>>>>>
>>>>> In order to implement something like get_user_by_email, the developer would have to describe how to get from the Root Resource to the User Profile. In code, a developer using the framework would do something like:
>>>>>
>>>>> get_user_by_email(email) {
>>>>> from("/")
>>>>> .on(200) { |Root|
>>>>> Root.follow("#users_link")
>>>>> .on(200) { |Users|
>>>>> Users.fill_in("#search_form", {"email": email})
>>>>> .on(200) { |SearchResult|
>>>>> SearchResult ...get_first_result...
>>>>> .on(200) { |Profile|
>>>>> return profile_to_some_struct(Profile)
>>>>> }
>>>>> }
>>>>> }
>>>>> }
>>>>> }
>>>>>
>>>>> I'm still working on how to best express this intent as code, and it's pretty ugly now I must admit.
>>>>
>>>> The problem (from a RESTfulness POV) with this is that the code assumes a certain state machine of the application. If the server decides to change that state machine, the code will break.
>>>>
>>>> If the service publishes information that allows the client to make such assumptions as manifested by the code above, the service is not RESTful but is an "HTTP-based Type I" <http://nordsc.com/ext/classification_of_http_based_apis.html#http-type-one> (or "HTTP-based Type II") API.
>>>>
>>>> If the server does not publish such information the code above just represents guess-work which would be worse because the coupling would actually be hidden inside the code.
>>>>
>>>> When you think about such a framework approach, keep in mind that it will lead to tightly coupled systems no matter how "Webby" the system looks. If the service evolves, the client will break.
>>>>
>>>> Whether this is actually a bad thing depends on the requirements - maybe long term evolvability has been traded for getting something started fast and maybe the expected system lifetime is so short that evolvability does not matter, but you need to be aware of this to make an informed decision.
>>>>
>>>> Jan
>>>>
>>>>
>>>>
>>>>
>>>>>
>>>>> However, the framework doesn't really execute the instructions by the developer directly. Instead, it uses its built in cache to get the result. From the example above, the framework would do things in reverse:
>>>>>
>>>>> 1. Is there a cache* of the result to a call to get_user_by_email(email)? If YES, return prior result, If NO, go to step 2
>>>>> 2. Is there a cache of the result to a call getting the search matches of a user given a specified email? If YES, using that result, go down the code -- following the link to the user profile, then returning the result. If NO, go to step 3.
>>>>> 3. Is there a cache of the list of users? If YES, go on and fill in the search form, etc. If NO, go to step 4
>>>>> 4. Is there a cache of the root resource? If YES, go back steps 3,2,1. If NO, get the root resource, and then go further back the steps.
>>>>>
>>>>> * When I say cached, I generally mean that there has been a prior call, and the result was cached AND the cache hasn't expired yet based on the server cache instructions
>>>>>
>>>>> The framework forms a tree of possible scenarios. It starts from the most optimistic test (step 1) on the leaf, and if it fails, goes back to its parent.
>>>>>
>>>>> I believe this would be useful especially if the applications that are going to be built don't follow the UI style of web pages following linked documents. Is this a HATEOAS respecting client? I'd truly appreciate some inputs.
>>>>>
>>>>> FYI, I'll start development of an Erlang version at http://bitbucket.org/jvliwanag/restr/ . Though, there's nothing there yet now. Hehe.
>>>>>
>>>>> Jan Vincent Liwanag
>>>>> jvliwanag@...
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>
>>>> -----------------------------------
>>>> Jan Algermissen, Consultant
>>>> NORD Software Consulting
>>>>
>>>> Mail: algermissen@...
>>>> Blog: http://www.nordsc.com/blog/
>>>> Work: http://www.nordsc.com/
>>>> -----------------------------------
>>>>
>>>>
>>>>
>>>>
>>>
>>> Jan Vincent Liwanag
>>> jvliwanag@...
>>>
>>>
>>>
>>
>> -----------------------------------
>> Jan Algermissen, Consultant
>> NORD Software Consulting
>>
>> Mail: algermissen@...
>> Blog: http://www.nordsc.com/blog/
>> Work: http://www.nordsc.com/
>> -----------------------------------
>>
>>
>>
>>
>
> Jan Vincent Liwanag
> jvliwanag@...
>
>
>
-----------------------------------
Jan Algermissen, Consultant
NORD Software Consulting
Mail: algermissen@...
Blog: http://www.nordsc.com/blog/
Work: http://www.nordsc.com/
-----------------------------------
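The cache fallback (steps 1-4 in the message quoted above) could be sketched roughly like this. Everything here is invented for illustration — the step keys and compute functions stand in for HTTP traversal — and this is only a sketch of the backtracking idea, not the proposed framework:

```python
# Rough sketch of the backtracking cache walk: try the most specific
# cached result first, fall back toward the root, then replay the
# remaining steps forward (caching each intermediate result).

def cached_lookup(cache, steps):
    """steps: (key, compute) pairs ordered root -> leaf; each compute
    takes the previous step's value and returns this step's value."""
    start = 0
    value = None
    # Most optimistic first: the deepest cached step wins.
    for i in range(len(steps) - 1, -1, -1):
        key, _ = steps[i]
        if key in cache:
            value = cache[key]
            start = i + 1
            break
    # Replay whatever is left, from that point forward.
    for key, compute in steps[start:]:
        value = compute(value)
        cache[key] = value
    return value

# Example: root -> user list -> first profile.
steps = [
    ("root", lambda _: {"users_link": "/users"}),
    ("users", lambda root: ["ann", "bob"]),
    ("profile", lambda users: users[0]),
]
print(cached_lookup({}, steps))  # -> ann
```

A warm cache entry for "users" would make the walk skip the first two steps and replay only the profile lookup.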
Sorry guys, missed the CC on this part.
Begin forwarded message:
> From: Jan Algermissen <algermissen1971@...>
> Date: February 28, 2010 5:17:54 PM GMT+08:00
> To: Jan Vincent <jvliwanag@...>
> Subject: Re: [rest-discuss] Idea for a REST client
>
> resend to list?
>
>
> On Feb 28, 2010, at 2:45 AM, Jan Vincent wrote:
>
>> Could you perhaps point me to client implementations that are considered RESTful?
>>
>
> Umm - no :-)
>
> I am working on the Jersey client side right now. Should be one or two weeks until it is good enough to show something.
>
> Jan
>
>
>
>> Thanks,
>>
Jan Vincent Liwanag
jvliwanag@...
On Feb 28, 2010, at 11:10 AM, Mark Derricutt wrote:
> Hi,
>
> Would it be more acceptable with something like this:
>
> get_user_by_email(email) {
> from("/")
> .on("vnd.myapplication.root", 200) { |Root|
> Root.follow("#users_link")
>
> In this minor modification, we're checking for successfully receiving content of a specific media type,
But what does the client do if it does not receive application/vnd.myapplication.root?
> then following a named link which (we shall assume) the media types documentation says will be present.
>
> My reading of HATEOAS says we should only need to know the root URL for an API. With such an API above, I could imagine having multiple on(mediatype, code) blocks for the various media types (or versions of media type) the client can handle.
Ah, I see. Yes, that is the right direction. Suppose the application uses the media types A, B and C (the client developer needs to know which types are used by the server, otherwise it could not develop the client in the first place). The programming model is then to decide, for each of the possible representation types (and status codes, of course), how to pursue the client's overall goal (e.g. to buy a book).
During that process the client might adapt the evaluation rules, which reflects the client advancing through its own intermediate states while it works towards its goal.
The really challenging part is to come up with a client-side API that on the one hand exposes that programming model, to force the client developer to apply it, but on the other hand is intuitive enough to be usable.
The underlying issue really is to 'mediate' between the two state machines (the one of the Web application and the one of the client working towards its goal).
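One way to picture that programming model is a registry of reactions keyed by (media type, status code), so that whatever representation actually arrives selects the next move. This is a hedged sketch with invented names, not Jersey's actual client API:

```python
# Sketch: the client registers one reaction per (media type, status)
# pair; the representation that actually arrives picks the handler.

handlers = {}

def on(media_type, status):
    def register(fn):
        handlers[(media_type, status)] = fn
        return fn
    return register

@on("application/vnd.example.root", 200)
def handle_root(doc):
    return "follow users link: " + doc["users_link"]

@on("application/vnd.example.error", 200)
def handle_error(doc):
    return "give up: " + doc["reason"]

def react(media_type, status, doc):
    fn = handlers.get((media_type, status))
    if fn is None:
        # Unknown representation: a RESTful client should fail
        # safely here rather than guess at the document's meaning.
        return "unhandled representation"
    return fn(doc)

print(react("application/vnd.example.root", 200,
            {"users_link": "/users"}))
```

Note that nothing in the registry assumes *which* representation will arrive at any step; the dispatch happens only after the response is in hand.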
Think of a purchase at Amazon: you enter the site, look for a means to browse the catalogue, pick an item, put it into the cart, go to check-out and purchase. Within that flow, you go through your own states, which change how you interpret the next application state. You need to have reached your own 'looking at a list of interesting items' state to actually pick one.
Suppose you entered Amazon with a bookmark of a certain item (maybe because that happened to be the top link in your history) - you interpret that application state differently than you would if you had already chosen that item. What you usually do is go from the item where you entered Amazon to the overview page, or enter a search directly.
So instead of assuming the entry link will take you to the start page, in your mind you have several rules for interpreting the initial application state: if it is the start page, start browsing; if it is an item page, go to the start page first and then start browsing.
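Those interpretation rules can be written down as a tiny sketch, just to make the 'two state machines' idea concrete. The goal states and page kinds below are invented labels, not anything a real service would send:

```python
# Sketch: the client's own goal state plus the application state it
# actually landed in together determine the next step, as in the
# Amazon example above.

def next_step(goal_state, page_kind):
    if goal_state == "choosing" and page_kind == "start_page":
        return "browse catalogue"
    if goal_state == "choosing" and page_kind == "item_page":
        # Entered via a bookmark: the item is NOT chosen yet,
        # so get back to the overview first.
        return "go to start page"
    if goal_state == "chosen" and page_kind == "item_page":
        return "put item into cart"
    return "stop"

print(next_step("choosing", "item_page"))  # -> go to start page
```

The same application state ("item_page") maps to different actions depending on where the client is in its own goal machine, which is the mediation described above.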
Does that help?
Jan
>
> Would this be more acceptable in your mind?
>
> Mark
>
>
> --
> Pull me down under...
>
> On Sun, Feb 28, 2010 at 2:21 AM, Jan Algermissen <algermissen1971@...> wrote:
-----------------------------------
Jan Algermissen, Consultant
NORD Software Consulting
Mail: algermissen@...
Blog: http://www.nordsc.com/blog/
Work: http://www.nordsc.com/
-----------------------------------
Hello Craig, everyone,

> When using JAX-RS, I'm finding myself more and more often building a set of JAXB annotated classes that directly represent my resources, separate from the classes that might represent my domain tier (with, perhaps, JPA or Hibernate annotations on them). Besides the fact that this means I don't have to write all of the boring serialization code, it has some other benefits:

Unfortunately it usually means that you have a new layer that maps HTTP representations to your domain model... pretty much in the same way that DTOs used to (reminds me of those old J2EE Core Patterns): another level of similar structures, with bean data being copied around. Kevin's example shows exactly that kind of pattern... I believe that's the serialization and ORM tools' job, working on their own through conventions and configuration, with only one model... avoiding that copy-and-paste of anemic classes in your code and anemic objects between layers.

Regards

Guilherme Silveira
Caelum | Ensino e Inovação
http://www.caelum.com.br/

On Thu, Feb 25, 2010 at 4:37 PM, Craig McClanahan <craigmcc@...> wrote:
>
> On Thu, Feb 25, 2010 at 9:12 AM, Jan Algermissen <algermissen1971@...> wrote:
> >
> > On Feb 25, 2010, at 5:50 PM, Felipe Gaucho wrote:
> >
> >> You can use jaxb and use xml and get a restful service...
> >> There is no mandatory link between these technologies and "non-rest" style...
> >
> > Right, sorry to imply that. OTOH, there will often be no 1:1 mapping to the domain object (that's how I understood POJO), so if you use JAXB on your POJO you'll rather have a serialized domain object than a 'resource representation'.
>
> When using JAX-RS, I'm finding myself more and more often building a set of JAXB-annotated classes that directly represent my resources, separate from the classes that might represent my domain tier (with, perhaps, JPA or Hibernate annotations on them). Besides the fact that this means I don't have to write all of the boring serialization code, it has some other benefits:
>
> - Both XML and JSON serialization, nearly for free.
>
> - Ability to include properties for however I'm going to represent links (which don't belong in the domain model at all).
>
> - Ability to include properties for related resources (either individual child beans or collections of them), which JAXB does a slick job of including as nested sub-elements, versus entity beans that are typically associated with only one table.
>
> - Ability to write business logic that is natural to Java developers used to beans-oriented development, independent of the fact that this resource was received (or will be sent) across HTTP or some other transport.
>
> - Ability to write much better unit and functional tests that can reason about the resource model (independent of how the resources got received from a client or synthesized from my database domain objects), with all the usual benefits of a strongly typed language (versus using XPath or poking through some JSON data structure with string-based keys and hoping I spelled the keys right).
>
> It's good stuff for Java developers.
>
> Craig
Hi,
Would it be more acceptable with something like this:
get_user_by_email(email) {
from("/")
.on("vnd.myapplication.root", 200) { |Root|
Root.follow("#users_link")
In this minor modification, we're checking for successfully receiving
content of a specific media type, then following a named link which (we
shall assume) the media type's documentation says will be present.
My reading of HATEOAS says we should only need to know the root URL for an
API. With such an API above, I could imagine having multiple on(mediatype,
code) blocks for the various media types (or versions of media type) the
client can handle.
Would this be more acceptable in your mind?
Mark
--
Pull me down under...
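A rough executable sketch of the fluent style shown above, for what it's worth. Everything here — the Response class, `on`, the fake in-memory "site" — is invented for illustration; it is not a real library, and the transport is faked so the example needs no network:

```python
# Toy sketch of the fluent hypermedia client: a fake transport
# returns (media_type, status, page) so the example runs offline.

SITE = {
    "/":      ("vnd.myapplication.root", 200,
               {"users_link": "/users"}),
    "/users": ("vnd.myapplication.users", 200,
               {"a@example.org": {"name": "Ann"}}),
}

class Response:
    def __init__(self, media_type, status, page):
        self.media_type = media_type
        self.status = status
        self.page = page  # dict standing in for a parsed document

    def on(self, media_type, status, handler):
        # React only if both media type and status match; anything
        # else is an unhandled representation and yields None.
        if (self.media_type, self.status) == (media_type, status):
            return handler(self.page)
        return None

def from_(url):
    return Response(*SITE[url])

def get_user_by_email(email):
    return from_("/").on("vnd.myapplication.root", 200, lambda root:
        from_(root["users_link"]).on("vnd.myapplication.users", 200,
            lambda users: users.get(email)))

print(get_user_by_email("a@example.org"))  # -> {'name': 'Ann'}
```

The coupling concern discussed in this thread still applies: the chain of `on(...)` calls encodes an assumed state machine, even though each step dispatches on the media type it actually received.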
On Sun, Feb 28, 2010 at 2:21 AM, Jan Algermissen <algermissen1971@...>wrote:
> Jan
>
> On Feb 27, 2010, at 10:15 AM, Jan Vincent wrote:
>
> >
> >
> > Hi guys,
> >
> > I wish to create a framework for accessing REST resources over HTTP. I
> wish to focus on xhtml Content-Type in particular. The idea is that the
> developer would provide instructions on how to get to the resource from a
> single URL.
> >
> > Implementation-wise however, the framework would provide all the
> necessary plumbing to take care of caching and what not.
> >
> > Consider three resources:
> >
> > Root Resource - primary URL ("/"), entry point for the service, has a
> link to the User List
> > User List - lists all users, on GET, may accept a query string "email" to
> search for a specific user, contains link to the users' respective profiles
> > User Profile - the profile of a user
> >
> > In order to implement something like get_user_by_email, the developer
> would have to describe how to get from the Root Resource to the User
> Profile. In code, a developer using the framework would do something like:
> >
> > get_user_by_email(email) {
> > from("/")
> > .on(200) { |Root|
> > Root.follow("#users_link")
> > .on(200) { |Users|
> > Users.fill_in("#search_form", {"email": email})
> > .on(200) { |SearchResult|
> > SearchResult ...get_first_result...
> > .on(200) { |Profile|
> > return profile_to_some_struct(Profile)
> > }
> > }
> > }
> > }
> > }
> >
> > I'm still working on how to best express this intent as code, and it's
> pretty ugly now I must admit.
>
> The problem (from a RESTfulness POV) with this is that the code assumes a
> certain state machine of the application. If the server decides to change
> that state machine, the code will break.
>
> If the service publishes information that allows the client to make such
> assumptions as manifested by the code above, the service is not RESTful but
> is an "HTTP-based Type I" <
> http://nordsc.com/ext/classification_of_http_based_apis.html#http-type-one>
> (or "HTTP-based Type II") API.
>
> If the server does not publish such information the code above just
> represents guess-work which would be worse because the coupling would
> actually be hidden inside the code.
>
> When you think about such a framework approach, keep in mind that it will
> lead to tightly coupled systems no matter how "Webby" the system looks. If
> the service evolves, the client will break.
>
> Whether this is actually a bad thing depends on the requirements - maybe
> long term evolvability has been traded for getting something started fast
> and maybe the expected system lifetime is so short that evolvability does
> not matter, but you need to be aware of this to make an informed decision.
>
> Jan
>
>
>
>
> >
> > However, the framework doesn't really execute the instructions by the
> developer directly. Instead, it uses its built in cache to get the result.
> From the example above, the framework would do things in reverse:
> >
> > 1. Is there a cache* of the result to a call to get_user_by_email(email)?
> If YES, return prior result, If NO, go to step 2
> > 2. Is there a cache of the result to a call getting the search matches of
> a user given a specified email? If YES, using that result, go down the code
> -- following the link to the user profile, then returning the result. If NO,
> go to step 3.
> > 3. Is there a cache of the list of users? If YES, go on and fill in the
> search form, etc. If NO, go to step 4
> > 4. Is there a cache of the root resource? If YES, go back steps 3,2,1. If
> NO, get the root resource, and then go further back the steps.
> >
> > * When I say cached, I generally mean that there has been a prior call,
> and the result was cached AND the cache hasn't expired yet based on the
> server cache instructions
> >
> > The framework forms a tree of possible scenarios. It starts from the most
> optimistic test (step 1) on the leaf, and if it fails, goes back to its
> parent.
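The cache-backed walk in steps 1-4 can be sketched roughly as follows (all names invented; expiry handling omitted for brevity). Writing the walk forward with a cache check at each step gives the same effect as the leaf-to-root test order: a fully warm cache answers the call with no server requests at all.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Hypothetical sketch of the cache-backed traversal described above.
// Each step consults its own cache entry before "talking to the
// server", so repeated calls with a warm cache issue no requests.
public class CacheWalk {
    static final Map<String, String> cache = new HashMap<>();
    static int serverCalls = 0;

    // One step of the walk: return the cached result if present,
    // otherwise derive it from the parent step's result (a stand-in
    // for an actual HTTP request) and cache it.
    static String step(String key, String parentResult,
                       Function<String, String> derive) {
        String hit = cache.get(key);
        if (hit != null) return hit;
        serverCalls++;                 // cache miss: simulated request
        String result = derive.apply(parentResult);
        cache.put(key, result);
        return result;
    }

    static String getUserByEmail(String email) {
        String root  = step("root", null, p -> "root-doc");
        String users = step("users", root, p -> p + "/users");
        String match = step("match:" + email, users, p -> p + "?q=" + email);
        return step("profile:" + email, match, p -> p + "/profile");
    }

    public static void main(String[] args) {
        getUserByEmail("a@example.org");
        getUserByEmail("a@example.org"); // second call: all cache hits
        System.out.println(serverCalls);
    }
}
```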
> >
> > I believe this would be useful especially if the applications that are
> going to be built don't follow the UI style of web pages following linked
> documents. Is this a HATEOAS respecting client? I'd truly appreciate some
> inputs.
> >
> > FYI, I'll start development of an Erlang version at
> http://bitbucket.org/jvliwanag/restr/ . Though, there's nothing there yet
> now. Hehe.
> >
> > Jan Vincent Liwanag
> > jvliwanag@...
> >
> >
> >
> >
> >
> >
>
> -----------------------------------
> Jan Algermissen, Consultant
> NORD Software Consulting
>
> Mail: algermissen@...
> Blog: http://www.nordsc.com/blog/
> Work: http://www.nordsc.com/
> -----------------------------------
>
>
>
>
>
>
> ------------------------------------
>
> Yahoo! Groups Links
>
>
>
>
Hi all,
>Personally, I have not made up my mind on this. I guess that an application is limited to a single service unless the service itself points to another service.
I don't know that I agree with this, not that you are saying it can or cannot be allowed. I believe since the client drives the process, the client can call as many services as it wants to present a valid UI to a user. Think of a web page, where it may call different pages in frames, or separate div/css tags. Mashups as well are basically aggregated service calls to present multiple things in the UI. I see no reason a client can only use a single URI to start with. After all, you don't use only a single API call in your own code; there should be no restriction on a client using multiple service resources to build up a single view for the end user. Just my opinion tho.
Hey all,
Unfortunately it usually means that you have a new layer that maps
HTTP representations to your domain model... pretty much in the same
way that DTOs used to (reminds me of those old J2EE core patterns):
another level of similar structures, with bean data being copied
around.
Are you saying that the J2EE patterns are no longer applicable in general, or for your specific use case? I certainly can understand that if you have a reason for your own situation, but in general, especially if I plan on others working on my code, I'd rather stick to what is well known and usually practiced.
Kevin's example shows exactly that kind of pattern... I believe that's
the serialization and ORM tools' job to work on their own through
conventions and configuration, with only one model... avoiding that
copy-and-paste of anemic classes in your code and anemic objects
between layers.
Yes, true.. but if you combine the two models into one, while possible, you are requiring your front tier to know about ejb/back end stuff. An even better example of why you would avoid this.. if you were going to use those same XSD generated JAXB classes in a Jersey client (or java client).. and you have the ejb entity annotations in them as well, your client side now has to have the ejb classes to compile. Maybe this is not a big deal, but I personally think that is bad form. I would be confused as a new developer coming aboard a project that was a client side app that had classes annotated with EJB stuff. I'd think they were being stored in a local database and some sort of web/ejb engine was embedded in the client app.
I suppose it really depends on what you will use them for. If it's purely your server side and they are not part of some public API that you plan on sharing, maybe it's fine. I still like the warm fuzzy feeling I get knowing my code is separated and should something arise, I am prepared without any further work needed.
> Yes, true.. but if you combine the two models into one, while possible, you > are requiring your front tier to know about ejb/back end stuff. > No, the frontend and its clients are required to know the meta model only.. a model defined in an XSD schema.. it is the same whether or not you are using Java EE.. if you are using Java on the client side, a good practice is to export the ejb-client jar to be used as the domain model.. otherwise the client will consume the JSON or XML formats generated by the frontend and somehow needs to understand those.. (you will use a mime type that indicates that to the clients, like application/myapp-xml)
Volume 5 of This week in REST is up on the REST wiki - http://rest.blueoxen.net/cgi-bin/wiki.pl?RESTWeekly_Feb_22_2010 and the blog - http://wp.me/pMXr1-E. Lots of interesting links last week! For contributing links this week visit http://rest.blueoxen.net/cgi-bin/wiki.pl?RESTWeekly_Mar_1_2010 Cheers! Ivan
Hello guys
>Are you saying that the J2EE patterns are no longer applicable in general, or for your specific use case?
I believe if our client is a REST client, DTOs as we used to do them
do not make sense.
According to an old post of Roy's: "A REST API should never have typed
resources that are significant to the client."
http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven
It goes on: "Specification authors may use resource types for
describing server implementation behind the interface, but those types
must be irrelevant and invisible to the client. The only types that
are significant to a client are the current representation's media
type and standardized relation names."
A typed resource (such as this DTO provides to your client) implies
tighter coupling between the two sides, the opposite of the freedom
that you want.
> Yes, true.. but if you combine the two models into one, while possible, you are requiring your front tier to know about ejb/back end stuff. An even better example of why you would avoid this.. if you were going to use those same XSD generated JAXB classes in a Jersey client (or java client).. and you have the ejb entity annotations in them as well, your client side now has to have the ejb classes to compile.
You are right.. that's why you should not do that. But that is not
what I meant; I probably did not make it clear enough. Session beans
imply tight coupling by nature: they require your clients to know and
share the same interface that your server knows, the opposite of the
direction REST goes in; instead of relation names and media types,
you are using Java interfaces and classes. Taking out session beans
and adding REST in its place:
        (dto)       dto         s-bean      e-bean
CLIENT ---> WEB TIER --> APP TIER --> DB TIER

becomes:

  (independent model)    resource from here on...
CLIENT --> WEB TIER --> APP TIER --> DB TIER
What I meant is that it should be your framework's responsibility to
map the resource representation to your model... while it does not,
one requires DTOs, going back to non-RESTful architectures.
DTOs are also typically anemic, and because Java classes are closed
(unlike Ruby ones), the client will have to live with an anemic DTO
representation of his resource.
That's why I believe DTO and REST should not be together...
Regards
Guilherme Silveira
Caelum | Ensino e Inovação
http://www.caelum.com.br/
We all agree that the REST interface should be decoupled from domain models .. BUT, once you make your interface REST it doesn't imply that you can't benefit from sharing the same model on both the client and server sides.. it would just be a waste of resources ... You should design your REST interface in a way any client can inspect and reverse engineer the data model, but you are not obligated to ignore the domain model on the client side.. if you have the same technology on both sides, why not benefit from it ? > What I meant is that it should be your frameworks' responsability to > map the resource representation to your model... Agree completely and I just don't see why this mapping cannot be an automagic mapping from JAXB if JAXB is available on the client side. :)
ok, it won't be adaptive anymore and once you change the domain model on the server side you crash the client.. which is a crime for web crawlers, right ? If it is about that, please ignore my comments....... 2010/3/1 Felipe Gacho <fgaucho@...>: > We all agree that the REST interface should be decoupled from domain > models .. BUT, once you make your interface REST it doesn't implies > that you can't benefit for sharing a same model in both client and > server sides.. it would be just waste of resources ... > > You should design your REST interface in a way any client can inspect > and reverse engineer the data model but you are not obligated to > ignore the domain model in the client side.. if you have the same > technology in both sides, why not to benefit from it ? > >> What I meant is that it should be your frameworks' responsability to >> map the resource representation to your model... > > Agree completely and I just don't see why this mapping cannot be an > automagic mapping from JAXB if JAXb is available n the client side. > :) > -- ------------------------------------------ Felipe Gacho 10+ Java Programmer CEJUG Senior Advisor
On 2010-03-02 00.44, Guilherme Silveira wrote: > DTOs are also typically anemic, and because Java classes are closed > (unlike ruby ones), the client will have to live with an anemic DTO > representation of his resource. > > That's why I believe DTO and REST should not be together... When I do "DTO"'s in my REST API, I tend to reuse the same DTO classes over and over again. I have "LinksValue" and "LinkValue" being used all over the place, rather than type-specific ones. These then get serialized to JSON in a predictable way. My client (Swing Java) therefore does not have to know all that many DTO classes, and can instead focus on generic interactions like "select one link from a list and post it". So the client code becomes much more reusable, due to the low level of coupling. /Rickard
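Rickard's generic-DTO idea might look something like this minimal sketch. The names `LinkValue` and `LinksValue` come from his description; everything else (fields, the JSON shape, the example relations) is invented for illustration:

```java
import java.util.List;

// Hypothetical sketch: one reusable LinkValue/LinksValue pair instead
// of type-specific DTOs, so client code only ever deals with "a list
// of links to choose from".
public class GenericLinks {

    record LinkValue(String rel, String href, String title) {
        // Serializes in one predictable shape, whatever the resource is.
        String toJson() {
            return String.format(
                "{\"rel\":\"%s\",\"href\":\"%s\",\"title\":\"%s\"}",
                rel, href, title);
        }
    }

    record LinksValue(List<LinkValue> links) {
        // A generic client interaction: pick one link by relation name.
        LinkValue select(String rel) {
            return links.stream()
                        .filter(l -> l.rel().equals(rel))
                        .findFirst()
                        .orElse(null);
        }
    }

    public static void main(String[] args) {
        LinksValue doc = new LinksValue(List.of(
            new LinkValue("self", "/orders/1", "This order"),
            new LinkValue("cancel", "/orders/1/cancel", "Cancel order")));
        System.out.println(doc.select("cancel").href());
        System.out.println(doc.select("self").toJson());
    }
}
```

Because every representation reduces to the same two classes, the client never learns a type-specific structure, which is what keeps the coupling low.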
All - Recently Jan A. published a classification for HTTP-based APIs, and I have a question (or questions) about the distinction between HTTP-based Type II and REST. As far as I can tell, the biggest difference between these two classifications is that HTTP-based Type II assumes the flow of the state machine and REST does not. I don't understand why the distinction matters. I've read here that it matters because the server could mix things up and then the client in the hard-coded situation would be stuck. I'd suggest that even in the "REST" scenario, a server could throw a curve ball that will leave the client with no place to go but home. I think this classification has more to do with degrees of coupling than with REST. Would an automated load tester (client) for a web application that uses hypermedia to traverse the application and its execution not be considered RESTful because the state machine for the application is hard-coded to some degree? I'd like some clarity on this topic. Thanks. Eb
This is a real situation that I will need to deal with in the next year or so. Farmers raising fruits and vegetables are going online, posting their locally-available fresh food in a variety of Web apps, including some that I have written. Unfortunately, several of these apps exist, sometimes more than one that a given farmer might want to use: for example, one that sells to consumers, and another that sells to institutions like hospitals and schools. So if a farmer advertises 100 pints of strawberries on 2 such sites, and consumers buy 75 pints on one site and institutions buy 80 on another site, the farmer cannot fulfill all the orders, and neither of the buyers knows that a problem exists. Moreover, the farmers are getting tired of posting their available food on more than one site. So I think about one place (or actually more than one, but one per farmer) where farmers can go and post their food, which will then appear on all of the relevant sites, but when ordered, will go back to the original site. The original site would need to check availability and respond back to the buyer with problems, which could also cause race conditions and vicious circles. Not a pretty picture. Is there a RESTful way to handle this situation? Something vaguely like this, but on steroids? http://wiki.activitystrea.ms/Actions http://wiki.developers.facebook.com/index.php/Action_Links
yes, you can use REST to integrate all applications of your business network. Data consumers (doesn't matter if hospitals or end customers) can use clients connected to the farmers' servers (directly) to buy goodies. If well done, data providers and data consumers don't need any agreement in advance; they need to share only the knowledge about the mime types. And the clients need to know a list of data providers (the servers' entry points). The rest of the conversation can be handled following REST.... The farmers will handle only their own application (data provider), 1 entry point to maintain. This way you don't need to "orchestrate" how providers and consumers communicate with each other, it will follow the nature of the web :) * each farmer will know his current status .. if you eventually need network-wide information you can create a client to consume the data from the different applications and then generate the overall reports... On Tue, Mar 2, 2010 at 1:06 PM, Bob Haugen <bob.haugen@...> wrote: > > > This is a real situation that I will need to deal with in the next year or > so. > > Farmers raising fruits and vegetables are going online, posting their > locally-available fresh food in a variety of Web apps, including some > that I have written. > > Unfortunately, several of these apps exist, sometimes more than one > that a given farmer might want to use: for example, one that sells to > consumers, and another that sells to institutions like hospitals and > schools. > > So if a farmer advertises 100 pints of strawberries on 2 such sites, > and consumers buy 75 pints on one site and institutions buy 80 on > another site, the farmer cannot fulfill all the orders, and neither of > the buyers knows that a problem exists. > > Moreover, the farmers are getting tired of posting their available > food on more than one site. 
> > So I think about one place (or actually more than one, but one per > farmer) where farmers can go and post their food, which will then > appear on all of the relevant sites, but when ordered, will go back to > the original site. The original site would need to check availability > and respond back to the buyer with problems, which could also cause > race conditions and vicious circles. Not a pretty picture. > > Is there a RESTful way to handle this situation? > > Something vaguely like this, but on steroids? > http://wiki.activitystrea.ms/Actions > http://wiki.developers.facebook.com/index.php/Action_Links > > -- ------------------------------------------ Felipe Gacho 10+ Java Programmer CEJUG Senior Advisor
On Mar 2, 2010, at 1:06 PM, Bob Haugen wrote: > This is a real situation that I will need to deal with in the next year or so. > > Farmers raising fruits and vegetables are going online, posting their > locally-available fresh food in a variety of Web apps, including some > that I have written. > > Unfortunately, several of these apps exist, sometimes more than one > that a given farmer might want to use: for example, one that sells to > consumers, and another that sells to institutions like hospitals and > schools. > > So if a farmer advertises 100 pints of strawberries on 2 such sites, > and consumers buy 75 pints on one site and institutions buy 80 on > another site, the farmer cannot fulfill all the orders, and neither of > the buyers knows that a problem exists. > > Moreover, the farmers are getting tired of posting their available > food on more than one site. > > So I think about one place (or actually more than one, but one per > farmer) where farmers can go and post their food, which will then > appear on all of the relevant sites, but when ordered, will go back to > the original site. The original site would need to check availability > and respond back to the buyer with problems, which could also cause > race conditions and vicious circles. Not a pretty picture. Why don't the farmers tell the central site that they have 100 pints of strawberries and let the central site deal with the stock management? Jan > > Is there a RESTful way to handle this situation? > > Something vaguely like this, but on steroids? > http://wiki.activitystrea.ms/Actions > http://wiki.developers.facebook.com/index.php/Action_Links > > > ------------------------------------ > > Yahoo! Groups Links > > > ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
On Tue, Mar 2, 2010 at 6:26 AM, Jan Algermissen <algermissen1971@...> wrote: > Why don't the farmers tell the central site that they have 100 pints of strawberries and let the central site deal with the stock management? > There is no central site, or rather, there are 2 or more central sites where the 100 pints of strawberries will be listed and where buyers can order. One of those could be "the" central site, but then how would you coordinate order conflicts where consumers and institutions try to buy the same strawberries?
2010/3/2 Felipe Gacho <fgaucho@...> > Data consumers (doesn't matter if hospitals or end customers) can use clients connected to the farmers servers (directly) to buy goodies. > The farmers don't have servers, they will post their information on more than one Web site that offers food from many farmers, but to different markets. The orders on those different marketplaces can conflict.
On Mar 2, 2010, at 1:33 PM, Bob Haugen wrote: > On Tue, Mar 2, 2010 at 6:26 AM, Jan Algermissen <algermissen1971@...> wrote: >> Why don't the farmers tell the central site that they have 100 pints of strawberries and let the central site deal with the stock management? >> > > There is no central site, or rather, there are 2 or more central sites > where the 100 pints of strawberries will be listed and where buyers > can order. So the scenario is one of N market sites and M farmers that tell each market how much they offer, yes? Isn't the conflict management 'just' a standard scenario for the need of two-phase commits in order to reliably solve the problem? With the note of course that it is not doable in reality and that you just have to live with the uncertainties? E.g. accept 'overbooking' and pay compensation to those that ordered but did not receive any goods? Do you think that the application of REST (or anything else) to the problem changes anything? Jan > > One of those could be "the" central site, but then how would you > coordinate order conflicts where consumers and institutions try to buy > the same strawberries? ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
Well, whatever your final architecture, the farmers should post their information in just one application. This is the first issue you should tackle, IMO.. (this application can then distribute to several other apps.. ) That was the tip from Jan ... 2010/3/2 Bob Haugen <bob.haugen@...>: > 2010/3/2 Felipe Gacho <fgaucho@...> >> Data consumers (doesn't matter if hospitals or end customers) can use clients connected to the farmers servers (directly) to buy goodies. >> > > The farmers don't have servers, they will post their information on > more than one Web site that offers food from many farmers, but to > different markets. > > The orders on those different marketplaces can conflict. > -- ------------------------------------------ Felipe Gacho 10+ Java Programmer CEJUG Senior Advisor
Eb wrote: > > Would an automated load tester (client) for a web application that > uses hypermedia to traverse the application and execution not be > considered RESTful because the state-machine for the application is > hard-coded to some degree? > Exactly. My example is an Atom Protocol client's delete function. Unless you're using a forms language that supports DELETE, to instruct the client how that function works using hypertext, then the delete function must be hard-coded in the client chrome. This works well for deleting members, which is well specified -- but deleting collections is not. An Atom Protocol client which hard-codes a collection-delete function has its interoperability limited to those systems which interpret collection-delete behavior the same way, i.e. coupling. I don't actually care if a collection-delete button is in the client chrome, so long as its behavior is dictated by received hypertext. This decoupling allows the server's interpretation of the unspecified collection-delete function to change, without breaking clients, i.e. decoupling. The hypertext constraint is the line between this coupling/decoupling, and this holds true for a load-tester as well*. An Atom-Protocol- centric system with a load-testing REST client that incorporates collection delete, does not need re-coding in the event of the system changing. If it does need re-coding, then it cannot be a REST client, by definition. A tightly-coupled client of any sort, even an implementation-specific load tester, fails to achieve REST if the collection-delete is hard- coded within. Xforms for example, allows the server to simply define a new <model>, within hypertext, dynamically updating all clients when they next refresh -- without changing the UI (machine or human). So a load-testing REST client would just be driving an Xforms interface dynamically. Refreshing the Xform can reflect the reconfiguration of the system, without the load-tester needing to care. 
It's driving the same form, which now happens to have a different <model>. Hard-coding within the client defeats the purpose of the hypertext constraint, so the result is not a Uniform REST Interface. -Eric * I wouldn't necessarily consider a load tester to be a valid use-case for REST. I'd hard-code it, since that hard-coding amounts to unit tests. REST trades off the efficiencies that a protocol-exercising client likely requires.
Bob Haugen wrote: > > Is there a RESTful way to handle this situation? > Why bother? Set up a marketplace on NetSuite and let their Web Service handle all those issues you point out, charge merchants for stallage. Granted, a REST API to NetSuite would be nice. Trying to model that on rest-discuss, presuming nobody loses interest in the project, could take years! ;-) -Eric
On Mar 2, 2010, at 1:48 PM, Eric J. Bowman wrote: > Eb wrote: >> >> Would an automated load tester (client) for a web application that >> uses hypermedia to traverse the application and execution not be >> considered RESTful because the state-machine for the application is >> hard-coded to some degree? >> > > Exactly. My example is an Atom Protocol client's delete function. > Unless you're using a forms language that supports DELETE, to instruct > the client how that function works using hypertext, then the delete > function must be hard-coded in the client chrome. Not quite. The trick is to only show the delete button when there is an 'edit' link in the entry. You break the hypermedia constraint if the button is always there (or clickable) because you assume that the edit link will be there. It might not. > > This works well for deleting members, which is well specified -- but > deleting collections is not. An Atom Protocol client which hard-codes > a collection-delete function has its interoperability limited to those > systems which interpret collection-delete behavior the same way, i.e. > coupling. Deleting collections is not defined by AtomPub so you'd need to do that in a media type (extension) first. OTOH, HTTP provides sufficient semantics to DELETE any resource. So if you want to build a client that exploits that it might well show a delete button all the time - in the same way a browser shows a GET 'button' all the time (the location bar). > > I don't actually care if a collection-delete button is in the client > chrome, so long as its behavior is dictated by received hypertext. > This decoupling allows the server's interpretation of the unspecified > collection-delete function to change, without breaking clients, i.e. > decoupling. > > The hypertext constraint is the line between this coupling/decoupling, > and this holds true for a load-tester as well*. 
An Atom-Protocol- > centric system with a load-testing REST client that incorporates > collection delete, does not need re-coding in the event of the system > changing. If it does need re-coding, then it cannot be a REST client, > by definition. Right. Jan > > A tightly-coupled client of any sort, even an implementation-specific > load tester, fails to achieve REST if the collection-delete is hard- > coded within. Xforms for example, allows the server to simply define a > new <model>, within hypertext, dynamically updating all clients when > they next refresh -- without changing the UI (machine or human). > > So a load-testing REST client would just be driving an Xforms interface > dynamically. Refreshing the Xform can reflect the reconfiguration of > the system, without the load-tester needing to care. It's driving the > same form, which now happens to have a different <model>. > > Hard-coding within the client defeats the purpose of the hypertext > constraint, so the result is not a Uniform REST Interface. > > -Eric > > * I wouldn't necessarily consider a load tester to be a valid use-case > for REST. I'd hard-code it, since that hard-coding amounts to unit > tests. REST trades off the efficiencies that a protocol-exercising > client likely requires. > > > ------------------------------------ > > Yahoo! Groups Links > > > ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
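Jan's "only show the delete button when there is an 'edit' link in the entry" rule can be put in miniature like this. The sketch is hypothetical (a real client would parse Atom entries, not a list of strings); only the `edit` relation name comes from the discussion:

```java
import java.util.List;

// Hypothetical sketch: the delete button may live in the client
// chrome, but whether it is enabled is decided from the link relations
// actually present in the received entry, never assumed in advance.
public class EntryChrome {

    record Entry(String title, List<String> linkRels) {
        // The hypermedia constraint in miniature: capability is read
        // from the representation.
        boolean deleteEnabled() {
            return linkRels.contains("edit");
        }
    }

    public static void main(String[] args) {
        Entry editable = new Entry("draft", List.of("self", "edit"));
        Entry readOnly = new Entry("published", List.of("self"));
        System.out.println(editable.deleteEnabled());
        System.out.println(readOnly.deleteEnabled());
    }
}
```

If the server later withdraws the `edit` link from some entries, the same client binary disables the button for those entries with no re-coding, which is the decoupling being argued for.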
On Tue, Mar 2, 2010 at 6:47 AM, Jan Algermissen <algermissen1971@...> wrote: > So the scenarios is one of N market (-sites) and M farmes that tell each market how much they offer, yes? Yup. Where N at the moment is 2 or 3, but M is a lot. > Isn't the conflict management 'just' a standard scenario for the need of two phase commits in order to reliably solve the problem? Pretty much. > With the note of course that it is not doable in reality and that you just have to live with the uncertaintees? E.g. accept 'overbooking' and over compensation payment to those that did order but not received and goods? > The overbooking will be known and can be communicated well before compensation payments would be required. > Do you think that the application of REST (or anything else) to the problem changes anything? REST does not solve the problem but the REST constraints affect the problem. And I don't mean in the sense of "dogmatic rules that must be obeyed", but in the sense that this problem will exist on the open Web and thus runs into the problems of coordination over an untrustworthy network. So I am interested in "the best that can practically be done" rather than absolute uncertainty. In those terms, compensation payments is not the best that can be done. Some form of optimistic order taking with conflict notification seems possible. Which will leave the problem of how to get the various sites to agree on the order handling procedure, which will not be easy either.
On Mar 2, 2010, at 12:44 PM, amaeze77 wrote: > All - > > Recent Jan A published a classification for HTTP-based APIs and I have a question (or questions) related to the distinction between HTTP-based Type II and REST. > > For all I can tell, the biggest difference between these two classifications is that HTTP-based Type II assumes the flow of the state machine and REST does not. Correct. REST's hypermedia constraint mandates that the client only relies on the current steady state (think "Web page") when deciding what to do next. > I don't understand why the distinction matters. Because it decouples the server implementation from the client's expectations. It is the reason why e.g. Amazon can evolve without considering the millions of clients using it. Amazon can even evolve *while* clients are in the middle of a purchase. > I've read here that it matters because the server could mix things up and then the client in the hard coded situation would be stuck. It matters because REST aims to free the server from having to consider its clients when it evolves. > I'd suggest that even in the "REST" scenario, a server could throw a curve ball that will leave the client with no place to go but home. Sure. For those cases, REST emphasizes that the client MUST expect such cases and deal with them instead of considering it to be a violation of some agreement. IOW, any 4xx response is still valid from the POV of the contract that governs the communication. It is not a sign of broken communication. > I think this classification has to more to do with degrees of coupling than it has to do with REST. Huh? REST is essentially about removing any coupling beyond the uniform interface - so that statement does not make sense :-) > > Would an automated load tester (client) for a web application that uses hypermedia to traverse the application and execution not be considered RESTful because the state-machine for the application is hard-coded to some degree? Why would it be hard-coded? 
(Of course you hard-code the knowledge of the media types used, but that is not hard coding the state machine) Jan > > I'd like some clarity on this topic. > > Thanks. > > Eb > > > > ------------------------------------ > > Yahoo! Groups Links > > > ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
On Mar 2, 2010, at 2:03 PM, Bob Haugen wrote: > > REST does not solve the problem but the REST constraints affect the > problem. And I don't mean in the sense of "dogmatic rules that must > be obeyed", but in the sense that this problem will exist on the open > Web and thus runs into the problems of coordination over an > untrustworthy network. > > So I am interested in "the best that can practically be done" rather > than absolute uncertainty. In those terms, compensation payments is > not the best that can be done. Some form of optimistic order taking > with conflict notification seems possible. > > Which will leave the problem of how to get the various sites to agree > on the order handling procedure, which will not be easy either. That sounds like an excellent overall target towards my RESTifying procurement exercise could evolve. If you don't mind, I'll 'steal' your use case for that :-) I agree that the use of REST affects the problem - will chew on it. Jan ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
Jan Algermissen wrote: > > Sure. For those cases, REST emphasizes that the client MUST expect > such cases and deal with them instead of considering it to be a > violation of some agreement. IOW, any 4xx response is still valid > from the POV of the constract that governs the communication. It is > not a sign of broken communication. > Very well put. Now I'm going to go off and disagree with you some... -Eric
Hi Bob, > So I think about one place (or actually more than one, but one per > farmer) where farmers can go and post their food, which will then > appear on all of the relevant sites, but when ordered, will go back to > the original site. The original site would need to check availability > and respond back to the buyer with problems, which could also cause > race conditions and vicious circles. Not a pretty picture. For those in supply-chain management, this is the arena of Distribution Requirements (or Resource) Planning. http://www.enotes.com/management-encyclopedia/distribution-distribution-requirements-planning The traditional scenario is that of a bookstore where people can post books for sale, like Amazon. DRP was designed before the web so generally each "store" would send a forecast and the DRP system would allocate inventory based on the forecast AND actual orders. That's no longer real-time enough. A good example of handling race conditions today is online ticket sales. Tickets (inventory) are reserved for a period of time. If the financial transaction occurs within a given period of time, the tickets are permanently removed from stock. If the basket goes stale, the tickets (inventory) are returned to stock and become available. This assumes there's a single inventory location. It would take work but you could come up with a peer-based solution using the Amazon model or some other peer-to-peer model like http://www.cs.virginia.edu/papers/vecchio-p2p-IDEAS05.pdf Mark W.
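Mark's ticket-sales pattern is concrete enough to sketch. A minimal single-process illustration (class and method names are mine, not from any real ticketing system): a reservation holds stock for a limited time, confirming within the window removes it permanently, and stale baskets return their stock.

```python
import time

class Inventory:
    """Single-location stock with time-limited reservations."""

    def __init__(self, stock, hold_seconds=600):
        self.stock = stock
        self.hold_seconds = hold_seconds
        self.holds = {}  # basket id -> (quantity, expiry timestamp)

    def _expire_stale_holds(self, now):
        # Stale baskets: return their tickets to stock.
        for basket, (qty, expiry) in list(self.holds.items()):
            if expiry <= now:
                self.stock += qty
                del self.holds[basket]

    def reserve(self, basket, qty, now=None):
        now = time.time() if now is None else now
        self._expire_stale_holds(now)
        if qty > self.stock:
            return False  # over-order; caller notifies the buyer
        self.stock -= qty
        self.holds[basket] = (qty, now + self.hold_seconds)
        return True

    def confirm(self, basket, now=None):
        # Payment completed inside the hold window: removal is permanent.
        now = time.time() if now is None else now
        self._expire_stale_holds(now)
        return self.holds.pop(basket, None) is not None
```

The `now` parameter exists only to make the expiry behaviour testable; a real implementation would also need locking or request queuing, as discussed later in the thread.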
> Correct. REST's hypermedia constraint mandates that the client only relies on the current steady state (think "Web page") when deciding what to do next. So in the case where a client presupposes the "next" steady state, the hypermedia constraint has been broken? I'll need to digest that. :) What about if I can handle my assumption being incorrect? Do I get REST points then? To your point about Amazon and to extend that example, if I wrote a client that knew how to order books following a "predefined" state machine (not wrapped behind some OO classes but trying to use hypermedia explicitly say via curl) and it broke because Amazon added a new page that my client couldn't handle, then my client is not RESTful because it assumed what a response should/would be. Whereas if my client made zero assumptions (acted like it didn't know what would be returned even though it indicates its preference) and ultimately didn't understand what was returned, it would still be RESTful because it made no assumptions and inspected every response. Am I summarizing right? Eb On Tue, Mar 2, 2010 at 7:07 AM, Jan Algermissen <algermissen1971@...> wrote: > > On Mar 2, 2010, at 12:44 PM, amaeze77 wrote: > > > All - > > > > Recently Jan A published a classification for HTTP-based APIs and I have a > question (or questions) related to the distinction between HTTP-based Type > II and REST. > > > > For all I can tell, the biggest difference between these two > classifications is that HTTP-based Type II assumes the flow of the state > machine and REST does not. > > Correct. REST's hypermedia constraint mandates that the client only relies > on the current steady state (think "Web page") when deciding what to do > next. > > > I don't understand why the distinction matters. > > Because it decouples the server implementation from the client's > expectations. It is the reason why e.g. Amazon can evolve without > considering the millions of clients using it. 
Amazon can even evolve *while* > clients are in the middle of a purchase. > > > I've read here that it matters because the server could mix things up > and then the client in the hard coded situation would be stuck. > > It matters because REST aims to free the server from having to consider its > clients when it evolves. > > > I'd suggest that even in the "REST" scenario, a server could throw a > curve ball that will leave the client with no place to go but home. > > Sure. For those cases, REST emphasizes that the client MUST expect such > cases and deal with them instead of considering it to be a violation of some > agreement. IOW, any 4xx response is still valid from the POV of the > contract that governs the communication. It is not a sign of broken > communication. > > > I think this classification has more to do with degrees of coupling > than it has to do with REST. > > Huh? REST is essentially about removing any coupling beyond the uniform > interface - so that statement does not make sense :-) > > > > > Would an automated load tester (client) for a web application that uses > hypermedia to traverse the application and execution not be considered > RESTful because the state-machine for the application is hard-coded to some > degree? > > Why would it be hard-coded? > > (Of course you hard-code the knowledge of the media types used, but that is > not hard coding the state machine) > > Jan > > > > > I'd like some clarity on this topic. > > > > Thanks. > > > > Eb > > > > > > > > ------------------------------------ > > > > Yahoo! Groups Links > > > > > > > > ----------------------------------- > Jan Algermissen, Consultant > NORD Software Consulting > > Mail: algermissen@... > Blog: http://www.nordsc.com/blog/ > Work: http://www.nordsc.com/ > ----------------------------------- > > > > >
On Mar 2, 2010, at 4:02 PM, Eb wrote: > > > Correct. REST's hypermedia constraint mandates that the client only relies on the current steady state (think "Web page") when deciding what to do next. > > So in the case where a client presupposes the "next" steady state, the hypermedia constraint has been broken? Yes, right. > I'll need to digest that. :) Yeah - it is irritating. OTOH, it is more about making explicit that you simply cannot make such assumptions in a networked system rather than some mysterious magic. Of course the client code eventually assumes it might find that link and provides code for that case but the approach is different and influences how you think about coding the client. > What about if I can handle my assumption being incorrect? Do I get REST points then? In a sense, yes. But the code should try to make explicit that from the current steady state the overall goal of the client cannot be pursued any further. IOW, that there is no transition available that it would make sense to follow next. If you just do a try...catch I'd not give you credit :-) > > To your point about Amazon and to extend that example, if I wrote a client that knew how to order books following a "predefined" state machine (not wrapped behind some OO classes but trying to use hypermedia explicitly say via curl) and it broke because Amazon added a new page that my client couldn't handle, then my client is not RESTful because it assumed what a response should/would be. Yes. > Whereas if my client made zero assumptions (acted like it didn't know what would be returned even though it indicates its preference) and ultimately didn't understand what was returned, it would still be RESTful because it made no assumptions and inspected every response. Yes. Two issues with this: 1. Despite not understanding what was returned, REST emphasizes that there still is communication because you know that you are dealing with an "I do not understand" situation (aka 406 essentially). 
This is a lot more than 'blurb - fail'. In an enterprise context, you could use the 406 response headers and body to directly instruct developers how to bring the client up to date. This is different from doing an error analysis in an OO system. 2. The way you design your media types greatly influences how gracefully your client can react. In fact, I have come to think that the hypermedia constraint issue is probably the primary aspect of machine-targeted media type design (which we still have no public example of, unfortunately). Jan > > Am I summarizing right? > > Eb > > > On Tue, Mar 2, 2010 at 7:07 AM, Jan Algermissen <algermissen1971@...> wrote: > > On Mar 2, 2010, at 12:44 PM, amaeze77 wrote: > > > All - > > > > Recently Jan A published a classification for HTTP-based APIs and I have a question (or questions) related to the distinction between HTTP-based Type II and REST. > > > > For all I can tell, the biggest difference between these two classifications is that HTTP-based Type II assumes the flow of the state machine and REST does not. > > Correct. REST's hypermedia constraint mandates that the client only relies on the current steady state (think "Web page") when deciding what to do next. > > > I don't understand why the distinction matters. > > Because it decouples the server implementation from the client's expectations. It is the reason why e.g. Amazon can evolve without considering the millions of clients using it. Amazon can even evolve *while* clients are in the middle of a purchase. > > > I've read here that it matters because the server could mix things up and then the client in the hard coded situation would be stuck. > > It matters because REST aims to free the server from having to consider its clients when it evolves. > > > I'd suggest that even in the "REST" scenario, a server could throw a curve ball that will leave the client with no place to go but home. > > Sure. 
For those cases, REST emphasizes that the client MUST expect such cases and deal with them instead of considering it to be a violation of some agreement. IOW, any 4xx response is still valid from the POV of the contract that governs the communication. It is not a sign of broken communication. > > > I think this classification has more to do with degrees of coupling than it has to do with REST. > > Huh? REST is essentially about removing any coupling beyond the uniform interface - so that statement does not make sense :-) > > > > > Would an automated load tester (client) for a web application that uses hypermedia to traverse the application and execution not be considered RESTful because the state-machine for the application is hard-coded to some degree? > > Why would it be hard-coded? > > (Of course you hard-code the knowledge of the media types used, but that is not hard coding the state machine) > > Jan > > > > > I'd like some clarity on this topic. > > > > Thanks. > > > > Eb > > > > > > > > ------------------------------------ > > > > Yahoo! Groups Links > > > > > > > > ----------------------------------- > Jan Algermissen, Consultant > NORD Software Consulting > > Mail: algermissen@... > Blog: http://www.nordsc.com/blog/ > Work: http://www.nordsc.com/ > ----------------------------------- > > > > > ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
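Jan's distinction can be sketched in a few lines. Assuming a hypothetical toy media type (each representation reduced to a dict of link relations), a client honouring the hypermedia constraint picks its next transition only from the links actually present in the current steady state, and treats "no sensible transition from here" as an explicit outcome rather than an exception:

```python
def next_transition(representation, known_rels):
    """Return the first link whose relation the client understands,
    or None: an explicit 'no transition makes sense from here'."""
    links = representation.get("links", {})
    for rel in known_rels:
        if rel in links:
            return rel, links[rel]
    return None

def follow(known_rels, fetch, start_uri, max_steps=10):
    """Drive the interaction from the server's responses alone:
    no presupposed 'next' state, just the current one."""
    uri, path = start_uri, []
    for _ in range(max_steps):
        rep = fetch(uri)
        step = next_transition(rep, known_rels)
        if step is None:
            break  # goal reached or no usable link -- not an error
        rel, uri = step
        path.append(rel)
    return path, uri
```

The hard-coded client Eb describes would instead fetch a fixed sequence of URIs and break the moment the server inserts a new page; this one just stops cleanly.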
I know this is an old thread, but I just want to inform other readers that might find it through a search engine, that Microsoft, Google, and Yahoo have put together a solution called "OAuth WRAP". With WRAP you can present security claims using simple small security tokens called SWT. There's a pretty good presentation here: http://microsoftpdc.com/Sessions/SVC19 - it refers pretty much to Microsoft's Azure Access Control service, but the protocols used are standardized. Homepage: http://wiki.oauth.net/OAuth-WRAP Discussion group: http://groups.google.com/group/oauth-wrap-wg /Jørn --- In rest-discuss@yahoogroups.com, Jørn Wildt <jw@...> wrote: > >> Is there any standard RESTful way of doing claims-based > >> authorization à la SAML and CardSpace? The authorization schemes I > >> have seen so far usually encode a user reference and nothing more > >> - there's no secure way to assert claims like email=xxx@... > >> <mailto:email%3Dxxx%40yyy.zz> or > >> employeenumber=12345 or age-below-twenty. > >> > >> I guess you can use SAML "HTTP Redirect (GET) Binding", but that > >> generates such a huge URL that it seems impractical to use (it's a > >> base-64 encoding of a zip-encoding of a SAML XML document). > >> > >> As I understand it a RESTful authorization scheme must be > >> stateless, so you cannot rely on any kind of session use. This > >> means you have to transfer all the claims on each and every request > >> which again means a potentially big overhead. > >> > >> What is needed is a standard way of encoding multiple claims in a > >> compact, secure, trusted way such that they can be transferred on > >> each request without too much overhead (including whatever crypto > >> stuff is needed). > >> > >> Maybe you could create a temporary resource somewhere with the > >> claims, then at least you only had to transfer the claims URL, not > >> all the claims, and the server could then cache these claims. > >> > >> Any ideas or references? 
> >> > >> It even occurs to me that claims could be more RESTful than > >> username/password since they don't require any out-of-band setup of > >> user accounts. All that is needed is a standard for claims and then > >> everything should work if the claims are issued by an authority > >> that the web service trusts. No need for any human interaction - > >> the server just sends a challenge "show me your claims (and I > >> accept them from authority X, Y and Z)" whereafter the client sends > >> the claims. These claims can even be obtained without human > >> interaction if the client and the claims server trust each other. > >> > >> Comments? > >> > >> Thanks, Jørn > > > > > > > > ------------------------------------ > > > > Yahoo! Groups Links > > > > > > >
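For readers landing here from a search engine: the SWT tokens mentioned above are essentially form-encoded claim pairs with an HMAC-SHA256 signature appended as the final pair. A rough sketch of the idea (claim names and the signing key are illustrative, and this is not a conformance-tested implementation of the SWT spec):

```python
import base64
import hashlib
import hmac
from urllib.parse import parse_qs, quote, urlencode

def issue_swt(claims, key):
    """Form-encode the claims, then append an HMACSHA256=... pair
    holding the URL-encoded base64 HMAC over the claim string."""
    body = urlencode(sorted(claims.items()))
    sig = base64.b64encode(
        hmac.new(key, body.encode(), hashlib.sha256).digest()
    ).decode()
    return body + "&HMACSHA256=" + quote(sig, safe="")

def verify_swt(token, key):
    """Recompute the signature; return the claims dict, or None
    if the token was tampered with or signed with another key."""
    body, _, _sig = token.rpartition("&HMACSHA256=")
    claims = {k: v[0] for k, v in parse_qs(body).items()}
    expected = issue_swt(claims, key)
    if not hmac.compare_digest(expected, token):
        return None
    return claims
```

Because the whole token travels in one header value, this fits the stateless "transfer the claims on every request" requirement from the original question, at the cost of HMAC key distribution between issuer and relying party.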
Hello Everyone, Can someone point me to the latest documents/discussions on using REST for PayPal and third-party payment processors? Thanks in advance and best regards, -E
> On 2010-03-02 00.44, Guilherme Silveira wrote: >> DTOs are also typically anemic, and because Java classes are closed >> (unlike ruby ones), the client will have to live with an anemic DTO >> representation of his resource. >> >> That's why I believe DTO and REST should not be together... > > When I do "DTO"'s in my REST API, I tend to reuse the same DTO classes > over and over again. I have "LinksValue" and "LinkValue" being used all > over the place, rather than type-specific ones. These then get > serialized to JSON in a predictable way. My client (Swing Java) > therefore does not have to know all that many DTO classes, and instead > focus on generic interactions like "select one link from a list and post > it". So the client code becomes much more reusable, due to the low level > of coupling. Hello Rickard! Your case is the non-typical DTO case :) >> DTOs are also typically anemic... Regards > > /Rickard > > > ------------------------------------ > > Yahoo! Groups Links > > > >
Now now, Bob -- stop toying with the kids. Phil told me back in the eBuilt days (2000) that you were an expert in inventory control programs for ERP, so I expect you know the answer to that problem better than I do. REST does not have a solution to distributed (more than one central server) transaction problems, other than to use server-side intermediaries to change the problem itself. REST doesn't even try to solve that problem, since it is behind the server interface. IIRC, the multi-market problem is known to be intractable even for strict transaction protocols. We end up doing the same thing that everyone else does, which is to use journaling of orders and either compensating transactions or strict separation of inventory between sites (with periodic rebalancing of inventory as stocks decrease). I think the farmer could be provided with a RESTful application for tracking and rebalancing inventory across multiple sites, but I wouldn't call that a RESTful solution to the inventory problem itself. It just works well enough to keep the farmer happy. ....Roy
On Tue, Mar 2, 2010 at 8:02 PM, Roy T. Fielding <fielding@...> wrote: > Now now, Bob -- stop toying with the kids. Phil told me back > in the eBuilt days (2000) that you were an expert in inventory > control programs for ERP, so I expect you know the answer to that > problem better than I do. I know how to solve it in ERP, but ERP does not handle this problem. (That's about all I do these days, work on problems that ERP does not solve..and most of them happen on the Web, which means I sometimes run into REST constraints. And sometimes maybe think about them incorrectly.) > IIRC, the multi-market problem is known to be intractable > even for strict transaction protocols. We end up doing the > same thing that everyone else does, which is to use journaling > of orders and either compensating transactions or strict > separation of inventory between sites (with periodic rebalancing > of inventory as stocks decrease). You don't think notifications would work? Or do you think of them as compensating transactions? I'm thinking, by the way, that one resource gets nominated by the farmer as their inventory resource, and the other markets just display a representation and then send an update to the inventory resource, which could respond with an over-order notice. > I think the farmer could > be provided with a RESTful application for tracking and > rebalancing inventory across multiple sites, but I wouldn't > call that a RESTful solution to the inventory problem itself. > It just works well enough to keep the farmer happy. "Well enough" is all I need. Farming is like that, too. But all the separation and rebalancing would have to happen automatically, without farmer involvement. So one inventory resource consumed by multiple markets is an attractive idea, if it can work.
On Mar 3, 2010, at 4:22 AM, Bob Haugen wrote: > On Tue, Mar 2, 2010 at 8:02 PM, Roy T. Fielding <fielding@...> wrote: >> IIRC, the multi-market problem is known to be intractable >> even for strict transaction protocols. We end up doing the >> same thing that everyone else does, which is to use journaling >> of orders and either compensating transactions or strict >> separation of inventory between sites (with periodic rebalancing >> of inventory as stocks decrease). > > You don't think notifications would work? Or do you think of them as > compensating transactions? Message (distributed) notifications or database notifications? They might work -- it depends on how fast the orders are being processed and how reliable the delivery. Notification-based systems tend to fail spectacularly at peak times, and are usually overkill during non-peak times. *shrug* There's a good chance that you could build a good enough system for family farms using nothing more than email. > I'm thinking, by the way, that one resource gets nominated by the > farmer as their inventory resource, and the other markets just display > a representation and then send an update to the inventory resource, > which could respond with an over-order notice. I thought the sites were independent. Yes, a single inventory resource accessed by both sites should be fine as long as the state changing requests are queued by the implementation. The implementation of that resource could even be an atomic test+decrement. But how are you going to teach the markets to use such a resource? >> I think the farmer could >> be provided with a RESTful application for tracking and >> rebalancing inventory across multiple sites, but I wouldn't >> call that a RESTful solution to the inventory problem itself. >> It just works well enough to keep the farmer happy. > > "Well enough" is all I need. Farming is like that, too. 
> > But all the separation and rebalancing would have to happen > automatically, without farmer involvement. So one inventory resource > consumed by multiple markets is an attractive idea, if it can work. Yes, if you can make the markets use a shared resource, that is much easier. If not, then it should also be possible to automate a balancer even if the farmer's interface to the markets is two different web pages. You could then provide the farmer with a mash-up interface that treats the markets as gateways. ....Roy
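Roy's "atomic test+decrement" inventory resource can be sketched as follows (a single-process illustration with names of my own, not a real API): state-changing requests are serialized, so two markets racing for the last unit cannot both succeed, and an over-order comes back as an explicit signal rather than corrupted stock.

```python
import threading

class InventoryResource:
    """State behind one farmer-nominated inventory resource;
    all state-changing requests are serialized on a lock."""

    def __init__(self, on_hand):
        self.on_hand = on_hand
        self._lock = threading.Lock()

    def decrement(self, qty):
        # Atomic test+decrement: either the whole order fits, or
        # nothing changes and the caller gets an over-order signal
        # (which an HTTP front end might map to a 409 response).
        with self._lock:
            if qty > self.on_hand:
                return None
            self.on_hand -= qty
            return self.on_hand
```

In a multi-server deployment the same guarantee would have to come from the datastore (a conditional update or queue), but the resource's contract to the markets stays the same.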
Roy, Thanks for the responses. And I am sorry if I toyed with the kids, but this group often helps me think through these kinds of problems, even if I don't get a ready-made solution (nor do I expect one). >> You don't think notifications would work? Or do you think of them as >> compensating transactions? > > Message (distributed) notifications or database notifications? Message. Maybe even an HTTP error message? Then GET the resource again. > They might work -- it depends on how fast the orders are being > processed and how reliable the delivery. Notification-based > systems tend to fail spectacularly at peak times, and are > usually overkill during non-peak times. *shrug* There's a > good chance that you could build a good enough system for family > farms using nothing more than email. That's the way a lot of them that sell to local consumers work now. But they are herding into online markets, including several that I know and help. > Yes, a single inventory > resource accessed by both sites should be fine as long as the > state changing requests are queued by the implementation. > The implementation of that resource could even be an atomic > test+decrement. Yup. Wonder if this would help? http://www.w3.org/1999/04/Editing/ >But how are you going to teach the markets > to use such a resource? So far, not many markets, the people who run them often know each other, and might be taught to cooperate. But we'll see. > if you can make the markets use a shared resource that > is much easier. If not, then it should also be possible to > automate a balancer even if the farmer's interface to the > markets is two different web pages. You could then provide > the farmer with mash-up interface that treats the markets as > gateways. A reasonable fall-back. Thanks again.
I am experimenting with a setup where I base a website completely on a REST API. This means creating the REST API first and then only using that for fetching data to display in the website. This leads to some troubles with URLs for the website - especially how the web URL should identify the REST resources to display. In a traditional website there is a tight integration with the backend database. This means we can use DB identifiers and readable names in our URLs. Something which most people agree on is good for SEO. Example: to show Peter's blog we use the URL http://www.mysite.com/blogs/peter. But what happens if the backend is a REST API? Now we cannot just write "peter" in the URL since this tells us nothing about how to fetch the "peter" resource. We must instead include the whole URL for the "peter" resource. This URL could be http://rest.mysite.com/feeds/peter which would serve an ATOM feed for our website to format and display nicely. From this it follows that our website URL must include the URL-encoded "peter" reference. Now our URL becomes: http://www.mysite.com/blogs/http%3a%2f%2frest.mysite.com%2ffeeds%2fpeter The downside of this is that our web URL becomes SEO unfriendly, unreadable and impossible to remember. The upside is that we can now display *any* ATOM feed on our website, not just our own feeds, which in turn happens to be both good and bad. It's bad because evil persons can craft a URL with a reference to an evil hacker's ATOM feed and make it look like a URL to our site. It's good because it gives us much more flexibility. I could of course publish a URL template for my ATOM resources, stating that "peter" can be mapped to http://rest.mysite.com/feeds/peter. But the use of URL templates makes link relations in the ATOM feed less useful. A link relation must include the complete URL to the related resource. So an ATOM related link to "older entries" could be http://rest.mysite.com/feeds/peter?page=2. 
Now we *have* to put this complete reference into our website's URL: http://www.mysite.com/blogs/http%3a%2f%2frest.mysite.com%2ffeeds%2fpeter%3fpage%3d2 So our website's URL becomes more and more obscure if we really don't want to know anything about the REST API's URL templates. Have I missed something? Comments? Thanks.
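For what it's worth, option 2 described above (embedding the backend feed URI, percent-encoded, in the website's path) is mechanical to implement. A sketch using standard percent-encoding; the mysite.com host names and the /blogs/ prefix are the examples from this thread:

```python
from urllib.parse import quote, unquote

def website_url(feed_uri):
    """Embed a backend feed URI, fully percent-encoded, in the
    website's /blogs/ path."""
    return "http://www.mysite.com/blogs/" + quote(feed_uri, safe="")

def feed_uri_from_path(encoded_part):
    """Recover the backend URI from the path segment after /blogs/."""
    return unquote(encoded_part)
```

The round trip is lossless, which is exactly what makes the scheme work for arbitrary feeds, and exactly why the resulting URLs are so unreadable.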
I'd like to clear up some misconceptions about URIs and search engines. The meaning of "SEF URI" should be restricted to "don't use query strings". There's nothing "SEO unfriendly" about the URL you posted. Any advantage to placing keywords in URIs comes down to extreme cases of tie-breaking where all other things are equal. Meaning there are a hundred other things you can do for your site to make it rank better, regardless of your URI allocation scheme. Lacing your URIs with keywords is almost completely pointless in this regard. There's really no connection between your URI allocation scheme's usability, and either REST or SEO. Your resource, http://www.mysite.com/blogs/peter, can return anything you need it to return, regardless of any back-end API. You can refer to my demo site for ideas on how to integrate Atom feeds into Web pages in a direct fashion: http://charger.bisonsystems.net/xmltest/example.axml (works cross-browser when /date service is online, lacks CSS for IE) Client side or server side, you can easily use XSLT to transform an Atom source into an XHTML page. Just remember to link to that source in the result, and use its URI in Content-Location if you're using conneg to determine whether to return Atom or XHTML for a particular resource. I developed the Atom framework first, that's the API, the XHTML is just transformations. There's really no need to put the URI of a "REST resource" into the URI of a request. If that's how you want your back- end to look, fine, but you still don't need to expose it to the world. -Eric "Jorn Wildt" wrote: > > I am experimenting with a setup where I base a website completely on > a REST API. This means creating the REST API first and then only > using that for fetching data to display in the website. This leads to > some troubles with URLs for the website - especially how the web URL > should identify the REST resources to display. > > In a traditional website there is a tight integration to the backend > database. 
This means we can use DB identifiers and readable names in > our URLs. Something which most people agree on is good for SEO. > Example: to show Peters blog we use the URL > http://www.mysite.com/blogs/peter. > > But what happens if the backend is a REST API? Now we cannot just > write "peter" in the URL since this tells us nothing about how to > fetch the "peter" resource. We must instead include the whole URL for > the "peter" resource. This URL could be > http://rest.mysite.com/feeds/peter which would serve an ATOM feed for > our website to format and display nicely. From this follows that our > website URL must include the url encoded "peter" reference. Now our > URL becomes: > > http://www.mysite.com/blogs/http%3a%2f%2frest.mysite.com%2ffeeds%2fpeter > > The downside of this is that our web URL becomes SEO unfriendly, > unreadable and impossible to remember. The upside is that we can now > display *any* ATOM feed on our website, not just our own feeds, which > in turn happens to be both good and bad. It's bad because evil > persons can craft a URL with a reference to an evil hackers ATOM feed > and make it look like a URL to our site. It's good because it gives > us much more flexibility. > > I could of course publish a URL template for my ATOM resources, > stating that "peter" can be mapped to > http://rest.mysite.com/feeds/peter. > > But the use of URL templates makes link relations in the ATOM feed > less usefull. A link relation must include the complete URL to the > related resource. So an ATOM related link to "older entries" could be > http://rest.mysite.com/feeds/peter?page=2. Now we *have* to put this > complete reference into our website's URL: > > http://rest.mysite.com/blogs/peter/http%3a%2f%2frest.mysite.com%2ffeeds%2fpeter%3fpage%3d2 > > So our website's URL becomes more and more obscure if we really don't > want to know anything about the REST API's url templates. > > Have I missed something? Comments? > > Thanks. > >
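Eric's demo uses XSLT; the same shape of transformation (Atom in, XHTML fragment out, with a link back to the source as he describes REST requiring) can be sketched with a stdlib XML library instead. Element names beyond Atom's own are arbitrary, and the link relation used is illustrative:

```python
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"

def atom_to_xhtml(atom_xml, source_uri):
    """Render an Atom feed as an XHTML fragment that links back
    to its source, in the spirit of the XSLT demo."""
    feed = ET.fromstring(atom_xml)
    div = ET.Element("div")
    # Link to the source feed, as the REST argument above requires.
    a = ET.SubElement(div, "a", href=source_uri, rel="alternate")
    a.text = feed.findtext(ATOM + "title", default="feed")
    ul = ET.SubElement(div, "ul")
    for entry in feed.findall(ATOM + "entry"):
        li = ET.SubElement(ul, "li")
        li.text = entry.findtext(ATOM + "title", default="")
    return ET.tostring(div, encoding="unicode")
```

A server-side version of this is what lets www.mysite.com serve only HTML while the Atom source lives on rest.mysite.com, with the relationship expressed in hypertext rather than in the request URI.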
Thanks for the feedback. I'll skip the SEO discussion, that's not the interesting part for me. > Your resource, http://www.mysite.com/blogs/peter, can return anything > you need it to return, regardless of any back-end API. Yes. But I have to point out that www.mysite.com is not the same as my backend rest.mysite.com. The www site has no database in it. It is a simple system that just transforms ATOM feeds exactly like you describe. I don't want the www site to return anything but HTML. If someone wants the ATOM feed, I'll give them the backend REST resource URL. The point here is that if I construct the REST API first, and build my website on that, then I am guaranteed that I have a useful and working REST API for the rest of the world to use. My system becomes open by design, not as an afterthought. The interesting point is: how does my www site find the correct ATOM resource when the only thing it has in its URL is "peter", as in http://www.mysite.com/blogs/peter. I can see two solutions: 1) The www site has a priori knowledge of how to transform "peter" into the backend resource http://rest.mysite.com/feeds/peter 2) The complete resource reference is included in the URL, as in http://www.mysite.com/blogs/http%3a%2f%2frest.mysite.com%2ffeeds%2fpeter > There's really no need to put the URI of a "REST > resource" into the URI of a request. But this is exactly my problem: if I do not want to use a priori information about the REST resource's URL format, then I must include the complete REST URL into my www URL. Either I get a very tight integration with my backend REST API, by utilizing a priori information, or I must include complete resource URLs in my www URL. Consider a situation where the owner of the website merges with another company, so that we have two backend REST APIs - the original one, and one for the new company. We want to present both backend datasets on the same www site. 
Which URL template should I use to convert "peter" into a resource? The original or the new? What if both datasets have a "peter" blog? Now it becomes clearer that putting the full resource URL into my www URL can be nice. /Jørn
"Jørn Wildt" wrote: > > > Your resource, http://www.mysite.com/blogs/peter, can return > > anything you need it to return, regardless of any back-end API. > > Yes. But I have to point out that www.mysite.com is not the same as > my backend rest.mysite.com. The www site has no database in it. > Doesn't matter. What matters is *that* URIs appear in link relations, not *what* URIs you are using. > > The interesting point is: how does my www site find the correct ATOM > resource when the only thing it has in its URL is "peter", as in > http://www.mysite.com/blogs/peter. > By using hypertext and link relations. > > 1) The www site has a priori knowledge of how to transform "peter" > into the backend resource http://rest.mysite.com/feeds/peter > OK, if you mean transform _from_ the backend resource. That's the sort of implementation detail in REST that's opaque behind the interface. BTW, your website must have this a priori knowledge of the relationship, otherwise it can't instruct the client about it. > > 2) The complete resource reference is included in the URL, as in > http://www.mysite.com/blogs/http%3a%2f%2frest.mysite.com%2ffeeds%2fpeter > > > There's really no need to put the URI of a "REST > resource" into the URI of a request. > > But this is exactly my problem: if I do not want to use a priori > information about the REST resource's URL format, then I must include > the complete REST URL into my www URL. > OK, you don't want to use 1), gotcha. So use hypertext. What my demo shows is an XHTML file which calls an XSLT transformation. The XSLT code knows what source file to transform by reading it from the XHTML file's <link rel='source'/> href (actually from @data, same diff). For client-side XSLT like my demo, this has to be from the same domain due to cross-site restrictions. A server-side transformation works the same way, though. REST only requires that the source file is linked to. Your <link rel='source'/> will point to a different domain, is all. 
No need to put it in the URI. > > Either I get a very tight integration with my backend REST API, by > utilizing a priori information, or I must include complete resource > URLs in my www URL. > I'm obviously not agreeing with that 'must include' bit. You can also employ headers, including Link. My demo includes my /date translation service, which has a different domain. Due to cross-site restrictions, I must proxy it from the demo domain. I just do this using libcurl: http://charger.bisonsystems.net/date?iso=2010-03-04 simply loads: http://en.wiski.org/date?iso=2010-03-04 while adding some headers. There will eventually be language negotiation there, for a multilingual service. The headers on charger aren't right, but they do include a 'Content-Location: http://en.wiski.org/date?iso=2010-03-04' header. That's your 1), there, except what I've done is identify the cross-domain source in a header -- which is completely RESTful. > > Consider a situation where the owner of the website merges with > another company, leading to a situation where we have two backend > REST APIs - the original one, and one for the new company. We want to > present both backend datasets on the same www site. Which URL > template should I use to convert "peter" into a resource? The > original or the new? What if both datasets have a "peter" blog? Now it > becomes clearer that putting the full resource URL into my www URL > can be nice. > If both datasets have a "peter" blog, they should still have different atom:id's, so it's easy to create "peter_a" and "peter_b" resources. I'm still not seeing any problem that requires URIs within URIs here. Whichever API is used is reflected in hypertext link relations, inside markup and/or inside headers. Disclaimer: Again, REST has nothing to do with URI design, so my solution vs. yours has nothing to do with degrees of REST or any such thing. 
The reason I post counter-examples is to steer you away from the notion that putting URIs in your URIs is somehow required by, or amounts to following, REST. What matters is that you're providing the standard link relations, what (mostly) doesn't matter is the syntax of the URIs in those link relations (the URIs themselves are opaque, including domain name). A "hypertext link relation" can be a <link/> element, a Link: header, or in my example a Content-Location header. You can check with RFC 2616bis for current practice on Content-Location, I'm not sure if it's now allowed as a general response instead of just as a conneg response. -Eric
> > Yes. But I have to point out that www.mysite.com is not the same as > my backend rest.mysite.com. The www site has no database in it. It is > a simple system that just transforms ATOM feeds exactly like you > describe. I don't want the www site to return anything but HTML. If > someone wants the ATOM feed, I'll give them the backend REST resource > URL. > If someone wants the Atom feed, they can find it by looking for a link rel='alternate' with type='application/atom+xml' (you may also use this link relation to load a source file, instead of rel='source'). Browsers tend to do this automatically these days, and display an icon for feed subscription. If a request comes to www.mysite.com, presumably the default is to serve an index.html file. However, if the request only has Accept: application/atom+xml, you have three options. First, a 404 response. Second and third, conneg. You can serve the Atom source as a 200 response with a Content-Location header pointing to the rest.mysite.com backend. Or you can respond with a 301 redirect with a Location header pointing to the rest.mysite.com backend. This "a priori" knowledge on the server's part does not constitute coupling. -Eric
> > First, a 404 response. > Errr, 406. -Eric
On 4 March 2010 09:28, Jorn Wildt <jw@...> wrote: > > 1) The www site has a priori knowledge of how to transform "peter" into > the backend resource http://rest.mysite.com/feeds/peter > > 2) The complete resource reference is included in the URL, as in > http://www.mysite.com/blogs/http%3a%2f%2frest.mysite.com%2ffeeds%2fpeter > > You could use something like the <BASE> tag and then you can have a mix of both 1 and 2.
Thanks for taking your time to discuss this.
I'll dive right into this statement:
> http://charger.bisonsystems.net/date?iso=2010-03-04 simply loads:
> http://en.wiski.org/date?iso=2010-03-04 while adding some headers.
Now, how does your server know how to transform the incoming http://charger.bisonsystems.net/date?iso=2010-03-04 request to the http://en.wiski.org/date?iso=2010-03-04 location?
You can only do this by having some knowledge of your backend REST API, namely that the URL template is http://en.wiski.org/date?iso={date}. If you did not have this information, how would you then know what to do with the "2010-03-04" value?
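The mapping this question points at can be sketched in a few lines of Python. This is a hypothetical illustration, not Eric's actual proxy (which uses libcurl); the template is the one quoted above, and the function name is made up:

```python
# Sketch of the a-priori-knowledge step being discussed: the front end
# can only route the request because it already holds the backend URL
# template. Illustrative only; not code from the thread.
BACKEND_TEMPLATE = "http://en.wiski.org/date?iso={date}"

def backend_url_for(iso_date: str) -> str:
    # Without knowing this template, the bare value "2010-03-04"
    # says nothing about where the backend resource lives.
    return BACKEND_TEMPLATE.format(date=iso_date)
```

The point of the sketch is that `BACKEND_TEMPLATE` is design-time knowledge: removing it leaves the front end with no way to act on the incoming date value.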
> If both datasets have a "peter" blog, they should still have different
> atom:id's, so it's easy to create "peter_a" and "peter_b" resources.
Yes, I understand that. Assuming we keep the original hostname, you could create new resources named http://rest.mysite.com/blogs/peter_a and http://rest.mysite.com/blogs/peter_b based on the ATOM ids.
But what if I don't want to create a new resource "space" for the two companies? I want to keep the old http://rest.mysite.com/feeds/peter and http://rest.anothersite.com/feeds/peter ... well, never mind, I was trying to argue why a URI in a URI could be useful - not discuss how to merge companies :-)
> BTW, your website must have this a priori knowledge of the relationship,
> otherwise it can't instruct the client about it.
Not really. I could do a search for "peter" and get back a URL.
The search itself would of course be through the REST API and it could return yet another ATOM feed of the search result (using open search for instance).
Let's see what should be in the search result. Links! Certainly, but links to what? It cannot return links to the www site since that would imply my REST API knew something about its client. So it must return links to REST API resources. This means we would get the REST API URL http://rest.mysite.com/feeds/peter back.
Now, what should my www site do with the http://rest.mysite.com/feeds/peter URL in order to generate a browser link to itself that can display the feed? My www site knows nothing about the URL format, so it has no way to figure out how to select "peter" and present a http://www.mysite.com/blogs/peter URL to the end user. And here is my point: the only thing the www site can do is to include the full REST URL in the www URL.
You could argue that the REST search should return both the complete resource URL as well as the feed name "peter". But then, again, the www site would have to know how to transform "peter" into the REST API URL http://rest.mysite.com/feeds/peter.
/Jørn
> You could argue that the REST search should return both the complete resource URL as well as the feed name "peter". But then, again, the www site would have to know how to transform "peter" into the REST API URL http://rest.mysite.com/feeds/peter. I would like to add this: what happens if the REST search result suddenly starts to return links to feeds outside of our company space? Maybe it found a good feed on http://johny.com/myblog/feed. The www site would have no way to shorten this URL in its own URL. All it can do is to include the complete URL. /Jørn
> > You could argue that the REST search should return both the complete resource URL as well as the feed name "peter". But then, again, the www site would have to know how to transform "peter" into the REST API URL http://rest.mysite.com/feeds/peter. > > I would like to add this: what happens if the REST search result suddenly starts to return links to feeds outside of our company space? Maybe it found a good feed on http://johny.com/myblog/feed. The www site would have no way to shorten this URL in its own URL. All it can do is to include the complete URL. The funny thing is that I would have absolutely no problems with handing URLs around if I was building a standalone WPF/Java client. All this is just because "we", as the general internet users, want nice-looking URLs in our browsers. If only browsers would hide that address bar until we wanted to enter something in it :-) /Jørn
Hey Jorn,
I sent a reply to you a few minutes ago, meant to put it to the list.. if you could so kindly reply and add the list back in to it.
There IS a way to hide that URL line.. and I don't mean actually programmatically hiding it with javascript.
I worked on a Swing desktop application that made calls to a server back end a while back, and we had a business need come up to convert it to a web app. We actually installed the web UI on desktops, using Install4J. The app ran just like a desktop app, within a browser on the client, but we embedded tomcat and hsqldb as part of the installation. The reason was we had a need to deploy the UI centrally, but also wanted individual users to have it locally to mimic a desktop app. Rather than maintain a Swing desktop app and a web UI, we incorporated the two needs into one web UI.
Anyway, in order to mimic a desktop, we didn't want the URI to be important. We could tell our end users "ignore the URI line..no matter what you see", but that wasn't appropriate. So instead, our www.website.com URI would break the web app into two frames.. a hidden frame of 0 height, and the main frame. In the hidden frame, I loaded all the javascript, downloaded UI images (to cache them in the client), and so on. The UI was purely ajax based, meaning every link/form/etc made ajax calls (to the local tomcat container). The beauty of this was that the URL line remained fixed on our www.website.com URL, while we used javascript and div tags to show/hide portions of the site as needed dynamically. This allowed us to avoid page reloading when a link was clicked, and it kept all the javascript and other "global" variables in the browser memory the entire time the UI was loaded. That way, even if we reloaded the main frame, the javascript was always in
memory in the hidden frame and didn't have to be reloaded... thus avoiding losing "state" variables and other things.
Just thought I'd share. Not sure if you have considered this approach.
--- On Thu, 3/4/10, Jorn Wildt <jw@...> wrote:
From: Jorn Wildt <jw@...>
Subject: [rest-discuss] Re: Thoughts about URLs for a REST driven website
To: rest-discuss@yahoogroups.com
Date: Thursday, March 4, 2010, 4:48 AM
> > You could argue that the REST search should return both the complete resource URL as well as the feed name "peter". But then, again, the www site would have to know how to transform "peter" into the REST API URL http://rest.mysite.com/feeds/peter.
>
> I would like to add this: what happens if the REST search result suddenly starts to return links to feeds outside of our company space? Maybe it found a good feed on http://johny.com/myblog/feed. The www site would have no way to shorten this URL in its own URL. All it can do is to include the complete URL.
The funny thing is that I would have absolutely no problems with handing URLs around if I was building a standalone WPF/Java client. All this is just because "we", as the general internet users, want nice-looking URLs in our browsers. If only browsers would hide that address bar until we wanted to enter something in it :-)
/Jørn
(putting message back on the list) ----- Original Message ----- From: "Kevin Duffey" <andjarnic@...> To: "Jorn Wildt" <jw@...> Sent: Thursday, March 04, 2010 5:44 PM Subject: Re: [rest-discuss] Thoughts about URLs for a REST driven website > I've read the thread you guys got going on. I guess I am not understanding > what the problem is. I've utilized REST as a sort of MVC for my web site > as well. I look at it this way, the REST API that is available beyond the > web to anyone who wants to consume it, ALSO allows me to build my own web > UI around it. I do this in one of two ways. One is to have a separate > "view" tier web front end. I do this with a simple struts/mvc style web > app with JSPs for the view of the web site. When a user clicks on a link > that is handled by an action, in that action code I make the REST call to > the API, and do all the work inside of the action as needed. The other way > I might do it, is to deploy the API within my own UI web app archive. When > I do this, I utilize prototype to make ajax calls directly to the REST > api, specifying headers, method type, etc as needed. It really depends on > which way you want to deploy your API and your front end UI. > > I am a little confused on why end users must have some > www.mysite.com/peter as their URL? That seems to me that you're passing > the REST API uris up to the end user of your UI. I don't quite understand > why that is needed or wanted. I don't imagine anyone else using your API > from another site, or say desktop or mobile client, would somehow use > your API URIs within their UI representation of your API. Is there some > reason you must incorporate the REST URIs into the front end UI your users > would use?
Kevin, you are describing exactly what I am trying to implement, except for the little detail where the whole problem lies: > When a user clicks on a link that is handled by an action, in that action > code I make the REST call to the API How does that link look? I guess it is something like http://www.kevin.com/display-item.jsp?id=1234. Right? I also guess your REST API URL is something like http://rest.kevin.com/items/1234. Right? Now how do you get from the first URL to the second URL? You grab "id=1234" from the website URL and paste "1234" into the REST API URL template. This means you have a priori knowledge of your REST API. Okay, this in itself is not such a big problem. You are just utilizing a prepublished official bookmark/template URL. But what if your item contains links to other items? or to purchase orders for that item? or to users that bought that item? You know, "hypermedia as an engine of application state". The "orders" REST URLs could look like http://rest.kevin.com/orders/5678 or http://rest.kevin.com/items/1234/orders/5678 - it could even be links to orders placed on amazon.com! The point is, you do not know the URL format. You can only follow the link and grab the resource found at the endpoint. Now, on http://www.kevin.com/display-item.jsp?id=1234 we want to have nice HTML links to the orders placed on the item. We also want to display the order information inside www.kevin.com. What are the URLs for these links? You cannot just link to the REST URLs since these do not return HTML, so linking to http://rest.kevin.com/items/1234/orders/5678 or http://rest.amazon.com/orders/6554 is not an option. The only solution I see is to include the complete REST order URL in the web URL, and thus we end up with: http://www.kevin.com/display-item-orders?order=http%3a%2f%2frest.amazon.com%2forders%2f6554 Hopefully this explains why I insist on passing complete URLs around - while at the same time I really don't like it because of the ugly URLs. 
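A minimal Python sketch of this "URI inside a URI" scheme: the www site percent-encodes the complete backend order URL into its own link, then decodes it back out on the next request. Hostnames come from the example above; the helper names are invented for illustration:

```python
# Sketch of embedding a full backend resource URL in a frontend URL.
# urlencode percent-escapes ':' and '/' so the embedded URL survives
# as a single query parameter; parse_qs recovers it unchanged.
from urllib.parse import urlencode, parse_qs, urlsplit

def frontend_link(rest_order_url: str) -> str:
    # build the user-facing link with the backend URL as a parameter
    return ("http://www.kevin.com/display-item-orders?"
            + urlencode({"order": rest_order_url}))

def backend_url(frontend_url: str) -> str:
    # recover the complete REST URL from the query string
    return parse_qs(urlsplit(frontend_url).query)["order"][0]
```

The round trip is lossless, which is exactly why it works across arbitrary backends (including amazon.com in the example), and also exactly why the resulting frontend URLs are so ugly.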
/Jørn ----- Original Message ----- From: "Kevin Duffey" <andjarnic@...> To: "Jorn Wildt" <jw@...> Sent: Thursday, March 04, 2010 5:44 PM Subject: Re: [rest-discuss] Thoughts about URLs for a REST driven website > I've read the thread you guys got going on. I guess I am not understanding > what the problem is. I've utilized REST as a sort of MVC for my web site > as well. I look at it this way, the REST API that is available beyond the > web to anyone who wants to consume it, ALSO allows me to build my own web > UI around it. I do this in one of two ways. One is to have a separate > "view" tier web front end. I do this with a simple struts/mvc style web > app with JSPs for the view of the web site. When a user clicks on a link > that is handled by an action, in that action code I make the REST call to > the API, and do all the work inside of the action as needed. The other way > I might do it, is to deploy the API within my own UI web app archive. When > I do this, I utilize prototype to make ajax calls directly to the REST > api, specifying headers, method type, etc as needed. It really depends on > which way you want to deploy your API and your front end UI. > > I am a little confused on why end users must have some > www.mysite.com/peter as their URL? That seems to me that you're passing > the REST API uris up to the end user of your UI. I don't quite understand > why that is needed or wanted. I don't imagine anyone else using your API > from another site, or say desktop or mobile client, would somehow use > your API URIs within their UI representation of your API. Is there some > reason you must incorporate the REST URIs into the front end UI your users > would use?
Hi,
I think we must be thinking of different ways of doing this. To me, the front action/mvc/struts/spring/whatever is an API consumer. Like you said, take away the URL line. It's responsible for making API calls and translating the responses from the API into some form of valid HTML.. and likewise, when the user clicks on some link, your front-end should know that <a href="mysite.com?action=GetOrders&user=kevin&id=12345&filterBy=name,phone"> should translate to something like GET http://myservice.com/users/kevin/orders/12345?filterby=name,phone.
Of course the filterBy is just a silly example of adding a query string to it. The point is, if you build your API to be an API and not try to be an API that is ALSO going to be a web front end... you got to think along the API frame of mind. It sounds like you are (or were) planning on building the API first, which seems logical to me. Once the API is done, you build your web front end separately. It will "translate" xml/json responses from the API into valid HTML info. Let's say you get a list of orders that you want to display nicely. Your initial call GET /orders returns 5 <order id="..." name="..."/> elements. Now you want to do something with that to make it look nice for the end user. In my mind, I do NOT care what the URL looks like to the end user..they just see links, images, etc that they interact with. The makeup of that link means nothing to your API or the user. So, you want to represent a list of orders in HTML to the user. You have some
sort of servlet, action, etc that handles when the user clicks that link. The click they do does NOT go straight back to the API layer. You will intercept everything they click on, to translate it as necessary into a valid REST API call. They may get some HTML like:
<a href="http://myuisite.com?action=SelectedOrder&orderId=12">Order 12</a>
<a href="http://myuisite.com?action=SelectedOrder&orderId=13">Order 13</a>
when they click one of those, the servlet or action or bean you have mapped to handle the SelectedOrder mapping, gets passed in the orderid request parameter. You pull the ID from that, you then make a call to the REST API GET /orders/12 filling in the Authorization header, etc as needed to satisfy the REST API. In response, you may get some order details. From that, you either populate a JAX-RS bean (which is what I do when I have a website "client" call a REST API service), or what have you. You use that response data to then return the JSP, JSF, etc..whatever you have that finally generates the HTML the end user sees.
So.. maybe I am way off.. but that is how I have and will continue to do it until I am told otherwise.
You'll have to forgive me, but I am not entirely sure what a priori knowledge means? If it's what I think it means, you're saying that when you call /orders to get the list of orders.. it would return you orders AND link info regarding those individual orders. So like above, let's say it returns:
<orders>
<order id="12" name="SomeOrder" href="MyService.com/orders/12"/>
...
</orders>
Now, when you return this as HTML like I showed above, then the user clicks, your web site will still maintain the "state" of the <orders> items it got back. Order 12 is selected, you search thru your <order> items for 12, you find it, you pull the HREF out of it and you use that URI to make the call to the REST API. I don't see how this is prior knowledge.. the REST API gave you this on the last call. Your web site is basically translating it to HTML and maintaining that state across requests. But that is ok.. it's the "client" and it's allowed to maintain state, be it in a HttpSession or using URLRewriting.
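This stateful lookup can be sketched in a few lines. The data and names below are illustrative (a dict stands in for HttpSession state); the point is that a click carrying only "orderId=12" is resolved from hrefs the API handed out on the previous call, not from a hard-coded URL template:

```python
# Sketch of the pattern described above: keep the hypermedia-supplied
# hrefs from the last API response and look them up by ID on the next
# click. Data is invented for illustration.
orders_response = [
    {"id": "12", "name": "SomeOrder", "href": "http://MyService.com/orders/12"},
    {"id": "13", "name": "OtherOrder", "href": "http://MyService.com/orders/13"},
]

# state kept across requests, e.g. in the user's session
session = {order["id"]: order["href"] for order in orders_response}

def resolve(order_id: str) -> str:
    # the href came from hypermedia on the last call, not from a template
    return session[order_id]
```

The design trade-off is the one discussed later in the thread: the web tier must now hold per-user state for every list of orders it has shown.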
I am personally a fan of loading up JavaScript and passing various state info to the javascript tier, so as to avoid any HttpSession data. Every single HREF/JavaScript link would then append the "stateful" info back on to the link when making the ajax call. It really depends though on the amount of data as well. If you got hundreds of users loading up 1000's of orders each, you are going to have to scale your web tier to maintain all that. I still argue that this is the job of a desktop app.. The browser is just an extension of the "client side", and sending that info back for the browser to keep in memory seems trivial to me.
As for responses from the API containing link info to various resources, I've not seen an example of that before, but I would say the web site "client" maintains response data that came back, and maybe provides a few options to the end user, be it a link to the order, or a link elsewhere. You can only present to the end user the info you got back from the API though. If there are alternative links available, you can provide more links to the end user. Still, the same thing happens. Say you get orders, and history info back. So you set up a different servlet or action to handle the user clicking on a history link. Your web tier server still has all the response data from the API, either in the HttpSession (so that any servlet/action can access it), or passed back to the web client UI so that they can then pass it along on the next request. Either way, you have access to that info so you can make the appropriate REST API call at that point.
Not sure that covers everything you asked about.
On Thu, Mar 4, 2010 at 11:43 AM, Jørn Wildt <jw@...> wrote: Hopefully this explains why I insist on passing complete URLs around - while > at the same time I really don't like it because of the ugly URLs. > > I try to make my client apps pretty smart, but adding a sense of aesthetics (so they can "hold their nose" figuratively when invoking an "ugly" URL) is beyond even my capabilities :-). More seriously, who cares what the URL looks like, when you should really be treating them (on the client side) as opaque strings? From the server perspective, having a predictable URL template could be considered a security risk. I would imagine that applications very concerned about security are going to be minting random-string URLs (sometimes even with a limited lifetime) anyway. Think about the confirmation message many web sites send when you first sign up, with some random string of characters included that, if you try to invoke it a year from now, will simply return a 404. It's perfectly reasonable for the server application that *created* a URL to impute some sort of meaning to the URL structure, and thus be tightly bound to it. But, from the client perspective, that's an irrelevant implementation detail. > /Jørn > Craig PS: I agree with you on passing complete URLs around, by the way ... nothing about REST requires that all of the relevant URLs be provided by the same server. At the same time, a robust client should support relative URLs when the media type of the response containing them has well defined semantics for interpreting them (as HTML does, for example).
Craig, you are right, I know. Well, my logical mind knows. The emotional part of mind has a different opinion. Damn :-) /Jørn --- In rest-discuss@yahoogroups.com, Craig McClanahan <craigmcc@...> wrote: > > On Thu, Mar 4, 2010 at 11:43 AM, Jørn Wildt <jw@...> wrote: > > Hopefully this explains why I insist on passing complete URLs around - while > > at the same time I really don't like it because of the ugly URLs. > > > > > I try to make my client apps pretty smart, but adding a sense of aesthetics > (so they can "hold their nose" figuratively when invoking an "ugly" URL) is > beyond even my capabilities :-). > > More seriously, who cares what the URL looks like, when you should really be > treating them (on the client side) as opaque strings? > > From the server perspective, having a predictable URL template could be > considered a security risk. I would imagine that applications very > concerned about security are going to be minting random-string URLs > (sometimes even with a limited lifetime) anyway. Think about the > confirmation message many web sites send when you first sign up, with some > random string of characters included that, if you try to invoke it a year > from now, will simply return a 404. > > It's perfectly reasonable for the server application that *created* a URL to > impute some sort of meaning to the URL structure, and thus be tightly bound > to it. But, from the client perspective, that's an irrelevant > implementation detail. > > > > /Jørn > > > > Craig > > PS: I agree with you on passing complete URLs around, by the way ... > nothing about REST requires that all of the relevant URLs be provided by the > same server. At the same time, a robust client should support relative URLs > when the media type of the response containing them has well defined > semantics for interpreting them (as HTML does, for example). >
On Thu, Mar 4, 2010 at 9:52 PM, Jorn Wildt <jw@...> wrote: > > > Craig, you are right, I know. Well, my logical mind knows. The emotional > part of mind has a different opinion. Damn :-) > > I know the feeling. I also know that (from a client perspective) I *really* do not want to understand how the sausage factory (i.e. the server) decided to mint the URLs for this application. Too. Much. Information. :-) > /Jørn > Craig PS: But it's all good in the long run ... if the server application changes its approach to URL creation, my client is still working fine (as long as I just faithfully follow the URLs that I'm given), while all those URL-template folks are off reprogramming to whatever the template du jour turns out to be. PPS: When I'm the server side guy instead of the client side guy, I think "you mean I can mint obtuse URLs that mean something to *me* even if they don't mean anything to the client? Sweet."
Kevin, you are describing exactly what is the most intuitive and most widely implemented way to consume a RESTful API. Your example is exactly what I am thinking of.
Unfortunately, in a strictly RESTful way, it is also wrong. It completely ignores the HATEOAS principle.
See for instance Roy Fielding's (now famous) rant: http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven. Or the Starbucks example: http://www.infoq.com/articles/webber-rest-workflow. Or Subbu's post: http://www.subbu.org/blog/2008/09/on-linking-part-1
Consider this situation: you are surfing the web and a friend of yours comes by with an Amazon order ID. "Could you please check this order for me?" he says. "Sure!" you answer. Now what do you do? I am quite sure that you are not going to type the direct Amazon order URL into your browser - how would you know what it should look like? But this is equivalent to your suggested REST API consumer - when you pass it an order ID, it utilizes its a priori ("at design time") knowledge of the REST API's URL structure.
On the user facing web you don't expect to have any knowledge of URL structures. Instead you follow links, do searches, and let the end server calculate the links for you. This is the missing link in many REST APIs: they force the client to completely understand the URL structure in order to use the API. This in turn makes the client brittle and will cause it to break the first time the REST API changes its structure.
So for these reasons I cannot see any solution other than handing resource URLs around as parameters in my website URLs.
What I am trying to do is to point out one of the consequences of consuming a truly RESTful API. Namely that your API consumers cannot work with simple item IDs, but have to pass complete resource URLs around.
/Jørn
Jørn,
maybe not exactly to the point of your original question, but...
On Mar 5, 2010, at 6:50 AM, Jorn Wildt wrote:
> What I am trying to do is to point out one of the consequences of consuming a truly RESTful API. Namely that your API consumers cannot work with simple item IDs, but have to pass complete resource URLs around.
It is ok to use IDs to refer to server side 'things' as long as the client discovers them in hypermedia. For example, an HTML form with a bunch of IDs presented in <option> elements in a select control does not violate the hypermedia constraint. IOW, you need not put URIs in the options just to be RESTful.
Likewise, it is ok to have a URI template like /users/{userId} if the link relation defines the meaning of the userId param and if the client discovers user IDs in received representations.
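This rule can be sketched in a few lines of Python. The template and IDs below are invented for illustration; the essential constraint is that the client only expands IDs it actually discovered in a received representation (e.g. <option> values), never IDs it made up:

```python
# Sketch of "URI template + discovered IDs": expansion is fine as long
# as the userId values came from hypermedia. Template and IDs are
# illustrative, not from any real API.
TEMPLATE = "http://rest.mysite.com/users/{userId}"  # published with the link relation

discovered_ids = {"42", "77"}  # harvested from the last representation

def user_url(user_id: str) -> str:
    # refuse IDs the client did not discover in hypermedia
    if user_id not in discovered_ids:
        raise ValueError("ID was not discovered in hypermedia")
    return TEMPLATE.format(userId=user_id)
```

Under this discipline the template itself is part of the link relation's contract, so expanding it does not amount to out-of-band URL guessing.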
Jan
-----------------------------------
Jan Algermissen, Consultant
NORD Software Consulting
Mail: algermissen@...
Blog: http://www.nordsc.com/blog/
Work: http://www.nordsc.com/
-----------------------------------
> maybe not exactly to the point of your original question, but...
It certainly is right to the point :-)
> It is ok to use IDs to refer to server side 'things' as long as the client discovers them in hypermedia. For example, an HTML form with a bunch of IDs presented in <option> elements in a select control does not violate the hypermedia constraint. IOW, you need not put URIs in the options just to be RESTful.
This only works because your website behind the form knows how to map IDs from the <option> element to resource URLs on its backend REST API. When the user posts the form I assume the request is something like this:
POST /show-item.aspx?id=1234 HTTP/1.1
How does the webserver know how to map "1234" into its backend resource http://rest.mysite.com/items/1234?
Your website is certainly RESTful on the user facing part. But its interaction with the backend REST API is not.
Notice that we don't have this issue when our backend is a database. In that case there is a really tight coupling between the website and the DB - the website knows exactly how to fetch items from the DB using IDs only.
> Likewise, it is ok to have a URI template like /users/{userId} if the link relation defines the meaning of the userId param and if the client discovers user IDs in received representations.
Okay, that might work. Let's explore that solution. Assume we have a REST API for searching our items. The search API has one official URL that lets us search items by name:
http://rest.mysite.com/items?name={name}
The search returns an ATOM feed with found items (ATOM entries):
<entry>
<title>Playstation</title>
<id>1234</id>
<link rel="self">http://rest.mysite.com/items/1234</link>
</entry>
<entry>
<title>Commodore 64</title>
<id>1234</id>
<link rel="self">http://rest.mysite.com/olditems/1234</link>
</entry>
Check the "self" links. The resource format is not the same for the two items. So I cannot just grab the ID and render HTML anchors with them since the "commodere 64" link would be wrong.
I want to render the search result as an HTML list with links to my user facing web:
<ul>
<li>
<a href="http://www.mysite.com/display-item.aspx?id=1234">Playstation</a>
</li>
<li>
<a href="http://www.mysite.com/display-item.aspx?id=1234">Commodore 64</a>
</li>
</ul>
Notice how my two items suddenly have the same user facing URL!
Now let's add URL templates in the result as you suggest. I do not know of such a thing, so I'll create my own:
<entry>
<title>Playstation</title>
<id>1234</id>
<link-template rel="self" param="id">http://rest.mysite.com/items/{id}</link>
<link rel="self">http://rest.mysite.com/items/1234</link>
</entry>
<entry>
<title>Commodore 64</title>
<id>1234</id>
<link-template rel="self" param="id">http://rest.mysite.com/olditems/{id}</link-template>
<link rel="self">http://rest.mysite.com/olditems/1234</link>
</entry>
Now, having this template information, we can construct resource URLs ourselves. But to do so we must pass the template together with the ID; otherwise our display-item page won't know how to generate the resource URL.
So the HTML becomes:
<ul>
<li>
<a href="http://www.mysite.com/display-item.aspx?id=1234&template=http://rest.mysite.com/items/{id}">Playstation</a>
</li>
<li>
<a href="http://www.mysite.com/display-item.aspx?id=1234&template=http://rest.mysite.com/olditems/{id}">Commodore 64</a>
</li>
</ul>
This works ... but I would rather just include the real resource URL instead of both the ID and the template.
So again I must conclude that a consequence of using a REST backend is that we must pass complete resource URLs around in our website URLs. Adhering strictly to a backend REST API means losing the ability to use short URLs in the user-facing frontend.
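Jørn's conclusion can be made concrete with a small sketch: if the frontend must not assume anything about the backend's URL structure, the only opaque handle it can embed in its own URLs is the complete, percent-encoded resource URL. The function names and the frontend URL below are hypothetical, not from the thread:

```python
from urllib.parse import parse_qs, quote, urlsplit

def display_link(backend_url: str, title: str) -> str:
    # Embed the whole backend resource URL, percent-encoded, as a query
    # value in the frontend URL instead of a bare (ambiguous) item ID.
    return ('<a href="http://www.mysite.com/display-item.aspx?resource=%s">%s</a>'
            % (quote(backend_url, safe=""), title))

def resource_from_request(frontend_url: str) -> str:
    # The display page recovers the opaque backend URL without knowing
    # anything about how the backend structures its URIs.
    return parse_qs(urlsplit(frontend_url).query)["resource"][0]

# The two items with the same ID now yield distinct frontend links.
ps_link = display_link("http://rest.mysite.com/items/1234", "Playstation")
c64_link = display_link("http://rest.mysite.com/olditems/1234", "Commodore 64")
```

The frontend URLs get long, which is exactly the trade-off described here: opacity toward the backend is bought at the cost of short URLs.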
/Jørn
On Mar 5, 2010, at 9:48 AM, Jørn Wildt wrote:
>> maybe not exactly to the point of your original question, but...
>
> It certainly is right to the point :-)
>
>> It is ok to use IDs to refer to server side 'things' as long as the client discovers them in hypermedia. For example, an HTML form with a bunch of IDs presented in <option> elements in a select control does not violate the hypermedia constraint. IOW, you need not put URIs in the options just to be RESTful.
>
> This only works because your website behind the form knows how to map IDs from the <option> element to resource URLs on it's backend REST API. When the user posts the form I assume the result is something like this:
>
> POST /show-item.aspx?id=1234 HTTP/1.1
>
> How does the webserver know how to map "1234" into it's backend resource http://rest.mysite.com/items/1234?
Eh.....from a form!? It must act as a RESTful client to the backend RESTful service. Otherwise you would be in portal-land....
>
> Your website is certainly RESTful on the user facing part. But it's interaction with the backend REST API is not.
>
Then make it so
> Notice that we don't have this issue when our backend is a database. In that case there is a really tight coupling between the website and the DB - the website know exactly how to fetch items from the DB using IDs only.
>
>
>> Likewise, it is ok to have a URI template like /users/{userId} if the link relation defines the meaning of the userId param and if the client discovers user IDs in received representations.
>
> Okay, that might work. Lets explore that solution. Assume we have a REST API for searching our items. The search API has one official URL that lets us search items by name:
>
> http://rest.mysite.com/items?name={name}
>
> The search returns an ATOM feed with found items (ATOM entries):
>
> <entry>
> <title>Playstation</title>
> <id>1234</id>
> <link rel="self">http://rest.mysite.com/items/1234</link>
> <entry>
>
> <entry>
> <title>Commodore 64</title>
> <id>1234</id>
> <link rel="self">http://rest.mysite.com/olditems/1234</link>
> <entry>
>
> Check the "self" links. The resource format is not the same for the two items. So I cannot just grab the ID and render HTML anchors with them since the "commodere 64" link would be wrong.
Then the 'form' that the backend gave to the service is simply wrong. In your example, there are two different construction rules (forms) for different contexts. Why would you do that? Why not just one form? The server can sort out the old vs. new stuff behind the interface.
(Note: your link syntax is wrong; it should be href="".)
>
> I want to render the search result as an HTML list with links to my user facing web:
>
> <ul>
> <li>
> <a href="http://www.mysite.com/display-item.aspx?id=1234">Playstation</a>
> </li>
> <li>
> <a href="http://www.mysite.com/display-item.aspx?id=1234">Commodore 64</a>
> </li>
> </ul>
>
> Notice how my two items suddenly have the same user facing URL!
And?
>
> Now lets add URL templates in the result as you suggest. I do not know of such a thing, so I'll create my own:
>
> <entry>
> <title>Playstation</title>
> <id>1234</id>
> <link-template rel="self" param="id">http://rest.mysite.com/items/{id}</link>
> <link rel="self">http://rest.mysite.com/items/1234</link>
> <entry>
>
> <entry>
> <title>Commodore 64</title>
> <id>1234</id>
> <link-template rel="self" param="id">http://rest.mysite.com/olditems/{id}</link>
> <link rel="self">http://rest.mysite.com/olditems/1234</link>
> <entry>
You only need one template, not one per entry. Also, I would put that into some service document and not into the search result.
>
> Now, having this template information, we can construct resource URLs our self. But to do so we must pass the template together with the ID, otherwise our display-item webpage won't know how to generate the resource URL.
>
> So the HTLM becomes:
>
> <ul>
> <li>
> <a href="http://www.mysite.com/display-item.aspx?id=1234&template=http://rest.mysite.com/items/{id}">Playstation</a>
> </li>
> <li>
> <a href="http://www.mysite.com/display-item.aspx?id=1234&template=http://rest.mysite.com/olditems/{id}">Commodore 64</a>
> </li>
> </ul>
>
> This works ... but I would rather just include the real resource URL instead of both the ID and the template.
>
> So again I must conclude that a consequence of using a REST backend, is that we must pass complete resource URLs around in our website URLs. Adhering stricly to a backend REST API means loosing the ability to use short URLs in the user facing frontend.
There is something wrong in your overall thinking.
You'd just have the front Web app expose a RESTful API. It would then interact RESTfully with a RESTful backend. It is simply 'wrong' to relate the URIs of the backend in any way to the URIs of the frontend. You do not even have to make use of the same IDs as long as the frontend service keeps its own mapping table.
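Jan's mapping-table idea could look roughly like the sketch below, assuming the frontend is allowed to keep state (all names are hypothetical):

```python
class ResourceMap:
    """Hypothetical frontend-side mapping table: short frontend IDs on
    the outside, opaque backend resource URLs (discovered in hypermedia,
    never constructed) on the inside."""

    def __init__(self):
        self._by_id = {}
        self._next = 1

    def register(self, backend_url: str) -> str:
        # Mint a short frontend ID for a resource URL found in a feed.
        frontend_id = str(self._next)
        self._next += 1
        self._by_id[frontend_id] = backend_url
        return frontend_id

    def resolve(self, frontend_id: str) -> str:
        return self._by_id[frontend_id]

# The two entries whose backend IDs collide get distinct frontend IDs,
# so the duplicate display-item URL problem disappears.
m = ResourceMap()
playstation = m.register("http://rest.mysite.com/items/1234")
commodore = m.register("http://rest.mysite.com/olditems/1234")
```

Note the trade-off: this keeps frontend URLs short, but only works if the frontend can hold that table somewhere.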
HTH,
Jan
>
> /Jørn
>
>
>
>
-----------------------------------
Jan Algermissen, Consultant
NORD Software Consulting
Mail: algermissen@...
Blog: http://www.nordsc.com/blog/
Work: http://www.nordsc.com/
-----------------------------------
> > POST /show-item.aspx?id=1234 HTTP/1.1
> >
> > How does the webserver know how to map "1234" into its backend resource http://rest.mysite.com/items/1234?
>
> Eh.....from a form!? It must act as a RESTful client to the backend RESTful service. Otherwise you would be in portal-land....

Ok. Sorry. I thought you were talking about a user facing form, not a form on the REST API. That makes sense. I assume you discover the form at runtime, not design time. Right? But you have to know at design time that <input name="itemname"/> is the input you want to put the item's name in. Right? That's great - we have to know something about the parameters at design time, but we avoid knowing anything about URLs until runtime.

> > Check the "self" links. The resource format is not the same for the two items. So I cannot just grab the ID and render HTML anchors with them since the "commodere 64" link would be wrong.
>
> Then the 'form' that the backend gave to the service is simply wrong. In your example, there are two different construction rules (forms) for different context. Why would you do that? Why not just one form? The server can sort out the old vs new stuff behind the interface.

The server can only sort out the old vs. new stuff if it somehow encodes new/old in the ID. It could be "old1234" vs. "new1234". That works. But then you are starting to invent your own URI format ... which would not be necessary if you simply included the whole resource URL.

> You'd just have the front Web app that exposes a RESTful API. It would then interact RESTfully with a RESTful backend. It is simply 'wrong' to relate the URIs of the backend in any way to the URIs of the frontend. You do not even have to make use of the same IDs as long as the frontend service keeps its own mapping table.

But the whole point is to build a database-less website that completely depends on a REST API for it to work. There is no "state" in the website, no place to save that mapping table. That's the path I am exploring.
By doing so, my website will be open by design instead of adding REST APIs as an afterthought. It will be possible to write a new .NET/Java client without changing a single line of code in the website. Instead of building the REST API on top of the website, I want to build my clients on top of the API - and by client I mean HTML, .NET and Java alike.

Anyway, I think I have used up my amount of time from you guys on this. Thanks a lot. It has been interesting. You are of course welcome to reply, but there is no expectation of it on my part.

/Jørn
> This only works because your website behind the form knows how to map IDs from
> the <option> element to resource URLs on it's backend REST API. When the user
> posts the form I assume the result is something like this:
>
> POST /show-item.aspx?id=1234 HTTP/1.1
>
> How does the webserver know how to map "1234" into it's backend resource
> http://rest.mysite.com/items/1234?

Isn't that what libraries are for? Java has Jersey (JAX-RS) to map from your URI to method calls and a URI builder to create state links.

Mark W.
Hi,
If I understand HATEOAS correctly, if I make a /orders call, get back a list of orders with links on what I can do next, the server at this point has no "state" memory.. it has no idea after it sent back the response, that it just told me "here is what you can do now". I could come back 3 years later, and use those URIs it sent back to me and it should work. Is that not correct?
As for giving it to someone else, well, they would first have to pass authentication as far as I know. But that does raise an interesting question. Using your example, how would the RESTful api "prevent" any call being made from anyone if they authenticate? That is, if you and I are both authenticated users, and you pull a /orders with your info, and it gives you a list back for your orders, you could then give me the URIs it returns, and I could make those same calls on your behalf, using my authentication. Correct? My only guess is that every single call made to a RESTful API should perform validation checks, which would be expensive I suppose if every time I tried to call one of your URIs for a specific order, the RESTful api had to verify that the user making the call is the same user that received the URIs in the first place. I am not sure if that makes sense, rereading it, but I am curious how this sort of issue is prevented. Generally, you wouldn't hand me your URIs that you got back, but a man-in-the-middle could easily catch those URIs (assuming that person defeated SSL when going over SSL). So how is such a scenario handled.. or should be handled by REST apis?
Consider this situation: you are surfing the web and a friend of yours comes by with an Amazon order ID. "Could you please check this order for me?" he says. "Sure!" you answer. Now what do you do? I am quite sure that you are not going to type the direct Amazon order URL into your browser - how would you know what it should look like? But this is equivalent to your suggested REST API consumer - when you pass it an order ID, it utilizes its a priori ("at design time") knowledge of the REST API's URL structure.
On the user facing web you don't expect to have any knowledge of URL structures. Instead you follow links, do searches, and let the end server calculate the links for you. This is the missing link in many REST APIs: they force the client to completely understand the URL structure in order to use the API. This in turn makes the client brittle and will cause it to break the first time the REST API changes structure.
So for these reasons I cannot see other solutions than handing resource URLs around as parameters in my website URLs.
What I am trying to do is to point out one of the consequences of consuming a truly RESTful API: namely, that your API consumers cannot work with simple item IDs, but have to pass complete resource URLs around.
/Jørn
Hi Keven,

> Generally, you wouldn't hand me your URIs that you got back, but a man-in-the-middle could easily
> catch those URIs (assuming that person defeated SSL when going over SSL). So how is such a scenario
> handled.. or should be handled by REST apis?

The method I see most often is to use the security in HTTP and a token system like OAuth. For example:

http://developers.sun.com/identity/reference/techart/restwebservices.html

Mark W.
On Fri, Mar 5, 2010 at 7:28 AM, Mark Wonsil <mark_wonsil@...> wrote:
>
> Hi Keven,
>
> > Generally, you wouldn't hand me your URIs that you got back, but a man-in-the-middle could easily
> > catch those URIs (assuming that person defeated SSL when going over SSL). So how is such a scenario
> > handled.. or should be handled by REST apis?
>
> The method I see most often is to use the security in HTTP and a token system like OAuth. For example:
>
> http://developers.sun.com/identity/reference/techart/restwebservices.html
>
> Mark W.

For whichever security scheme you choose, yes you must check authN/authZ on every single request (no different from any webapp in this respect, although webapps typically do authN implicitly based on the inclusion of a valid session id token). That way, if you give me a URI to one of your orders and I try to access it with my credentials, I'll get either a 403 (forbidden) or a 404 (if the server doesn't even want me to know that the URI happens to reference a valid order, just not one of mine).

Craig
As long as DTOs are used in rendering a representation back to the client, there is no inherent coupling introduced to that client. There's coupling to the representation format, which is the coupling everybody is comfortable with on ReST architectures. Don't throw the DTOs out with the bath water, as they say.

> To: andjarnic@...
> CC: craigmcc@...; algermissen1971@...; fgaucho@...; rest-discuss@yahoogroups.com
> From: guilherme.silveira@...
> Date: Mon, 1 Mar 2010 13:44:46 -0300
> Subject: Re: [rest-discuss] Differentiating HTTP-based APIs
>
> Hello guys
>
> > Are you saying that the J2EE patterns are no longer applicable in general, or for your specific use case?
>
> I believe if our client is a REST client, DTO as we used to do does not make sense.
>
> According to an old post of Roy's: "A REST API should never have typed resources that are significant to the client."
> http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven
>
> It goes on as: "Specification authors may use resource types for describing server implementation behind the interface, but those types must be irrelevant and invisible to the client. The only types that are significant to a client are the current representation's media type and standardized relation names."
>
> A typed resource (as this DTO provides to your client) implies tighter coupling between both sides, the opposite direction of the freedom that you want.
>
> > Yes, true.. but if you combine the two models into one, while possible, you are requiring your front tier to know about ejb/back end stuff. An even better example of why you would avoid this.. if you were going to use those same XSD generated JAXB classes in a Jersey client (or java client).. and you have the ejb entity annotations in them as well, your client side now has to have the ejb classes to compile.
>
> You are right.. that's why you should not do that. But that is not what I meant; I probably did not leave it clear enough. Session beans imply tight coupling by nature: they require your clients to know and share the same interface that your server knows, the opposite direction that REST goes; instead of relation names and media types, you are using Java interfaces and classes. Taking out session beans and adding REST in its place:
>
>        (dto)        dto       s-bean     e-bean
> CLIENT ---> WEB TIER --> APP TIER --> DB TIER
>
> becomes:
>
> (independent model)   resource from here on...
> CLIENT --> WEB TIER --> APP TIER --> DB TIER
>
> What I meant is that it should be your framework's responsibility to map the resource representation to your model... while it does not, one requires DTOs, going back to non-RESTful architectures.
>
> DTOs are also typically anemic, and because Java classes are closed (unlike Ruby ones), the client will have to live with an anemic DTO representation of his resource.
>
> That's why I believe DTO and REST should not be together...
>
> Regards
>
> Guilherme Silveira
> Caelum | Ensino e Inovação
> http://www.caelum.com.br/
Hey guys,
I do realize you authenticate every request, you taught me that Craig. ;) What I meant is, do I ALSO look at the /orders/<id> (id value) and compare it to the auth header id to make sure it too is valid? If for example Craig and I can both authenticate to the same service, and I give Craig my /orders/<id> (with my auth ID), when Craig tries to make the call to say, delete the order, his authentication will pass the Auth check to be able to access the method/API call, but as far as I can tell, the service would also need to check the /orders/<id> to make sure it matches the authenticated users id.
In order for Craig to delete my order, I would have to have given him some special token the service looks at and knows that he is allowed to delete my order on my behalf. I think OAuth calls this a 3-legged auth. So, my question, to reiterate is HOW does a service, without a 3-legged auth, know if Craig or I am actually deleting my order without checking that the ID passed in the URI matches the ID of the user making the call? OR, more to the point of my question, should EVERY API call not only authenticate a user, but validate that for a given call if an ID is passed in the URI that it matches the authenticated user's id. I think I repeated myself. :D
--- On Fri, 3/5/10, Craig McClanahan <craigmcc@...> wrote:
From: Craig McClanahan <craigmcc@...>
Subject: Re: [rest-discuss] Re: Thoughts about URLs for a REST driven website
To: "Mark Wonsil" <wonsil@...m>
Cc: "Kevin Duffey" <andjarnic@...>, rest-discuss@yahoogroups.com, "Jorn Wildt" <jw@...>
Date: Friday, March 5, 2010, 8:48 AM
On Fri, Mar 5, 2010 at 7:28 AM, Mark Wonsil <mark_wonsil@yahoo.com> wrote:
Hi Keven,
> Generally, you wouldn't hand me your URIs that you got back, but a man-in-the-middle could easily
> catch those URIs (assuming that person defeated SSL when going over SSL). So how is such a scenario
> handled.. or should be handled by REST apis?
The method I see most often is to use the security in HTTP and a token system like OAuth. For example:
http://developers.sun.com/identity/reference/techart/restwebservices.html
Mark W.
For whichever security scheme you choose, yes you must check authN/authZ on every single request (no different from any webapp in this respect, although webapps typically do authN implicitly based on the inclusion of a valid session id token). That way, if you give me a URI to one of your orders and I try to access it with my credentials, I'll get either a 403 (forbidden) or a 404 (if the server doesn't even want me to know that the URI happens to reference a valid order, just not one of mine).
Craig
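Craig's rule, re-check authN/authZ on every request and optionally answer 404 instead of 403 to hide a resource's existence, might be sketched like this (the data and names are hypothetical):

```python
# Hypothetical per-request authorization check. Credentials accompany
# every request; nothing is remembered from earlier responses.
ORDERS = {"/orders/1": "kevin", "/orders/2": "craig"}  # URI -> owner

def handle_get(path: str, authenticated_user: str,
               hide_existence: bool = True) -> int:
    owner = ORDERS.get(path)
    if owner is None:
        return 404                       # no such order
    if owner != authenticated_user:
        # 404 hides that the URI is valid; 403 admits the order exists.
        return 404 if hide_existence else 403
    return 200
```

Whether to leak existence via 403 is a policy choice per resource, exactly as Craig describes.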
It sounds like you are asking about how to handle authorization (not authentication). If so, that's not really a REST issue, but an implementation detail for HTTP apps.

In my HTTP applications I refer to a list of URIs (or regexp-based URI templates) plus a list of HTTP methods for each authenticated user; storage and representation can vary to taste. Consider a user-centered representation of an application's access control list for a user:

<user id="mamund">
<acl href="/(.*)" methods="GET,HEAD,OPTIONS" /> <!-- default access -->
<acl href="/admin/(.*)" methods="!"/> <!-- deny all methods -->
<acl href="/mamund/(.*)" methods="*"/> <!-- allow all methods -->
</user>

All that is left is to compare the incoming request's METHOD + URI against the above list. Any access failures result in a "403 Forbidden" response.

Another possible (non HTTP-centric) public ontology for access control can be found here:
http://dig.csail.mit.edu/2009/Papers/ISWC/rdf-access-control/paper.pdf

mca
http://amundsen.com/blog/

On Fri, Mar 5, 2010 at 20:09, Kevin Duffey <andjarnic@...> wrote:
>
> Hey guys,
>
> I do realize you authenticate every request, you taught me that Craig. ;) What I meant is, do I ALSO look at the /orders/<id> (id value) and compare it to the auth header id to make sure it too is valid? If for example Craig and I can both authenticate to the same service, and I give Craig my /orders/<id> (with my auth ID), when Craig tries to make the call to, say, delete the order, his authentication will pass the Auth check to be able to access the method/API call, but as far as I can tell, the service would also need to check the /orders/<id> to make sure it matches the authenticated user's id.
>
> In order for Craig to delete my order, I would have to have given him some special token the service looks at and knows that he is allowed to delete my order on my behalf. I think OAuth calls this a 3-legged auth. So, my question, to reiterate, is HOW does a service, without a 3-legged auth, know if Craig or I am actually deleting my order without checking that the ID passed in the URI matches the ID of the user making the call? OR, more to the point of my question, should EVERY API call not only authenticate a user, but validate that for a given call, if an ID is passed in the URI, it matches the authenticated user's id. I think I repeated myself. :D
>
> --- On Fri, 3/5/10, Craig McClanahan <craigmcc@...> wrote:
>
> From: Craig McClanahan <craigmcc@...>
> Subject: Re: [rest-discuss] Re: Thoughts about URLs for a REST driven website
> To: "Mark Wonsil" <wonsil@...>
> Cc: "Kevin Duffey" <andjarnic@...>, rest-discuss@yahoogroups.com, "Jorn Wildt" <jw@...>
> Date: Friday, March 5, 2010, 8:48 AM
>
> On Fri, Mar 5, 2010 at 7:28 AM, Mark Wonsil <mark_wonsil@...> wrote:
>>
>> Hi Keven,
>>
>> > Generally, you wouldn't hand me your URIs that you got back, but a man-in-the-middle could easily
>> > catch those URIs (assuming that person defeated SSL when going over SSL). So how is such a scenario
>> > handled.. or should be handled by REST apis?
>>
>> The method I see most often is to use the security in HTTP and a token system like OAuth. For example:
>>
>> http://developers.sun.com/identity/reference/techart/restwebservices.html
>>
>> Mark W.
>
> For whichever security scheme you choose, yes you must check authN/authZ on every single request (no different from any webapp in this respect, although webapps typically do authN implicitly based on the inclusion of a valid session id token). That way, if you give me a URI to one of your orders and I try to access it with my credentials, I'll get either a 403 (forbidden) or a 404 (if the server doesn't even want me to know that the URI happens to reference a valid order, just not one of mine).
>
> Craig
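mike's METHOD + URI comparison against a per-user ACL could be sketched as below. One assumption not stated in his post: more specific entries are matched before the catch-all default, so the deny rule wins over `/(.*)`:

```python
import re

# ACL entries mirroring mike's example: (URI regex, allowed methods).
# "*" allows every method, "!" denies all of them. Most specific first.
MAMUND_ACL = [
    (r"/admin/(.*)", "!"),           # deny all methods
    (r"/mamund/(.*)", "*"),          # allow all methods
    (r"/(.*)", "GET,HEAD,OPTIONS"),  # default access
]

def allowed(method: str, uri: str, acl) -> bool:
    # Compare the incoming request's METHOD + URI against the list;
    # the first entry whose regex matches the URI decides.
    for pattern, methods in acl:
        if re.fullmatch(pattern, uri):
            if methods == "!":
                return False
            return methods == "*" or method in methods.split(",")
    return False   # no entry matched: treat as a 403 Forbidden
```

Any failure here would translate to the "403 Forbidden" response mike mentions.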
On Fri, Mar 5, 2010 at 6:04 PM, mike amundsen <mamund@...> wrote:
>
>
> It sounds like you are asking about how to handle authorization (not
> authentication). If so, that's not really a REST issue, but an
> implementation detail for HTTP apps.
>
> In my HTTP applications I refer to a list of URIs (or regexp-based URI
> templates) plus a list of HTTP methods for each authenticated user; storage
> and representation can vary to taste. Consider a user-centered
> representation of an application's access control list for a user:
>
> <user id="mamund">
> <acl href="/(.*)" methods="GET,HEAD,OPTIONS" /> <!-- default access -->
> <acl href="/admin/(.*)" methods="!"/> <!-- deny all methods -->
> <acl href="/mamund/(.*)" methods="*"/> <!-- allow all methods -->
> </user>
>
> All that is left is to compare the incoming request's METHOD + URI against
> the above list. Any access failures result in a "403 Forbidden" response.
>
This approach works if a URL template is either valid or not for a particular user, but that doesn't cover Kevin's use case where "/orders/{id}" is valid for both of us, but we should not be able to see each other's orders.
As you mentioned, this is not a REST issue at all, because exactly the same
scenario applies to a traditional webapp where a crafty user might try to
hand modify the URL that displays an order's content, plugging in someone
else's orderid to see if the system will let him see it. A properly
designed webapp should not; neither should a web service.
There's lots of ways to implement this in the back end. At Jive, for
example, we have a low level DAO that handles the database calls with no
authorization (could be JPA or Hibernate or whatever), and a higher level
"manager" that checks whether the authenticated user has access to the
requested data. This is what all of the other business logic in the
application calls -- the DAO is *only* used by the corresponding manager.
The rules for determining authorized access are necessarily domain specific,
but for the use case being described it's pretty simple -- if the
authenticated user owns the requested order, the data should be returned;
otherwise it should not.
And you should use exactly the same logic behind a REST service and a
corresponding webapp that supports HTML based access to the same data.
Craig
PS: Kevin, if you're really willing to give me your security token and your
identity, as well as the URL for your order, give me a second to set up a
little HTML form so you can give me your banking login credentials too :-).
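Craig's DAO/manager layering, unchecked data access in the DAO, ownership checks in the manager that is the DAO's only caller, might look like this minimal sketch (names are hypothetical; this is not Jive's actual code):

```python
class OrderDao:
    """Low-level data access with no authorization whatsoever."""

    def __init__(self, rows):
        self._rows = rows            # order_id -> (owner, data)

    def find(self, order_id):
        return self._rows.get(order_id)

class OrderManager:
    """The only caller of the DAO; all business logic goes through it,
    and the ownership check lives here."""

    def __init__(self, dao):
        self._dao = dao

    def get_order(self, order_id, authenticated_user):
        row = self._dao.find(order_id)
        if row is None or row[0] != authenticated_user:
            return None              # caller maps this to a 403 or 404
        return row[1]
```

The same manager can then sit behind both the REST service and the HTML webapp, as Craig suggests.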
<snip>
> This approach works if a URL template is either valid or not for a
> particular user, but that doesn't cover Kevin's use case where
> "/orders/{id}" is valid for both of us, but we should not be able to see
> each other's orders.
</snip>
There are lots of possibilities:
detailed list
<user id="kevin">
<acl href="/orders/1" methods="*"/>
<acl href="/orders/6" methods="*"/>
<acl href="/orders/7" methods="*"/>
<acl href="/orders/8" methods="*"/>
</user>
template:
<user id="craig">
<acl href="/orders/(2|3|4|5)" methods="*"/>
</user>
change URI design:
order resource URI contains prefix (mca-001, mca-002, etc)
<user id="mamund">
<acl href="/orders/mca-(.*)" methods="*" />
</user>
etc.
mca
http://amundsen.com/blog/
On Fri, Mar 5, 2010 at 22:44, Craig McClanahan <craigmcc@...> wrote:
>
>
> On Fri, Mar 5, 2010 at 6:04 PM, mike amundsen <mamund@...> wrote:
>>
>>
>>
>> It sounds like you are asking about how to handle authorization (not
>> authentication). If so, that's not really a REST issue, but an
>> implementation detail for HTTP apps.
>>
>> In my HTTP applications I refer to a list of URIs (or regexp-based URI
>> templates) plus a list of HTTP methods for each authenticated user; storage
>> and representation can vary to taste. Consider a user-centered
>> representation of an application's access control list for a user:
>> <user id="mamund">
>> <acl href="/(.*)" methods="GET,HEAD,OPTIONS" /> <!-- default access -->
>> <acl href="/admin/(.*)" methods="!"/> <!-- deny all methods -->
>> <acl href="/mamund/(.*)" methods="*"/> <!-- allow all methods -->
>> </user>
>> All that is left is to compare the incoming request's METHOD + URI against
>> the above list. Any access failures result in a "403 Forbidden" response.
>
> This approach works if a URL template is either valid or not for a
> particular user, but that doesn't cover Kevin's use case where
> "/orders/{id}" is valid for both of us, but we should not be able to see
> each other's orders.
>
> As you mentioned, this is not a REST issue at all, because exactly the same
> scenario applies to a traditional webapp where a crafty user might try to
> hand modify the URL that displays an order's content, plugging in someone
> else's orderid to see if the system will let him see it. A properly
> designed webapp should not; neither should a web service.
>
> There's lots of ways to implement this in the back end. At Jive, for
> example, we have a low level DAO that handles the database calls with no
> authorization (could be JPA or Hibernate or whatever), and a higher level
> "manager" that checks whether the authenticated user has access to the
> requested data. This is what all of the other business logic in the
> application calls -- the DAO is *only* used by the corresponding manager.
>
> The rules for determining authorized access are necessarily domain specific,
> but for the use case being described it's pretty simple -- if the
> authenticated user owns the requested order, the data should be returned;
> otherwise it should not.
>
> And you should use exactly the same logic behind a REST service and a
> corresponding webapp that supports HTML based access to the same data.
>
> Craig
>
> PS: Kevin, if you're really willing to give me your security token and your
> identity, as well as the URL for your order, give me a second to set up a
> little HTML form so you can give me your banking login credentials too :-).
>
I suppose what I am asking is: on every request, I basically have to perform some ACL check on the given URL id and the authorization header id to make sure they match. IF they do (and the id is allowed to make the call via the ACL), then it continues. But there is ALWAYS some logic in place to check this. Right now, for example, I simply use the Auth header to validate; if it's valid, the call continues. In my current way of doing this, it's very possible someone could give someone else the URL (or someone could steal it), and anyone that authenticates has access to it.
So basically I should probably first make sure the calling user via the Authorization header is valid. Then, make sure that matches up to the URL id (if one is applicable), and then make sure the id has the rights to access the method being requested. If all that goes thru, then the method can take place. That about right?
Craig, you wouldn't find much if you had access to my account. It's pretty empty sadly!
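Kevin's three ordered checks, valid credentials, then the URI's ID belongs to the caller, then the method is permitted, can be written as one pipeline (a sketch with hypothetical lookup tables standing in for a real credential store):

```python
USERS = {"token-kevin": "kevin"}          # auth token -> user
OWNERS = {"42": "kevin"}                  # order id -> owning user
RIGHTS = {"kevin": {"GET", "DELETE"}}     # user -> permitted methods

def authorize(auth_token: str, order_id: str, method: str) -> int:
    user = USERS.get(auth_token)
    if user is None:
        return 401        # step 1: the caller is not authenticated
    if OWNERS.get(order_id) != user:
        return 403        # step 2: the URI's id is not the caller's
    if method not in RIGHTS.get(user, set()):
        return 403        # step 3: the method is not granted
    return 200
```

All three checks run on every single request; only when they all pass does the method execute, which is the flow Kevin summarizes.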
--- On Fri, 3/5/10, mike amundsen <mamund@...> wrote:
From: mike amundsen <mamund@...>
Subject: Re: [rest-discuss] Re: Thoughts about URLs for a REST driven website
To: craigmcc@...
Cc: rest-discuss@yahoogroups.com
Date: Friday, March 5, 2010, 8:23 PM
<snip>
> This approach works if a URL template is either valid or not for a
> particular user, but that doesn't cover Kevin's use case where
> "/orders/{id}" is valid for both of us, but we should not be able to see
> each other's orders.
</snip>
There are lots of possibilities:
detailed list
<user id="kevin">
<acl href="/orders/1" methods="*"/>
<acl href="/orders/6" methods="*"/>
<acl href="/orders/7" methods="*"/>
<acl href="/orders/8" methods="*"/>
</user>
template:
<user id="craig">
<acl href="/orders/(2|3|4|5)" methods="*"/>
</user>
change URI design:
order resource URI contains prefix (mca-001, mca-002, etc)
<user id="mamund">
<acl href="/orders/mca-(.*)" methods="*" />
</user>
etc.
mca
http://amundsen.com/blog/
On Fri, Mar 5, 2010 at 22:44, Craig McClanahan <craigmcc@gmail.com> wrote:
>
>
> On Fri, Mar 5, 2010 at 6:04 PM, mike amundsen <mamund@yahoo.com> wrote:
>>
>>
>>
>> It sounds like you are asking about how to handle authorization (not
>> authentication). If so, that's not really a REST issue, but an
>> implementation detail for HTTP apps.
>>
>> In my HTTP applications I refer to a list of URIs (or regexp-based URI
>> templates) plus a list of HTTP methods for each authenticated user; storage
>> and representation can vary to taste. Consider a user-centered
>> representation of an application's access control list for a user:
>> <user id="mamund">
>> <acl href="/(.*)" methods="GET,HEAD,OPTIONS"/> <!-- default access -->
>> <acl href="/admin/(.*)" methods="!"/> <!-- deny all methods -->
>> <acl href="/mamund/(.*)" methods="*"/> <!-- allow all methods -->
>> </user>
>> All that is left is to compare the incoming request's METHOD + URI against
>> the above list. Any access failures result in a "403 Forbidden" response.
>
> This approach works if a URL template is either valid or not for a
> particular user, but that doesn't cover Kevin's use case where
> "/orders/{id}" is valid for both of us, but we should not be able to see
> each other's orders.
>
> As you mentioned, this is not a REST issue at all, because exactly the same
> scenario applies to a traditional webapp where a crafty user might try to
> hand modify the URL that displays an order's content, plugging in someone
> else's orderid to see if the system will let him see it. A properly
> designed webapp should not; neither should a web service.
>
> There's lots of ways to implement this in the back end. At Jive, for
> example, we have a low level DAO that handles the database calls with no
> authorization (could be JPA or Hibernate or whatever), and a higher level
> "manager" that checks whether the authenticated user has access to the
> requested data. This is what all of the other business logic in the
> application calls -- the DAO is *only* used by the corresponding manager.
>
> The rules for determining authorized access are necessarily domain specific,
> but for the use case being described it's pretty simple -- if the
> authenticated user owns the requested order, the data should be returned;
> otherwise it should not.
>
> And you should use exactly the same logic behind a REST service and a
> corresponding webapp that supports HTML based access to the same data.
>
> Craig
>
> PS: Kevin, if you're really willing to give me your security token and your
> identity, as well as the URL for your order, give me a second to set up a
> little HTML form so you can give me your banking login credentials too :-).
>
On Fri, Mar 5, 2010 at 11:02 PM, Kevin Duffey <andjarnic@...> wrote:
> I suppose what I am asking is, on every request, I basically have to
> perform some ACL on the given URL id and the authorization header id to make
> sure they match. If they do (and the id is allowed to make the call via
> acl), then it continues. But, there is ALWAYS some logic in place to check
> this. Right now, for example, I simply use the Auth header to validate, if
> so, the call continues. In my current way of doing this, it's very possible
> someone could give someone else (or someone could steal it) the URL and
> anyone that authenticates, has access to it.
>
> So basically I should probably first make sure the calling user via the
> Authorization header is valid. Then, make sure that matches up to the URL id
> (if one is applicable), and then make sure the id has the rights to access
> the method being requested. If all that goes through, then the method can take
> place. That about right?
>
Sounds good, as long as "matches up to the URL id" means "is authorized to
perform the request specified by the verb and URI that was submitted".
First make sure the user is properly authenticated (send 401 if not). Then,
ensure that the authenticated user is allowed to perform the request they
are attempting (send 403 or 404 if not). The rules for "allowed to perform"
are specific to the application -- in the use case we've been describing,
you could enforce a rule (for example) that the creator of an order can do
anything, but an administrative username (for producing reports) can only do
a GET.
For Jersey in particular, check out the @RolesAllowed annotation, which you
can couple with a security filter to perform this kind of check pretty
easily. The security filter can, for example, examine the path of the
request (so it could figure out which order you're trying to access) and
grant you the "owner" role only if you are indeed the owner of that order.
Then, putting "@RolesAllowed("owner")" on a resource method is sufficient to
trigger the 403 if you don't have the specified role.
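The two-step flow Craig describes (authenticate first, then authorize against the specific resource) can be sketched generically. This is not Jersey code; the user/order stores and plaintext password check are hypothetical stand-ins just to show where the 401, 403, and 404 decisions fall.

```python
# Hypothetical in-memory stores standing in for the DAO layer.
ORDERS = {1: "kevin", 2: "craig"}           # order id -> owning user
USERS = {"kevin": "s3cret", "craig": "pw"}  # username -> password (plaintext
                                            # only for the sketch; never do
                                            # this in a real system)

def handle_get_order(username, password, order_id):
    """Return the HTTP status code for a GET on /orders/{id}."""
    # Step 1: authentication -- is the caller who they claim to be?
    if USERS.get(username) != password:
        return 401  # Unauthorized: challenge the client to authenticate
    # Step 2: authorization -- may *this* user see *this* order?
    owner = ORDERS.get(order_id)
    if owner is None:
        return 404  # Not Found
    if owner != username:
        return 403  # Forbidden (some APIs use 404 here to hide existence)
    return 200  # OK: return the representation
```

The domain-specific rule ("the creator of an order can do anything") lives entirely in step 2; swapping in a reporting-admin rule would only change that comparison.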
>
>
> Craig, you wouldn't find much if you had access to my account. It's pretty
> empty sadly!
>
> :-)
Craig
Take a look at the links in this post:
http://tech.groups.yahoo.com/group/rest-discuss/message/14971
/Jørn
----- Original Message -----
From: "Kevin Duffey" <andjarnic@...>
To: <rest-discuss@yahoogroups.com>; "Jorn Wildt" <jw@...>
Sent: Friday, March 05, 2010 4:14 PM
Subject: Re: [rest-discuss] Re: Thoughts about URLs for a REST driven
website
Hi,
If I understand HATEOAS correctly, if I make a /orders call, get back a list
of orders with links on what I can do next, the server at this point has no
"state" memory... it has no idea after it sent back the response, that it
just told me "here is what you can do now". I could come back 3 years later,
and use those URIs it sent back to me and it should work. Is that not
correct?
As for giving it to someone else, well, they would first have to pass
authentication as far as I know. But that does raise an interesting
question. Using your example, how would the RESTful api "prevent" any call
being made from anyone if they authenticate? That is, if you and I are both
authenticated users, and you pull a /orders with your info, and it gives you
a list back for your orders, you could then give me the URIs it returns, and
I could make those same calls on your behalf, using my authentication.
Correct? My only guess is that every single call made to a RESTful API
should perform validation checks which would be expensive I suppose if every
time I tried to call one of your URIs for a specific order, the RESTful api
had to verify that the user making the call is the same user that received
the URIs in the first place. I am not sure if that makes sense, rereading
it, but I am curious how this sort of issue is prevented. Generally, you
wouldn't hand me your URIs that you got back, but a man-in-the-middle could
easily catch those URIs (assuming that person defeated SSL when going over
SSL). So how is such a scenario handled, or how should it be handled, by REST APIs?
Consider this situation: you are surfing the web and a friend of yours
comes by with an Amazon order ID. "Could you please check this order for
me?" he says. "Sure!" you answer. Now what do you do? I am quite sure that
you are not going to type the direct Amazon order url into your browser -
how would you know what it should look like? But this is equivalent to your
suggested REST API consumer - when you pass it an order ID, it utilizes its
a priori ("at design time") knowledge of the REST API's URL structure.
On the user facing web you don't expect to have any knowledge of URL
structures. Instead you follow links, do searches, and let the end server
calculate the links for you. This is the missing link in many REST APIs: they
force the client to completely understand the URL structure in order to use
the API. This in turn makes the client brittle and will cause it to break
the first time the REST API changes structure.
So for these reasons I cannot see other solutions than handing resource URLs
around as parameters in my website URLs.
What I am trying to do is point out one of the consequences of consuming a
truly RESTful API. Namely that your API consumers cannot work with simple
item IDs, but have to pass complete resource URLs around.
/Jørn
"Jorn Wildt" wrote:
>
> Thanks for taking your time to discuss this.
>
> I'll dive right into this statement:
>
> > http://charger.bisonsystems.net/date?iso=2010-03-04 simply loads:
> > http://en.wiski.org/date?iso=2010-03-04 while adding some headers.
>
> Now, how does your server know how to transform the incoming
> http://charger.bisonsystems.net/date?iso=2010-03-04 request to the
> http://en.wiski.org/date?iso=2010-03-04 location?
>
Because that's how I've coded the resource to behave. The /date
resource on charger has both client and server connectors. Its client
connector is programmed to proxy responses from a remote server. It
would not matter if this back-end interaction weren't RESTful, what
matters is that the front-end is RESTful from the viewpoint of my end-
user application.
>
> You can only do this by having some knowledge of your backend REST
> API, namely that the URL template is
> http://en.wiski.org/date?iso={date}. If you did not have this
> information, how would you then know what to do with the "2010-03-04"
> value?
>
So?
I'm sincerely trying to help, here. You're obviously misunderstanding
something, I'm trying to figure out what, exactly. I think, perhaps,
you're misinterpreting Roy's comment, "A REST API should be entered
with no prior knowledge beyond the initial URI (bookmark)...".
This applies to the client's interaction with the system. In my /date
service, the "application" in the REST sense is to return metadata
corresponding to the given ISO 8601 date string. In my demo, the client
application sends this translation request to charger.bisonsystems.net,
and the response appears to come from charger.bisonsystems.net.
The client application has no knowledge of en.wiski.org, nor does it
need any. The interaction between charger.bisonsystems.net and
en.wiski.org is out-of-scope to the REST application. The user's agent
does not "enter" the back-end API, so the mapping between charger and
en does not constitute prior knowledge of the API. The API for
en.wiski.org isn't part of my system, only the API on
charger.bisonsystems.net.
>
> Let's see what should be in the search result. Links! Certainly, but
> links to what? It cannot return links to the www site since that
> would imply my REST API knew something about its client. So it must
> return links to REST API resources. This means we would get the REST
> API URL http://rest.mysite.com/feeds/peter back.
>
Links to the available state transitions. Your XHTML representations
link to XHTML representations. Your Atom representations link to Atom
representations. Each XHTML representation has a link rel='alternate'
pointing to its equivalent in Atom, and vice-versa.
Where in REST are you getting the notion that your API can't know
anything about the client? The whole premise of content negotiation is
that your API can tailor its response based on client capability.
>
> Now, what should my www site do with the
> http://rest.mysite.com/feeds/peter URL in order to generate a browser
> link to itself that can display the feed?
>
If the requesting client is Atom, then the 200 response includes a link
rel='alternate' to the equivalent URI on the frontend. If the
requesting client needs XHTML, then 301-redirect to that equivalent URI
on the frontend.
>
> My www site knows nothing
> about the URL format, so it has no way to figure out how to select
> "peter" and present a http://www.mysite.com/blogs/peter URL to the
> end user.
>
I can't conceive of any reason why a frontend would be coded without
knowledge of the backend it's proxying. All we're talking here is a
redirect of /blogs/{user} to /feeds/{user} on a different domain, and
vice-versa depending on media type. When the media type matches, it's
an internal redirect, if not it's a 301. Simple!
Setting up URL rewriting is a common practice. All it is, is mapping
one URI allocation scheme to another. There is no REST constraint
enjoining this behavior.
>
> And here is my point: the only thing the www site can do,
> is to include the full REST URL in the www URL.
>
If your assumptions were correct, then I'd agree. But, if you're
arriving at the conclusion that REST is somehow requiring you to do
this, then your assumptions must not be correct. I assure you, there
is no such constraint in REST. The ones that are there are hard enough
to get right, I strongly suggest not imagining others. ;-)
>
> You could argue that the REST search should return both the complete
> resource URL as well as the feed name "peter".
>
No, I would never make such an argument. It doesn't matter whether the
search is done against the frontend or the backend, or the results are
in XHTML or Atom -- the response will contain a link rel='alternate'
that user-agents can follow to the correct variant, if needed.
If a resource has two variants with unique URLs, just link them together
using rel='alternate' or 'source' or 'feed'. There is no need for both
URLs to be present in a search result in order for the resource to be
'discovered', and this has nothing to do with the hypertext constraint.
>
> But then, again, the
> www site would have to know how to transform "peter" into the REST
> API URL http://rest.mysite.com/feeds/peter.
>
Of course. Again, so? REST is a layered architecture. A client
interacts with your front-end, it cannot "see" farther than that. Your
front-end resource knows how to map that request onto the back-end
system, but this interaction should be opaque to the client.
Requiring the client to know the back-end URI in order to formulate a
request to the front-end results in a coupling which breaks the
layered-system constraint. The back-end URI allocation scheme can't be
changed without re-coding at least the front-end, if not the clients.
Whereas, in a decoupled system, the back-end URI allocation can change
and the front-end only needs to be reconfigured. The front-end would
not need to be re-coded to instruct clients of the new back-end URI
scheme (since the clients don't need to know about it), so clients would
just continue interacting with the front-end as before.
In the case of the /date service, everything about it could be changed.
I could move it to a different domain, or I could change the syntax of
the query. Or both. In which case, I would just change the mapping on
charger to reflect the modifications.
Now, imagine the mess updating the /date backend would cause, if all
requests looked something like this:
http://charger.bisonsystems.net/date?url=http://en.wiski.org/date?iso=
See how, by trying to pass the back-end URIs around inside the front-end
URIs, the system becomes coupled due to the requirement that the client
must "see" beyond the server it's interacting with. REST is quite
powerful in that it abstracts away any need for this sort of coupling.
I promise there's no RESTly reason for embedding back-end URLs in front-
end requests.
The fact that you've come up with a solution that's the opposite of REST
tells me that there's a fundamental error in your understanding of REST.
I don't mean to condescend or discourage, I'd just like to help isolate
that error and see if we can't get you straightened out.
-Eric
Thanks again for a thorough answer. I'll do my best to formulate what I
think. Apparently it's not that easy :-)
> Of course. Again, so? REST is a layered architecture. A client
> interacts with your front-end, it cannot "see" farther than that. Your
> front-end resource knows how to map that request onto the back-end
> system, but this interaction should be opaque to the client.
Yes! It should be opaque to the client. Certainly. One more reason not to
pass the complete URL back to the client. We certainly agree on that (it's
not that I want to pass the full resource URL, quite the opposite, I am just
trying to figure out how to avoid it in the most RESTful way).
> You're obviously misunderstanding
> something, I'm trying to figure out what, exactly. I think, perhaps,
> you're misinterpreting Roy's comment, "A REST API should be entered
> with no prior knowledge beyond the initial URI (bookmark)...".
Yes, this is the constraint I am working with.
> This applies to the client's interaction with the system.
Yes. But in this setup we have two clients. One client is our front-end
consuming the back-end API. Another client is the web-browser consuming our
front-end. Both consumers should behave RESTfully and have as little
a priori knowledge as possible of the REST system they consume.
> Whereas, in a decoupled system, the back-end URI allocation can change
> and the front-end only needs to be reconfigured.
Stop. You said "front-end needs to be reconfigured". This means you have a
static coupling between your front-end and your backend REST API. This is
what I am trying to avoid. Everybody says "Thou shall not have static
couplings to the REST API you consume".
By passing the complete resource URLs around I can modify my backend
structure as much as I want, and I'll never have to configure or recode
anything in the front-end.
> Now, imagine the mess updating the /date backend would cause, if all
> requests looked something like this:
>
> http://charger.bisonsystems.net/date?url=http://en.wiski.org/date?iso=
>
> See how, by trying to pass the back-end URIs around inside the front-end
> URIs, the system becomes coupled due to the requirement that the client
> must "see" beyond the server it's interacting with
But URLs are opaque - right? The client, a web-browser, can ignore anything
that's contained in "?url=...". The stuff in the dots is just an opaque
identifier to the client. It could be anything. The fact that it is a URL is
irrelevant to the client. Some blogging systems even have complete URLs as
their ATOM identifiers.
Let's assume the user makes a search for "peter" on our front-end. This in
turn makes the front-end forward the keyword "peter" to a published search
URL in the REST API. The REST API then returns an ATOM entry like below,
telling the REST API consumer (our front-end) where it can find the
resource:
<entry>
<link rel="self" href="http://rest.mysite.com/feeds/peter"/>
</entry>
REST is all about hypermedia and following links. So the REST consumer, our
front-end, should not care about the content of the URL. It should just
accept that URL as a resource reference and use it as such.
If our front-end was a .NET/Java application it would then keep that URL in
memory while it presented a selection list to the end-user. When the
end-user selected the "link", it would lookup the selection index, find the
resource URL, fetch it, and present peter's ATOM content to the end-user.
But unfortunately our client is a website and we want to present a single
URL for the end-user to click. A URL which would direct the end-user to a
page that loads the found resource, formats it using HTML, and return that
HTML to the end-user. How would that link look? Well, it would have to
contain the whole resource reference (and I can already hear you scream
"noooo!", but read on).
But, yes, I am totally aware that the ATOM entry also contains an identifier:
<entry>
<id>peter</id>
<link rel="self" href="http://rest.mysite.com/feeds/peter"/>
</entry>
Now, my front-end can decide to get the ID, use its a priori knowledge of
the back-end URL format, and convert it to the complete URL when needed. But
from a RESTful point of view, what is best - to let the front-end construct
the URL based on some configuration of how the REST API behaves, or to just
follow the link that the back-end server has created for us?
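The "just follow the link" option amounts to very little code: the front-end takes the rel="self" href from the Atom entry verbatim and never parses or rebuilds it. A minimal sketch, using the example entry from above:

```python
import xml.etree.ElementTree as ET

ATOM_NS = "{http://www.w3.org/2005/Atom}"

# The search-result entry shown in the example above.
entry_xml = """\
<entry xmlns="http://www.w3.org/2005/Atom">
  <id>peter</id>
  <link rel="self" href="http://rest.mysite.com/feeds/peter"/>
</entry>
"""

entry = ET.fromstring(entry_xml)
# Take the href verbatim -- it stays opaque to the front-end, so no
# configured knowledge of the back-end's URL template is needed.
self_link = entry.find(ATOM_NS + "link[@rel='self']")
href = self_link.get("href")
```

The front-end would then dereference `href` (or hold it in memory behind a selection index, as described above for a .NET/Java client) rather than assembling `/feeds/{id}` itself.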
> Where in REST are you getting the notion that your API can't know
> anything about the client?
> ...
> Each XHTML representation has a link rel='alternate'
Let's take another example. I want to make a website that shows Flickr
albums. I let my user enter search keywords on my website, forward that to
Flickr, parse the search result, and present it to my end user. Let's also
assume Flickr uses ATOM for the search result. Now, would Flickr have any
way to generate an "alternate" link for me? No, certainly not. So again I
can only take Flickr's complete album resource URL and put that into the
link for my "show_this_Flickr_album" webpage in my system.
But I actually got one solution suggested by Sebastian from Open Rasta. See
http://groups.google.com/group/openrasta/browse_thread/thread/f5994b6c7ce231b6
To avoid the static coupling we need a way for the REST API to announce its
URL formats at runtime. This can be done using HTML forms. Now the front-end
can lookup the search form for blogs, decode it, and conclude that the blog
URL format is http://rest.mybackend.com/feeds?blogname={xxx} where "xxx" is
a query parameter defined with an <input name="blogname"/> element.
Thanks for reading all this through.
/Jørn
I just heard the news that Felipe Gaucho, one of the most active members on this list, passed away yesterday. Terrible and shocking news this is. May he rest in peace...
http://weblogs.java.net/blog/claudio/archive/2010/03/06/felipe-gaucho-we-will-miss-you
http://www.cejug.org/2010/03/06/noticia-triste-para-o-java-no-brasil-e-o-ceara/
http://www.devoxx.com/display/DV09/Home
Jørn Wildt wrote:
>
> > You're obviously misunderstanding
> > something, I'm trying to figure out what, exactly. I think,
> > perhaps, you're misinterpreting Roy's comment, "A REST API should
> > be entered with no prior knowledge beyond the initial URI
> > (bookmark)...".
>
> Yes, this is the constraint I am working with.
>
> > This applies to the client's interaction with the system.
>
> Yes. But in this setup we have two clients. One client is our
> front-end consuming the back-end API. Another client is the
> web-browser consuming our front-end. Both consumers should behave
> RESTful and have as little apriori knowledge as possible of the REST
> system it consumes.
>
OK, I can see where you're going wrong.
>
> > Whereas, in a decoupled system, the back-end URI allocation can
> > change and the front-end only needs to be reconfigured.
>
> Stop. You said "front-end needs to be reconfigured". This means you
> have a static coupling between your front-end and your backend REST
> API. This is what I am trying to avoid. Everybody says "Thou shall
> not have static couplings to the REST API you consume".
>
I'm not interested in what everybody says; everybody gets REST wrong.
I'm interested in what Roy says, which in this case would be along the
lines of, "REST doesn't eliminate the need for a clue." The wiski.org
/date service will eventually have an Xforms interface to describe the
API -- a URI template won't do, because the number of days in February
is algorithmic. But, that doesn't mean a client of the API is required
to be an Xforms client, or otherwise consume that form.
The client connector of my proxy component (charger) isn't required to
code itself, nor is it required to be some sort of AI driving a
hypertext form. Just as a Web page requires a user to make a choice.
All I need to know to write a simple proxy, is {YYYY-MM-DD}. My proxy
isn't required to discover that URI template by itself. It's still a
REST client, and hypertext is still the engine of application state.
REST is interested in the communication between charger and wiski.org,
and from the perspective of the protocol going across the wire, how
charger knows what resource to request from wiski.org remains opaque
behind the interface. The communication meets every REST constraint, I
assure you.
>
> > Now, imagine the mess updating the /date backend would cause, if all
> > requests looked something like this:
> >
> > http://charger.bisonsystems.net/date?url=http://en.wiski.org/date?iso=
> >
> > See how, by trying to pass the back-end URIs around inside the
> > front-end URIs, the system becomes coupled due to the requirement
> > that the client must "see" beyond the server it's interacting with
>
> But URLs are opaque - right? The client, a web-browser, can ignore
> anything thats contained in "?url=...". The stuff in the dots is just
> an opqaue identifier to the client. It could be anything. The fact
> that it is a URL is irrelevant to the client. Some blogging system
> even has complete URLs as their ATOM identifiers.
>
How does the client, making a request to charger, know the URI for
en.wiski.org? If it's getting this URI from wiski.org, then asking
charger to translate it, then the layered-system constraint is broken
and the system is coupled rather tightly.
If the client knows the en.wiski.org URI because charger told it so,
then charger obviously already knows wiski.org's API. For charger to
pass this on to the client, would be to induce the coupling incurred by
breaking the layered-system constraint.
Regardless of how the client is instructed of this URI-within-a-URI,
all bookmarks to charger break if wiski.org changes its URI template,
right? Different naming authorities -- why make charger responsible
for changes to wiski.org? You'd have to 301-redirect all old charger
URIs to their current equivalents, as well as have charger start
minting new URIs reflecting wiski.org's new URI template.
Please tell me you see how this is coupling, and a bad thing. If
charger obeys the layered-system constraint, as it does now, then none
of the URIs minted by charger in the past break, if wiski.org changes
its URI template. The client is only interacting with charger, so
charger shouldn't be exposing the URIs of another layer.
It doesn't really matter if the frontend or the backend or some other
system is generating those URIs. If a change to a backend URI format
causes frontend URIs to break, it's rather damning evidence of failure
to apply the layered-system constraint.
Granted, the URIs are opaque to the client, in that the client doesn't
recognize the wiski.org URIs as such, but the URIs still _break_, don't
they? You can't sidestep this coupling by declaring broken URIs opaque.
No REST constraint is being broken by having charger proxy wiski.org,
this is a common solution to common problems of Web development. Don't
believe any argument that leads you to conclude that setting up simple
forwarding or proxying of URIs is forbidden by REST. That just isn't
the case.
-Eric
Okay, so knowing the back-end URL format in the front-end is not unRESTful. Fine, I'll accept that it is not strictly unRESTful according to Roy's constraints. It's just that it is brittle and, as you say yourself, requires re-configuration of the front-end when the back-end changes. It simply annoys me: no one has ever told me to reconfigure my browser in order to look up an order number at Amazon, just because they changed their internal implementation and URL structure.
> Please tell me you see how this is coupling, and a bad thing.
Yes, the idea of passing the URL to the web-browser is also brittle as you point out, probably even more brittle. I totally agree with that.
So neither solution is perfect. What can we do instead?
> The wiski.org
> /date service will eventually have an Xforms interface to describe the
> API -- a URI template won't do, because the number of days in February
> is algorithmic. But, that doesn't mean a client of the API is required
> to be an Xforms client, or otherwise consume that form.
Yes. Publishing search forms in the back-end solves most of the problems. Now my back-end has a way to instruct the front-end how to fetch the ATOM resource referenced externally by the simple string "peter".
Compare it to my previous Amazon order-number example: you, as a web user, would never type the direct URL to Amazon's order-information page if all you had was an order number. So instead you open the Amazon "search for orders" form and insert your order number. Amazon can then redirect you to whatever internal URL they want.
The same can be done programmatically in the front-end if just the back-end has a machine-readable search form. This adds my required flexibility at the cost of a bit more code in the front-end.
> No REST constraint is being broken by having charger proxy wiski.org,
> this is a common solution to common problems of Web development. Don't
> believe any argument that leads you to conclude that setting up simple
> forwarding or proxying of URIs is forbidden by REST. That just isn't
> the case.
Agreed. It does not break any REST principles.
So what I am missing now is a standard format for publishing "these are the ways you should access items using the REST API" - in a way that can be parsed at runtime.
Here is one suggestion:
1) The official "here can you find the specs" kind of REST "sitemap".
A simple microformat. Look for anchors of class "rest.search". Requires only a shared knowledge of the identifiers "mysite-orders" and "mysite-customers".
<html>
<body>
<p>... human readable help and introduction ...</p>
<ul>
<li><a href="http://mysite.com/ordersearch.html" class="rest.search" id="mysite-orders">Orders</a></li>
<li><a href="..." class="rest.search" id="mysite-customers">Customers</a></li>
</ul>
</body>
</html>
2) The "mysite-orders" search specification at http://mysite.com/ordersearch.html
Requires only shared knowledge of the fact that there will be an <input> element with the ID "ordernumber".
<html>
<body>
<form id="mysite-orders" action="..." method="..." enctype="...">
<input id="ordernumber" name="amazon-internal-ordernumber"/>
<input type="submit"/>
</form>
</body>
</html>
All of this information could be auto-generated from a platform like OpenRasta. It should also be easy to create a library that consumes these documents at runtime (and caches them, so we don't have to fetch the search spec every time).
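A consumer of such a search form might look roughly like this. A sketch only: the action URL is illustrative, the sample form is a well-formed rendering of the one above, and the sole piece of shared knowledge is the input ID "ordernumber" - the field name and action URL are read from the form at runtime.

```python
import xml.etree.ElementTree as ET
from urllib.parse import urlencode

# Well-formed version of the search form above (action URL is illustrative).
form_xml = """\
<form id="mysite-orders" action="http://mysite.com/orders" method="GET">
  <input id="ordernumber" name="amazon-internal-ordernumber"/>
  <input type="submit"/>
</form>
"""

form = ET.fromstring(form_xml)
# Shared knowledge is only the input ID "ordernumber"; the actual field
# name and action come from the form itself, so the back-end can change
# its URL structure without any front-end code changes.
field = form.find("input[@id='ordernumber']")
query = urlencode({field.get("name"): "1234"})
url = form.get("action") + "?" + query
```

A front-end library would fetch the form document (and cache it), then build request URLs this way instead of hard-coding a URL template.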
/Jørn
Volume 6 of This Week in REST is up on the REST wiki - http://rest.blueoxen.net/cgi-bin/wiki.pl?RESTWeekly_Mar_1_2010 - and the blog - http://wp.me/pMXr1-J. For contributing links this week visit http://rest.blueoxen.net/cgi-bin/wiki.pl?RESTWeekly_Mar_8_2010
Cheers! Ivan
"Jorn Wildt" wrote:
>
> Okay, so knowing the back-end URL format in the front-end is not
> unRESTful. Fine, I'll accept that it is not strictly unRESTful
> according to Roy's constraints.
>
Whenever I come across the notion of "Roy's REST" I can't help but
wonder, what other REST is there? If you have to qualify your REST
implementation as based on something other than Roy's constraints, then
you probably don't have REST. ;-)
>
> It's just that it is brittle and, as
> you say yourself, requires re-configuration of the front-end when the
> back-end changes. It simply annoys me: no one has ever told me to
> reconfigure my browser in order to look up an order number at Amazon,
> just because they changed their internal implementation and URL
> structure.
>
I said reconfigure the frontend, not reconfigure the client, though.
The interaction between charger and wiski.org is hidden from the client,
which is requesting ISO date-string translation from charger. The
wiski.org backend can change completely without affecting user-agent
interaction with the frontend. The frontend simply needs a new mapping
to the backend, this has no effect on the client (unless you've embedded
the backend URI in the request to the frontend).
>
> So neither solutions are perfect. What can we do instead?
>
I don't see any imperfection there. I'll humor you, though, and point
out that I could write the /date service proxy on charger to use an
actual URI template from wiski.org. I could then change the URI
allocation scheme on wiski.org, update its URI template, and the /date
service proxy on charger would automatically use the new URI allocation.
You could do this, if you didn't mind adding complexity to support an
optimization. I'm just trying to get the point across, that if you do
so you shouldn't point to it and say, "REST made me do it," because this
notion is really neither here nor there as far as REST is concerned.
As with any REST development, I can't stress enough that you should only
apply those REST constraints that bring about a benefit to your project,
while leaving room to add constraints in the future, as the system
scales to a point where they make sense in terms of cost-benefit. This
also applies to any additional constraints you may wish to add.
So the question would be, in my case, what benefit would I achieve by
basing the system on a dynamic URI template instead of hard-coding?
The answer, even where REST constraints are involved, must make
pragmatic sense to me -- the answer never comes down to REST dogma,
only quantifiable benefits to the system. I see none, in fact I see
greater maintenance costs, so I'll let the hard-coding stand.
Your concerns are valid for clients interacting with frontends, but
they don't extend to the communication between layers in a system. The
wiski.org /date service and charger's /date proxy are both RESTful, even
if the communication between charger and wiski.org isn't hypertext
driven -- the API is, taken as a whole, as are its constituent parts.
I'm perhaps overly wordy in my explanation; it's just difficult for me
to explain REST nuances to others, which come naturally to me for some
reason... perhaps because I read Roy's thesis through twice each year,
and refer to it regularly when engaged in REST development.
>
> > The wiski.org
> > /date service will eventually have an Xforms interface to describe
> > the API -- a URI template won't do, because the number of days in
> > February is algorithmic. But, that doesn't mean a client of the
> > API is required to be an Xforms client, or otherwise consume that
> > form.
>
> Yes. Publishing search forms in the back-end solves most of the
> problems. Now my back-end has a way to instruct the front-end how to
> fetch the ATOM resource referenced externally by the simple string
> "peter".
>
Nuance, again. Your back-end has a hypertext REST API which may be used
to instruct generic user-agents how to interact with it, as well as to
provide a self-documenting API which may be referenced by anyone
developing a client specific to your API. A client, i.e. the frontend,
doesn't actually have to consume the hypertext -- all REST says is that
the API must provide it.
These are component implementation details, hidden behind the generic
interface, and are thus out-of-scope to REST. If you insist on applying
some sort of constraint to component implementation, you can, but it
isn't a REST constraint. So, just as with any REST constraint, you'll
need to evaluate its pros and cons -- except you can't do that by
referring to REST since this constraint isn't in there.
>
> The same can be done programmatically in the front-end if just the
> back-end has a machine-readable search form. This adds my required
> flexibility at the cost of a bit more code in the front-end.
>
If this is a real requirement of your system, fine. If you're doing
this to "score REST points", don't -- there are none to be had.
>
> So what I am missing now is a standard format for publishing "these
> are the ways you should access items using the REST API" - in a way
> that can be parsed at runtime.
>
I don't understand. Hypertext is the engine of application state. I
refer you again to my demo site. All XHTML representations link to one
XSLT transformation. That XSLT transformation instructs the client, at
runtime, how and when to dereference the /date service -- when it
encounters a unique-to-page ISO 8601 date string, the XSLT code calls
this template:
<xsl:template name='date-service'>
  <xsl:param name='iso-date'/>
  <xsl:param name='date'
    select="document(concat('../../date?iso=',$iso-date))//xht:p"/>
  <xsl:value-of select="concat($date/xht:abbr[1]/@title,', ')"/>
  <xsl:value-of select="concat($date/xht:abbr[2]/@title,' ')"/>
  <xsl:value-of select="concat($date/xht:abbr[3],' ')"/>
  <xsl:value-of select='$date/xht:span'/>
</xsl:template>
I don't know what a "standard format" would be, all I know is that
there are many standard hypertext formats which may be employed to
instruct clients how to use a REST API. This is but one example, using
standard media types -- charger sends some hypertext (XSLT in this
case) which describes the use of a simple REST Web service, to replace
one string with another of a specific format.
>
> 1) The official "here you can find the specs" kind of REST "sitemap".
>
This is exactly the opposite of what Roy means by, "A REST API should be
entered with no prior knowledge beyond the initial URI." If, given a
URI for some resource in a system, I must consult some other "sitemap"
resource before I can request another URI in the system, then the API
is being driven by out-of-band knowledge, not hypertext.
While my /date service currently lacks rel='up', that will eventually
link all representations to the service document, i.e. an Xforms
interface. For now, though, regardless of entry point, any client that
groks rel='next' and rel='prev' as Link headers can traverse the entire
service output. IOW, the entire service is accessible through each and
every URI; this doesn't mean each and every URI needs a <form> of some
sort. HTH.
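A client that groks those relations needs little more than a Link-header parser. A rough sketch (the header value shown is invented for illustration, not actual /date output):

```python
import re

def parse_link_header(value: str) -> dict:
    # Parse an HTTP Link header (RFC 5988) into a {rel: target-URI} map.
    links = {}
    for part in value.split(","):
        m = re.search(r'<([^>]+)>\s*;\s*rel="?([^";]+)"?', part)
        if m:
            links[m.group(2)] = m.group(1)
    return links

# Invented example of what such a service's Link header might look like.
header = '</date/2010-03-08>; rel="prev", </date/2010-03-10>; rel="next"'
links = parse_link_header(header)
# A traversal loop would simply GET links["next"] until it disappears.
```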
-Eric
> Whenever I come across the notion of "Roy's REST" I can't help but
> wonder, what other REST is there?
:^)
>> It's just that it is brittle and, as
>> you say yourself, requires re-configuration of the front-end when the
>> back-end changes.
> ...
> I said reconfigure the frontend, not reconfigure the client, though.
Yes, you won't have to reconfigure the browser client. But this may be one
of the places where we disagree on what the "client" is. In my head I have a
picture of a completely open back-end API. Open to the whole world that is.
Anyone that can access the website can also access the REST API directly
instead. I want everybody to be able to write different and/or better
front-ends than I do. This means I don't have control over who consumes my
back-end. In my scenario I have one REST API and a multitude of clients that
use it. One of these happens to be my own website.
> ... use an actual URI template from wiski.org ...
> You could do this, if you didn't mind adding complexity to support an
> optimization. I'm just trying to get the point across, that if you do
> so you shouldn't point to it and say, "REST made me do it," because this
> notion is really neither here nor there as far as REST is concerned.
Ok.
> As with any REST development, I can't stress enough that you should only
> apply those REST constraints that bring about a benefit to your project,
> while leaving room to add constraints in the future, as the system
> scales to a point where they make sense in terms of cost-benefit.
Yeps. That makes sense. The more flexible solution is certainly more
expensive. Is it worth it? It depends. In my scenario with many different
clients I believe it's worth it. I won't be able to tell all my clients that
URL templates have changed unless I do something similar to the forms
requirement.
> The answer, even where REST constraints are involved, must make
> pragmatic sense to me -- the answer never comes down to REST dogma,
> only quantifiable benefits to the system. I see none, in fact I see
> greater maintenance costs, so I'll let the hard-coding stand.
Unfortunately, as a REST novice, I do not have experience enough with REST
to really tell when it makes sense to use it or not. Well, maybe I do, but
as newbies we often have to stick to the dogma since that is all we have. We
don't have the experience and understanding that tells us that a specific
part of REST can be left out or must be respected.
So it often comes down to dogma, due to uncertainty about what will happen if
we don't do everything strictly by the book. In this case my starting point
was: what benefits do I lose or gain if I don't shuffle the full resource
URLs around? I think I've got that covered now :-)
>> 1) The official "here you can find the specs" kind of REST "sitemap".
>
> This is exactly the opposite of what Roy means by, "A REST API should be
> entered with no prior knowledge beyond the initial URI."
Then I am lost again :-(
> If, given a URI for some resource in a system, I must consult some other
> "sitemap"
> resource before I can request another URI in the system, then the API
> is being driven by out-of-band knowledge, not hypertext.
This is not exactly what I am saying. You are _not_ "given a URI for some
resource in a system". You are given a simple identifier, a customer number,
an order number, or a blog name. Not the complete URL. That "sitemap" tells
the client where it can find the search forms for those numbers or names. By
looking at the sitemap you can get a URL to the search form for customers.
That search form tells you that by doing a GET on a certain URL (the
action) and passing the customer number as "&number=...", you will get a
resource describing the requested customer.
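That flow can be sketched mechanically. Everything here (the sitemap shape, the form shape, the URLs) is hypothetical; the point is only that the client hard-codes no URL structure:

```python
from urllib.parse import urlencode

# Hypothetical machine-readable sitemap: concept name -> search-form URL.
sitemap = {"customers": "http://example.com/forms/customer-search"}

# Hypothetical search form, as fetched from the sitemap entry above:
# it names the GET action and the query parameter to fill in.
search_form = {"action": "http://example.com/customers", "param": "number"}

def lookup_url(form: dict, identifier: str) -> str:
    # Build the GET URL the form describes, passing the simple identifier.
    return form["action"] + "?" + urlencode({form["param"]: identifier})

url = lookup_url(search_form, "12345")
# -> "http://example.com/customers?number=12345"
```

If the server later changes its URL allocation, it updates the form's action, and clients that re-fetch the form keep working unchanged.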
/Jørn
----- Original Message -----
From: "Eric J. Bowman" <eric@...>
To: "Jorn Wildt" <jw@...>
Cc: <rest-discuss@yahoogroups.com>
Sent: Monday, March 08, 2010 3:10 PM
Subject: Re: [rest-discuss] Re: Thoughts about URLs for a REST driven
website
Hi all,
I have a customer who raised a concern about REST "type safety". What he means may be best explained by a simple example.
Assume we have a resource that represents a folder, and by issuing a POST request, I can attach a file. What, however, if that POST request wrongly goes to, let's say, another document resource instead of our folder, and the document resource also accepts POST (for some reason)?
In an RPC world, the document type would not understand an "addDocument()" call, and would consequently return an exception. But what about REST? Of course, the POST's attributes most likely wouldn't be understood and something like 400 Bad Request would be returned. But what if they were understood?
So, are there means or patterns to achieve (some sort of) type safety?
Thanks so much,
Juerg
On Tue, Mar 9, 2010 at 9:48 AM, <jumeier@...> wrote:
> Hi all,
>
> I have a customer who raised a concern about REST "type safety". What he
> means may be best explained by a simple example.
>
> Assume we have a resource that represents a folder, and by issuing a POST
> request, I can attach a file. What, however, if that POST request wrongly
> goes to, let's say, another document resource instead of our folder, and
> the document resource also accepts POST (for some reason)?
>
> In an RPC world, the document type would not understand an "addDocument()"
> call, and would consequently return an exception. But what about REST? Of
> course, the POST's attributes most likely wouldn't be understood and
> something like 400 Bad Request would be returned. But what if they were
> understood?
>
> So, are there means or patterns to achieve (some sort of) type safety?

This is one of the reasons I prefer to use specific media types, instead of
generic things like "application/xml" or "application/json", for my
resources. Then, if your "folder" resource accepts a media type that says
"here is a new document resource", but your document resource doesn't (it
accepts some other kind of POST request, like "here is a comment to add to
the discussion about this document", with a different media type), then
you're fine ... a 4xx response would be appropriate (and, if you're using a
framework like JAX-RS for Java, the framework will take care of this
negotiation for you). As an extra free bonus, you can set things up so that
the same resource can accept different POSTs (with different media types) to
trigger different state changes and server responses. No need to create 3
"artificial" resource URIs to support 3 different POSTs that trigger state
changes.

If you use a generic media type, then your "document resource" will need to
validate the details of the received input to make sure it obeys the rules
for what kind of behavior a POST should trigger on a document. Of course,
you should be doing that anyway ... so it should be pretty obvious that the
incoming data doesn't have the right set of fields. This kind of "form
validation" isn't really much different than what you should be doing in a
browser-based webapp, where the same scenario is possible.

> Thanks so much,
> Juerg

Craig
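Craig's point reduces to dispatch by media type: each resource only accepts the specific types it understands, so a POST aimed at the wrong resource fails up front instead of being misinterpreted. A toy sketch (the media type names and paths are made up; real frameworks like JAX-RS do this matching for you):

```python
# Each resource declares the specific media types its POST understands.
HANDLERS = {
    "/folders/42": {"application/vnd.example.document+xml": "attach document"},
    "/documents/7": {"application/vnd.example.comment+xml": "add comment"},
}

def post(path: str, content_type: str):
    # Reject the request before any handler logic can misinterpret it.
    accepted = HANDLERS.get(path, {})
    if content_type not in accepted:
        return 415, None  # 415 Unsupported Media Type
    return 201, accepted[content_type]

# A folder-style POST aimed at a document resource fails cleanly:
status, _ = post("/documents/7", "application/vnd.example.document+xml")
# status == 415
```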
I would add to Craig's excellent reply that I've sometimes used a token
pattern to ensure tight control on updates to a server. If I want to allow a
client app to update the server but still maintain tight control, I can
require the client app to first request a ticket or token that grants that
client permission to perform the update. This token can be built any way the
server wishes, including using hash tags for the enclosing resource (for
POST) or some other value. You can easily make the token time-limited and
user- (or user-agent-) specific, too. The way it is constructed is of no
consequence to the client, of course.

When the client performs the update (POST, PUT, DELETE), the value can be
sent as part of the state representation or, in cases where the message body
does not allow for this (e.g. binary uploads), the data can be included as
an HTTP header.

It's an extra step, but in cases where this information is critical and (as
Craig already pointed out) the media type in use is insufficient for the
task, a validation token is useful.

mca
http://amundsen.com/blog/

On Tue, Mar 9, 2010 at 18:45, Craig McClanahan <craigmcc@...> wrote:
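One possible shape for such a token is an HMAC over the resource, user, and expiry. This is a sketch of the general pattern only, not what any particular server actually does; the secret and the token layout are invented:

```python
import hashlib
import hmac
import time

SECRET = b"server-side secret"  # hypothetical key, known only to the server

def issue_token(resource: str, user: str, ttl: int = 300) -> str:
    # Grant a time-limited permission to update one resource as one user.
    expires = str(int(time.time()) + ttl)
    sig = hmac.new(SECRET, f"{resource}|{user}|{expires}".encode(),
                   hashlib.sha256).hexdigest()
    return f"{expires}:{sig}"

def check_token(token: str, resource: str, user: str) -> bool:
    # Reject expired tokens and tokens minted for a different target.
    expires, sig = token.split(":")
    if int(expires) < time.time():
        return False
    expected = hmac.new(SECRET, f"{resource}|{user}|{expires}".encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

token = issue_token("/folders/42", "juerg")
```

The client treats the token as opaque; only the server can mint or verify one, which is what keeps the construction "of no consequence to the client".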
On Mar 9, 2010, at 9:48 AM, <jumeier@...> wrote:
> I have a customer who raised a concern about REST "type safety". What he
> means may be best explained by a simple example.
>
> Assume we have a resource that represents a folder, and by issuing a POST
> request, I can attach a file. What, however, if that POST request wrongly
> goes to, let's say, another document resource instead of our folder, and
> the document resource also accepts POST (for some reason)?
>
> In an RPC world, the document type would not understand an "addDocument()"
> call, and would consequently return an exception. But what about REST? Of
> course, the POST's attributes most likely wouldn't be understood and
> something like 400 Bad Request would be returned. But what if they were
> understood?

Ah, yes, a similar thing happens all the time when playing frisbee in the
park with my dog. When I throw the frisbee and the dog tries to catch it in
his teeth, everyone seems to be happy. However, when the dog tries to throw
the frisbee and I try to catch it in my teeth, it just doesn't seem to work
well for either of us.

> So, are there means or patterns to achieve (some sort of) type safety?

Yes. A RESTful interface is one where the server provides the links or
forms that tell the client what operations to perform where. If you are
telling the client to do the wrong thing, then the pattern would be to tell
the client to do the right thing instead. Generally speaking, it works out
better that way.

....Roy
--- In rest-discuss@yahoogroups.com, Jan Algermissen <algermissen1971@...> wrote:
> On Feb 28, 2010, at 5:30 AM, Jan Vincent wrote:
>> In addition to this, is it feasible to access multiple REST web services,
>> thereby maintaining more than one current 'state'?
>
> Personally, I have not made up my mind on this. I guess that an
> application is limited to a single service unless the service itself
> points to another service.

So what about mashups? I.e., applications that combine data from two or more
separate services on the fly, where none of the services were built in
anticipation of the possible combination (and therefore don't link to the
others)?

This seems like a common scenario in the Web, so I'd be interested to know
how you explain the design of such systems in terms of REST.

Thanks,
Alistair
Alistair asks:
> So what about mashups? I.e., applications that combine data from two or
> more separate services on the fly, where none of the services were built
> in anticipation of the possible combination (and therefore don't link to
> the others)?
>
> This seems like a common scenario in the Web, so I'd be interested to
> know how you explain the design of such systems in terms of REST.

While I'm no expert, all "mashup studios" have to define data access to
work. For example, the open-source mashup system Apatar has Data Maps to
define services. [1] Online mashups like Dapper have a wizard, and the first
question is "Enter a URL". [2] Since most mashups are just doing GETs, this
seems like the best/easiest case for RESTful services, because the format of
the URI is so predictable.

Mark W.

1. http://www.apatarforge.org
2. http://www.dapper.net/dapp-factory.jsp
OK. Let me add to that.

1. Let's not call it RPC; it is just plain method invocation, which for the
matter at hand is the same thing. In this case, the idea of type safety
works just the same. For objects, you have type checking when you send a
message to the wrong object. You also have type checking when you send
parameters. But there could be the case where two different objects (of
different classes) have a method with the same name, receiving the same
number of parameters, and the same parameter types too! In that case you
face the exact same problem. So the problem comes down to a clumsy API
definition and a lost programmer who makes the wrong calls.

2. Now, in REST you are supposed to create your client to discover state
transitions. You start with one URL, and from there, each resource
representation will tell you, through hypermedia, where you can go next,
what info you need to provide, and which operations to execute to perform
that transition. If that is in place, and you are writing the client to POST
to a document URL manually (and thus causing the error you mention), then it
is your fault, not REST's, since you are not following the rules. On the
other hand, if your resource representation provides the wrong URL, making
your correctly programmed client commit a mistake, then the one to blame is
not REST, but the programmer that created the resource representation. If
you think about it, you would get the same error using anything other than
REST.

Also, as with any other interaction technique, the one that receives data to
be processed should check that the data is correct. If you send your social
security number to a method expecting the time in milliseconds, you need to
find a way in that method to validate that the amount of milliseconds is a
valid date. If it happens to be, then you are in trouble, REST or not.

William Martinez.

--- In rest-discuss@yahoogroups.com, "Roy T. Fielding" <fielding@...> wrote:
Jørn Wildt wrote:
>> Whenever I come across the notion of "Roy's REST" I can't help but
>> wonder, what other REST is there?
>
> :^)

Maybe we could say "Street REST" vs. "Roy's REST": one you pick up on the
street, the other by studying a doctoral dissertation... As with any
language, street-speak is easier, but also less precise.

> Yes, you won't have to reconfigure the browser client. But this may
> be one of the places where we disagree on what the "client" is.

Oh, I'm sure we both know what a client is, in the general sense. The
problem is that REST requires more precision. "Client", in REST, can mean a
client component or a client connector. We say user-agent when we mean
client component. We _forget_ to say "origin server" to distinguish between
server components and server connectors. We _ought_ to always qualify the
terms "client" and "server" when discussing REST.

You're treating all clients as being of the same "class". REST allows that
user-agents and gateways may both have client connectors, but only the
user-agent is a client component. In my system, charger's /date service
implementation is a gateway (intermediary) component. (Have I been calling
it a proxy? Sorry folks, my bad.) Different rules apply to the coding of
user-agents and gateways. From the perspective of a protocol analyzer,
however, requests to wiski.org coming from the client connector on charger's
gateway, or from the client connector of a user-agent manipulating a form,
are indistinguishable.

A user-agent consuming the /date service, from either wiski.org or charger,
must be hypertext-driven. The same constraint does not apply to the gateway,
where the rule is that implementation specifics are hidden behind the
generic interface (as implemented by the gateway's client connector). So the
/date service on charger is an intermediary component known as a gateway. As
currently configured, it has a client connector and a server connector.
It can be expanded, through layering, such that either the client connector
or the server connector (or both) is behind a cache connector. My
architectural philosophy will lead to a cache connector on the server
connector only, for both charger and wiski.org output.

This is Greek to most folks. When you fully understand REST, you'll be able
to tick off the pros and cons, by heart, of using a cache connector on the
server component vs. a reverse proxy. The only way to understand these
things is to become fluent in the component-connector lingo of
networked-software architecture -- this precision is required in order to
advance beyond StreetREST into RoyREST.

> In my head I have a picture of a completely open back-end API. Open to
> the whole world that is. Anyone that can access the website can also
> access the REST API directly instead. I want everybody to be able to
> write different and/or better front-ends than I do. This means I don't
> have control over who consumes my back-end. In my scenario I have one
> REST API and a multitude of clients that use it. One of these happens
> to be my own website.

Exactly! I couldn't have put it better myself. I want the wiski.org /date
translation service to be utilized far and wide. While I haven't provided a
leap-year function, it's possible for anyone to code such a function for
their own app against the /date API. If the cross-domain language handling
is a problem, like it is for my app, I've shown how my front-end deals with
the problem -- but that's just one possible consumer of the service, and the
pending Xforms service document is just one other possible consumer of the
service (when dereferenced by a compatible user agent). Without relying on
4xx response codes, a leap-year function can dereference Feb 28th for a
given year, and compare its rel='next' value to the integer '29'.
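Sketched with a stub standing in for the HTTP GET, such an ad-hoc leap-year client might look like this. The URI layout and the stub's two known dates are invented for illustration:

```python
def fetch_next_link(uri: str) -> str:
    # Stub standing in for GET-ing a /date resource and reading its
    # rel="next" Link header; a real server knows its whole calendar.
    stub_calendar = {
        "/date/2012-02-28": "/date/2012-02-29",  # 2012 is a leap year
        "/date/2011-02-28": "/date/2011-03-01",
    }
    return stub_calendar[uri]

def is_leap_year(year: int) -> bool:
    # Leap iff the day after Feb 28 is Feb 29 rather than Mar 1 --
    # no 4xx probing needed, just follow the hypertext.
    return fetch_next_link(f"/date/{year}-02-28").endswith("-02-29")
```

The client infers the answer purely from the service's links, which is the serendipitous re-use being described.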
Other services I haven't considered in my service interface, or on my frontend, are possible -- because coders can infer them from hypertext, not because their code is consuming a URI template or a form. A leap-year function is an ad-hoc REST client of the /date REST service. This serendipitous re-use is made possible because a REST API is self-documenting (otherwise coders couldn't infer anything from hypertext). It would be a shame if serendipitous re-use were frowned on by REST, for failing to be hypertext driven -- this would limit the usefulness of any service to only the functionality its creator envisioned.

> > The answer, even where REST constraints are involved, must make
> > pragmatic sense to me -- the answer never comes down to REST dogma,
> > only quantifiable benefits to the system. I see none, in fact I see
> > greater maintenance costs, so I'll let the hard-coding stand.
>
> Unfortunately, as a REST novice, I do not have experience enough with
> REST to really tell when it makes sense to use it or not. Well, maybe
> I do, but as newbies we often have to stick to the dogma since that
> is all we have. We don't have the experience and understanding that
> tells us that a specific part of REST can be left out or must be
> respected.
>
> So it often comes down to dogma due to uncertainty of what will
> happen if we don't do everything strictly by the book.

Stick to the thesis. Chapter 5 is explicit about what the tradeoffs are for each REST constraint; earlier chapters explain in more detail what benefits result from which architectural styles. Scale isn't something everyone needs from the get-go. You can make a decision about whether to use cookies for auth based on your projected need for scale over time -- if you expect your system to grow huge, then you probably won't want to rip out cookies later to facilitate growth. Whereas if scale is never going to be a problem, then breaking a constraint which facilitates scaling isn't a big deal.
You can read through Roy's thesis and cherry-pick those benefits you need, and what tradeoffs you're willing to make. The appropriate constraints are then added to the "null style," following the method laid out in the thesis. The resulting architecture, having been devised through informed decisions about benefits and drawbacks, would be firmly grounded in networked-software theory.

As I've said before, REST is a tool, but to make use of it requires the disciplined approach of modeling the objectives of a system first, to have a blueprint for evaluating the evolving implementation. This process, informed by REST, may not always result in REST. However, if the result is an appropriate architecture for the system, any criticism of it as unRESTful is purely dogmatic. If, over time, the system's evolution comes to require REST, the developers will already be familiar with the discipline of REST development, and the notion of adding constraints to achieve known benefits vs. known drawbacks.

-Eric

A response to the rest of your post is coming.
2010/3/10 Roy T. Fielding <fielding@...>

> Yes. A RESTful interface is one where the server provides the links
> or forms that tells the client what operations to perform where.
> If you are telling the client to do the wrong thing, then the
> pattern would be to tell the client to do the right thing instead.

That is all very well in a perfect world, but what if the client is of a malicious nature, and its nature leads it to overcome what the server provides, by issuing a request that it may have learned from introspection of another application state?
On Mar 10, 2010, at 6:50 AM, António Mota wrote:

> All that is very well in a perfect world, but what if the client is of a malicious nature, and its nature leads it to overcome what the server provides, by issuing a request that it may have learned from introspection of another application state?

It does not take away the responsibility of input validation and other security measures. The hypertext can very well include markers that the server can verify to deal with malicious clients. (The recipes on one-time URIs in the RESTful Web Services Cookbook provide examples.)

Subbu
Am 15.01.10 21:27, schrieb mike amundsen:

Hi,

> Recently, I've been thinking about how a coding framework or library
> can influence the way developers implement applications. What would a
> coding environment look like if it was meant to encourage results that
> followed a particular _architectural_ style (not programming style)?
>
> IOW, is there a way to craft a framework that constrains developers in
> ways that results in a REST-ful implementation of the application?

I'm currently toying around with implementing a RESTful service library in Clojure (see compojure-rest on GitHub if you like). What I have for now is based on Erlang's webmachine, and I think it does not capture the "essence" of REST.

> I did some digging, but have yet to find any writing on this topic.
>
> Here are some "off-the-top-of-my-head" items.
>
> For example, a framework might exhibit these REST-like traits:
> - there is a clear separation of concerns between resource
>   identifiers, resources, and representations
> - developers must define a resource as the public application interface
> - the Uniform Interface is enforced (e.g. those methods are the only
>   public members exposed for a resource)
> - developers must always associate one or more representation formats
>   with a resource and/or resource method before the implementation is
>   valid
> - there is no way to define and use server-side session state objects
>
> Some HTTP-specific traits might be:
> - support for content negotiation is "baked-in"
> - support for conditional requests is "baked-in" and automatic
> - RPC-like implementation patterns (e.g. gateway URIs) are somehow
>   difficult to implement or are flagged as invalid

These items are a good start. I'd like to add the following:

- support for resource linking and qualification of links
- support for different abstract resource types
  - collections (including paging)
  - hierarchical resources (refer to parents/children/siblings)
- mapping of URL to resource and vice versa.
(The former is supported by all web frameworks I know of, the latter by hardly any.)

-billy.

--
404 signature not found.
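A few of the traits in Mike's list (the uniform interface as the only public surface, mandatory representation formats, baked-in content negotiation) could be sketched in a toy library like this. This is purely illustrative, not any real framework, and the class and method names are made up:

```python
# Toy sketch of a framework that constrains developers toward REST:
# the only dispatchable operations are the uniform interface, and a
# resource is unusable until it declares a representation format.
class Resource:
    METHODS = {"GET", "PUT", "POST", "DELETE"}   # uniform interface only

    def __init__(self):
        self.representations = {}                # media type -> serializer

    def represent_as(self, media_type, serializer):
        self.representations[media_type] = serializer

    def handle(self, method, accept, *args):
        if method not in self.METHODS:
            return 405, "Method Not Allowed"     # no RPC-style verbs
        if not self.representations:
            raise RuntimeError("resource declares no representation formats")
        handler = getattr(self, method.lower(), None)
        if handler is None:
            return 405, "Method Not Allowed"
        serializer = self.representations.get(accept)
        if serializer is None:
            return 406, "Not Acceptable"         # content negotiation baked in
        return 200, serializer(handler(*args))

class Clock(Resource):
    """Example resource: exposes only GET."""
    def get(self):
        return {"now": "2010-03-15T12:00:00Z"}

clock = Clock()
clock.represent_as("application/json", lambda state: str(state))
```

The point of the sketch is that a developer never registers arbitrary procedure names; the framework's dispatch surface simply has no place to put them.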
> > Ah, yes, a similar thing happens all the time when
> > playing frisbee in the park with my dog. When I throw
> > the frisbee and the dog tries to catch it in his teeth,
> > everyone seems to be happy. However, when the dog tries
> > to throw the frisbee and I try to catch it in my teeth,
> > it just doesn't seem to work well for either of us.

And your dog ends up complaining that you don't know how to do his job, does he?

> > So, are there means or patterns, to achieve (some sort of) type safety?
>
> Yes. A RESTful interface is one where the server provides the links
> or forms that tells the client what operations to perform where.
> If you are telling the client to do the wrong thing, then the
> pattern would be to tell the client to do the right thing instead.
> Generally speaking, it works out better that way.
>
> ....Roy
On Mar 10, 2010, at 6:50 AM, António Mota wrote:

> 2010/3/10 Roy T. Fielding <fielding@...>
>> Yes. A RESTful interface is one where the server provides the links
>> or forms that tells the client what operations to perform where.
>> If you are telling the client to do the wrong thing, then the
>> pattern would be to tell the client to do the right thing instead.
>
> All that is very well in a perfect world, but what if the client is of a malicious nature, and its nature leads it to overcome what the server provides, by issuing a request that it may have learned from introspection of another application state?

How is that relevant to type safety? The only difference between a strongly typed distributed system and a weakly typed distributed system is that the former gives the attacker one more thing to lie about. The input has to be validated no matter how or where it has been defined.

....Roy
I am a bit confused about this. I was just thinking about it today. If I provide a single URI point of entry, and an OPTIONS or GET request is sent, it returns some relevant links that can be called based on the state of the resource. Now, to get those links, I have to first access the point-of-entry URI. What happens if, say, a bot program (or even a client developer) decides to cache/save the URIs that are returned, and then, at some point later, calls those URIs directly instead of calling the point-of-entry URI first to get them back? They could even navigate some links for a while, save the various URIs deeper down, and later call those directly.
My confusion about this is because the server side is stateless. It retains no state. So how can I validate a URI that a client/bot saved from some previous use, to make sure it's valid at the time of the call? I have no state on the server side that says "this URI is being called BEFORE the URI that returns this URI was called... it's a bad call". So I am unsure how to validate every single URI call to make sure it was called at a time when it should be called, and not just randomly out of order.

I suppose we could use some sort of timestamp on every single URI that goes back. I'm not entirely sure how that would work at this point, but I suppose the server would check this value when it came back in against the current server timestamp and make sure it's within so many minutes of when it was issued. But a smart client developer/bot could possibly figure that out and update this value before making the request, and since the server keeps no state, if the modified timestamp is within the right time of the server, it would void that route of validating a URI.
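For what it's worth, the forgeable-timestamp problem described here has a common stateless fix: sign the timestamp. The following sketch (illustrative, in the spirit of the one-time and limited-use URIs mentioned elsewhere in this thread; the secret, paths, and field names are invented) lets the server validate a URI it minted earlier without keeping any per-client state, because a client cannot update the expiry without also forging the signature:

```python
# Stateless "was this URI minted by us, recently?" check: embed an
# expiry timestamp plus an HMAC over (path, expiry) in the URI itself.
# The client can see the timestamp, but changing it invalidates the
# signature, which only the holder of the server-side key can recompute.
import hashlib
import hmac
import time

SECRET = b"server-side secret, never sent to clients"  # illustrative

def mint_uri(path, ttl=300, now=None):
    """Issue a URI that is valid for `ttl` seconds."""
    expires = int(now if now is not None else time.time()) + ttl
    msg = f"{path}:{expires}".encode()
    sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return f"{path}?expires={expires}&sig={sig}"

def validate(path, expires, sig, now=None):
    """Reject tampered or stale URIs without any server-side session."""
    msg = f"{path}:{int(expires)}".encode()
    expected = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, sig):
        return False                   # timestamp or path was modified
    return int(expires) >= (now if now is not None else time.time())
```

The design choice worth noting: the state lives in the URI, signed, rather than in a server-side table, so statelessness is preserved.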
I have the same question... I don't believe there is any reason a client could not call more than one REST service to provide a rich UI built from pieces of different services. I can't imagine that REST dictates that a client may only access a single service. Think of a storefront site that makes use of PayPal/Google APIs along with perhaps a web-cart REST service, and other mashups.
--- On Wed, 3/10/10, alistair.miles <alimanfoo@...> wrote:
From: alistair.miles <alimanfoo@...>
Subject: [rest-discuss] Re: Idea for a REST client
To: rest-discuss@yahoogroups.com
Date: Wednesday, March 10, 2010, 12:59 AM
--- In rest-discuss@yahoogroups.com, Jan Algermissen <algermissen1971@...> wrote:
> On Feb 28, 2010, at 5:30 AM, Jan Vincent wrote:
> > In addition to this, is it feasible to access multiple REST web services, thereby maintaining more than one current 'state'?
>
> Personally, I have not made up my mind on this. I guess that an application is limited to a single service unless the service itself points to another service.
So what about mashups? I.e., applications that combine data from two or more separate services on the fly, where none of the services were built in anticipation of the possible combination (and therefore don't link to the others)?
This seems like a common scenario in the Web, so I'd be interested to know how you explain the design of such systems in terms of REST.
Thanks,
Alistair
On Fri, Mar 12, 2010 at 9:57 PM, Kevin Duffey <andjarnic@...> wrote:

> I am confused a bit about this. I was just thinking this today. If I
> provide a single URI point of entry, and an OPTIONS or GET request is
> sent, it returns some relevant links that can be called based on the
> state of the resource. Now, to get those links, I have to first
> access the point of entry URI. What happens if say a bot program (or
> even a client developer) decides to cache/save these URIs that
> return. Then at some point later, call those URIs directly instead
> of the point of entry URI first to get those URIs back. They could
> even navigate some links for a while, then save the various URIs
> deeper down. Later, call those directly.

A URI that is saved -- iow, bookmarked -- counts as an entry point URI. This sort of bookmarking is required to implement many kinds of systems composed of multiple components that expose REST-style interfaces.

Of course, there is no guarantee that URIs saved by a client will remain valid over time. Clients that save URIs must accept that the resources those URIs name may disappear at any point in the future. Well-behaved servers do not disable URIs capriciously (see Cool URIs), but resources do have life-cycles that are governed by their domain.
> My confusion of this is because the server side is stateless. It
> retains no state. So how can I validate a URI that a client/bot
> saved from some previous use, to make sure it's valid at the time of
> call? I suppose we can use some sort of timestamp on every single
> URI that goes back, not sure entirely how that would work at this
> point, but I suppose the server would check this value when it came
> back in to the current server timestamp and make sure it's within so
> many minutes of when it was issued. But a smart client developer/bot
> could possibly figure that out, and update this value before making
> the request, and since the server keeps no state, if the modified
> timestamp is within the right time of the server, it would void that
> route of validating a URI.

Why do you care if the resources are accessed "out of order"?

If there are domain reasons for a resource to be available for a limited time, the application logic should destroy/deactivate the resource once it is no longer valid. In that case, this hypothetical client will get a 410 or 404 when it attempts to make requests of the now nonexistent resource.

If there is no domain reason for the resource to expire, why is it a problem for a client to save a URI and then access it later?

Peter
http://barelyenough.org
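Peter's advice reduces to a simple client-side pattern: treat a bookmarked URI as disposable, and fall back to the entry point (rediscovering via hypertext) when the bookmark returns 404 or 410. A hedged sketch, where `fetch` stands in for any HTTP call returning a (status, body) pair:

```python
# Client-side handling of a saved (bookmarked) URI whose resource may
# have ended its life-cycle. `fetch` is any callable mapping a URI to
# (status_code, body); a real client would wrap an HTTP library here.
def get_with_fallback(saved_uri, entry_uri, fetch):
    status, body = fetch(saved_uri)
    if status in (404, 410):
        # The bookmarked resource is gone; start over at the entry
        # point and rediscover the current links from hypertext.
        return fetch(entry_uri)
    return status, body
```

This keeps bookmarking useful (it skips the navigation on the happy path) while respecting that only the server controls resource life-cycles.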
Hi Peter,
As for why I care... I don't. I thought it would not be HATEOAS if a URI could be accessed directly and not "discovered". From all the posts, it seemed to me that if you accessed any URI directly, and not via an initial URI entry point that then gave you URIs you could use based on the resource state, that was not the HATEOAS/REST way. I couldn't figure out how you could actually validate that a URI was called based on a URI you passed back previously, given the no-state constraint in terms of resources on the server side.
I suppose, though, that a client should not save URIs for future direct access, as data may change, even resources could change; hence the ability to evolve the server side without breaking the client.
--- On Sat, 3/13/10, Peter Williams <pezra@...> wrote:
From: Peter Williams <pezra@...>
Subject: Re: [rest-discuss] HTTP request and "type safety"
To: "Kevin Duffey" <andjarnic@...>, "Rest List" <rest-discuss@yahoogroups.com>
Date: Saturday, March 13, 2010, 8:37 AM
> I couldn't figure out how you could actually validate that a URI was called based on a URI you passed back previously, given the no state restraint in terms of resources on the server side.

Please see http://my.safaribooksonline.com/9780596809140/recipe-how-to-generate-one-time-uris (sorry - requires a purchase to read)

Subbu
Kevin,

This issue of entry point URI vs. <<some other type of URI>> comes up from time to time. Here is a post from a while ago, from the middle of a previous discussion of the issue, where Roy schooled me on the subject. :-)

http://tech.groups.yahoo.com/group/rest-discuss/message/13616
Re: [rest-discuss] Newbie REST Question

On Fri, Oct 2, 2009 at 12:21 AM, Roy T. Fielding <fielding@...> wrote:

>> Where do you get the idea that not all URIs need be or should be cool? (If I am understanding you correctly...)
>
> Umm, maybe the several hundred conversations I've had on
> the topic with TimBL in the room. Cool URIs are permanent,
> so if you want to be cool then make permanence a design
> criteria. That's all there is to it.

Agreed.

> Nobody is going to
> argue against too much URI permanence. There is certainly
> nothing about that in conflict with REST, so if you perceive
> a conflict then I suggest you look at your reasoning and
> kill the paper tiger.

I'm glad to hear you confirm that there is no real conflict between URI permanence and REST. I'm also glad to hear you confirm that there is no real conflict between designs that depend on URI permanence and REST, e.g. out-of-band URI templates. (Which is how I read your other reply: http://tech.groups.yahoo.com/group/rest-discuss/message/13606 .) While others may use the word "conflict", for the record, I don't believe I used the word "conflict" in this thread -- I used the word "tension".
And I quoted an email of yours from back in February (http://tech.groups.yahoo.com/group/rest-discuss/message/12101) that seemed to indicate that you did not completely disagree with the "tension" characterization:

*If there is a tension between the desire to bookmark and the fact that REST encourages folks to break up an application into a state machine of reusable resource states, then I would consider it to be more like sexual tension. Just because you have it doesn't mean it is bad, and one way to improve things is to make the more important resource links look sexier than the less important ones.*

I suppose the fundamental tension here (and perhaps in sexual tension as well -- who knows) is the tension between the desire for permanence and stability vs. the desire for adaptability and change.

-- Nick

Nick Gall
Phone: +1.781.608.5871
Twitter: ironick
Email: nick.gall AT-SIGN gmail DOT com
Weblog: http://ironick.typepad.com/ironick/
Nick,

On Mar 14, 2010, at 10:01 PM, Nick Gall wrote:

> I suppose the fundamental tension here (and perhaps in sexual tension as well -- who knows) is the tension between the desire for permanence and stability vs. the desire for adaptability and change.

I think you wrongly see a tension there. Server evolvability is not negatively impacted by URI permanence. You do not degrade the freedom of the service owner to change the service by mandating cool URIs. All the server needs to do is maintain the URIs (even if they return 301 or 410) and maintain the semantics of the resource they identify.

In fact, I think that those two things are the only elements of a Web application that a server *can* reasonably guarantee to keep stable, and their stability allows clients to maintain 'handles' (bookmarks or open browser windows) to certain application states. This allows the client to efficiently jump back and forth between states to follow certain transitions. Freshness information and change notifications (303 See Other) allow the server to inform the client when it should refresh its views of these application states.

As a side note: it is a misconception about REST that a client can only use the current steady state to choose transitions. It can also go back in its own history to find certain transitions it is looking for. Likewise, it is perfectly fine to look at several steady states simultaneously to work towards some desired goal.

Jan
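Jan's point that maintaining a URI (even one that now answers 301 or 410) does not constrain the implementation behind it can be sketched as a small routing table. The paths here are invented for illustration:

```python
# Keeping URIs "cool" while the service evolves: retired or moved URIs
# keep answering, with 301 (moved) or 410 (gone), so old bookmarks
# degrade gracefully instead of silently breaking.
MOVED = {"/reports/2009": "/archive/reports/2009"}   # illustrative paths
GONE = {"/beta/feedback"}

def route(path, handlers):
    """Return (status, headers, body) for a request path."""
    if path in MOVED:
        return 301, {"Location": MOVED[path]}, ""
    if path in GONE:
        return 410, {}, "This resource has been retired."
    handler = handlers.get(path)
    if handler is None:
        return 404, {}, ""
    return 200, {}, handler()
```

The cost of URI permanence is just these two small tables; everything behind the live handlers remains free to change.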
Volume 7 of This week in REST is up on the REST wiki - http://rest.blueoxen.net/cgi-bin/wiki.pl?RESTWeekly_Mar_8_2010 and the blog - http://wp.me/pMXr1-Q. For contributing links this week visit http://rest.blueoxen.net/cgi-bin/wiki.pl?RESTWeekly_Mar_15_2010 Note: the REST wiki is currently offline (anyone know who to contact to get it back up?), but the blog is alive and well Cheers, Ivan
Hi guys, So who's in Vegas for mix this week and wants to grab a beer? Seb
First, thanks to all for contributing so many ideas in this extensive thread!
--- On Wed, 3/10/10, Craig McClanahan <craigmcc@...> wrote:
From: Craig McClanahan <craigmcc@...>
Subject: Re: [rest-discuss] HTTP request and "type safety"
To: jumeier@...
Cc: rest-discuss@yahoogroups.com
Date: Wednesday, March 10, 2010, 12:45 AM
On Tue, Mar 9, 2010 at 9:48 AM, <jumeier@...> wrote:
Hi all,
I have a customer who raised a concern about REST "type safety". What he means may be best explained by a simple example.
Assume we have a resource that represents a folder, and by issuing a POST request, I can attach a file. What, however, if that POST request wrongly goes to, let's say, another document resource instead of our folder, and the document resource also accepts POST (for some reason)?
In an RPC world, the document type would not understand an "addDocument()" call, and consequently return an exception. But what about REST? Of course, the POST's attributes most likely wouldn't be understood and something like 400 Bad Request returned. But what if they were understood?
So, are there means or patterns, to achieve (some sort) of type safety?
This is one of the reasons I prefer to use specific media types, instead of generic things like "application/xml" or "application/json", for my resources. Then, if your "folder" resource accepts a media type that says "here is a new document resource", but your document resource doesn't (it accepts some other kind of POST request like "here is a comment to add to the discussion about this document" with a different media type), then you're fine ... a 4xx response would be appropriate (and, if you're using a framework like JAX-RS for Java, the framework will take care of this negotiation for you). As an extra for-free bonus, you can set things up so that the same resource can accept different POSTs (with different media types) to trigger different state changes and server responses. No need to create 3 "artificial" resource URIs to support 3 different POSTs that trigger state changes.
>> Thanks for this interesting idea with the media type. I made a similar observation - and now Roy will probably throw yet another frisbee - I looked at the new OASIS CMIS standard, more precisely at what they call "restful binding". This is based on HTTP and Atom, with extensions. In the Atom feeds/entries there are a number of attributes that give certain hints on consistency, e.g. the doctype. But the problem I mentioned is solved, as cmis:folder supports POST and no PUT, and document does exactly the contrary. Looks coherent to me.
>> As I am working mainly with Apache Sling: they introduced something called a "selector" as part of the URI. However, this controls renditions and thus applies to GETs only. But it's a good twist if you need radically different renditions from the same resource. E.g. /orders/123.list.html could provide a list with all line items, while /orders/123.tax.html may bring up information regarding tax calculations.
If you use a generic media type, then your "document resource" will need to validate the details of the received input to make sure it obeys the rules for what kind of behavior a POST should trigger on a document. Of course, you should be doing that anyway ... so it should be pretty obvious that the incoming data doesn't have the right set of fields. This kind of "form validation" isn't really much different than what you should be doing in a browser-based webapp, where the same scenario is possible.
>> Well, I definitely see that I can kiss goodbye to the wonderful POST automatisms from Sling...
Thanks so much,
Juerg
Craig
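Craig's media-type dispatch could be sketched roughly like this (a hypothetical Python fragment, not JAX-RS and not Craig's actual code; the URIs and media-type names are made up for illustration):

```python
# Illustrative sketch: each resource only accepts POSTs of specific media
# types, so a "create document" POST aimed at the wrong resource is rejected
# by media-type negotiation rather than by inspecting the payload.
# All names below are hypothetical.

ACCEPTED_POST_TYPES = {
    "/folders/inbox": {"application/vnd.example.document+xml"},
    "/documents/42": {"application/vnd.example.comment+xml"},
}

def post(path, content_type):
    """Return an HTTP status code for a POST of `content_type` to `path`."""
    accepted = ACCEPTED_POST_TYPES.get(path)
    if accepted is None:
        return 404  # no such resource
    if content_type not in accepted:
        return 415  # Unsupported Media Type: this resource has no such POST
    return 201  # created
```

With this setup, POSTing a "new document" payload to `/documents/42` fails with 415 before any payload validation runs, which is the kind of "type safety" the thread is after.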
--- On Wed, 3/10/10, Roy T. Fielding <fielding@...> wrote:
> From: Roy T. Fielding <fielding@...>
> Subject: Re: [rest-discuss] HTTP request and "type safety"
> To: jumeier@...
> Cc: rest-discuss@yahoogroups.com
> Date: Wednesday, March 10, 2010, 1:53 AM
> On Mar 9, 2010, at 9:48 AM, <jumeier@...>
> <jumeier@...>
> wrote:
>
> > I have a customer who raised a concern about REST
> "type safety". What he means may be best explained by a
> simple example.
> >
> > Assume we have a resource that represents a folder,
> and by issueing a POST request, I can attach a file. What,
> however, if that post request wrongly goes to, let's say,
> another document resource instead of our folder, and the
> document resource also accepts POST (for some reason)?
> >
> > In a RPC world, the document type would not understand
> an "addDocument()" call, and consequently return an
> exception. But what about REST? Of course, the POST's
> attributes most likely wouldn't be understood and something
> like 400 Bad Request returned. But what if they were
> understood?
>
> Ah, yes, a similar thing happens all the time when
> playing frisbee in the park with my dog. When I
> throw
> the frisbee and the dog tries to catch it in his teeth,
> everyone seems to be happy. However, when the dog
> tries
> to throw the frisbee and I try to catch it in my teeth,
> it just doesn't seem to work well for either of us.
>> Roy, I hope this email finds your dog well.
Sorry to double-check on this image, but IMHO images are a very good means of conveying abstract things like REST to customers. So, I'd prefer to have things clear.
So, you're suggesting that you are the server, the frisbee is the request, and the dog is the client? And *if* you are sending a bad message in the form of a poor shot or a very mal-formed frisbee, the dog won't be able to catch it? Thanks for clarification.
>
> > So, are there means or patterns, to achieve (some
> sort) of type safety?
>
> Yes. A RESTful interface is one where the server
> provides the links
> or forms that tells the client what operations to perform
> where.
> If you are telling the client to do the wrong thing, then
> the
> pattern would be to tell the client to do the right thing
> instead.
> Generally speaking, it works out better that way.
>> Thanks for reminding me of the basic things in REST-life. Clearly the way to go.
This just raises one question for me. How would you flag to the client a link that is cool and MAY BE persisted, versus one that is of transient nature?
Thanks,
-- Juerg
>
> ....Roy
>
>
On Mar 16, 2010, at 3:05 PM, <jumeier@...> <jumeier@...> wrote: > Sorry to double check on this image, but IMHO, images are a very good mean to transport abstract things like REST to customers. So, it'd prefer to have things clear. > So, you're suggesting that you are the server, the frisbee is the request, and the dog is the client? No, I was suggesting that the customer should not be seeking RPC solutions (like type-safety checks) to apply in a RESTful design where data is provided at the direct instruction of the server. The scenario doesn't make any sense, unless of course the client isn't using REST at all and merely thinks that HTTP == REST. > This just raises me one question. How would you flag to the client a link that is cool and MAY BE persisted, versus one that is of transient nature? I would never use a transient link in the first place. Even my redirect links are persistent. Why do you need one? ....Roy
Hello all, I am on a contract where the chaps are using ADO.NET Data Services (now WCF Data Services). So I was wondering whether this qualifies as REST. It has all the url structures and stuff but (as far as I can tell) that doesn't make it REST. Just curious about what folks think about it. Regards, Eben
Hi Eben, > I am on a contract where the chaps are using > ADO.NET Data Services (now WCF Data > Services). > So I was wondering whether this qualifies as REST. It > has all the url structures and stuff but (as far as I can tell) that doesn't > make it REST. O'Reilly has a book that covers using .Net 3.5 WCF to make RESTful web services. http://oreilly.com/catalog/9780596519216 There's nothing RESTful about ADO.Net itself but MS created some URI parsing routines that are similar to Jersey's. Cheers, Mark W.
Bill - I'm replying and CCing the REST discussion group so they see the message. I'm not involved in running the list but I hope someone who follows the group can or knows someone else who can give you access. Can anyone please help Bill out or ping someone to let him post? Thanks, Ivan On Wed, Mar 17, 2010 at 15:07, Bill Moseley <moseley@...> wrote: > Hi Ivan, > I noticed you posted about the rest Wiki on rest-discuss. > > By chance are you involved in running the list? I signed up and posted a > few messages but they have never shown up. And I've had no luck getting a > response from the moderators. > I'm a member of the list: > http://tech.groups.yahoo.com/group/rest-discuss/members?query=moseley&submit=Search&group=sub > But if I try and post > at: http://tech.groups.yahoo.com/group/rest-discuss/post there's a message > that says: > Your message must be approved by the group owner before being sent to the > group. > And the messages I have sent none had shown up. > Thanks, > > -- > Bill Moseley > moseley@... >
Jørn Wildt wrote: > > >> 1) The official "here can you find the specs" kind of REST > >> "sitemap". > > > > This is exactly the opposite of what Roy means by, "A REST API > > should be entered with no prior knowledge beyond the initial URI." > > Then I am lost again :-( > Let's see if we can't get you back on the path. > > > If, given a URI for some resource in a system, I must consult some > > other "sitemap" > > resource before I can request another URI in the system, then the > > API is being driven by out-of-band knowledge, not hypertext. > > This is not exactly what I am saying. You are _not_ "given a URI for > some resource in a system". You are given a simple identifier, a > customer number, an order number, or a blog name. Not the complete > URL. That "sitemap" tells the client where it can find the search > forms for those numbers or names. By looking at the sitemap you can > get a URL to the search form for customers. That search form tells > you, that by doing a GET on a certain URL (the action) and passing > the customer number as "&number=...", you will get a resource > describing the requested customer. > Given 2010-03-20 as a simple identifier, how is the client instructed to build an URL with it? Not REST: Client has previously loaded some other document into memory (sitemap) instructing it to make a GET for /date?iso=2010-03-20 when it encounters an ISO date string. Client "somehow knows" this out-of-band info. REST: Retrieved representation links to some other document (sitemap), which may be cached locally, which contains a link for dereferencing. Client follows its nose -- i.e. checks another document for <a id='2010-03-20' href='/date?iso=2010-03-20'>. Also REST: Retrieved representation contains some URL-construction code (perhaps a form). Client follows its nose -- the values '2010', '03' and '20' are entered where appropriate. Also REST: Retrieved representation links to some document (not a sitemap) which contains URL-construction code.
Client follows its nose -- in the case of my demo, the retrieved representations link to an XSLT stylesheet which (as I posted before) contains the code to convert ISO date-string instances into URLs for dereferencing and transformation. The key here is for the client to follow hypertext included in the representation which returns the "simple identifier", to learn how to dereference an URL containing the "simple identifier". While a "sitemap" could be used, that really just adds another round-trip between client and server. What makes the Not REST example wrong is that the client is expected to know how to create the mapping using some knowledge outside (not linked from) the dereferenced representation which contains the "simple identifier". For example, using a browser's client-side storage to cache a lookup table, and using script to access name-value pairs from that client-side storage for all subsequent requests. While such a solution would work, the problem is that some prior URI must be dereferenced to create this lookup table. A dereferenced representation containing a script which references client-side storage would fail, unless that prior URI had been dereferenced. When the condition is met, that the client can follow its nose (using hypertext) to find everything needed to render a representation dereferenced from some URI, then no prior knowledge is needed beyond the URI being dereferenced. OTOH, if the URI being dereferenced cannot be rendered without the client having prior knowledge of some other URI that the retrieved representation doesn't link to, that prior knowledge is out-of-band. To sum up, if your API requires me to first dereference some sort of sitemap before dereferencing any other URIs will work, then your API must always be entered from the sitemap URI, instead of from any URI. -Eric
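Eric's contrast between out-of-band knowledge and following links could be sketched like this (an illustrative Python fragment, not Eric's demo; the representation and helper name are made up):

```python
# Illustrative "follow your nose" sketch: the client resolves a simple
# identifier (an ISO date) by consulting a link found in a retrieved
# representation, never by applying a URL-construction rule it learned
# out of band. REPRESENTATION stands in for a dereferenced document.

import re

REPRESENTATION = """
<p>Next review: <a id="2010-03-20" href="/date?iso=2010-03-20">2010-03-20</a></p>
"""

def resolve(identifier, representation):
    """Return the href the representation itself supplies for `identifier`."""
    match = re.search(
        r'<a id="%s" href="([^"]+)"' % re.escape(identifier), representation)
    if match is None:
        raise LookupError("representation does not link %r" % identifier)
    return match.group(1)
```

The point is that `resolve` uses only what was delivered inside (or linked from) the representation; the "Not REST" variant would instead hard-code the `/date?iso=...` construction rule in the client.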
Hi
In Martin Fowler's recent article on the Richardson maturity model for
REST (http://martinfowler.com/articles/richardsonMaturityModel.html),
he has an example of using link relations in a way that I haven't seen
much of before. Here's an example:
<link rel = "royalhope.nhs.uk/linkrels/appointment/updateContactInfo"
uri = "patients/jsmith/contactInfo"/>
I find the link relationship to be very specific, reducing the value
of a uniform interface (a client would need to be familiar with this
specific rel type). In addition, the naming of the rel type in the
example looks suspiciously much like another place to put a method
name. I would be more comfortable with something like:
<link rel = "edit"
type="vnd.contact.info"
uri = "patients/jsmith/contactInfo"/>
But there are some very smart people who have reviewed the article, so
I'm sure there's a good reason for that example being that way it is.
It would be very interesting to hear your views on this.
/niklas
This use of "rel" is similar to the rel="stylesheet" use in common browsers today. It's true that clients will need to understand the meaning of these "rel" value in order to get the full benefit of their use. This is part of the media-type definition which the client needs to agree to when announcing support for the media type via the Accept header in HTTP. mca http://amundsen.com/blog/ On Mon, Mar 22, 2010 at 11:35, Niklas Gustavsson <niklas@...> wrote: > Hi > > In Martin Fowlers recent article on the Richardson maturity model for > REST (http://martinfowler.com/articles/richardsonMaturityModel.html), > he has an example of using link relations in a way that I haven't seen > much of before. Here's an example: > > <link rel = "royalhope.nhs.uk/linkrels/appointment/updateContactInfo" > uri = "patients/jsmith/contactInfo"/> > > I find the link relationship to be very specific, reducing the value > of a uniform interface (a client would need to be familiar with this > specific rel type). In addition, the naming of the rel type in the > example looks suspiciously much like another place to put a method > name. I would be more comfortable with something like: > > <link rel = "edit" > type="vnd.contact.info" > uri = "patients/jsmith/contactInfo"/> > > But, there some very smart people who have reviewed the article, so > I'm sure there's a good reason for that example being that way it is. > It would be very interesting to hear your views on this. > > /niklas > > > ------------------------------------ > > Yahoo! Groups Links > > > >
Niklas, On Mar 22, 2010, at 4:35 PM, Niklas Gustavsson wrote: > Hi > > In Martin Fowlers recent article on the Richardson maturity model for > REST (http://martinfowler.com/articles/richardsonMaturityModel.html), > he has an example of using link relations in a way that I haven't seen > much of before. Here's an example: > > <link rel = "royalhope.nhs.uk/linkrels/appointment/updateContactInfo" > uri = "patients/jsmith/contactInfo"/> > > I find the link relationship to be very specific, reducing the value > of a uniform interface (a client would need to be familiar with this > specific rel type). In addition, the naming of the rel type in the > example looks suspiciously much like another place to put a method > name. It depends how this relation is defined (the actual name is meaningless). What the relation definition should do is to tell the client what the semantics of the target resource are. These semantics might include that a PUT to such a resource results in an update of something. This, however, makes little sense with PUT, because PUT already means update. Personally, I dislike such a design because it is (or at least looks like) yet another approach to give the usual OO-damaged developers what they think they need - instead of providing proper education. So, +1 to your suspicions :-) > I would be more comfortable with something like: > > <link rel = "edit" > type="vnd.contact.info" > uri = "patients/jsmith/contactInfo"/> What the above is missing is the semantics of "patients/jsmith/contactInfo". The correct way of doing it would be: > <link rel = "royalhope.nhs.uk/linkrels/appointment/ContactInfo" > uri = "patients/jsmith/contactInfo"/> to tell the client where the contact info resource is. Given that, the client already knows that a PUT will do the update - if the client allows it. The 'edit' link relation is used to tell clients where the single authorable resource is that should be used for a resource in question.
For example, an Atom entry might appear in several collections and thus have several member resources. The single resource intended by the server for authoring can be linked to using the edit link. The server could just as well accept a PUT to each member resource but redirect the PUT to the edit resource. > > But, there some very smart people who have reviewed the article, so > I'm sure there's a good reason for that example being that way it is. I do not see one - I think it is bad design. Jan > It would be very interesting to hear your views on this. > > /niklas > > > ------------------------------------ > > Yahoo! Groups Links > > > ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
On 22 March 2010 16:39, Jan Algermissen <algermissen1971@...> wrote: > > It depends how this relation is defined (the actual name is meaningless). > I think that's the most important point. The rel value in the link is just another opaque URI. It's a key into a semantic that's defined elsewhere. That semantic describes the meaning of the linked resource in the context of the current representation. That semantic can be as narrow, and application-specific, as needed, or as wide and application-agnostic as you see fit. If it's proprietary to your application, you might even choose to include some of the acceptable HTTP idioms used to manipulate the linked resource in the description of that semantic (assuming it's an HTTP app). You might see it as an invitation to "infer" operation-like semantics: just blur your eyes and see it as the client sees it - another URI. Choosing to layer operation semantics on top is your own private affair. Once you start seeing URIs as nothing but identifiers and addresses, the whole noun/verb thing becomes a bit of a quaint parlour game. Kind regards ian
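The opaque-rel point in this subthread can be illustrated with a minimal client sketch (hypothetical Python, not from any poster's code; the link list stands in for links parsed out of a retrieved representation):

```python
# Illustrative sketch: a client acts only on link relations it understands
# (here the AtomPub-style "edit" relation) and ignores all others. The rel
# value is treated as an opaque key into semantics defined elsewhere; the
# client never parses meaning out of the URI itself.

links = [
    {"rel": "self", "href": "/patients/jsmith"},
    {"rel": "edit", "href": "/patients/jsmith/contactInfo"},
    {"rel": "royalhope.nhs.uk/linkrels/appointment", "href": "/slots/1234"},
]

def find_target(links, known_rel):
    """Return the href of the first link whose rel the client recognizes."""
    for link in links:
        if link["rel"] == known_rel:
            return link["href"]
    return None  # unknown relations are simply not followed
```

Once the client has the "edit" target, plain HTTP already says how to update it (PUT) and remove it (DELETE); no operation name needs to live in the rel value.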
rfc2616 says about a POST: If a resource has been created on the origin server, the response SHOULD be 201 (Created) and contain an entity which describes the status of the request and refers to the new resource, and a Location header (see section 14.30). If the POST creates a resource at a single URL is it ok to just return the Location: header w/o any body content? Also, can someone provide examples what the body typically might contain? I see comments about "a list of locations" but no specific examples. (I'm returning body responses serialized as json). Thanks, -- Bill Moseley moseley@...
On Mon, Mar 22, 2010 at 12:41 PM, Bill Moseley <moseley@...> wrote: > > > rfc2616 says about a POST: > > If a resource has been created on the origin server, the response > SHOULD be 201 (Created) and contain an entity which describes the > status of the request and refers to the new resource, and a Location > header (see section 14.30). > > If the POST creates a resource at a single URL is it ok to just return the > Location: header w/o any body content? > > Also, can someone provide examples what the body typically might contain? > I see comments about "a list of locations" but no specific examples. > > I've implemented several cases where a POST to create a resource returns a 201 with the Location header, and also the representation that you would receive if you turned around and did an immediate GET against the URI returned in the location header. It saves an extra network round trip for what is likely to be a common need, especially when the server has "fleshed out" the representation with additional information beyond what the client submitted in the POST -- not the least of which might be additional hypermedia links for the next available state changes. If you do this, remember to pay attention to the notes (in RFC 2616) about not caching the response unless the server has explicitly returned headers stating that this is OK. Craig (I'm returning body responses serialized as json). > > Thanks, > > -- > Bill Moseley > moseley@... > >
On Mon, Mar 22, 2010 at 1:56 PM, Subbu Allamaraju <subbu@...> wrote: > Craig, > > The 201 response is just a status message (such as "OK. I just created the resource for you at ..."). When you do include a representation of the resource in the response, the client can not assume that it it is indeed the representation unless there is a Content-Location header with the same value as the Location header. > Good point. Craig > Subbu > > On Mar 22, 2010, at 1:32 PM, Craig McClanahan wrote: > >> >> >> >> >> On Mon, Mar 22, 2010 at 12:41 PM, Bill Moseley <moseley@...> wrote: >> >> >> rfc2616 says about a POST: >> >> If a resource has been created on the origin server, the response >> SHOULD be 201 (Created) and contain an entity which describes the >> status of the request and refers to the new resource, and a Location >> header (see section 14.30). >> >> If the POST creates a resource at a single URL is it ok to just return the Location: header w/o any body content? >> >> Also, can someone provide examples what the body typically might contain? I see comments about "a list of locations" but no specific examples. >> >> >> I've implemented several cases where a POST to create a resource returns a 201 with the Location header, and also the representation that you would receive if you turned around and did an immediate GET against the URI returned in the location header. It saves an extra network round trip for what is likely to be a common need, especially when the server has "fleshed out" the representation with additional information beyond what the client submitted in the POST -- not the least of which might be additional hypermedia links for the next available state changes. >> >> If you do this, remember to pay attention to the notes (in RFC 2616 about not caching the response unless the server has explicitly returned headers stating that this is OK. >> >> Craig >> >> (I'm returning body responses serialized as json). 
>> >> Thanks, >> >> -- >> Bill Moseley >> moseley@... >> >> >> >> > >
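Putting Craig's and Subbu's points together, a 201 response might be assembled like this (an illustrative Python sketch, not code from the thread; the function name and URI are made up):

```python
# Illustrative sketch of a 201 Created response that returns Location and,
# per Subbu's point, also Content-Location with the same value, so the
# client may treat the body as a current representation of the new resource.

def created_response(new_uri, body_json):
    """Build a (status, headers, body) triple for a POST that created `new_uri`."""
    headers = {
        "Location": new_uri,
        # Same value as Location: signals that the entity body is a
        # representation of the newly created resource itself.
        "Content-Location": new_uri,
        "Content-Type": "application/json",
    }
    return 201, headers, body_json
```

Returning the fleshed-out representation in the body (instead of an empty body) saves the client the immediate follow-up GET that Craig describes.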
I'm adding a (hopefully) RESTful API onto an existing web application.
I'm not sure I understand what correct response codes to send in some
cases.
These seem clear to me:
POST /user -- returns 201 created with header "Location:
http://example.com/user/1234".
(Thanks for the comments in the previous thread about this.)
GET /user/1234 -- get valid user returns 200 with user serialized in body
(or 304 -- where supported).
PUT /user/1234 -- returns 204 with no body.
GET /user/9999 -- "9999" does not exist so return 404.
Are these the correct approach?:
If PUT /user/1234 is missing some required data in the body (say
"username") I return a 400 and to be nice I return an entity body that
includes an error explaining the problem (e.g. errors => { username =>
"username is required" }). Correct approach?
What about PUT /user/9999 where "9999" does not exist. I assume this is a
404 before even inspecting the entity body for correct format. I.e. a 404
before a 400.
GET /user/abcd -- ID provided is invalid format (not an integer). Is this a
400 or a 404? I tend to do a 404 as it points to something that does not
exist (e.g. maybe non-integer user ids will be acceptable keys in the
future??).
GET /user/4321 -- Say this resource exists but belongs to another "account"
-- that is the actor making the request will never have access to that
specific user object. I know the actor making the request because of some
other data in the request (e.g. a cookie). Is that a 404 or a 403?
I lean toward using a 404 because 1) that user object does not exist in that
actor's "account namespace" -- meaning it doesn't exist to them just like
/user/abde and /user/9999 don't exist. Also, I prefer not to let the
requester know that /user/9999 does not exist, but /user/4321 does exist but
they can't access it. Maybe that's a little "security by obscurity" but it
seems to make sense because it's a resource that will never be available to
the actor making the request -- as far as they are concerned it just does
not exist.
I would use a 403 if maybe at a later time access might be allowed.
Do these seem reasonable?
Thanks very much,
--
Bill Moseley
moseley@...
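For what it's worth, the decision order Bill proposes for PUT /user/{id} could be sketched like this (a hypothetical Python fragment reflecting his stated choices, including reporting other accounts' users as 404 rather than 403; the function and parameter names are made up):

```python
# Illustrative sketch of Bill's proposed decision order for PUT /user/{id}:
# existence and visibility are checked first (404), body validity second
# (400), and only then is the update performed (204). A user in another
# actor's account is deliberately indistinguishable from a missing user.

def put_user_status(user_exists, in_actors_account, body_valid):
    """Return the HTTP status code for a PUT under Bill's proposed rules."""
    if not user_exists or not in_actors_account:
        return 404  # absent or hidden: identical from the requester's view
    if not body_valid:
        return 400  # e.g. missing required "username" field
    return 204      # updated, no body
```

Note the 404-before-400 ordering: the body is never validated for a resource the requester cannot see.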
On Mon, Mar 22, 2010 at 5:58 PM, Ian Robinson <iansrobinson@...> wrote:
> You might see it as an invitation to "infer" operation-like semantics: just
> blur your eyes and see it as the client sees it - another URI. Choosing to
> layer operation semantics on top is your own private affair. Once you start
> seeing URIs as nothing but identifiers and addresses, the whole noun/verb
> thing becomes a bit of a quaint parlour game.
I'm not saying that the examples in the article is incorrect, I'm
merely interested in the design options here.
With the (obvious) lack of documentation in this case, and therefore
not blurring my eyes, my guess would be that in the example:
<link rel = "royalhope.nhs.uk/linkrels/appointment/updateContactInfo"
uri = "patients/jsmith/contactInfo"/>
the rel type would mean something along the lines of "this is where
you update contact info". Since the article mentions using Atom
semantics for the links, I'm thinking it would make sense to attempt
to use the already existing Atom rel types, "edit" would likely fit
here. Also, continuing on the guess on the semantics of this rel type,
I'm wondering on the option between sticking the "content type" (I
guess that's what "ContactInfo" represents in this case) in the rel
attribute versus using the "type" attribute.
/niklas
On Mar 23, 2010, at 2:29 PM, Niklas Gustavsson wrote: > On Mon, Mar 22, 2010 at 5:58 PM, Ian Robinson <iansrobinson@...> wrote: >> You might see it as an invitation to "infer" operation-like semantics: just >> blur your eyes and see it as the client sees it - another URI. Choosing to >> layer operation semantics on top is your own private affair. Once you start >> seeing URIs as nothing but identifiers and addresses, the whole noun/verb >> thing becomes a bit of a quaint parlour game. > > I'm not saying that the examples in the article is incorrect, I'm > merely interested in the design options here. > > With the (obvious) lack of documentation in this case, and therefore > not blurring my eyes, my guess would be that in the example: > link rel = "royalhope.nhs.uk/linkrels/appointment/updateContactInfo" > uri = "patients/jsmith/contactInfo"/> > > the rel type would mean something along the lines of "this is where > you update contact info". Yes - if an example shows a link rel like this, to me it implies how the author approaches the issue. (An operation name in a relation name confuses matters. Note that 'edit' in AtomPub does not mean 'PUT here to edit'; it means 'here is the authorable resource'.) If you can afford the round trip, you would not even need that because a redirected PUT on the member resource would do the trick. > Since the article mentions using Atom > semantics for the links, I'm thinking it would make sense to attempt > to use the already existing Atom rel types, "edit" would likely fit > here. Right. Using a link with the semantics 'here is the contactInfo' is enough because HTTP already tells you how to update and delete that contact info. > Also, continuing on the guess on the semantics of this rel type, > I'm wondering on the option between sticking the "content type" (I > guess that's what "ContactInfo" represents in this case) in the rel > attribute versus using the "type" attribute. I do not see contactInfo as implying a media type.
Jan > > /niklas > > > ------------------------------------ > > Yahoo! Groups Links > > > ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
On 22.03.10 23:10, Bill Moseley wrote: > > > I'm adding a (hopefully it's) RESTful API on to an existing web > application. I'm not sure I understand what correct response codes to > send in some cases. http://bitbucket.org/justin/webmachine/wiki/BigHTTPGraph I found the HTTP chart which Webmachine is based on very helpful. -billy.
On 23 March 2010 14:05, Jan Algermissen <algermissen1971@...> wrote: > > (an operation name in a relation name is confusing matters. Note that > 'edit' in AtomPub does not mean 'PUT here to edit' it means 'here is the > authorable resource'). If you can afford the round trip, you would not even > need that because a redirected PUT on the member resource would do the > trick. > > Sorry, but I disagree with this Jan. AtomPub specifies which HTTP idioms to use to manipulate a resource. It constrains HTTP. Sure, the specification of which method to use is not part of the "edit" definition - you're correct there - but AtomPub explicitly associates PUT with updates to a member URI (as indicated by an "edit" link). HTTP doesn't tell me how to delete or update members, just as it doesn't tell me how to delete or update contact info. AtomPub, however, tells me how to delete and update members. Please don't overload HTTP method definitions with some domain-specific operation semantics. (And whilst in this case it's the AtomPub spec that glues together an HTTP idiom and a link relation value in the context of a specific application goal, I see no reason why more specialized link relation definitions can't take it upon themselves to make this association. It all depends on how "general" you want the link relation to be. I think Subbu's talked about this a bit before). It's unfortunate that you've chosen "edit" over "updateContactInfo", because taken all by itself, "edit" looks just like a verb... Of course, it could be a substantive, much as "update" can be. The definition of edit, however, makes it clear that this isn't an operation definition. Just as the definition of "updateContactInfo" might. The real concern I have here is the attempt to find nouns and verbs in URIs. Someone took a wrong turn quite some time ago on this, and we all seem to be suffering with its legacy. Kind regards ian
On Mar 23, 2010, at 4:38 PM, Ian Robinson wrote: > > > On 23 March 2010 14:05, Jan Algermissen <algermissen1971@...> wrote: > > (an operation name in a relation name is confusing matters. Note that 'edit' in AtomPub does not mean 'PUT here to edit' it means 'here is the authorable resource'). If you can afford the round trip, you would not even need that because a redirected PUT on the member resource would do the trick. > > Sorry, but I disagree with this Jan. AtomPub specifies which HTTP idioms to use to manipulate a resource. It constrains HTTP. Yes, it does in a number of areas - and wrongly so. AtomPub creates the impression that the client can make certain assumptions (but it actually mustn't). For example, AtomPub also errs by prescribing the representation type (Atom feed) to return for a GET on a collection. > Sure, the specification of which method to use is not part of the "edit" definition - you're correct there - but AtomPub explicitly associates PUT with updates to a member URI (as indicated by an "edit" link). HTTP doesn't tell me how to delete or update members, just as it doesn't tell me how to delete or update contact info. Yes it does: it gives you PUT and DELETE. AtomPub's treatment of these 'editing patterns' is just overspecification that leads to clients with unRESTful built-in assumptions. > AtomPub, however, tells me how to delete and update members. Please don't overload HTTP method definitions with some domain-specific operation semantics. I do not see any overloading going on: PUT means 'store the entity at the specified resource' and DELETE means 'delete any representations of the specified resource'; there is nothing more to it really. > > (And whilst in this case it's the AtomPub spec that glues together an HTTP idiom and a link relation value in the context of a specific application goal, I see no reason why more specialized link relation definitions can't take it upon themselves to make this association. 
Yes it can - it is just not necessary. If I know that a resource is a 'contact info resource' then PUT simply means update that contact info. The situation is different with POST, where you might want to say that posting to the order-processor resource has the domain semantics of 'placing an order'. > It all depends on how "general" you want the link relation to be. I think Subbu's talked about this a bit before). > > It's unfortunate that you've chosen "edit" over "updateContactInfo", because taken all by itself, "edit" looks just like a verb... Yes, 'source' would probably have been a better name for the link rel. > Of course, it could be a substantive, much as "update" can be. The definition of edit, however, makes it clear that this isn't an operation definition. Just as the definition of "updateContactInfo" might. Ok, but I am pretty sure that this was not the original intention. I much too often see updateContactInfo and deleteContactInfo when people should just use contactInfo and get the PUT and DELETE for free. > > The real concern I have here is the attempt to find nouns and verbs in URIs. Someone took a wrong turn quite some time ago on this, and we all seem to be suffering with its legacy. Right - that is why action-like URIs bother me so much. Even more so when POST comes into play, as in POST /order/pay [empty body], which is unRESTful because the message is not self-descriptive but depends on the current server-side state of /order. That should really be: POST /payment-processor <order> .. </order> Jan > > Kind regards > > ian > ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
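Jan's contrast between the two POST designs can be sketched in code. The resource name /payment-processor comes from his example; the media type name and the helper below are invented for illustration. The point is only that the second request carries its meaning in the message itself:

```python
# A sketch of the self-descriptive alternative Jan suggests: the order
# travels in the request body under an explicit media type, so the message
# can be understood without knowing the server-side state of /order.
# The media type name is made up for illustration.

def build_payment_request(order_xml: str) -> dict:
    """Assemble a POST to the (hypothetical) payment-processor resource."""
    body = order_xml.encode("utf-8")
    return {
        "method": "POST",
        "target": "/payment-processor",
        "headers": {
            "Content-Type": "application/vnd.example.order+xml",
            "Content-Length": str(len(body)),
        },
        "body": body,
    }

req = build_payment_request("<order><id>42</id><total>9.99</total></order>")
```

Compare this with POST /order/pay and an empty body: there, nothing in the message itself says what is being paid, so the request only makes sense relative to hidden server-side state.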
Hi all,
>Ok, but I am pretty sure that this was not the original intention. I much too often see
>
>updateContactInfo
>deleteContactInfo
>
>when people should just use
>
>contactInfo and get the PUT and DELETE for free.
I don't agree with this personally. How do I know that I can call PUT, DELETE, GET or POST on that one URL? By this I mean.. why should a client waste another round trip hoping something works, when it might not? I much prefer something like:
<link rel="edit" href="..." type="..." methods="get,delete"/>
I don't see why this is bad... it allows a client to still discover the URL to call, but also provides the specific methods it can call on that URL at that time. It saves the client from making a request that will result in a method-not-allowed or some other error response. It also saves the server from being inundated with requests that are going to produce error responses that could quite easily be avoided by simply providing the methods allowed on the resource in the first place.
Is there perhaps a better way to do what I've said above?
As well, if I return a response with several links, how do I specify in those the media types to use? For example, if the URL that was called is /orders and the media type is application/vnd.org.company.orders+xml, then I know that if the response comes back with links with /orders in it, I can use the same media type. But what if I return other links too, that allow a user to pull up history, using a history resource, and some other things. Where do I specify the media type? Does the user discover this URL (link), then make an OPTIONS call to it to get the media type it should use for that resource?
My thought is something like
GET /orders
<links>
<link rel="self" href="../orders"/>
<link rel="edit" href="../orders"/>
<link rel="history" href=".../history" type="application/vnd.org.company.history+xml"/>
</links>
Perhaps we provide methods="..." in the history link as well, allowing the client to not only discover it, but know what methods can be performed on that url at that time.
Kevin Duffey wrote: > > <link rel="edit" href="..." type="..." methods="get,delete"/> > (...) > > Is there perhaps a better way to do what I've said above? > Yes, particularly since there's no such thing as @methods... What you can do, in response to any request method invoked on a resource, is send "Allow: HEAD, GET, DELETE" (don't forget HEAD) and Accept headers. The 'edit' link relation is commonly understood to support GET, PUT and DELETE. If your application isn't accepting PUT, then respond 405 or 501 as appropriate, along with the Allow: header. If PUT is accepted, but the wrong media type is used, respond 415 -- what's authoritative in REST is the response code for any given request, i.e. the only way to know if DELETE is actually allowed, is if a DELETE request yields a 2xx response. You don't have to specify all this implementation detail in your markup. Including it in headers still qualifies as "hypertext" so don't sweat it. The Accept and Allow headers are part of implementing the self-descriptive messaging constraint, and the hypertext constraint. Of course, the Allow and Accept headers can't tell you what media type to associate with what method. Although in this case, it's implied by the media type definitions themselves to some extent: Accept: application/atom+xml, application/x-www-form-urlencoded Allow: HEAD, GET, PUT, POST, DELETE Content-Type: application/atom+xml I wouldn't expect to be able to PUT application/x-www-form-urlencoded. But headers aren't fine-grained enough to express whether I can POST application/atom+xml. The rest of the hypertext constraint is met via markup, which is fine-grained enough to express a self-documenting API. But, this isn't done with <link/>, it's done with <form>; both XForms and HTML 5 allow you to read the href of a <link rel='edit'/> (or rel='source') and express not only _that_ it may have its GET or DELETE method called, but _how_ to call those methods, i.e. user dereferences a link or clicks a 'delete' button.
-Eric
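Eric's Allow-header approach can be reduced to a small client-side helper. The following is a sketch of the general idea, not any particular library: parse the comma-separated Allow value as HTTP defines it, and treat the Allow list accompanying a 405 as the authoritative statement of what the resource currently supports.

```python
# Sketch: instead of a custom @methods attribute in markup, read the
# Allow header the server sends (e.g. alongside a 405 response) and react
# to the authoritative response code.

def parse_allow(header_value: str) -> set:
    """Parse an Allow header value such as 'HEAD, GET, DELETE' into a set."""
    return {m.strip().upper() for m in header_value.split(",") if m.strip()}

def supported_after_405(method: str, allow_header: str) -> bool:
    """After a 405, check whether a method appears in the Allow list."""
    return method.upper() in parse_allow(allow_header)
```

Note that, per Mark's point later in the thread, even this list is only accurate at the moment the response was generated; the 2xx/4xx code on the actual request remains the final word.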
On Tue, Mar 23, 2010 at 11:45 PM, Kevin Duffey <andjarnic@...> wrote: > I don't agree with this personally. How do I know that I can call PUT, DELETE, GET or POST on that one url? Because it's a URL. > By this I mean.. why should a client waste another round trip hoping something works, when it might not. If you're looking for guarantees, stick to a LAN. But even then... does including "delete" in @methods, and making that page cacheable for a day mean that all DELETE operations on that URL for the next day will never return anything other than 2xx? Of course not. Stuff changes, and you need to start writing code that can deal with that. WADL anyone? Mark.
Hola,
>> I don't agree with this personally. How do I know that I can call PUT, DELETE, GET >>or POST on that one url?
>Because it's an URL.
So then, I am forced to make multiple requests to figure out what I can do? Because it's a URL, I hope that a PUT works.. and if it doesn't, I've wasted a round trip to be told it doesn't accept a PUT? That seems really ass-backwards to me, to be honest. Maybe it's the right way, but it sure seems silly that we have the capability to return a response that includes enough info to save the client side (and server side) from wasted round trips, yet we don't use it just because that's the way it's supposed to be.
>> By this I mean.. why should a client waste another round trip hoping something >>works, when it might not.
>If you're looking for guarantees, stick to a LAN. But even then...
>does including "delete" in @methods, and making that page cacheable
>for a day mean that all DELETE operations on that URL for the next day
>will never return anything other than 2xx? Of course not. Stuff
>changes, and you need to start writing code that can deal with that.
Agreed.. and plan to.. but why not help reduce network load, server load, and wasted client time when we can provide the info that does all that? I guess I am failing to understand who made up these rules that a client can just assume all HTTP methods will work... and oops.. if only we had been given a wee bit more info that a PUT was not acceptable on this URL before we made that call and got slapped on the wrist. It seems silly to me that this is the way it must work or it's not REST. Or maybe I am misunderstanding something... but it sounds like, Mark, you are saying: too bad, deal with it, code for it, that's the way it works. Period.
On Mar 24, 2010, at 6:17 AM, Kevin Duffey wrote: > > > Hola, > > > > >> I don't agree with this personally. How do I know that I can call PUT, DELETE, GET >>or POST on that one url? > > >Because it's an URL. > > So then, I am forced to make multiple requests to figure out what I can do? Because it's a URL, I hope that a PUT works.. and if it doesn't, I've wasted a round trip to be told it doesn't accept a PUT? That seems really ass backwards to me to be honest. Ask yourself *why* you PUT in the first place. Surely not just to see if something happens - the media types involved will tell you the semantics of link targets (e.g. "this is contact info"). If your intention is to update contact info, PUT is the natural thing to do because the HTTP spec tells you so. What other method would you use to update the contact info? DELETE? It makes no sense to specify the supported methods because they are not interchangeable. (HTML forms just do that because they are a single hypermedia control for two entirely different kinds of operations: indexing vs. data submission. Better would probably have been some kind of <indexable href=""><input ...></indexable> and <dataSink href=""><input.../></dataSink>.) Instead of trying to read some fancy mechanisms into the hypermedia constraint, focus on proper media type design instead - that is hard enough already... > Maybe it's the right way, but it sure seems silly that we have the capability to return a response that includes enough info in it to save the client side (and server side) from wasted round trips just because that's the way it's supposed to be. Maybe it helps if you sketch some interactions where you think you would need some @methods? > > > >> By this I mean.. why should a client waste another round trip hoping something >>works, when it might not. > > >If you're looking for guarantees, stick to a LAN. But even then...
> >does including "delete" in @methods, and making that page cacheable > >for a day mean that all DELETE operations on that URL for the next day > >will never return anything other than 2xx? Of course not. Stuff > >changes, and you need to start writing code that can deal with that. > > Agreed.. and plan to.. but why not help reduce network load, server load, and wasting client time when we can provide the info that does all that? Does *what*? Suppose you try to PUT and the server tells you that PUT is not supported. Do you think you would change the method and retry the request? What the server is really telling you is that your assumptions are broken, and changing the method will not help that. > I guess I am failing to understand who made up these rules that a client can just assume all HTTP methods will work... and oops.. if only we had been given a wee bit more info that a PUT was not acceptable on this URL before we made that call and got slapped on the wrist. Hey - do you PUT to a resource because there is some information that tells you that the resource accepts PUT? Or do you also use other information about the resource? Jan > It seems silly to me that this is the way it must work or it's not REST. Or maybe I am misunderstanding something... but it sounds like Mark, you are saying, too bad, deal with it, code for it, that's the way it works. Period. > > > > ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
2010/3/24 Jan Algermissen <algermissen1971@...> > > > > On Mar 24, 2010, at 6:17 AM, Kevin Duffey wrote: > > > > > > > Hola, > > > > > > > > >> I don't agree with this personally. How do I know that I can call PUT, > DELETE, GET >>or POST on that one url? > > > > >Because it's an URL. > > > > So then, I am forced to make multiple requests to figure out what I can > do? Because it's a URL, I hope that a PUT works.. and if it doesn't, I've > wasted a round trip to be told it doesn't accept a PUT? That seems really > ass backwards to me to be honest. > > Ask yourself *why* you PUT in the first place. Surely not just to see if > something happens - the media types involved will tell you the semantics of > link targets (e.g. "this is contact info"). If your intention is to update > contact info PUT is the natural thing to do because the HTTP spec tells you > so. What other method would you use to update the contact info? > But that is what HTTP says, not what REST says. REST mandates a Uniform Interface, not *the* uniform interface GET/PUT/POST/DELETE. There's nothing unRESTful if I decide to create a REST-based architecture with a Uniform Interface composed of XPTA, XPTB, XPTC and XPTD... How, then, do you know what is "natural" or not? How, then, do you "formally" associate a media-type with these methods, in terms of semantics? You know what I mean? You have to go from a generalization (REST) to an implementation (REST over HTTP); you cannot assume specifics of an implementation, otherwise there is no point in having generalizations...
To be a little less general and abstract, let me point to a concrete situation. We have a multi-protocol REST-ish "middleware", where at some point we had the need to extend our HTTP-based uniform interface with one more verb, LISTEN. So the uniform interface will be GET, POST, PUT, DELETE, LISTEN. How will one describe the use of LISTEN in a hypermedia/media-type semantic way? _________________________________________________ Melhores cumprimentos / Beir beannacht / Best regards António Manuel dos Santos Mota http://card.ly/amsmota _________________________________________________ 2010/3/24 António Mota <amsmota@...> > > 2010/3/24 Jan Algermissen <algermissen1971@...> > > >> >> >> On Mar 24, 2010, at 6:17 AM, Kevin Duffey wrote: >> >> > >> > >> > Hola, >> > >> > >> > >> > >> I don't agree with this personally. How do I know that I can call >> PUT, DELETE, GET >>or POST on that one url? >> > >> > >Because it's an URL. >> > >> > So then, I am forced to make multiple requests to figure out what I can >> do? Because it's a URL, I hope that a PUT works.. and if it doesn't, I've >> wasted a round trip to be told it doesn't accept a PUT? That seems really >> ass backwards to me to be honest. >> >> Ask yourself *why* you PUT in the first place. Surely not just to see if >> something happens - the media types involved will tell you the semantics of >> link targets (e.g. "this is contact info"). If your intention is to update >> contact info PUT is the natural thing to do because the HTTP spec tells you >> so. What other method would you use to update the contact info? >> > > But that is what HTTP says, not what REST says. REST mandates a Uniform > Interface, not *the* uniform interface GET/PUT/POST/DELETE. > > There's nothing unRESTful if I decide to create a REST-based architecture > with a Uniform Interface composed of XPTA, XPTB, XPTC and XPTD... How, then, > do you know what is "natural" or not?
How, then, do you "formally" associate a > media-type with these methods, in terms of semantics? > > You know what I mean? You have to go from a generalization (REST) to an > implementation (REST over HTTP); you cannot assume specifics of an > implementation, otherwise there is no point in having generalizations... >
On Mar 24, 2010, at 11:42 AM, António Mota wrote: > How will one describe the use of LISTEN in a hypermedia/media-type semantic way? Look at http://tools.ietf.org/html/rfc5789 Jan ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@acm.org Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
On Mar 24, 2010, at 11:11 AM, António Mota wrote: > . REST mandates a Uniform Interface, not *the* uniform interface GET/PUT/POST/DELETE. Neither does HTTP: http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html#sec9 > > There's nothing unRESTful if I decide to create a REST-based architecture with a Uniform Interface composed of XPTA, XPTB, XPTC and XPTD... How, then, do you know what is "natural" or not? How, then, do you "formally" associate a media-type with these methods, in terms of semantics? Erm - write specifications... > > You know what I mean? You have to go from a generalization (REST) to an implementation (REST over HTTP), REST == arch style, HTTP+URI == an architecture (of the REST style) Apache, Firefox, Squid etc are implementations of elements (connectors and components) of that architecture. > you cannot assume specifics of an implementation, Sure you can - HTTP specifies such specifics for any implementations of connectors and components. For example, HTTP specifies that GET is safe and therefore any component implementation may issue any number of GET requests without worrying about side effects. > otherwise there is no point in having generalizations... I do not understand that statement. Jan > > > ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
I didn't ask correctly: it's not how to define the method, it's how to let clients know where to use that new method by associating it with the semantics of a media-type, instead of just doing something like <link href="..." method="LISTEN"> Sorry for my bad English... _________________________________________________ Melhores cumprimentos / Beir beannacht / Best regards António Manuel dos Santos Mota http://card.ly/amsmota _________________________________________________ 2010/3/24 Jan Algermissen <algermissen1971@...> > > On Mar 24, 2010, at 11:42 AM, António Mota wrote: > > > How will one describe the use of LISTEN in a hypermedia/media-type > semantic way? > > Look at http://tools.ietf.org/html/rfc5789 > > Jan > > > ----------------------------------- > Jan Algermissen, Consultant > NORD Software Consulting > > Mail: algermissen@... > Blog: http://www.nordsc.com/blog/ > Work: http://www.nordsc.com/ > ----------------------------------- > > > > >
On Mar 24, 2010, at 12:01 PM, António Mota wrote: > > > I didn't ask correctly: it's not how to define the method, it's how to let clients know where to use that new method by associating it with the semantics of a media-type, Not sure what you are looking for, can you explain? Sometimes it makes sense to state something like: "An 'orders' link points to a resource that is a collection of all orders placed. Placing a new order is achieved by POSTing to such a collection." In AtomPub terms you could say: "A collection that accepts application/order contains the placed orders. You can order something by creating a new order in such a collection (via POST)." For PUT/DELETE you might do something like: 'lock' links refer to lock resources. Creating a new lock results in foo being locked; deletion of the lock resource results in the lock being removed. Now, if you have GET /doc/1 200 Ok Link: </doc/1/props?lock>;rel=lock you immediately know that PUT /doc/1/props?lock creates the lock and that DELETE /doc/1/props?lock deletes it. HTH, Jan > instead of just doing something like > > <link href="..." method="LISTEN"> > > Sorry for my bad English... > > _________________________________________________ > > Melhores cumprimentos / Beir beannacht / Best regards > > António Manuel dos Santos Mota > > http://card.ly/amsmota > _________________________________________________ > > > > 2010/3/24 Jan Algermissen <algermissen1971@...> > > On Mar 24, 2010, at 11:42 AM, António Mota wrote: > > > How will one describe the use of LISTEN in a hypermedia/media-type semantic way? > > Look at http://tools.ietf.org/html/rfc5789 > > Jan > > > ----------------------------------- > Jan Algermissen, Consultant > NORD Software Consulting > > Mail: algermissen@... > Blog: http://www.nordsc.com/blog/ > Work: http://www.nordsc.com/ > ----------------------------------- > > > > > > > > ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@...
Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
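Jan's lock example hinges on the client reading the Link response header. A minimal sketch of that step might look like the following; the parser is deliberately simplified and only handles the single-link, single-rel shape used in his example:

```python
import re

# Sketch: extract {rel: target-URI} pairs from a Link header value like
# '</doc/1/props?lock>;rel=lock'. Given the 'lock' relation, the client
# knows that PUT on that URI creates the lock and DELETE removes it.

def parse_link_header(value: str) -> dict:
    links = {}
    for part in value.split(","):
        m = re.match(r'\s*<([^>]*)>\s*;\s*rel="?([^";]+)"?', part)
        if m:
            links[m.group(2)] = m.group(1)
    return links

links = parse_link_header("</doc/1/props?lock>;rel=lock")
lock_uri = links["lock"]  # PUT here to lock /doc/1, DELETE here to unlock
```

No method list is advertised anywhere: the 'lock' relation plus the uniform PUT/DELETE semantics are enough for the client to know what to do.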
On Tue, Mar 23, 2010 at 7:09 AM, Philipp Meier <meier@...> wrote: > > > http://bitbucket.org/justin/webmachine/wiki/BigHTTPGraph > > I found the HTTP chart which Webmachine is based on very helpful. > Thanks, that's a nice resource. I'm breaking that a bit by returning a 404 instead of a 403, I guess. But I don't think it will break anything, and it seems reasonable that a resource I will never have access to pretends not to exist. So, everything else seems reasonable? Thanks, -- Bill Moseley moseley@...
2010/3/24 Jan Algermissen <algermissen1971@...> > > On Mar 24, 2010, at 12:01 PM, António Mota wrote: > > > > > > > I didn't ask correctly: it's not how to define the method, it's how to let > clients know where to use that new method by associating it with the > semantics of a media-type, > > Not sure what you are looking for, can you explain? > > > When I said the "client" I meant the "user-agent", I was talking about machine to machine... Nevertheless, it's not an important question...
I think I've done this before, but I am working on a service now and
want to make sure I am doing it "as right" as it could be to remain a
true REST based API. If this should be under a different thread, that is
fine, I just figured it's along the discussion we're having here now.
Basically, I have this service that I provide a single URL http://www.mycompany.com/service
I
thought that an OPTIONS should be requested before anything else? A GET
could be used too I suppose, but basically am I understanding correctly
that with this single URL, the right call to discover the services that
can then be acted on is to send OPTIONS? If not, what is the "right way"
a client should make the first call to the initial published URL?
In response, I hope I am right in understanding that zero or more
services should be returned. How exactly I am
still at a loss due to this thread... but how I have my service right
now is like such:
OPTIONS /service
<service>
<links>
<link rel="resource1" href="http://www.mycompany.com/resource1"
type="application/vnd.com.mycompany.resource1+xml"/>
<link rel="resource2" href="http://www.mycompany.com/resource2"
type="application/vnd.com.mycompany.resource2+xml"/>
</links>
</service>
Despite what this thread has
talked about, I am not quite sure what to respond with.. which is why
the above is what I assumed was ok. Short of knowing prior to the call
to the published URL what services are available... I figured this gave
back a discoverable list of links that a client can use. Is this
correct... if not, please explain what should come back from the
published URL. I am also still trying to figure out, from various posts
here and other
threads, how do you "discover" the media type to set the Content-Type
to for specific resources if they don't provide a type="" in the links
that are in the response.
Now, assuming my above response is
not too far off... a client can then use any of those service URLs to,
according to Jan I believe post/put/get/delete/head/options to. This is
where I was arguing that it would be beneficial to avoid multiple round
trips if the above returned with specific methods allowed on a given
resource. Regardless of the outcome of that argument, for the sake of
this we'll assume that with any resource returned, ALL method types can
be called on it, naturally, with some possibly returning responses
that require yet another call to the server with a different
request method. Generally speaking, the initial URL resources should be
to a collection, and thus a GET call can pull ALL /resource1 types, a
POST to it can create a new one, and a PUT to
/resource1/<item> can update an existing one if it exists.. and
so on.
I'll stop there to avoid continuing on in the wrong
direction if that is wrong. I am hoping Jan et al. will provide some
constructive criticism and corrections to how the initial first call to
the published URL should work.. how it should respond. I'd appreciate an
example XML snippet of what the response would look like if what I
have above is incorrect or not RESTful.
Thanks all.
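For what it's worth, consuming the kind of <service> document sketched above is straightforward on the client side. Here is a rough sketch; the document shape, relation names, and media types are the hypothetical ones from this post, not a standard format:

```python
import xml.etree.ElementTree as ET

# Sketch: turn the hypothetical service document above into a
# {rel: (href, advertised media type hint)} map a client can navigate from.

SERVICE_DOC = """\
<service>
  <links>
    <link rel="resource1" href="http://www.mycompany.com/resource1"
          type="application/vnd.com.mycompany.resource1+xml"/>
    <link rel="resource2" href="http://www.mycompany.com/resource2"
          type="application/vnd.com.mycompany.resource2+xml"/>
  </links>
</service>
"""

def discover(doc: str) -> dict:
    root = ET.fromstring(doc)
    return {link.get("rel"): (link.get("href"), link.get("type"))
            for link in root.iter("link")}

links = discover(SERVICE_DOC)
```

If the server later adds three new resources, the same loop picks them up; the type attributes remain hints, not guarantees.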
Kevin, it seems that somewhere along your path into REST you took a wrong turn :-) I suggest you try to erase some assumptions about how it works and 'start over'. Look at AtomPub and OpenSearch and see how the related service documents work; how clients do the discovery with these kinds of services. Then go to Amazon and analyse yourself while you step through a purchase. Try to view yourself as a machine client and the human-targeted links as part of some online-shopping-specific media type. Consider Amazon's home page as the service document that tells you where search is, where to browse categories, etc. Basically, it is all about the media types (and link relations, if you use those) you define for your domain. Look at AtomPub service documents, which are about the best you can find at the moment. You can also go and read some stuff I wrote on service documents[1][2]. But I suggest you take a look at the other stuff first. HTH, Jan [1] http://www.nordsc.com/blog/?p=80 [2] http://www.nordsc.com/blog/?cat=13 On Mar 25, 2010, at 12:34 AM, Kevin Duffey wrote: > > > I think I've done this before, but I am working on a service now and want to make sure I am doing it "as right" as it could be to remain a true REST based API. If this should be under a different thread, that is fine, I just figured it's along the discussion we're having here now. > > Basically, I have this service that I provide a single URL http://www.mycompany.com/service > > I thought that an OPTIONS should be requested before anything else? A GET could be used too I suppose, but basically am I understanding correctly that with this single URL, the right call to discover the services that can then be acted on is to send OPTIONS? If not, what is the "right way" a client should make the first call to the initial published URL? > > In response, I hope I am right in understanding that zero or more services should be returned. How exactly I am still at a loss due to this thread...
but how I have my service right now is like such: > > OPTIONS /service > > <service> > <links> > <link rel="resource1" href="http://www.mycompany.com/resource1" type="application/vnd.com.mycompany.resource1+xml"/> > <link rel="resource2" href="http://www.mycompany.com/resource2" type="application/vnd.com.mycompany.resource2+xml"/> > </links> > </service> > > Despite what this thread has talked about, I am not quite sure what to respond with.. which is why the above is what I assumed was ok. Short of knowing prior to the call to the published URL what services are available... I figured this gave back a discoverable list of links that a client can use. Is this correct... if not, please explain what should come back from the published URL. I am also still trying to figure out, from various posts here and other threads, how do you "discover" the media type to set the Content-Type to for specific resources if they don't provide a type="" in the links that are in the response. > > > Now, assuming my above response is not too far off... a client can then use any of those service URLs to, according to Jan I believe post/put/get/delete/head/options to. This is where I was arguing that it would be beneficial to avoid multiple round trips if the above returned with specific methods allowed on a given resource. Regardless of the outcome of that argument, for the sake of this we'll assume that with any resource returned, ALL method types can be called on it, naturally, with some possibly returning responses that require yet another call to the server with a different request method. Generally speaking, the initial URL resources should be to a collection, and thus a GET call can pull ALL /resource1 types, a POST to it can create a new one, and a PUT to /resource1/<item> can update an existing one if it exists.. and so on. > > I'll stop there to avoid continuing on in the wrong direction if that is wrong.
I am hoping Jan et al. will provide some constructive criticism and corrections to how the initial first call to the published URL should work.. how it should respond. I'd appreciate an example XML snippet of what the response would look like if what I have above is incorrect or not RESTful. > > Thanks all. > > > > ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
Hi Jan, et al.,
What assumptions exactly am I incorrect on? Having looked at OpenSearch, it's not very clear to me where it shows a walk-through from the initial URL request all the way through an example. I did however see that the response they return includes <url..> elements with a type="..." that explains the media type for that resource. So I am unclear as to why what I am doing is incorrect.
I am sorry Jan, but your responses don't seem to help much unfortunately. If anything, they confuse me even more.
You've responded on several occasions that it's all about the media types, yet when I ask you how a response that returns with different resources that can be used would instruct a non-human machine on what media type to use when using the resource, I don't get a clear-cut example or response. Specifically I am trying to understand how either a human or a machine can make use of one of potentially several different resources it receives from a previous request, and properly set the Content-Type to the right media type when making the request to the resource. If my response returns five different resources that a machine/client can use, and I don't specify the media types to be used with those resources, it can't just know them. There has to be a way a machine could discover the right media type to use with no prior info on the resource. If I upgrade my server and add three new resources, the machine/client should be able to use them simply by the
<link..> info returned.
Having read several articles/blogs on the subject, along with many threads here, I am simply not digesting the initial published URL request/response.. what the response might look like. If you could be so kind as to provide an example of what a response might look like for a simple webcart service, or some other service, that might offer a few sub-services initially, that would be most beneficial. Prior to all this HATEOAS stuff, I simply provided a SDK document with info on the resources, what they accepted, what they may return, what they were used for. Now it seems the REST/HATEOAS way is to not provide that sort of document, instead just publish a single URL and the client can learn what is possible simply by making a request to this one URL. So throw me a bone here would ya.. don't tell me it's about the media types..I know this.. show me with a simple snippet and some info on the makeup of the response or something to better illustrate this.
Thank you.
On Mar 25, 2010, at 4:35 AM, Kevin Duffey wrote: > > > Hi Jan, et al., > > What assumptions exactly am I incorrect on? Having looked at OpenSearch, it's not very clear to me where it shows a walk-through from the initial URL request all the way through an example. I did however see the response they return includes <url..> elements with a type="..." that explains the media type for that resource. So I am unclear as to why what I am doing is incorrect. Did not say it was incorrect - I only said that you seem to think in too complicated terms. > > I am sorry Jan, but your responses don't seem to help much unfortunately. If anything, they confuse me even more. What about OpenSearch description and Atom Service documents? These should explain things pretty well. > > You've responded on several occasions that it's all about the media types, yet when I ask you how a response that returns with different resources that can be used would instruct a non-human machine on what media type to use when using the resource, I don't get a clear-cut example or response. That is because at least I do not really understand what you are looking for. What do you mean by "what media type to use when using the resource", for example. > Specifically I am trying to understand how either a human or a machine can make use of one of potentially several different resources it receives from a previous request, and properly set the Content-Type to the right media type when making the request to the resource. Which Content-Type? Do you mean the Accept header? Clients should put in there those types they understand and that they think make sense for the given request (use Firebug to trace what Firefox is doing as an example). > If my response returns five different resources that a machine/client can use, and I don't specify media types that are to be set to use those resources, they can't just know it. So what? The type attributes are only hints anyway, not guarantees.
> There has to be a way a machine could discover the right media type to use with no prior info on the resource. It is simple: use those you understand. You will only find out when the request is made which type you actually got. > If I upgrade my server and add three new resources, the machine/client should be able to use them simply by the <link..> info returned. > > Having read several articles/blogs on the subject, along with many threads here, I am simply not digesting the initial published URL request/response.. what the response might look like. AtomPub service doc, opensearch description doc.... But any other bookmarkable enry state is fine, too. You can use Amazon even if the first interaction is a product page and not the home page, eh? > If you could be so kind as to provide an example of what a response might look like for a simple webcart service, or some other service, that might offer a few sub-services initially, that would be most beneficial. Prior to all this HATEOAS stuff, I simply provided a SDK document with info on the resources, what they accepted, what they may return, what they were used for. Now it seems the REST/HATEOAS way is to not provide that sort of document, instead just publish a single URL and the client can learn what is possible simply by making a request to this one URL. So throw me a bone here would ya.. don't tell me it's about the media types..I know this.. show me with a simple snippet and some info on the makeup of the response or something to better illustrate this. Design you media types so that a client can persue its goals from any possible entry point. Amazon for example tells you where you can search on *every* page, not just the home page. So you need not go back to the home page to enter a new search. Jan > > Thank you. > > > > > > > ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... 
Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
Hi,
I am reading the SOAP 1.2 spec (http://www.w3.org/TR/soap12-part0/). I understand that "few people use SOAP 1.2 and of those that do, even fewer use the WebMethod property" [Ref: Stefan T.]. Given that caching is enabled, because a) one can change the verb to a GET (with the SOAP Response MEP) and b) the URI is unique, I am struggling to understand why it has not been adopted more widely.
Thanks,
Sean.
Hi Jan,
>Did not say it was incorrect - I only said that you seem to think in too complicated terms.
Ok, can you elaborate on what I am doing that may be construed as too complicated?
>What about OpenSearch description and Atom Service documents? These should explain things pretty well.
Well, maybe it's just me, but the OpenSearch doc is not easy for me to read. I found a few documents, but they aren't really explaining it to my understanding, I guess. I've read specs/docs before, but the OpenSearch one, unless I am looking at the wrong one, just doesn't give me the info I am seeking to understand this.
>That is because at least I do not really understand what you are looking for. What do you mean by "what media type to use when using the resource", for example.
I thought I said... I'll try again. The atom-pub examples I've seen seem to also return the media type to use. What I mean is: if I make a GET call to the initial URL (first time a client uses the API), I am going to get various URLs back to the different services available to me (the client) at that time. That initial call has no resource state yet.. it's more of a discovery call to the published API service to figure out what is possible, what resources are available to be called. Each of the URLs I return in the <link..> elements provides the href URL to the service, the rel tag (which I honestly don't know what good it does in this specific first call/response), and the type=, which is the media type a client will use for the Content-Type when making a request to that service. That is what I mean by my <link> elements returning a type. It lets the client know "this is the media type you must use for this resource URL". It would also be the one set on the Accept header. I don't know if an accompanying document for the API SDK, like the OpenSearch Document shows, might explain that if the type is "application/vnd.package.someService+xml" it ALSO can return generic application/xml if that is specified as the Accept header.
What I meant by my above statement is that some examples I've seen do NOT return the media type in the <link> element in a response. So, if I were to get back a response that had 8 <link> elements, each pointing to a different resource (like search, history, etc.), how would I know what to set the Content-Type header to for each of those IF the <link> elements for those resources did not say "for search use application/vnd....search+xml and for history use application/vnd...history+xml"? Without the link elements telling me the type, I would not know what media type to use, short of a generic application/xml type. Is that more clear?
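To make the concern concrete, here is a hypothetical sketch of an entry-point response whose <link> elements carry the type attribute Kevin describes; every URL and vnd.* media type name below is invented for illustration:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Hypothetical entry-point representation; all URLs and media types invented -->
<services xmlns="urn:example:webcart">
  <link rel="search"
        href="http://www.service.com/search"
        type="application/vnd.package.search+xml"/>
  <link rel="history"
        href="http://www.service.com/history"
        type="application/vnd.package.history+xml"/>
</services>
```

Strip the type attributes from such a response and a client is left with only the URL to guess from, which is exactly the situation being asked about.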
>Which Content-Type? Do you mean the Accept header? Clients should put in there those types they understand and that they think make sense for the given request (use Firebug to trace what FireFox is doing as an example).
I don't understand this. How would a client know to use the application/vnd.package.search+xml media type for the Content-Type header (and Accept header) for a link with href="http://www.service.com/search"? Nothing in the URL tells me what the media type should be, unless I am to just arbitrarily pull the last word off the end of the URL path, append it to application/vnd..., and hope for the best.
>>If my response returns five different resources that a machine/client can use, and I don't specify media types that are to be set to use those resources, they can't just know it.
>So what? The type attributes are only hints anyway, not guarantees.
Huh? What do you mean they are not guarantees? If a link element says type="application/vnd.package.serviceName+xml", why would I not use it.. or why would it only be a hint? What else might it be if it's not what it specifies?
>>There has to be a way a machine could discover the right media type to use with no prior info on the resource.
>It is simple: use those you understand. You will only find out when the request is made which type you actually got.
Huh? So wait.. I make the initial entry call to the public service URL. I get back a response with some number of <link> elements, each to a different service, specifying a type="" with specific media type to use for that service. It sounds like what you are saying is, if there is no type="" with it, just make a call to it, and it will return back in the Content-Type of the response the media type. So then use that. Is that right?
I don't quite understand, though.. what call am I making to the service? If for example I get back a service named xxyyzz, I set up my request with that URL; I don't know the media type, so I don't set Content-Type or Accept headers, because I simply don't know yet. I can't possibly know or guess at this point that the media type on the server side is set to handle application/vnd.package.xxyyzz+xml. So when I make a request to the service, it's going to come back with a media-type-not-supported error. As I use Jersey on the server side, Jersey won't even get to my service methods that would fill out the response headers properly.. so I won't get back a response with the Content-Type header indicating to me the right media type to use either. So I am no better off. Hence, I am confused how, short of a link element specifying the media type to use, you figure it out by just discovering it: no document, no prior knowledge. How can a machine/bot figure it out?
>AtomPub service doc, OpenSearch description doc... But any other bookmarkable entry state is fine, too. You can use Amazon even if the first interaction is a product page and not the home page, eh?
I don't know that a web page like Amazon is a good example. All the links are using the same media type throughout. In the case of a RESTful API that has more than one service, each service mapped to a specific media type, it's a bit different to me than a web page with <a href..> links all throughout. I know what you mean by the first page being a product page, not the home page. That is.. if at a later time a client uses a cached URL to a service, not one from the initial API URL, it should work the same. Agreed. I get that. It fits the cacheable constraint. I planned on every single <link> element throughout my entire API always returning the type="", so that whether it's a link resource from the initial public API URL, or a cached link called months later, it will respond with <link> elements with type="" in them so that service calls can be made from that point.
>Design your media types so that a client can pursue its goals from any possible entry point. Amazon for example tells you where you can search on *every* page, not just the home page. So you need not go back to the home page to enter a new search.
Yes, again.. cached URLs can be entry points later on, so I need to provide the right info in responses to continue from. I get this. You say "design your media types...". I don't quite know what you mean by that. I think you know Java/Jersey and how you set up a path to a service, and for GET/PUT/DELETE/POST/OPTIONS, etc., you use the annotations on the Java methods and so forth. Each service I provide as part of my API would have each method for that service responding with the Content-Type header set to the media type of that service. I don't understand, though, what you mean by "design your media types". I assumed that setting the media types this way on the service methods was the design. Responding with the right media type set is also correct as far as I know. So am I missing something?
Thanks Jan.
On Mar 26, 2010, at 5:58 AM, Kevin Duffey wrote:

> >So what? The type attributes are only hints anyway, not guarantees.
>
> Huh? What do you mean they are not guarantees? If a link element says type="application/vnd.package.serviceName+xml", why would I not use it.. or why would it only be a hint?

Because the server might change between you looking at the type attribute and your actual request. The server will tell you what the media type of its response is right there, in the response. And you just need to deal with what you get.

> It sounds like what you are saying is, if there is no type="" with it, just make a call to it, and it will return back in the Content-Type of the response the media type. So then use that. Is that right?

Well, of course :-) And even if there is a type attribute it is still the same. A type attribute is just another form of client-driven content negotiation that removes the round trip you would have with a 300 Multiple Choices response: instead of selecting an alternative from the 300 response's body, the server gives you the alternatives up front.

> I don't quite understand tho.. what call am I making to the service? If for example I get back a service named xxyyzz, I set up my request with that URL, I don't know the media type, so I don't set Content-Type or Accept headers, because I simply don't know yet.

Wrong! You know what *you* can handle, right? That is what you put into the Accept header. The Accept header communicates the client's capabilities, not its assumptions.

> I can't possibly know or guess at this point that the media type on the server side is set to handle application/vnd.package.xxyyzz+xml. So when I make a request to the service, it's going to return back with a media type not supported.

Actually, it should return a 406 Not Acceptable.

> How can a machine/bot figure it out?

As I said: *you* know what *you* can handle. That is what you tell the server.

> You say "design your media types..." I don't quite know what you mean by that.

For example: when you design a machine-to-machine shopping system, you need one (or several) media types to serialize the application states. Just as you need AtomPub/Atom for blogging, OpenSearch for searching, and HTML for human-targeted Web pages. *All* design activity when you create a REST service is specifying (or extending) media types (and/or link relations).

> I think you know Java/Jersey and how you set up a path to a service, and for get/put/delete/post/options, etc you use the annotations on the java methods and so forth. Each service I provide as part of my API would have each method for that service returning with response header of Content-Type set for the media type of that service. I don't understand tho what you mean by design your media types.

Try this thought experiment: write a machine client to Amazon and design a media type application/shopping+xml that provides the necessary semantics for the machine client to e.g. select from a catalogue, compare prices, compare product features, place an order, cancel an order, obtain a report on past orders, etc. See http://www.nordsc.com/blog/?cat=13 for a more extensive treatment[1].

Jan

[1] I hope to soon continue that series, I am just too busy with other things.

-----------------------------------
Jan Algermissen, Consultant
NORD Software Consulting
Mail: algermissen@...
Blog: http://www.nordsc.com/blog/
Work: http://www.nordsc.com/
-----------------------------------
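Jan's point, that the client sends what *it* understands in Accept and the server either picks a representation or answers 406 Not Acceptable, can be sketched with a toy negotiation function. Python is used only for brevity (the thread's context is Java/Jersey); the media type names are invented and no real framework is implied:

```python
# Toy server-side content negotiation for a single resource.
# The client states what it can handle in Accept; the server picks a
# representation it can produce, or answers 406 Not Acceptable.
# Media type names below are invented for illustration.

AVAILABLE = [
    "application/vnd.package.search+xml",  # preferred, most specific
    "application/xml",                     # generic fallback
]

def negotiate(accept_header):
    """Return (status, content_type) for a GET against the resource."""
    # Split "a, b, c" into bare types (q-values ignored for brevity).
    wanted = [t.split(";")[0].strip() for t in accept_header.split(",")]
    for media_type in wanted:
        if media_type in AVAILABLE:
            return 200, media_type       # first acceptable match wins
        if media_type == "*/*":
            return 200, AVAILABLE[0]     # client accepts anything
    return 406, None                     # nothing the client accepts
```

The client never guesses the server's type up front: it advertises its own capabilities in Accept and reads the actual type from the response's Content-Type header.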
Volume 9 of This week in REST is up on the REST wiki - http://rest.blueoxen.net/cgi-bin/wiki.pl?RESTWeekly_Mar_22_2010 and the blog - http://wp.me/pMXr1-1d. For contributing links this week visit http://rest.blueoxen.net/cgi-bin/wiki.pl?RESTWeekly_Mar_29_2010 Cheers, Ivan
'lo all,

Over the weekend I started working on a very small HATEOAS sample application to try and demonstrate what, in my mind, should make a good REST/HATEOAS API, based on the various topics and ideas I've been reading about lately. The project is up on GitHub, with the intention of having others contribute/fork/patch and improve on the API in such a way that we can have a simple project which demonstrates what a good API should theoretically be.

The project is up at: http://github.com/talios/wellrested

The code is a simple maven2-based clojure/compojure web application. The README file in the root of the repository (and on the front of the URL above) serves as documentation for the various media types in the API, and the conventions used.

I'm keen on hearing the collective thoughts of your minds on this; some questions I've started to wonder about include:

* is using PATCH for sending non-CRUD operations a good, or a bad, thing?
* is adding a rel attribute to standard media types good/bad? Such as "application/x-www-form-urlencoded; rel=wellrested-newstaffmember" over "application/newstaff+x-www-form-urlencoded"?

Look forward to your comments..
Mark

--
Pull me down under...
Am 29.03.10 10:21, schrieb Mark Derricutt:

Hi Mark, nice to see that you used compojure. However, as this is a general REST discussion list I don't want to dig into the technical detail of your implementation, and I'll restrict my comments to the area of REST.

> The code is a simple maven2 based clojure/compojure web application. The README file in the root of the repository (and on the front of the URL above) serves as documentation for the various media types in the API, and the conventions used.
>
> I'm keen on hearing the collective thoughts of your minds on this, some questions I've started to wonder about include:
>
> * is using PATCH for sending non-CRUD operations a good, or a bad thing?
> * is adding a rel attribute to standard media types good/bad? such as "application/x-www-form-urlencoded; rel=wellrested-newstaffmember" over "application/newstaff+x-www-form-urlencoded"?

I don't like how you're using the PATCH command. You are not _patching_ the resource "stafflist". Instead you're transferring an announcement resource to the server. So I'd think a POST to /stafflist/announcements would be more appropriate. The PATCH method changes the state of the patched resource. In this case I doubt the state of the stafflist is changed -- the server just prints the announcements and the stafflist isn't altered at all.

Next, a technical issue, however not specific to compojure: I see that you hardcoded the base URL of the application for generating links. The base URL can be extracted from the servlet environment, of course, but the mapping of URL to resource remains split-brained: URL to impl-of-resource is mapped by the web framework (compojure), but the reverse way, impl-of-resource to URL, is done from within the resource code. So there is a duplication of mapping and a danger of inconsistency. Is there any RESTful framework that addresses this issue? I suppose you'll need a separation between the resource implementation and the dispatcher of the resources. And this dispatcher must provide information at runtime on how a certain resource can be addressed. (This is one of the issues I'd like to solve in a next beat of my compojure-rest library.)

-billy.
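The split-brain billy describes (framework maps URL to resource, resource code builds URLs by hand) can be avoided by consulting one route table in both directions. A toy sketch of the idea, not tied to compojure or any real framework, with all route names and paths invented:

```python
# One route table used both for dispatch (URL -> resource name) and for
# link generation (resource name -> URL), so the two mappings cannot
# drift apart. Route names and paths are invented for illustration.

ROUTES = {
    "stafflist": "/stafflist",
    "staff-member": "/stafflist/{id}",
}

def url_for(name, **params):
    """Generate a link from the same table the dispatcher uses."""
    return ROUTES[name].format(**params)

def dispatch(path):
    """Crude matcher: find the route name for a concrete path."""
    for name, template in ROUTES.items():
        if path == template:
            return name  # exact, parameterless route
        if "{" in template and path.startswith(template.split("{")[0]):
            return name  # parameterized route, prefix match
    return None
```

Because url_for reads the same ROUTES table dispatch uses, changing a path in one place updates both directions at once; resource code never hardcodes a base URL.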
Mark,
in general, it makes it easier to ensure you produce a RESTful API if you limit all your descriptive effort to media type specifications. This means that, instead of describing the API you should *only* describe media type semantics. Any description of your API leads to undesired coupling if the client implements aspects of the description.
From the short look I took, the description is somewhat 'media type oriented', but I'd suggest you refactor the API description into 'specifications' of the media type profiles you use (and I'd suggest using a 'profile' parameter instead of a 'rel' parameter).
Regarding the JSON example you gave, there is a flaw though:
<quote>
Link: <xxxxx>; rel="create"; type="application/x-www-form-urlencoded; rel=wellrested-newstaffmember"; title="Create a new staff member"; method="POST"
Accept-Patch: application/x-www-form-urlencoded; rel=wellrested-announcement

[
  {
    "links": [
      {
        "rel": "view",
        "url": "xxxxx",
        "type": "application/wellrested-staffmember+json"
      },
      {
        "rel": "delete",
        "url": "xxxxx",
        "method": "DELETE"
      }
    ],
    "id": "0c6d5bfc-5e1f-455e-b46d-3b873868519a",
    "name": "Mark",
    "status": "active"
  }
]
</quote>
You do not need link relations such as create, view, or delete, because HTTP already gives you POST, GET and DELETE. Take a look at the Atom Publishing Protocol (RFC 5023) and re-design your approach along the lines of collections, categories and accept values declared in a service document.
As a general rule, use link relations to describe what the nature of a resource is ("the collection of staff members") instead of what operations the resource is for ("rel=create").
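Along those lines, a service document in the style of RFC 5023 might look roughly like this (a hypothetical sketch; the URI and titles are invented, and the uniform interface, POST to the collection and GET/DELETE on members, replaces the create/view/delete relations):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Hypothetical AtomPub-style service document; URI and titles invented -->
<service xmlns="http://www.w3.org/2007/app"
         xmlns:atom="http://www.w3.org/2005/Atom">
  <workspace>
    <atom:title>WellRested</atom:title>
    <collection href="http://example.org/stafflist">
      <atom:title>Staff members</atom:title>
      <accept>application/wellrested-staffmember+json</accept>
    </collection>
  </workspace>
</service>
```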
Jan
On Mar 29, 2010, at 10:21 AM, Mark Derricutt wrote:
>
>
> 'lo all,
>
> Over the weekend I started working on a very small HATEOAS sample application to try and demonstrate, what in my mind should make a good REST/HATEOAS API based on the various topics and ideas I've been reading about lately, the projects up on github with the intention of having others contribute/fork/patch and improve on the API in such a way that we can have a simple project which demonstrates what a good API should theoretically be.
>
> The project is up at:
>
> http://github.com/talios/wellrested
>
> The code is a simple maven2 based clojure/compojure web application. The README file in the root of the repository (and on the front of the URL above) serves as documentation for the various media types in the API, and the conventions used.
>
> I'm keen on hearing the collective thoughts of your minds on this, some questions I've started to wonder about include:
>
> * is using PATCH for sending non-CRUD operations a good, or a bad thing?
> * is adding a rel attribute to standard media types good/bad? such as "application/x-www-form-urlencoded; rel=wellrested-newstaffmember" over "application/newstaff+x-www-form-urlencoded"?
>
> Look forward to your comments..
> Mark
>
>
>
>
>
>
> --
> Pull me down under...
>
>
>
>
-----------------------------------
Jan Algermissen, Consultant
NORD Software Consulting
Mail: algermissen@...
Blog: http://www.nordsc.com/blog/
Work: http://www.nordsc.com/
-----------------------------------
On 03/29/2010 05:02 AM, Jan Algermissen wrote:

> Mark, in general, it makes it easier to ensure you produce a RESTful API if you limit all your descriptive effort to media type specifications. This means that, instead of describing the API, you should *only* describe media type semantics. Any description of your API leads to undesired coupling if the client implements aspects of the description.

If you create a set of actions in the link (rel='JarJar'), don't you have the same type of coupling?

-- bk
What would be the recommended approach for describing Semantic RESTful Web Services? On the syntactic level there is WADL, which does not yet have wide adoption, and there is OWL-S, which tries to provide an abstract framework for describing Web Services. What about using WSDL 2.0 for describing RESTful Web Services? Two possible approaches I have seen are:
- Using RDFa to semantically annotate XHTML web pages so they are machine readable.
- Extending OWL-S to define an ontology that describes the service, in particular the service grounding part. See http://en.wikipedia.org/wiki/OWL-S.

Best regards,
Dário
On Mar 31, 2010, at 4:57 PM, Dário Abdulrehman wrote:

> What would be the recommended approach for describing Semantic RESTful Web Services?

What is your definition of "Semantic RESTful Web Services"? Why not just "RESTful Web Service"?

> On the syntactical level there is WADL which does not yet have a wide adoption and there is OWL-S which tries to provide an abstract framework for describing Web Services. What about using WSDL 2.0 for describing RESTful Web Services?

I think AtomPub service documents are next to[1] enough for *any* service.

Jan

[1] http://www.nordsc.com/blog/?p=80

-----------------------------------
Jan Algermissen, Consultant
NORD Software Consulting
Mail: algermissen@...
Blog: http://www.nordsc.com/blog/
Work: http://www.nordsc.com/
-----------------------------------
On Wed, Mar 31, 2010 at 4:24 PM, Jan Algermissen <algermissen1971@...> wrote:

> What is your definition of "Semantic RESTful Web Services"? Why not just "RESTful Web Service"?

Given the REST architectural style, the 'Semantic' adjective becomes superfluous, since resources become in principle machine readable and discoverable by using REST principles (HATEOAS, etc.).

> I think AtomPub service documents are next to[1] enough for *any* service.
>
> [1] http://www.nordsc.com/blog/?p=80

It seems what you are achieving with AtomPub could also be achieved using OWL-S. Thanks for the pointer.
On Mar 31, 2010, at 5:39 PM, Dário Abdulrehman wrote:

> It seems what you are achieving with AtomPub could also be achieved using OWL-S. Thanks for the pointer.

Yes. I just do not favor RDF so much because the entry barrier is rather high (for the usual enterprise developer); it is not as common as 'Documents'. (See the comments to the blog entry.) But RDF should probably be the way to go in the long run (e.g. given its merging capabilities).

Jan

-----------------------------------
Jan Algermissen, Consultant
NORD Software Consulting
Mail: algermissen@...
Blog: http://www.nordsc.com/blog/
Work: http://www.nordsc.com/
-----------------------------------
Hello Sean. Not sure; your question is too open, I mean, any answer will fit. Now, please bear in mind developers usually do not see SOAP as something they have to work with manually. So they wait for tools to offer that, and probably there are no tools supporting the new versions. I was once working for one big tool vendor, on a product evaluation, and I told them JAX-RPC's days were numbered. They launched the tool a few days after JAX-RPC was deprecated. See what I mean?

William Martinez.

--- In rest-discuss@...m, Sean Kennedy <seandkennedy@...> wrote:
> Hi,
> I am reading the SOAP 1.2 spec (http://www.w3.org/TR/soap12-part0/). I understand that "few people use SOAP 1.2 and of those that do, even fewer use the WebMethod property" [Ref: Stefan T.]. Given that caching is enabled because a) one can change the verb to a GET (with the SOAP Response MEP) and b) that the URI is unique, I am struggling to understand why it has not been adopted more?
>
> Thanks,
> Sean.
Hello all. I got the newsletter from InfoQ this week, and suddenly I noticed something that has been there for a long time, but until now I didn't realize it.

In the SOA channel articles and news, there was only one of each: one article about the REST maturity levels, and one news item about REST security. Oh my... I thought there was some mistake, that those two items belonged in the REST channel, and then I realized that there was no REST channel on InfoQ!

Then, I had a quick Twitter chat with Ryan Sloboyan, Editor from InfoQ. It seems REST is seen just as a way to create services, as opposed to the SOAP / WS-* lineage. Ryan told me there is always a possibility to create a REST-only channel, but he thinks that will be a narrow one, with not so many readers. The expectation, then, is that REST readers will come just to learn how to create easy services not using SOAP.

Now, Jack Vaughan from SearchSOA was in a fireside chat with me, at the Java Symposium from TheServerSide, where I was to talk about REST APIs and their real meaning. He told me his idea of REST was similar: that of a new way of doing services for SOA. We got a full room, and I asked if anyone had ever read Roy's dissertation. None had, and some had faces of "who the hell is that Roy guy?". The question of how many thought REST was a new way of doing services yielded several hands up. Also the one about REST as an HTTP-driven RPC.

So, all in all, it seems the idea of REST as an easy-services-creation technique is strong, and has even influenced the InfoQ categorization of articles and news. I know some of you do post on InfoQ, and are even editors. What are your thoughts? Do you think it is good to keep that idea? Do you think that is actually the idea? What do you think of posting about REST as an architectural style? I want to hear your opinions, since I feel that would be an interesting discussion.

William Martinez Pomares.
Interesting... when I first heard of REST a few years ago, I thought I had created a REST API. I used POST for all calls, and it was basically in the form of <url>/method, aka HTTP RPC. I hadn't read anything else about it.. just that it was basically a much easier way of doing XML-RPC over HTTP.
Then I learned about Jersey almost 2 years ago or so.. and started to understand the use of the various methods for CRUD like operations. I still used it much like HTTP RPC, only now it was to resources and I avoided query params except for filtering results on a GET call to a collection resource.
Lately, as Jan and others can testify to, I've been asking a lot of questions to try to grasp the real REST meaning, and I've struggled with a few remaining concepts of it, but I have learned a lot from this forum and the jersey forum, a few books, and such.
I tend to think most developers read a little bit about REST and come away with much the same thing I did: an easier RPC-like mechanism using HTTP methods and headers. Even the use of media types is not very common, as those I talk to just use application/xml or application/json for request and response types.
I'll be honest.. learning the "true" RESTful API style has put me off a little bit. For example, while working on a service where you would make a REST call to enable/disable something, it seemed less likely that REST was the proper architecture to use, yet I wanted to provide a single API, not two disjointed APIs. I am still not 100% clear on whether such things are within the realm of using REST, changing application state to turn something on or off, for example. I think it's fine, but I do see how it can be quite confusing trying to learn about resource state and application state, especially with many of us coming from the servlet/HttpSession stateful side of things and trying to grasp this concept that the client maintains the state (when there are two states), and so forth.
--- On Wed, 3/31/10, William Martinez Pomares <wmartinez@acoscomp.com> wrote:
From: William Martinez Pomares <wmartinez@...>
Subject: [rest-discuss] What do you think about REST being a synonym of Service creation technique?
To: rest-discuss@yahoogroups.com
Date: Wednesday, March 31, 2010, 6:25 PM
Hi Dario, We use a microformat-like language called hRESTS [1] to structure Web pages documenting Web APIs (the most common approach adopted right now) and on top of this we add annotations à la SAWSDL based on what we refer to as MicroWSMO [2]. This approach gives an RPC view over these Web APIs in a way that can be automatically processed by machines to identify that there is a service behind it and what it looks like. Arguably this may not be the most appropriate approach for truly RESTful services but we are targeting Web APIs, and based on our surveys a bit more than 50% of the Web APIs out there are not entirely RESTful anyway. We are also looking into providing other solutions more aligned with RESTful principles and we'll then try and see to what extent we can map both ways of representing services. We currently have an editing tool called SWEET [3] that supports you in annotating Web APIs based on the technologies above and a repository for hosting these annotations called iServe [4], which is able to extract the annotations and generate RDF out of them. The RDF generated is exposed following linked data principles to better support its retrieval, querying and use. Hope you find this of interest. Cheers, Carlos [1] http://www.vitvar.com/doc/WI2008-KopeckyGV.pdf [2] http://cms-wg.sti2.org/TR/d12/v0.1/ [3] http://sweet.kmi.open.ac.uk/ [4] http://technologies.kmi.open.ac.uk/iserve/ -- Dr. Carlos Pedrinaci Knowledge Media Institute - The Open University Walton Hall, Milton Keynes, MK7 6AA United Kingdom Tel: +44 1908 654773 Fax: +44 1908 653169 Dário Abdulrehman wrote: > > > What would be the recommended approach for describing Semantic RESTful > Web Services? > On the syntactical level there is WADL, which does not yet have wide > adoption, and there is OWL-S, which tries to provide an abstract framework > for describing Web Services. > What about using WSDL 2.0 for describing RESTful Web Services? 
> > Two possible approaches I have seen are: > - Using RDFa to semantically annotate XHTML web pages so they are > machine readable. > - Extending OWL-S to define an ontology that describes the service, in > particular the service grounding part. See > http://en.wikipedia.org/wiki/OWL-S <http://en.wikipedia.org/wiki/OWL-S>, > > Best regards, > Dário > > > > > >
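The hRESTS approach Carlos describes annotates ordinary HTML documentation with class names such as `operation`, `method` and `address` so machines can recover a service description from a human-readable page. A minimal sketch of such a scanner, with an invented sample page (not taken from any real API documentation):

```python
from html.parser import HTMLParser

# Minimal hRESTS scanner: collects each "operation" block's HTTP method
# and address from class-annotated HTML. The class names follow the
# hRESTS microformat; the sample markup below is invented.
class HRestsScanner(HTMLParser):
    def __init__(self):
        super().__init__()
        self.operations = []
        self._capture = None  # which field the next text node fills

    def handle_starttag(self, tag, attrs):
        classes = dict(attrs).get("class", "").split()
        if "operation" in classes:
            self.operations.append({})
        elif "method" in classes and self.operations:
            self._capture = "method"
        elif "address" in classes and self.operations:
            self._capture = "address"

    def handle_data(self, data):
        if self._capture and self.operations:
            self.operations[-1][self._capture] = data.strip()
            self._capture = None

sample = """
<div class="service">
  <div class="operation">
    <span class="method">GET</span>
    <code class="address">http://example.org/orders/{id}</code>
  </div>
</div>
"""

scanner = HRestsScanner()
scanner.feed(sample)
print(scanner.operations)
```

A tool like SWEET would help produce such markup; the sketch only shows why the annotations are machine-processable at all.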
Hi William,
Interesting... My gut feeling is that even though the SOAP 1.2 spec is quite RESTful, the tool implementations are not. I have no proof of that though. I also figure that very few will adopt SOAP 1.2 because a) both the client and the server must change and b) given that the client request is basically an HTTP GET (assuming the SOAP Response MEP), one may as well go the whole hog and adopt REST...
Sean.
________________________________
From: William Martinez Pomares <wmartinez@...>
To: rest-discuss@yahoogroups.com
Sent: Thu, 1 April, 2010 2:09:49
Subject: [rest-discuss] Re: poor adoption of SOAP 1.2?
Hello Sean.
Not sure; your question is too open, I mean, any answer would fit.
Now, please bear in mind that developers usually do not see SOAP as something they have to work with manually. So they wait for tools to offer it, and probably there are no tools supporting the new versions. I was once working for one big tool vendor, on a product evaluation, and I told them JAX-RPC's days were numbered. They launched the tool a few days after JAX-RPC was deprecated. See what I mean?
William Martinez.
--- In rest-discuss@yahoogroups.com, Sean Kennedy <seandkennedy@...> wrote:
>
> Hi,
> I am reading the SOAP 1.2 spec (http://www.w3.org/TR/soap12-part0/). I understand that "few people use SOAP 1.2 and of those that do, even fewer use the WebMethod property" [Ref: Stefan T.]. Given that caching is enabled because a) one can change the verb to a GET (with the SOAP Response MEP) and b) the URI is unique, I am struggling to understand why it has not been adopted more widely.
>
> Thanks,
> Sean.
>
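Sean's cacheability point can be sketched concretely: under the SOAP 1.2 Response MEP the request carries no SOAP envelope, it is a plain HTTP GET against a unique URI, so ordinary HTTP caches can store the (SOAP) response. A minimal sketch, assuming an invented stock-quote endpoint:

```python
# Sketch of the SOAP 1.2 Response MEP request shape. The endpoint URI
# is invented for illustration; application/soap+xml is the registered
# SOAP 1.2 media type.
def soap_response_mep_request(uri):
    """Build the request lines for a SOAP 1.2 Response MEP exchange."""
    return [
        f"GET {uri} HTTP/1.1",
        "Host: example.org",
        # The client asks for a SOAP envelope back; the request itself
        # contains none, which is what makes it cacheable like any GET.
        "Accept: application/soap+xml",
    ]

req = soap_response_mep_request("/stockquote?symbol=ACME")
print("\n".join(req))
```

Because the operation and its parameters live entirely in the URI, two identical calls hit the same cache entry, exactly the property a SOAP POST body destroys.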
Yes, Kevin. As you say, there are a lot of people coming from the development side, the practical one, where you just need to know how it works and then jump into coding right away. So, one thing is whether your code actually works for you, as someone in the audience asked. If it works, why bother trying to make it work as REST if that complicates things without adding much benefit? Well, REST was made to obtain some particular benefits, given a very particular architecture: the Web. So, if you don't need those benefits, but some others you can get with simpler things, then it is common sense to go KISS. The question is still the same. REST discussions are held in the SOA arena, as if REST were a SOA sub-product. If SOA is dead, is REST then dead as well? Does REST live only in the SOA realm? Is REST just a Web services alternative? Are services the only topic we can talk about in REST? I wonder. William Martinez Pomares --- In rest-discuss@yahoogroups.com, Kevin Duffey <andjarnic@...> wrote: > > > Interesting... when I first heard of REST a few years ago, I thought I created a REST api. I used POST for all calls, and it was basically in the form of <url>/method aka.. HTTP RPC. I hadn't read anything else about it.. just that it was basically a much easier way of doing xml-rpc using http. > > Then I learned about Jersey almost 2 years ago or so.. and started to understand the use of the various methods for CRUD like operations. I still used it much like HTTP RPC, only now it was to resources and I avoided query params except for filtering results on a GET call to a collection resource. > > Lately, as Jan and others can testify to, I've been asking a lot of questions to try to grasp the real REST meaning, and I've struggled with a few remaining concepts of it, but I have learned a lot from this forum and the jersey forum, a few books, and such. > > I tend to think most developers read a little bit about REST and come away with much the same thing I did.. 
an easier RPC like mechanism using HTTP methods and headers. Even the use of media types is not very common as those I talk to just use application/xml or application/json for request and response types. > > I'll be honest.. learning the "true" RESTful API style has put me off a little bit. For example, while working on a service that you would make a REST call to enable/disable something, it seemed less likely that REST was the proper architecture to use, yet I wanted to provide a single API, not two disjointed APIs. I am still not 100% clear on if such things are within the realm of using REST to change application state to turn something on or off for example, I think it's fine, but I do see how it can be quite confusing trying to learn about resource state and application state, especially with many of us coming from the servlet/httpsession stateful side of things and trying to grasp this concept of the client maintains the state (when there are two states) and so forth. > > >
Jan Algermissen wrote: > On Mar 26, 2010, at 5:58 AM, Kevin Duffey wrote: > > >>> So what? The type attributes are only hints anyway, not guarantees. >>> >> Huh? What do you mean they are not guarantees? If a link element says type="application/vnd.package.serviceName+xml", why would I not use it.. or why would it only be a hint? >> > > Because the server might change between you looking at the type attribute and your actual request. The server will tell you what the media type of its response is right there, in the response. And you just need to deal with what you get. > I disagree with that point because the intention of the link (which is what the client cares about) does not alter with regard to changes on the server side. I don't see the point in a type attribute if it is not intended as a 'guarantee' of the correct terms of negotiation for that step of the application. I agree that it's not intended as a guarantee of the response, but this is due specifically to statelessness and the way that the conneg mechanism is defined by HTTP. I can't think of a good reason a client, if it understands such an attribute, should not alter its Accept header accordingly. >> It sounds like what you are saying is, if there is no type="" with it, just make a call to it, and it will return back in the Content-Type of the response the media type. So then use that. Is that right? >> > > Well, of course :-) And even if there is a type attribute it is still the same. A type attribute is just another form of client-driven content negotiation that removes the roundtrip you would have with a 300 Multiple Choices response: instead of selecting an alternative from the 300 response's body, the server gives you the alternatives up front. > > If it's "client-driven content negotiation" then why would this not alter the way clients express their preferences in that given instance i.e. 
why would a client not alter their Accept header to reflect the desire for an application state the link is driving them towards? >> I don't quite understand tho.. what call am I making to the service? If for example I get back a service named xxyyzz, I set up my request with that URL, I don't know the media type, so I don't set Content-Type or Accept headers, because I simply don't know yet. >> > > Wrong! You know what *you* can handle, right? That is what you put into the Accept header. The Accept header communicates the client's capabilities, not its assumptions. > Wrong! It communicates client capabilities if, and only if, the hypermedia driving the client's application state has not been specific in the given link. Which is supported by 2616, where the Accept header is defined to represent the preferences of a *request*, not a client: "Accept headers can be used to indicate that the request is specifically limited to a small set of desired types, as in the case of a request for an in-line image." Cheers, Mike
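Mike's position, that a client which understands a link's `type` attribute should narrow its Accept header for that one request, can be sketched as follows; the fallback capability list is an illustrative assumption, and the hinted media type is the one from the thread:

```python
# Per-request Accept selection: prefer the link's hinted type, keep the
# client's general capabilities as lower-quality fallbacks, since the
# type attribute is a hint, not a guarantee of the response.
GENERAL_CAPABILITIES = ["application/xml", "application/json"]

def accept_for(link):
    """Choose the Accept header value for following a single link."""
    hinted = link.get("type")
    if hinted:
        rest = [f"{t};q=0.5" for t in GENERAL_CAPABILITIES if t != hinted]
        return ", ".join([hinted] + rest)
    # No hint: the header falls back to expressing general capabilities.
    return ", ".join(GENERAL_CAPABILITIES)

link = {"href": "/orders/42", "type": "application/vnd.package.serviceName+xml"}
print(accept_for(link))
```

This keeps both readings of RFC 2616 compatible: the header still expresses what the client can handle, but weighted per request by what the hypermedia told it to expect.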
How should a client decide its next step? Looking at the mime type, representation content and relations, or is it ok for the client to keep track of its path on its desired process (the one it expects *all* servers to follow)? It seems like the second one is not REST, is that correct? So the client should always infer its next step only based on its current representation, media type and relations? Regards Guilherme Silveira Caelum | Ensino e Inovação http://www.caelum.com.br/
Hi Guilherme, Your question reminds me of this previous discussion: http://tech.groups.yahoo.com/group/rest-discuss/messages/14643?threaded=1&m=e&var=1&tidx=1 Ivan On Sun, Apr 4, 2010 at 03:33, Guilherme Silveira < guilherme.silveira@...> wrote: > > > How should a client decide its next step? Looking at the mime type, > representation content and relations or is it ok for the client to > keep track of its path on its desired process (the one he expects > *all* servers to follow)? > > It seems like the second one is not REST, is it correct? So the client > should always infer its next step only based on its current > representation, media type and relations? > > Regards > > Guilherme Silveira > Caelum | Ensino e Inovação > http://www.caelum.com.br/ > >
Guilherme, On Apr 4, 2010, at 3:33 AM, Guilherme Silveira wrote: > How should a client decide its next step? Looking at the mime type, > representation content and relations or is it ok for the client to > keep track of its path on its desired process (the one he expects > *all* servers to follow)? > > It seems like the second one is not REST, is it correct? So the client > should always infer its next step only based on its current > representation, media type and relations?
- the client sends a request to some 'entry point' resource
- the client receives the 'main' representation of the resource
- according to its configuration for the media type of that 'main' representation, the client automatically performs sub-requests until it reaches the next steady state (e.g. a Web page with images and style sheets etc.)
- the steady state reached is the current application state
- from this current application state the client tries to follow an available transition that takes it towards its overall goal
So, yes: the client decides its next step based only on the available transitions from the current steady state and its own overall goal. The client does not keep track of its prior interactions. (But prior interactions can of course change the set of rules the client applies to every current steady state. IOW, the state that the client is in regarding its overall goal can of course affect the decisions it makes.) Jan > > Regards > > Guilherme Silveira > Caelum | Ensino e Inovação > http://www.caelum.com.br/ > > > ------------------------------------ > > Yahoo! Groups Links > > > ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@acm.org Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
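Jan's loop can be sketched over an in-memory "web"; the resources, link relations and the notion of "embedded" parts below are invented for illustration:

```python
# Toy hypermedia client following Jan's steps: fetch the main
# representation, pull in embedded parts to reach a steady state, then
# pick the transition matching the client's own goal. All URIs and
# relations are invented.
WEB = {
    "/entry":    {"links": [{"rel": "orders", "href": "/orders"}],
                  "embedded": ["/style"]},
    "/style":    {"links": [], "embedded": []},
    "/orders":   {"links": [{"rel": "checkout", "href": "/checkout"}],
                  "embedded": []},
    "/checkout": {"links": [], "embedded": []},
}

def reach_steady_state(uri):
    """Fetch the main representation plus everything the media type's
    processing rules say to fetch automatically (images, stylesheets...)."""
    fetched = [uri]
    for sub in WEB[uri]["embedded"]:
        fetched.append(sub)   # automatic sub-requests, no decision involved
    return fetched            # this set *is* the steady state

def next_step(uri, goal_rel):
    """From the current steady state, follow the transition whose link
    relation matches the client's overall goal, if one is available."""
    for link in WEB[uri]["links"]:
        if link["rel"] == goal_rel:
            return link["href"]
    return None

state = reach_steady_state("/entry")   # ['/entry', '/style']
step = next_step("/entry", "orders")   # '/orders'
```

Note that `next_step` consults only the current representation and the goal, never a history of prior requests, which is the point under discussion.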
Jan Algermissen wrote: > Guilherme, > > On Apr 4, 2010, at 3:33 AM, Guilherme Silveira wrote: > > >> How should a client decide its next step? Looking at the mime type, >> representation content and relations or is it ok for the client to >> keep track of its path on its desired process (the one he expects >> *all* servers to follow)? >> >> It seems like the second one is not REST, is it correct? So the client >> should always infer its next step only based on its current >> representation, media type and relations? >> > > - the client sends a request to some 'entry point' resources > - the client receives 'main' representation of resource > - according to its configuration for the media type of that > 'main' representation the client automatically performs sub > requests until it reaches the next steady state (e.g. Web page > with images and style sheets etc.) > - the steady state reached is the current application state > - from this current application state the client tries to > follow an available transition that takes it towards its overall > goal > > So, yes: client decides next step based on available transitions from > current steady state and its own overall goal only. Client does not > keep track of its prior interactions. > > (But the prior interactions can of course change the set of rules the > client applies to every current steady state. IOW, the state that the client > is in regarding its overall goal can of course affect the decisions it > makes). > Whether or not the client keeps track of prior interactions will depend on an application and its hypermedia. Complex application flows will require clients to maintain an understanding of their context in order to intelligently traverse from one state to another, IOW 'prior interactions will change the set of rules' - is this really different to clients keeping track of prior interactions? I think it's hard to make a case for that in a determinate, state-machine world. 
On the human web - we have the benefit of symbolism and the powerful methods of inference that come with that; GUI stuff like icons, images, breadcrumbs, and other stuff like natural language, etc. The machine web doesn't have any of this because machines don't have common sense. This is where, for me, the parallel between the human web and the machine web begins to break down a bit and the comparison stops making sense - and so it's also where the meaning and value of a 'steady state' becomes a bit confused. Cheers, Mike
My understanding of REST is that the stateless constraint is on the server, not the client. The client is free to (and often should) keep track of the application state (its history of interactions and progress toward its goal). Think of the various RESTful shopping carts we've discussed on this list. In most cases, as far as I can remember, the client was advised to keep track of the shopping cart. Does anybody disagree? (I have been wrong before, and am always ready to be proved wrong again....)
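Bob's division of labour, application state on the client and self-contained requests to a stateless server, can be sketched with the shopping-cart example from the thread; the URIs and request shape are invented for illustration:

```python
# The client tracks the cart (application state); every message it
# sends carries everything the server needs, so the server can stay
# stateless. All URIs here are invented.
class CartClient:
    def __init__(self):
        self.cart = []            # application state lives client-side

    def add(self, item_uri):
        self.cart.append(item_uri)

    def checkout_request(self):
        """A self-contained request: the full cart travels in the body,
        so the server needs no session to process it."""
        return {"method": "POST",
                "uri": "/orders",
                "body": {"items": list(self.cart)}}

client = CartClient()
client.add("/products/1")
client.add("/products/7")
req = client.checkout_request()
```

The server never stores "the cart so far"; if the client crashes and restarts, only the client's own state is lost, which is exactly the reliability trade the stateless constraint makes.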
Thanks guys, I will try to keep the client amnesiac, whenever that still leaves it easy to understand and maintain. Regards Guilherme Silveira Caelum | Ensino e Inovação http://www.caelum.com.br/ 2010/4/4 Bob Haugen <bob.haugen@...> > > > My understanding of REST is that the stateless constraint is on the > server, not the client. > > The client is free to (and often should) keep track of the application > state (its history of interactions and progress toward its goal). > > Think of the various RESTful shopping carts we've discussed on this > list. In most cases, as far as I can remember, the client was advised > to keep track of the shopping cart. > > Does anybody disagree? (I have been wrong before, and am always ready > to be proved wrong again....) > >
On Apr 4, 2010, at 1:21 PM, Mike Kelly wrote: > > Whether or not the client keeps track of prior interactions will depend > on an application and its hypermedia. Let me put it into different words: The meaning (its intended interpretation) of a state in a Web application must not depend on prior interactions. But the client of course can remember and use as much as it wants during its path through the application. Better? Jan > Complex application flows will > require clients to maintain an understanding of their context in order > to intelligently traverse from one state to another, IOW 'prior > interactions will change the set of rules' - is this really different to > clients keeping track of prior interactions? I think it's hard to make a > case for that in a determinate, state-machine world. > > On the human web - we have the benefit of symbolism and the powerful > methods of inference that come with that; GUI stuff like icons, images, > breadcrumbs, and other stuff like natural language, etc. The machine web > doesn't have any of this because machines don't have common sense. This > is where, for me, the parallel between the human web and the machine web > begins to break down a bit and the comparison stops making sense - and > so it's also where the meaning and value of a 'steady state' becomes a > bit confused. > > Cheers, > Mike > > > ------------------------------------ > > Yahoo! Groups Links > > > ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
On Apr 4, 2010, at 5:34 PM, Jan Algermissen wrote: > > On Apr 4, 2010, at 1:21 PM, Mike Kelly wrote: > >> >> Whether or not the client keeps track of prior interactions will depend >> on an application and its hypermedia. > > Let me put it into different words: > > The meaning (its intended interpretation) of a state in a Web application must not depend on prior interactions. But the client of course can remember and use as much as it wants during its path through the application. > > Better? Yet in other words: Any steady state of a Web application serves as a potential entry point into the application. It is understandable in the absence of any prior interactions. *How* a client progresses from that entry state is totally up to the client (and might be influenced by whatever it did before). Jan > > Jan > > >> Complex application flows will >> require clients to maintain an understanding of their context in order >> to intelligently traverse from one state to another, IOW 'prior >> interactions will change the set of rules' - is this really different to >> clients keeping track of prior interactions? I think it's hard to make a >> case for that in a determinate, state-machine world. >> >> On the human web - we have the benefit of symbolism and the powerful >> methods of inference that come with that; GUI stuff like icons, images, >> breadcrumbs, and other stuff like natural language, etc. The machine web >> doesn't have any of this because machines don't have common sense. This >> is where, for me, the parallel between the human web and the machine web >> begins to break down a bit and the comparison stops making sense - and >> so it's also where the meaning and value of a 'steady state' becomes a >> bit confused. >> >> Cheers, >> Mike >> >> >> ------------------------------------ >> >> Yahoo! Groups Links >> >> >> > > ----------------------------------- > Jan Algermissen, Consultant > NORD Software Consulting > > Mail: algermissen@... 
> Blog: http://www.nordsc.com/blog/ > Work: http://www.nordsc.com/ > ----------------------------------- > > > > > > > ------------------------------------ > > Yahoo! Groups Links > > > ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
On Apr 4, 2010, at 1:21 PM, Mike Kelly wrote: > This > is where, for me, the parallel between the human web and the machine web > begins to break down a bit and the comparison stops making sense - and > so it's also where the meaning and value of a 'steady state' becomes a > bit confused. We need not put any semantic 'magic' in the notion of steady state: When the application state of some Web application consists of parts that have different volatility or otherwise make no sense to stuff into an entity, the processing rules of the media type of the primary representation can provide rules for downloading the various parts (think along the lines of HTML and images, style sheets, etc). These processing rules are independent of the application in question; they are defined in the context of the media type of the primary representation only. For example, HTML's processing rules apply regardless of what Web application a browser interacts with. User agents can provide configuration options that allow controlling the actual behavior regarding the processing rules. For example, most browsers allow us to turn off automatic image downloading. Once the user agent's component for handling the given media type is done with applying the processing rules, it reaches a steady state and hands control to the next layer. The differentiation between human and machine client really only applies *after* the steady state has been reached. Before that it is all automatic, media-type-specific processing rules. Jan ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
On Sun, Apr 4, 2010 at 10:34 AM, Jan Algermissen <algermissen1971@...> wrote: > The meaning (its intended interpretation) of a state in a Web application must not depend on prior interactions. > > Better? No. Where did you get that as a constraint on the client? I do understand it as a constraint on the server, but not on the client. You did qualify it with your following statement "But the client of course can remember and use as much as it wants during its path through the application", but that I think contradicts your previous statement.
On Apr 4, 2010, at 6:17 PM, Bob Haugen wrote: > On Sun, Apr 4, 2010 at 10:34 AM, Jan Algermissen > <algermissen1971@...> wrote: >> The meaning (its intended interpretation) of a state in a Web application must not depend on prior interactions. >> >> Better? > > No. Where did you get that as a constraint on the client? I do > understand it as a constraint on the server, but not on the client. > > You did qualify it with your following statement "But the client of > course can remember and use as much as it wants during its path > through the application", but that I think contradicts your previous > statement. Next try: A Web application design cannot require the client to have seen some previous state in order to be able to completely understand any of the other application states. Jan > > > ------------------------------------ > > Yahoo! Groups Links > > > ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
On Apr 4, 2010, at 6:17 PM, Bob Haugen wrote: > On Sun, Apr 4, 2010 at 10:34 AM, Jan Algermissen > <algermissen1971@...> wrote: >> The meaning (its intended interpretation) of a state in a Web application must not depend on prior interactions. >> >> Better? > > No. Where did you get that as a constraint on the client? Not sure which REST constraint it is, I guess a combination of the stateless server-, hypermedia- and message self-descriptiveness constraints. Here is a relevant quote though: "Each state can be completely understood by the representation(s) it contains [...]" <http://tech.groups.yahoo.com/group/rest-discuss/message/5841> Jan > I do > understand it as a constraint on the server, but not on the client. > > You did qualify it with your following statement "But the client of > course can remember and use as much as it wants during its path > through the application", but that I think contradicts your previous > statement. > > > ------------------------------------ > > Yahoo! Groups Links > > > ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
Sorry, I'd meant to send this to the list, but Jan CC'd it and Yahoo! Groups, annoyingly, adds neither a Reply-To nor a List-Post header to the emails it relays when they've been CC'd to the list.
On Sun, Apr 04, 2010 at 06:37:28PM +0200, Jan Algermissen wrote:
>
> On Apr 4, 2010, at 6:17 PM, Bob Haugen wrote:
>
> > On Sun, Apr 4, 2010 at 10:34 AM, Jan Algermissen
> > <algermissen1971@...> wrote:
> >> The meaning (its intended interpretation) of a state in a Web
> >> application must not depend on prior interactions.
> >>
> >> Better?
> >
> > No. Where did you get that as a constraint on the client? I do
> > understand it as a constraint on the server, but not on the client.
> >
> > You did qualify it with your following statement "But the client of
> > course can remember and use as much as it wants during its path
> > through the application", but that I think contradicts your previous
> > statement.
>
> Next try: A Web application design cannot require the client to have
> seen some previous state in order to be able to completely understand
> any other of the application states.
That's not entirely complete either. Rather, the server must allow any
information pertaining to a given steady state to be discoverable by the
client, i.e., no resource can be a dead end, and must reference its
context by way of hypermedia.
The ability of a client to discover resource context via hypermedia is,
after all, what allows the client to be as stateful as it needs to be
while the server remains stateless.
K.
--
Keith Gaughan - k@... - http://stereochro.me/ - CF9F6473
An idea that is not dangerous is unworthy
of being called an idea at all.
-- Oscar Wilde
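Keith's point above -- that statelessness means each message carries everything needed to process it, leaving the client free to be as stateful as it likes -- can be sketched as follows. This is a minimal illustration only; the handler, token scheme, and resource names are all hypothetical, not any real API.

```python
# A minimal sketch of statelessness: the handler uses only the message
# itself, so any server instance or intermediary could process it.
# The token scheme, URIs, and helper names are hypothetical.

def authenticate(value):
    # Stand-in credential check: accept any "Token <x>" header.
    if value and value.startswith("Token "):
        return value.split(" ", 1)[1]
    return None

def load_resource_state(uri):
    # Stand-in for a datastore lookup keyed only by the request-URI.
    return "<a href='/cart'>cart</a>".encode()

def handle_request(method, uri, headers):
    """Process a request from the message alone -- no session store."""
    user = authenticate(headers.get("Authorization"))  # credentials travel in every request
    if user is None:
        return 401, {}, b""
    if method == "GET":
        return 200, {"Content-Type": "application/xhtml+xml"}, load_resource_state(uri)
    return 405, {}, b""

status, headers, body = handle_request("GET", "/catalog", {"Authorization": "Token abc"})
```

Because no request depends on a prior one, the server can be replicated or fronted by intermediaries without coordination, which is exactly the property session state destroys.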
Am 04.04.10 13:26, schrieb Bob Haugen:
> My understanding of REST is that the stateless constraint is on the
> server, not the client.

I think you're not correct. REST only advocates no _shared_ conversational state between client and server.

The client can be stateful (e.g. the current loaded resource). The server typically will be stateful (the state of the served resources).

-billy.
On Sun, Apr 04, 2010 at 11:16:33PM +0200, Philipp Meier wrote:
> Am 04.04.10 13:26, schrieb Bob Haugen:
> > My understanding of REST is that the stateless constraint is on the
> > server, not the client.
>
> I think you're not correct. REST only advocates no _shared_
> conversational state between client and server.
>
> The client can be stateful (e.g. the current loaded resource).
> The server typically will be stateful (the state of the served resources).

I think you just might be misunderstanding him: of course there's going to be some state on the server -- the resource state -- otherwise there'd be little[1] point in the client contacting the server. Clients maintain application state.

The state that the statelessness of REST refers[2] to is the kind of state that would be carried in, for instance, a session, whereby a client, when making a request to a server, wouldn't need to supply the full amount of information the server needs to process the request. It's this kind of state that REST prohibits.

> -billy.

K.

[1] No state at all means no resources, after all.
[2] Connection or session state, for want of a better term.

--
Keith Gaughan - k@... - http://stereochro.me/ - CF9F6473
There are no words, man. Solder on is all I can say.
Guilherme Silveira wrote: > > How should a client decide its next step? > This thread needs a terminology scrub. In REST, the application is defined as "what the user is trying to accomplish", and the user's agent is the "client". The client never decides what to do next, only the user does. The client is there to carry out the user's orders. The "user" of course, is not required to be human. But, this thread is as clear as mud, because "user" and "user-agent" are being combined into "client". When I as a human am driving a REST application, I am not the "client" nor am I part of the "client component". As best I can tell, I'm the "user component" the client sees through a "middleware connector" just like BIND components interact with client, server and intermediary components through resolver (also middleware) connectors. I see a machine user the same way -- as a "user component". I think it would help the discussion of m2m interaction to enforce the distinction between "client" and "user". A machine user is trying to accomplish a set task, just like a human user. How the machine user utilizes the client component to accomplish its task is a discrete problem, separate from the problem of the client's application state. With a shopping-cart system, the human user is presented with a series of application steady-states. These steady-states contain many links that are of no interest to me if I'm trying to check out my cart. If my goal is to check out my cart, I only have interest in the specific state transitions presented which advance me towards that goal. The problem is, how does a machine user deduce which of the presented state-transition options will advance it towards its goal? This is a problem orthogonal to REST, which is not to say off-topic to rest- discuss. Once the client component arrives at the proper steady-state REST doesn't enter the equation again, until the user requests some transition to the next steady-state in their specific application. 
This is a "vocabulary problem". As we discussed here late last year, it would be nice if there existed some standardized RDFa and link relations to describe state transitions specific to checking out a shopping cart, and provide common markup for Name, CC#, sec code, billing/delivery address(es) and other standard form fields. Such efforts are, in fact, underway. It would then be trivial to write a machine agent capable of finding the best price on toilet tissue (etc.) every few months, and placing a resupply order on my behalf. This machine agent knows exactly how to fill out the forms and drive the shopping-cart application because it has knowledge of standard link relations, media types, and HTTP methods -- in addition to supplemental knowledge of the various domain-specific vocabularies it will encounter until some standard is arrived at, and of course specific knowledge about my brand, color, scent and pricing preferences, plus of course my billing and shipping information (which the merchant may also have stored as application state in the representation my agent retrieved). The GETting and POSTing of information using self-documenting HTML form interfaces is the purview of REST. How these application steady-states inform the user as to how to proceed towards various goals (each choice representing a different possible "application" in its own right), is opaque behind the Uniform Interface, just like system logic on the server. Informing human users is simple -- provided the system speaks their human vocabulary (English, Spanish etc.). Informing machine users how to drive the exact same application logic (series of forms in the checkout process, for example) a human uses, is a problem that needs to be addressed in markup, not over the wire -- IOW, the shopping-cart problem is best solved for m2m without creating any new media types. 
The middleware connector between the user-agent and user components could be called a "user connector" or an "agent connector", but I would not use both, as it would imply that there's some difference to REST whether data input is programmatic, or keyboard and/or mouse (etc.). The important notion is that "user" and "user-agent" are different components in a REST application. Keith already addressed the other common terminology problem -- there are both resource state and application state, and the two are not the same (even when they appear to be). To answer the OP, "So the client should always infer its next step only based on its current representation, media type and relations?" The server instructs the client, using common media types and link relations, how to derive and render a steady-state. This application steady-state may or may not be the same as the state of the resource indicated by the initially-dereferenced representation. This process is bound by REST constraints. (You won't see a full set of links on my demo unless your client processes the initially-dereferenced resource's representation through a linked XSLT stylesheet -- the rendered, CSS-styled result is the application steady-state, which is mashed up from multiple source documents dereferenced during XSLT processing -- if dereferenced directly, most of these source documents aren't containers, so the resulting application state would be identical to that resource's state -- communicated via the dereferenced representation -- for some unknown instance of time before or after the immediate present.) It's this derived application state that presents the client with one or more possible state transitions. The client presents the user with these options. The human user can track and evaluate whatever the heck it wants to, when deciding which state transition to pursue. 
Your AI user is equally at liberty in what it can evaluate when determining how to proceed -- the price of TP from different suppliers, for example. This process is unbound by any REST constraints. -Eric
Volume 10 of This week in REST is up on the REST wiki - http://rest.blueoxen.net/cgi-bin/wiki.pl?RESTWeekly_Mar_29_2010 and the blog - http://wp.me/pMXr1-1i. For contributing links this week visit http://rest.blueoxen.net/cgi-bin/wiki.pl?RESTWeekly_Apr_5_2010 Cheers, Ivan
On Apr 2, 2010, at 1:08 AM, William Martinez Pomares wrote:
> The question is still the same. REST discussions are held in the SOA arena, as if REST is a SOA sub-product. If SOA is dead, is REST then dead as well? Does REST live only in the SOA realm? Is REST just a web services alternative? Are services the only topic we can talk about in REST?
>
> I wonder.

As you can define SOA to mean whatever suits your purposes, it all depends. I happened to be InfoQ's SOA lead editor for a long time, and I personally define SOA to be a high-level approach to an organization's IT holistically, not as an architectural style. I've personally found REST to be the most compelling architectural style, and RESTful HTTP the most useful technology stack, to achieve those high-level goals (much more so than SOAP/WSDL/WS-* and whatever you'd like to call the architectural style it embodies, if you happen to believe that it actually embodies one). Only if you define SOA (as I believe Roy and many other REST folks do) as the unnamed architectural style underlying WSDL, SOAP & Co. does this seem like a conflict.

One of the more interesting experiences I had at InfoQ (I'm no longer the lead editor and only loosely associated) was that it got harder and harder to find anyone willing to write a useful technical article that was not REST-related. Of course this may have been selection bias, but I really tried, and at some point 90% of the people I respected from the WS-* side of things had become RESTafarians.

Best, Stefan
Comment inline.
________________________________
From: Eric J. Bowman <eric@...>
To: Guilherme Silveira <guilherme.silveira@...>
Cc: rest-discuss <rest-discuss@yahoogroups.com>
Sent: Sun, April 4, 2010 11:24:52 PM
Subject: Re: [rest-discuss] client keeps its state
The problem is, how does a machine user deduce which of the presented
state-transition options will advance it towards its goal? This is a
problem orthogonal to REST, which is not to say off-topic to rest-
discuss. Once the client component arrives at the proper steady-state
REST doesn't enter the equation again, until the user requests some
transition to the next steady-state in their specific application.
That's rather extreme. Implementers clearly are curious how to retain the constraints of the architecture and build m2m agents. While the techniques for building goal-directed agents aren't particular to REST, they're certainly of interest to this audience, and it's been a sorely lacking area of exploration, IMO.
>
Otherwise, I agree with your other comments.
Comments inline.
________________________________
From: Jan Algermissen <algermissen1971@...>
To: Bob Haugen <bob.haugen@...>
Cc: rest-discuss <rest-discuss@yahoogroups.com>
Sent: Sun, April 4, 2010 9:47:44 AM
Subject: Re: [rest-discuss] client keeps its state
On Apr 4, 2010, at 6:17 PM, Bob Haugen wrote:
> On Sun, Apr 4, 2010 at 10:34 AM, Jan Algermissen
> > <algermissen1971@mac.com> wrote:
>> The meaning (its intended interpretation) of a state in a Web application must not depend on prior interactions.
>>
>> Better?
>
> No. Where did you get that as a constraint on the client?
Not sure which REST constraint it is, I guess a combination of the stateless server-, hypermedia- and message self-descriptiveness constraints.
Here is a relevant quote though:
"Each state can be completely understood by the representation(s) it contains [...]"
<http://tech.groups.yahoo.com/group/rest-discuss/message/5841>
I believe the agent does not quite have "amnesia" about prior interactions, though it does evaluate its next transition based on the currently available options. A user agent's state is continually updated based on prior cached representations, and might need to change if those representations become stale. Cached representations and their control data tend to be important to the application (i.e. this is basic optimistic concurrency control).
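The optimistic-concurrency point can be sketched like so. The in-memory "server" and ETag scheme below are hypothetical, for illustration only: the agent sends back the validator it cached, so a stale cache surfaces as a visible 412 Precondition Failed rather than a silently lost update.

```python
# Hypothetical in-memory "server" illustrating optimistic concurrency
# via cached validators (ETag / If-Match).

store = {"/cart": ("v1", "empty")}  # uri -> (etag, resource state)

def get(uri):
    etag, state = store[uri]
    return 200, etag, state

def put(uri, body, if_match):
    etag, _ = store[uri]
    if if_match != etag:
        return 412, etag, None                        # precondition failed: cache was stale
    new_etag = "v" + str(int(etag[1:]) + 1)
    store[uri] = (new_etag, body)
    return 200, new_etag, body

_, etag, _ = get("/cart")                             # agent caches representation + ETag
status, new_etag, _ = put("/cart", "one item", if_match=etag)
stale, _, _ = put("/cart", "two items", if_match=etag)  # reusing the old ETag: refused
```

The refusal tells the agent its cached representation is stale, at which point it re-GETs and retries -- control data doing real work for the application.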
>Secondly, in a linked-data m2m situation, the data graph can be fairly complex. At the steady state, the agent may
>a) "need more information", and proceed to another resource (and its immediate links), or
>b) backtrack to a prior state, or
>c) decide to "change resource state" via POST/PUT etc.
>
>
The backtracking part is the interesting one. The application retains the history of where it came from, and if the current available transitions aren't applicable to its goal, it can return to a prior state (and refresh its representation if necessary).
>
Abstractly:
>- agent GETs a resource's representation,
>- sees a POST link relation, doesn't think it's important yet, needs to GET more information,
>- continues to GET immediately available links, traversing steady states, updating the app state,
>- runs out of options based on the current state, and backtracks until...
>- it sees the resource representation with the POST link relation, now realizes that is appropriate to its goal,
>- performs a resource state change on that link with POST.
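The abstract walk above can be sketched as a small goal-directed agent. The link graph and relation names below are made up purely for illustration: the agent explores GET links first ("needs more information"), keeps its own history, backtracks at dead ends, and only performs the POST once the transition suits its goal.

```python
# Goal-directed agent over a hypothetical link graph:
# each entry is uri -> list of (link relation, method, target uri).

GRAPH = {
    "/start": [("item", "GET", "/a"), ("checkout", "POST", "/order")],
    "/a":     [("item", "GET", "/b")],
    "/b":     [],
}

def run_agent(start, goal_rel):
    history, current, visited = [], start, set()
    while True:
        visited.add(current)
        links = GRAPH[current]                         # "dereference" the current state
        unexplored = [l for l in links if l[1] == "GET" and l[2] not in visited]
        if unexplored:                                 # still gathering information
            history.append(current)
            current = unexplored[0][2]
            continue
        goal = [l for l in links if l[0] == goal_rel and l[1] == "POST"]
        if goal:
            return ("POST", goal[0][2])                # now the transition fits the goal
        if history:
            current = history.pop()                    # dead end: backtrack
        else:
            return None                                # goal unreachable

result = run_agent("/start", "checkout")
```

Note the agent sees the POST link on its first visit to /start but defers it, exactly as in the step list: only after exhausting the GET options and backtracking does it decide the POST is appropriate.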
>None of the above relaxes the "hypermedia", "statelessness" and "self-description" constraints of the uniform interface. The generic semantics of the operations remain visible to intermediaries, and each message is understandable on its own by connectors and intermediary components.
>
Finally, it seems there's an implied hyperlink graph traversal depth restriction in reaching a steady state, but I'm not sure it's enforceable in practice. For example, let's say the rule is "only GET immediate links that are 1-level deep". Unfortunately, that doesn't apply to HTML+CSS, since the CSS itself can contain image links that are part of the current state! OTOH, it may be good practice to limit graph traversal for m2m situations, as it simplifies the agent's implementation.
>Cheers
>Stu
To answer your questions:
- I don't think it's a good idea to equate REST with "easy services creation". I'd be more inclined if people equated specifics like "Atom/Atompub" or "OData" with "easy DATA services creation", personally.
Enterprises should keep building SOAP web services if they are happy with RPC or messaging systems, and can force their vendors to make it "easier".
- The actual *idea* for REST is not "easy services creation" -- it is "large-scale information sharing & manipulation". It is for interoperability at very-large scale, a "system of systems" architecture: http://www.infoed.com/Open/PAPERS/systems.htm
As such, it is not necessarily "easy" to apply to all situations, it requires knowledge & experience (like most things).
- Explaining REST as an architectural style works for some, but more likely there's also a need to explain REST in the context of SOA's verbiage (e.g. governance, interoperability, loose coupling, contracts).
- Probably there will need to be more mainstream books and tooling for the lay-developer, that gets into *how* the development experience is different. Unfortunately, that's a moving target.
Some comments:
The general trend among the SOA crowd has been to equate REST with "Plain XML over HTTP", and to this day that remains the popular understanding in most enterprises. Most don't really understand the key features (URIs and hyperlinks), though I have seen that some are beginning to explore deeper once they dig into Atom & Atompub.
The problem with using REST as "easy services creation" is that you may be committing greater sins against your maintainability than using WSDL & SOAP if you don't use HTTP & URIs properly (e.g. performing state changes with GET, using a single URI as an "endpoint", including application-specific methods in the XML body, etc.). At least with SOAP & WSDL there is tooling & infrastructure for governing the little interoperability you do get with it. With REST, the tooling & written literature is still very young, and the vast majority of developers are never going to read Roy's thesis.
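One way to see why state changes with GET are such a sin is a contrived sketch (not any real system): GET is defined as safe, so any shared cache is entitled to answer it without contacting the origin, and the tunneled "action" silently stops happening.

```python
# Contrived sketch: an intermediary cache answering a GET that was
# (wrongly) given side effects at the origin.

origin_hits = 0
cache = {}

def origin_get(uri):
    global origin_hits
    origin_hits += 1              # imagine this also deleted an order (the sin)
    return "representation of " + uri

def cached_get(uri):
    if uri not in cache:          # a shared cache is entitled to do this
        cache[uri] = origin_get(uri)
    return cache[uri]

cached_get("/orders/42/delete")
cached_get("/orders/42/delete")   # the second "delete" never reaches the origin
```

The bug is invisible in development (no cache in the path) and appears only once real intermediaries are deployed, which is why it counts as a maintainability sin rather than a mere style complaint.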
Part of the problem is that the Web Architecture and REST are very different ways to think about distributed systems design, whereas SOAP-style SOA services are an evolutionary descendent of (depending who you talk to) distributed objects ala CORBA/COM, or message queues ala MQ or TIBCO, which have a much longer history in some people's minds.
Cheers
Stu
________________________________
From: William Martinez Pomares <wmartinez@...>
To: rest-discuss@yahoogroups.com
Sent: Wed, March 31, 2010 6:25:48 PM
Subject: [rest-discuss] What do you think about REST being a synonym of Service creation technique?
Hello all.
I got the newsletter from InfoQ this week, and suddenly I noticed something that has been there since long, but till now I didn't realize it.
In the SOA channel articles and news, there were only one of each. One article about the REST maturity levels, and one news item about REST security. Oh my...
I thought there was some mistake, that those two items belong to the REST channel, and then I realized that there was no REST channel in InfoQ!
Then, I had a quick twitter chat with Ryan Sloboyan, Editor from InfoQ. It seems REST is seen just as a way to create services, as opposed to the SOAP / WS-* lineage. Ryan told me there is always a possibility to create a REST-only channel, but he thinks that would be a narrow one, with not so many readers. The expectation is then that REST readers will come just to learn how to create easy services without using SOAP.
Now, Jack Vaughan from SearchSOA was in a fireside chat with me at the Java Symposium from TheServerSide, where I was to talk about REST APIs and their real meaning. He told me his idea of REST was similar: that of the new way of doing services for SOA. We got a full room, and I asked if anyone had ever read Roy's dissertation. None, and some with faces of "who the hell is that Roy guy?". The question of how many thought REST was a new way of doing services yielded several hands up. So did the one about REST as an HTTP-driven RPC.
So, all in all, it seems the idea of REST as an Easy Services Creation technique is strong, even influenced in the InfoQ categorization of articles and news.
I know some of you do post on InfoQ, and are even editors.
What are your thoughts? Do you think it is good to keep that idea?
Do you think that is actually the idea?
What do you think of posting REST as an architectural style?
I want to hear your opinions, since I feel that would be an interesting discussion.
William Martinez Pomares.
Stu, very well said!

Jan

On Apr 5, 2010, at 10:38 PM, Stuart Charlton wrote:

> [...]

-----------------------------------
Jan Algermissen, Consultant
NORD Software Consulting
Mail: algermissen@...
Blog: http://www.nordsc.com/blog/
Work: http://www.nordsc.com/
-----------------------------------
Stuart Charlton wrote: > >> The problem is, how does a machine user deduce which of the presented >> state-transition options will advance it towards its goal? This is a >> problem orthogonal to REST, which is not to say off-topic to rest- >> discuss. Once the client component arrives at the proper steady- >> state REST doesn't enter the equation again, until the user requests >> some transition to the next steady-state in their specific >> application. > > That's rather extreme. > No, saying this has nothing to do with REST or declaring it off-topic to rest-discuss would be extreme; I did neither. ;-) > > Implementers clearly are curious how to retain the constraints of the > architecture and build m2m agents. While the techniques for building > goal-directed agents aren't particular to REST, they're certainly of > interest to this audience, and it's been a sorely lacking area of > exploration, IMO. > Agreed. To more explicitly state my position: Discussions of m2m REST consistently violate the layered-system and self-descriptive-messaging constraints. We need to change the discussion so these m2m agents are manipulating the API, not the other way 'round... First, user and user-agent are combined into client-component. This leads to (amongst other horrors) APIs where a separate media type is used to represent each resource state, enforcing a 1:1 relationship between resource state and application state -- itself a violation of the layered-system constraint -- by solving a vocabulary problem over the wire, i.e. at the protocol layer. Solving a vocabulary problem over the wire with custom media types results from a violation of the layered-system constraint, and carries that error forward. The resulting API violates the self-descriptive- messaging constraint and HTTP by failing to use well-known, registered media types to derive application steady-states. 
All of this follows from the notion that an m2m client is a user-agent, not a user, and that a REST application instructs these m2m user-agents what to do -- which is entirely backwards from what a REST application *is*. The user informs the user-agent of the next step; the series of steps from initial URI to completion of some task is what defines a "REST application". Not the other way around! Not even for m2m! No!

This mess may be avoided from the get-go by applying some REST discipline and recognizing that a user and a user-agent are indeed separate layers in a system, regardless of the nature of the user. So, in order to have a discussion about m2m REST, we must distinguish between user and user-agent, avoiding the paper tiger of machine vs. human user-agents -- such a distinction being a violation of the layered-system constraint. The distinction between human and machine belongs in the user component of a REST system.

The problem is, how do we inform the user of the meaning of the possible state transitions? When the user is human, the solution is simple -- natural language. When the user is a machine, the solution is no less simple -- machine language -- just harder to implement. Either way, these domain-specific (even if standardized) vocabularies must be embedded within the standard methods, media types and link relations making up the REST API.

The first thing you need in a REST API are standard link relations, methods and media types to instruct user-agents how to arrive at an application steady-state when a URI is dereferenced. Domain-specific vocabularies are used which allow the user-agent to inform the user what options there are and what information is required to proceed, i.e. natural-language descriptions of form fields and submission buttons in a shopping-cart system. It's the human user instructing the user-agent how to proceed.
Domain-specific vocabularies which allow the user-agent to inform a machine user what options there are and what information is required to proceed, i.e. machine-language descriptions of form fields and submission buttons in a shopping-cart system, are embedded within the steady-state just like natural-language vocabularies, except as metadata instead of as content. It's the machine user instructing the user-agent how to proceed. This is RESTful m2m development and must be emphasized. It must also be emphasized that "user decides what to do" isn't part of a REST application -- it *defines* any given REST application (what the user is trying to do). So please, folks, stop writing m2m HTTP APIs which instruct the *user* how to proceed and calling the result a REST application. REST ends at "user-agent informs the user what it can do", while "user decides what to do" is out-of-scope. This isn't extremist, it's central to having the entire m2m discussion; the point is, the discussion must be framed properly as "how does the user-agent inform the user of its options" not "how does the API instruct the user of the next step" (which leaps right across the user-agent layer, while standing the definition of "REST application" on its head, you see). -Eric
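The idea of embedding a machine vocabulary as metadata in the very representation a human reads can be sketched with Python's standard html.parser. The rel values and URIs here are hypothetical: the human user reads the anchor text, while the machine user selects a transition by link relation, with no custom media type in sight.

```python
# Hypothetical markup carrying both vocabularies: anchor text for the
# human user, rel values as metadata for the machine user.
from html.parser import HTMLParser

PAGE = """
<a rel="payment" href="/checkout">Proceed to checkout</a>
<a rel="related" href="/specials">This week's specials</a>
"""

class RelExtractor(HTMLParser):
    """Collect rel -> href so a machine user can pick a transition."""
    def __init__(self):
        super().__init__()
        self.links = {}
    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "a" and "rel" in a and "href" in a:
            self.links[a["rel"]] = a["href"]

p = RelExtractor()
p.feed(PAGE)
next_step = p.links.get("payment")    # the machine user's goal-directed choice
```

Which rel values to standardize is exactly the vocabulary problem discussed earlier in the thread -- the mechanics of exposing them through a steady-state need nothing beyond well-known media types.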
----- Original Message ----
From: Eric J. Bowman <eric@...>
To: Stuart Charlton <stuartcharlton@...>
Cc: rest-discuss@yahoogroups.com
Sent: Mon, April 5, 2010 3:41:50 PM
Subject: Re: [rest-discuss] client keeps its state
>> That's rather extreme.
>>
> No, saying this has nothing to do with REST or declaring it off-topic
> to rest-discuss would be extreme; I did neither. ;-)
Apparently I misread. ;-) I had read "This is a problem orthogonal to REST, which is not to say off-topic to rest-discuss." as "because this is orthogonal to REST, it is off topic".
> REST ends at "user-agent informs the user what it can do", while "user
> decides what to do" is out-of-scope. This isn't extremist, it's central
> to having the entire m2m discussion; the point is, the discussion must
> be framed properly as "how does the user-agent inform the user of its
> options" not "how does the API instruct the user of the next step"
> (which leaps right across the user-agent layer, while standing the
> definition of "REST application" on its head, you see).
I think I agree with the layered dichotomy between "user" and "user agent" components here. Taking it further, it may be useful to think about these discussions as extending REST to consider the user as a component and the existence of connector(s) between the user and the user agent.
In such an extension, I do believe that media type design has a major impact on the connector between a machine user and a user agent. HTML, for example, could be seen as being conducive to a "render" connector to a human user, or a "spider" connector to a search engine user. Whereas Atom, Atompub, etc. may have a different sort of User component & User/UA connector.
Cheers
Stu
VoiceXML seems very much to me like a "machine-oriented" media type w/o the need to identify an additional layer or connector. mca http://amundsen.com/blog/ On Mon, Apr 5, 2010 at 19:22, Stuart Charlton <stuartcharlton@yahoo.com> wrote: > > > > > ----- Original Message ---- > From: Eric J. Bowman <eric@...> > To: Stuart Charlton <stuartcharlton@...> > Cc: rest-discuss@yahoogroups.com > Sent: Mon, April 5, 2010 3:41:50 PM > Subject: Re: [rest-discuss] client keeps its state > >>> That's rather extreme. >>> > >> No, saying this has nothing to do with REST or declaring it off-topic >> to rest-discuss would be extreme; I did neither. ;-) > > > Apparently I misread. ;-) " This is a problem orthogonal to REST, which is not to say off-topic to rest-discuss.", I had read as "because this is orthogonal to REST, it is off topic". > >> REST ends at "user-agent informs the user what it can do", while "user >> decides what to do" is out-of-scope. This isn't extremist, it's central >> to having the entire m2m discussion; the point is, the discussion must >> be framed properly as "how does the user-agent inform the user of its >> options" not "how does the API instruct the user of the next step" >> (which leaps right across the user-agent layer, while standing the >> definition of "REST application" on its head, you see). > > > I think I agree with the layered dichotomy between "user" and "user agent" components here. Taking it further, it may be useful to think about these discussions as extending REST to consider the user as a component and the existence of connector(s) between the user and the user agent. > > In such an extension, I do believe that media type design has a major impact on the connector between a machine user and a user agent. HTML, for example, could be seen as being conducive to a "render" connector to a human user, or a "spider" connector to a search engine user. Whereas Atom, Atompub, etc. may have a different sort of User component & User/UA connector. 
> > Cheers > Stu
--- In rest-discuss@yahoogroups.com, Stuart Charlton <stuartcharlton@...> wrote: > I think I agree with the layered dichotomy between "user" and "user agent" components here. Taking it further, it may be useful to think about these discussions as extending REST to consider the user as a component and the existence of connector(s) between the user and the user agent. > > In such an extension, I do believe that media type design has a major impact on the connector between a machine user and a user agent. HTML, for example, could be seen as being conducive to a "render" connector to a human user, or a "spider" connector to a search engine user. Whereas Atom, Atompub, etc. may have a different sort of User component & User/UA connector. > Stuart, This layering is _exactly_ what I was getting at in my discussion with you on your blog: http://www.stucharlton.com/blog/archives/2010/03/building-a-restful-hypermedia.html#comments I was calling the "user-agent" the "hypermedia processor" and the "user" the "client platform". I didn't call it "user" because I think even with a web browser, there are more layers to go through immediately beneath the user agent before you get to the user, e.g. the window manager or OS. Also, in some standards-based REST clients (e.g. CCXML clients) there is no "user" (well there's a user, but they are so far removed that you wouldn't think of them making "decisions" about what links to follow). I really believe that the nature of this connector between the "user-agent"/"hypermedia processor" and the layer below is absolutely central to media type design. As I was saying in the blog comments, I believe that the messages sent "up" to the user-agent must be designed so that they do not impose processing constraints on the user-agent that are carried over to the service (as this would over-constrain the services preventing service heterogeneity or service evolution). 
The means for doing this is restricting the messages to being events (as opposed to commands as defined here: http://bill-poole.blogspot.com/2008/04/avoid-command-messages.html ) or, if they are commands, they shouldn't have processing semantics beyond those of the uniform interface. For example, if the message from the "lower" layer was a "BuyBook" command message, then the processing constraint that a book must be ordered would be imposed on the service. A "BookDesired" event would allow an application to do other things (e.g. survey what books clients are interested in). I think the "client error handling" expressed by Jan in other threads is just another way of saying this. Requiring that the "client" should handle the service not doing what it expects is the same thing as weakening the command message processing semantics to the point where they become events -- e.g. BuyBook becomes BookDesired. The messages "down" from the user-agent to the layer below can be commands -- e.g. "draw this on the screen" -- or just "data" (document messages in Bill Poole's nomenclature). Anyways, I'm starting to repeat what I said in the blog comment thread (sorry to subject you to this twice, Stuart; your posting here just happened to be the one where I connected the dots between the two threads). In a nutshell: the user-agent is a mediator (in GoF terms) between the user/platform and the uniform interface; however the mediator logic is dynamic, controlled by hypermedia. This, IMO, is the root of RESTful client-server decoupling. The impact on hypermedia design is that you need to think about how the hypermedia document is translated into the command/document messages to the layer below and how the events coming up are translated into requests to the service. 
This means that links (via relations or other info) should identify the event types they map to -- some like <link> with "stylesheet" relations are triggered by a user-agent internal event like page load to supplement the client steady state with extra data and others (like <a> links) are triggered by events from the platform. Regards, Andrew
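[Editorial aside: the event-to-link mapping described in the post above can be sketched in a few lines of Python. This is a hypothetical illustration, not taken from any real user-agent; all class, event, and URI names are invented for the example.]

```python
# Sketch of a user-agent that maps event types to link activations:
# internal events (like page load) trigger some links, platform
# events trigger others. All names are illustrative.

class Link:
    def __init__(self, rel, href, trigger):
        self.rel = rel          # link relation, e.g. "stylesheet"
        self.href = href        # target URI
        self.trigger = trigger  # event type that activates this link

class UserAgent:
    """Mediator between the platform below and the uniform interface."""

    def __init__(self, fetch):
        self.fetch = fetch      # injected HTTP GET function
        self.links = []

    def load(self, links):
        self.links = links
        # Internal event: supplement the steady state with extra data,
        # like <link rel="stylesheet"> being fetched on page load.
        return self.dispatch("load")

    def dispatch(self, event):
        # Translate an event coming "up" from the platform into
        # requests on the uniform interface.
        return [l.href for l in self.links if l.trigger == event and self.fetch(l.href)]

ua = UserAgent(fetch=lambda uri: True)
links = [Link("stylesheet", "/style.css", trigger="load"),
         Link("next", "/page/2", trigger="activate")]
print(ua.load(links))            # ['/style.css']
print(ua.dispatch("activate"))   # ['/page/2']
```

The point of the sketch is only that the hypermedia document, not the user-agent's code, decides which event maps to which request.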
--- In rest-discuss@yahoogroups.com, mike amundsen <mamund@...> wrote: > > VoiceXML seems very much to me like a "machine-oriented" media type > w/o the need to identify an additional layer or connector. > > mca > http://amundsen.com/blog/ > I wouldn't call VoiceXML machine-oriented -- it describes a voice-driven UI similar to how HTML describes a visual, text-based UI. CCXML, its sister language, is machine-oriented though. Both VoiceXML and CCXML tend to be implemented as an "interpreter"/user-agent that communicates with a "platform" below via some sort of connector (I've worked with VoiceXML and CCXML for about 10 years and have looked under the hood at various implementations and they are all built this way). For example, a VoiceXML interpreter would send a message down to the platform telling it to queue an audio file in the prompt queue, another message might tell the platform to start playing the prompt queue and an event from the platform would tell the interpreter when the prompt playing had completed. At this point the interpreter would do whatever the page told it to do in response to that event such as follow a link to another page. In fact the VoiceXML specification explicitly describes this relationship between an interpreter and a platform: http://www.w3.org/TR/voicexml20/#dml1.2.1 And when you look at it this way, there is no difference between machine-oriented and user-oriented really. Each hypermedia language is just a declarative program (well some hypermedia languages like VoiceXML are a little more imperative than they should be) for an interpreter that drives a specific flavor of platform. Some platforms just happen to implement a UI. Regards, Andrew
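[Editorial aside: the interpreter/platform exchange described in the post above -- queue a prompt, play it, react to the completion event -- can be sketched roughly as follows. All names are hypothetical; real VoiceXML interpreters are considerably more involved.]

```python
# Illustrative sketch of the interpreter/platform split: commands
# flow down from the interpreter, events flow back up.

class Platform:
    """The layer below the interpreter: owns the prompt queue."""
    def __init__(self):
        self.queue = []

    def queue_prompt(self, audio):     # command from the interpreter
        self.queue.append(audio)

    def play(self, on_event):          # command; fires an event when done
        played, self.queue = self.queue, []
        on_event("prompts.done", played)

class Interpreter:
    """User-agent: drives the platform, reacts to its events."""
    def __init__(self, platform):
        self.platform = platform
        self.next_uri = None

    def run_page(self, page):
        for audio in page["prompts"]:
            self.platform.queue_prompt(audio)
        self.platform.play(self.on_event)
        return self.next_uri

    def on_event(self, event, data):
        # Do whatever the current page says in response to the event,
        # e.g. follow a link to the next page.
        if event == "prompts.done":
            self.next_uri = "/next-page"

page = {"prompts": ["welcome.wav"]}
interp = Interpreter(Platform())
print(interp.run_page(page))   # /next-page
```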
mike amundsen wrote: > > VoiceXML seems very much to me like a "machine-oriented" media type > w/o the need to identify an additional layer or connector. > It's no different than HTML. VoiceXML instructs the user-agent what options to present to the (human) user, for proceeding to the next steady-state along the path to some goal, some of which may require user input (keying a CC# into a telephone keypad is no different than keying it into an HTML form). The human user drives the REST telephony application from one steady-state to the next based on natural-language instructions received from the user-agent. The problem is, how do I write an agent to refill my pharmacy prescriptions once a month over my pharmacy's VoiceXML system? This is a vocabulary problem. If I could have my modem dial up my pharmacy's telephony system, there would need to be some sort of domain-specific vocabulary -- there is no way to HTTP into the telephony system and directly manipulate VoiceXML in this example or the reality it's based on. That leaves us with Voice Recognition -- the vocabulary and voice talent are identical for all pharmacies using the same system, i.e. it's domain-specific -- so accommodating several different phrasings to accomplish the task of "refill" would cover pretty much all U.S. pharmacies. This metadata, i.e. "press '1' for refill", can also be expressed by extending VoiceXML to encompass this domain-specific vocabulary (pharmacy refills). "Enter Rx# followed by #" can also be either voice-recognized or gleaned from VoiceXML metadata for m2m. Without VR or metadata, I can only code an app which is capable of dialing my modem into my pharmacy and following a series of hard-coded steps. Regardless, the user-agent is what renders the steady-state for user consumption, and the user makes the decision which path the user-agent follows to the next steady-state. 
So it follows, then, that there is a need to discuss a "user component" which may be interchangeable between human and machine. VoiceXML is intended as a language for a user-agent to present choices of state transition to human users. Machine users, however, can't decipher the application states, unless the steady-state's hypertext includes some sort of metadata markup (or we use VR, the equivalent of screen-scraping) targeted towards m2m user components. If this isn't the case, don't we lose key advantages of REST, like the ease of debugging self-documenting natural-language APIs which utilize self-descriptive messaging such that the API may be understood and maintained by humans? An m2m interface bolted onto a human interface solves the problem of human interpretation and maintenance of the API. That problem isn't solved by m2m APIs which drive application state. Thanks for mentioning VoiceXML, Mike! -Eric
"wahbedahbe" wrote: > > I wouldn't call VoiceXML machine-oriented -- it describes a > voice-driven UI similar to how HTML describes a visual, text-based > UI. CCXML, its sister language, is machine-oriented though. > As VoiceXML is to HTML, so CCXML is to PHP. To users in the system, any PHP and/or CCXML code driving resource state is opaque behind the Uniform Interface. CCXML is no more relevant to application state on the client than PHP is, so I don't think it solves for the problem of RESTful m2m media types for representing resource state to user-agents. -Eric
comments inline
Sent from my iPad
On 2010-04-05, at 8:20 PM, "wahbedahbe" <andrew.wahbe@...> wrote:
--- In rest-discuss@yahoogroups.com, Stuart Charlton <stuartcharlton@...> wrote:
> I think I agree with the layered dichotomy between "user" and "user agent" components here. Taking it further, it may be useful to think about these discussions as extending REST to consider the user as a component and the existence of connector(s) between the user and the user agent.
>
> In such an extension, I do believe that media type design has a major impact on the connector between a machine user and a user agent. HTML, for example, could be seen as being conducive to a "render" connector to a human user, or a "spider" connector to a search engine user. Whereas Atom, Atompub, etc. may have a different sort of User component & User/UA connector.
>
Stuart,
This layering is _exactly_ what I was getting at in my discussion with you on your blog: http://www.stucharlton.com/blog/archives/2010/03/building-a-restful-hypermedia.html#comments
I was thinking of you as I was writing the OP ;).
I was calling the "user-agent" the "hypermedia processor" and the "user" the "client platform". I didn't call it "user" because I think even with a web browser, there are more layers to go through immediately beneath the user agent before you get to the user, e.g. the window manager or OS. Also, in some
In this extension, I'd consider the "client platform" as a connector to the User. The browser's GUI is that connector for example.
I really believe that the nature of this connector between the "user-agent"/"hypermedia processor" and the layer below is absolutely central to media type design. As I was saying in the blog comments, I believe that the messages sent "up" to the user-agent must be designed so that they do not impose processing constraints on the user-agent that are carried over to the service (as this would over-constrain the services preventing service heterogeneity or service evolution).
And I am concerned that this goes too far...
The means for doing this is restricting the messages to being events (as opposed to commands as defined here: http://bill-poole.blogspot.com/2008/04/avoid-command-messages.html ) or if they are commands, they shouldn't have processing semantics beyond those of the uniform interface. For example, if they message from the "lower" layer was a "BuyBook" command message, then the processing constraints requiring that a book must be ordered would be imposed on the service. A "BookDesired" event would allow an application to do other things (e.g. survey what books clients are interested in).
Firstly, some terminology to clear up: the messages in RESTful HTTP (assuming POST) shouldn't ever be "command messages". They are messages that transfer state without a mandatory action semantic buried in the content (noting that it can happen optionally in some cases; plenty of HTML forms do it).
The server may interpret such a message as a command, but the user agent needs to look at this purely in terms of data that is expected in the request, and the expected (but not guaranteed) post-conditions that were included in hypermedia surrounding the link relation or in the specification of the link relation itself.
The link relation may be standardized to domains like "buy" and "desire" and "order", which in turn describe the expected effects of transferring state to a denoted processing resource. I don't necessarily believe that overly abstract link relations (e.g. "desire" over "buy") are superior, though perhaps we'd have to dig into a concrete example to explore further.
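[Editorial aside: one way to picture the point above is a user agent that knows only the link relation, the target URI, and the state it is expected to transfer; any "buy" semantics live entirely on the server side. A minimal Python sketch follows, with all names invented for illustration.]

```python
# Hedged sketch: a POST treated purely as state transfer, guided by
# the link relation's stated expectations rather than by a command
# semantic in the client. All names are hypothetical.

from dataclasses import dataclass

@dataclass
class Affordance:
    rel: str       # standardized link relation, e.g. "buy"
    target: str    # URI of the processing resource
    expects: dict  # fields the transferred state is expected to carry

def submit(http_post, affordance, state):
    """Transfer state to the target; no command semantics here."""
    missing = [k for k in affordance.expects if k not in state]
    if missing:
        raise ValueError(f"state missing fields: {missing}")
    # The server may interpret this as a command; the user agent only
    # knows it transferred a representation, with expected (but not
    # guaranteed) post-conditions described by the link relation.
    return http_post(affordance.target, state)

buy = Affordance(rel="buy", target="/orders", expects={"isbn": str})
resp = submit(lambda uri, body: (201, uri, body), buy, {"isbn": "0134757599"})
print(resp)   # (201, '/orders', {'isbn': '0134757599'})
```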
The impact on hypermedia design is that you need to think about how the hypermedia document is translated into the command/document messages to the layer below and how the events coming up are translated into requests to the service. This means that links (via relations or other info) should identify the event types they map to -- some like <link> with "stylesheet" relations are triggered by a user-agent internal event like page load to supplement the client steady state with extra data and others (like <a> links) are triggered by events from the platform.
As I mentioned on my blog, I think there is a need to explore this sort of "reactive" user agent in greater detail, but I still hold that there is good reason why so many, including myself, are on the "goal-driven" path. ;). But I'm willing to try to apply both approaches to a few test cases. Blog fodder!
Stu
On Apr 5, 2010, at 10:38 PM, Stuart Charlton wrote: > Part of the problem is that the Web Architecture and REST are very different ways to think about distributed systems design, whereas SOAP-style SOA services are an evolutionary descendent of (depending who you talk to) distributed objects ala CORBA/COM, or message queues ala MQ or TIBCO, which have a much longer history in some people's minds. > The above statement deserves special emphasis. Leveraging the value of REST not only requires a different way to think about the solution but also a different way to think about the problem. Jan ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
On Mon, Apr 5, 2010 at 2:24 AM, Eric J. Bowman <eric@...> wrote: > Guilherme Silveira wrote: > > > > How should a client decide its next step? > > > > This thread needs a terminology scrub. In REST, the application is > defined as "what the user is trying to accomplish", and the user's > agent is the "client". The client never decides what to do next, only > the user does. The client is there to carry out the user's orders. > > The "user" of course, is not required to be human. But, this thread is > as clear as mud, because "user" and "user-agent" are being combined > into "client". When I as a human am driving a REST application, I am > not the "client" nor am I part of the "client component". > Eric, I agree with, and like, your distinctions among "application," "user", and "user agent". But what needs the terminology scrub is not just this thread, but (at least) the "5.3.3 Data View" section of Roy's thesis (http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm#sec_5_3_3). Here is an example of what I mean: "The model application is therefore an engine that moves from one state to the next by examining and choosing from among the alternative state transitions in the current set of representations. Not surprisingly, this exactly matches the user interface of a hypermedia browser. However, the style does not assume that all applications are browsers. In fact, the application details are hidden from the server by the generic connector interface, and thus a user agent could equally be an automated robot performing information retrieval for an indexing service, a personal agent looking for data that matches certain criteria, or a maintenance spider busy patrolling the information for broken references or modified content [39]." Two observations: 1. 
Saying that a "user agent" can be a bot, a personal agent (presumably an automated one, not a human one), or a spider suggests that Roy is NOT making your distinction between user (human) and user agent (automaton). 2. Roy appears to use "application" in two different (inconsistent?) ways. First, to refer to "a cohesive structure of information and control alternatives through which a user can perform a desired task" (earlier in section 5.3.3). Second, to refer to a software component. For example, "does not assume that all applications are browsers." I'd be curious to know if you agree with my observations, and if so, would you be willing to edit section 5.3.3 to clear up the confusion between user and user agent and between application as "information and control alternatives" vs application as informal expression for a software component such as a browser. Thanks. -- Nick
Great input, Stefan!
Actually, Bryan used similar words to explain to me what SOA was for him. He said: "my take: REST and WS-* are the main choices for an SOA, so REST news is under SOA (where SOA != WS-*, but holistic arch choice)".
That is interesting, since it is the first time I see it that way.
Sorry, I'm more academic than practical, while still being pragmatic. That means I do differentiate both concepts based on the concepts themselves rather than just how things are done in a de facto way.
Since my evil part is on, please excuse me if the following analysis sounds like breaking your logic. (It is clearly trying to do so but for academic purposes.)
Any reader can jump to the summary part, if not compelled to read much blah, blah :D
Ok. First, the style concept. At any time, when building a system, you have the architecture (the real thing, the instance) of the work already done (meaning, the first line of code starts creating an architecture). You can also have your architecture design, which is the architecture-to-be, which many Agile people confuse with the architecture itself. And you can have a style, which is the set of architectural element types, relations and principles that, if you follow them, will give some benefits after some trade-offs. You can even have several styles, applied together to form a bigger one.
So, from your take, there is also the "high-level approach to an organization's IT holistically", which, if I understand correctly, is the actual "way of doing things" for the complete IT department. That means all individual systems should be created and integrated that way, following those rules and generic goals. That is of course not the architecture of one system, but the organization of all systems as a whole. A system of systems? In the end, all that is an organization of elements and interactions, where an element can be a complete system by itself. To me that fits the architecture definition, just at a higher level. And as such, we can talk about an underlying style. See, no difference to me if I tweak a little bit.
The other part is also interesting. REST is indeed an architectural style that may be a great fit for some "way of doing things", but not for all. It is specific to a particular type of system, with particular needs in transfer, with a particular workflow technique.
From your take, that style is the best to achieve the "way of doing things". Good. From mine, that is a style used to implement another style. Humm.
Finally, the concept of SOA as a WS-* style. You are right about how other people see it. Let me tell you how I see it, to be clear. SOA is an architectural style that uses a component called a Service, which follows the business service metaphor. There are other components as well, and interactions defined (using documents as data elements and messaging as a transport). The idea of this style is to get the architecture as close to business as it can.
Now, we have the Web Services Architecture, WSA, and that is a standard, and that is not SOA. The WSA also has components based on services that live on the web (that restriction is very important!). It defines some standards like WSDL and SOAP (worst choice ever!), with a resource view and all.
Now, I know that in practice, a service in WSA is no more than a decorated RPC, that people believe SOA is WS-*, and that people like "REST" because they can use URLs and HTTP to create services (many times just RPC) without SOAP. This is what this kind of discussion tries to clarify.
So, there is a difference: SOA is a style with architectural elements and interactions based on business and the service metaphor, with underlying messaging transport and documents as data elements, while REST is a mashup of styles based on the resource concept as data element, optimized for large hypermedia transfers on top of a networked system, with a state-machine-based workflow. Humm.
In Summary:
Your take is that SOA is not an architectural style but a "high-level approach to an organization's IT holistically" (a "way of doing things") for the whole IT department, and that the best style to do that is REST. From that, I take that either: SOA is service oriented, and thus REST is a style based on services or for creating services; or SOA is not service-related anymore, and REST is the underlying concept.
My gut feeling is the first one is the one people accept the most. I've been asking around, outside this forum, and you see that REST is a synonym of service to many.
Yep. I am complicated, I know. But it is interesting to compare how people see things from outside and from inside. Your take at InfoQ is very interesting, but from my outsider point of view you were just publishing REST under SOA because REST is for creating services. That is why I asked, and now I see I was wrong. Still, there are many out there with the same point of view as mine.
Next question would be: Should we change that perception? If no one writes about SOA anymore, but only about REST, should we think about renaming SOA channel to REST channel? Is REST community interested in keeping this view of REST under SOA?
Cheers!
William Martinez.
--- In rest-discuss@yahoogroups.com, Stefan Tilkov <stefan.tilkov@...> wrote:
>
> On Apr 2, 2010, at 1:08 AM, William Martinez Pomares wrote:
>
> > The question is still the same. REST discussions are held in the SOA arena, as if REST is a SOA sub-product. If SOA is dead, is REST then dead as well? Does REST live only in the SOA realm? Is REST just a web services alternative? Is services the only topic we can talk about in REST?
> >
> > I wonder.
>
>
> As you can define SOA to mean whatever suits your purposes, it all depends. I happened to be InfoQ's SOA lead editor for a long time, and I personally define SOA to be a high-level approach to an organization's IT holistically, not as an architectural style. I've personally found REST to be the most compelling architectural style, and RESTful HTTP as the most useful technology stack, to achieve those high-level goals (much more so than SOAP/WSDL/WS-* and whatever you'd like to call the architectural style it embodies, if you happen to believe that it actually does so). Only if you define SOA (as I believe Roy and many other REST folks do) as the unnamed architectural style underlying WSDL, SOAP & Co., this seems a conflict.
>
> One of the more interesting experiences I had at InfoQ (I'm no longer the lead editor and only loosely associated) was that it got harder and harder to find anyone willing to write a useful technical article that was not REST-related. Of course this may have been selection bias, but I really tried but at some point 90% of the people I respected from the WS-* side of things had become RESTafarians.
>
> Best,
> Stefan
>
On Apr 6, 2010, at 2:58 PM, William Martinez Pomares wrote: > Is REST community interested in keeping this view of REST under SOA? Incidentally, I am personally in the process of slowly developing the idea that the notion of 'service' is[1] actually harmful to REST-oriented thinking. I think 'service' is commonly perceived in the context of 'service layer', of exposing business functionality as a set of operations, and operation-oriented thinking is somewhat contrary to REST. Maybe it is nit-picking, but maybe it is necessary to re-think networked systems development from the ground up to overcome the many apparent misconceptions about REST (and the associated dangers of making things worse, as Stu mentioned). Jan [1] or 'might be' because I have not yet made up my mind :-)
Hello Stuart. Thanks, very complete answers! 1. Actually, I don't think either that the REST="easy services creation" idea is good, so I'm with you totally. On the contrary, I feel that idea is carving into developers that jump right into "REST" after reading a couple of blogs, and then find out they didn't get REST at all. 2. Making SOAP services (read: RPC) is the easiest thing to do. Making them work together is another issue. Making real Web Services (using documents and messaging) is a pain in your little finger since there is no top-down approach. Your system may be a fit for WSA, or a fit for SOA (slightly different benefits and requirements), but the implementation side is failing terribly. 3. The "actual" idea is not the "popular" idea, right? It is the idea we need to pursue to become popular. I agree totally that it is not the easy one to understand, apply and identify as the best to use. 4. The "SOA verbiage" you mention is not actually from SOA, but sadly those concepts were made popular by SOA and from there people get confused. Governance and contracts came from the business world, the first one to increase the interest of business people to invest in the technology (yep, a worm to catch the fish) and the second one came from WSA. Interoperability and loose coupling came from the OO world, and thus created the illusion that SOA was an OO system in disguise. Actually, many SOA implementations out there are distributed OO in disguise. A pity, since pure SOA has nothing to do with it. 5. Totally agree with your point on maintainability! We need to make that clear and loud. (Just don't call them sinners.) 6. Same, I agree about the descendants part. Still, the SOA idea is different from the implementation. I'll be naive enough to think the SOA and services ideas were born pure, and when implementation started, tool vendors took what they had at that time (SOAP as the next CORBA/DCOM generation, and message queues) and did some rebranding to get to sell SOA tools! 
So, I still think we can save the pure ideals of SOA (but not with REST, which is a different kind of beast). Now, question, what should we do about it? Thanks again, Stuart. Cheers! William Martinez --- In rest-discuss@yahoogroups.com, Stuart Charlton <stuartcharlton@...> wrote: > > > > To answer your questions: > - I don't think it's a good idea to equate REST with "easy services creation". I'd be more inclined if people equated specifics like "Atom/Atompub" or "OData" with "easy DATA services creation", personally. > > Enterprises should keep building SOAP web services if they are happy with RPC or messaging systems, and can force their vendors to make it "easier". > > - The actual *idea* for REST is not "easy services creation" -- it is "large-scale information sharing & manipulation". It is for interoperability at very-large scale, a "system of systems" architecture: http://www.infoed.com/Open/PAPERS/systems.htm > > As such, it is not necessarily "easy" to apply to all situations, it requires knowledge & experience (like most things). > > - Explaining REST as an architectural style works for some, but more likely there's also a need to explain REST in the context of SOA's verbiage (e.g. governance, interoperability, loose coupling, contracts). > > - Probably there will need to be more mainstream books and tooling for the lay-developer, that gets into *how* the development experience is different. Unfortunately, that's a moving target. > > Some comments: > > The general trend among the SOA crowd has been to equate REST with "Plain XML over HTTP", and to this day, it remains the popular understanding in most enterprises. Most don't really understand the key features (URIs and hyperlinks), though I have had some exposure that some are beginning to explore deeper once they dig into Atom & Atompub. 
> > The problem with using REST as "easy services creation", is that you may be committing greater sins to your maintainability than using WSDL & SOAP if you don't use HTTP & URIs properly (e.g. performing state changes with GET, using a single URI as an "endpoint", including application-specific methods in the XML body, etc. ) At least with SOAP & WSDL there is tooling & infrastructure for governing the little interoperability you do get with it. With REST, the tooling & written literature is still very young, and the vast majority of developers are never going to read Roy's thesis. > > Part of the problem is that the Web Architecture and REST are very different ways to think about distributed systems design, whereas SOAP-style SOA services are an evolutionary descendent of (depending who you talk to) distributed objects ala CORBA/COM, or message queues ala MQ or TIBCO, which have a much longer history in some people's minds. > > Cheers > Stu > > > > > > ________________________________ > From: William Martinez Pomares <wmartinez@...> > To: rest-discuss@yahoogroups.com > Sent: Wed, March 31, 2010 6:25:48 PM > Subject: [rest-discuss] What do you think about REST being a synonym of Service creation technique? > > > Hello all. > I got the newsletter from InfoQ this week, and suddenly I noticed something that has been there since long, but till now I didn't realize it. > > In the SOA channel articles and news, there were only one of each. One article about the REST maturity levels, and one news item about REST security. Oh my... > I thought there was some mistake, that those two items belong to the REST channel, and then I realized that there was no REST channel in InfoQ! > > Then, I had a quick twitter chat with Ryan Sloboyan, Editor from InfoQ. It seems, REST is seen just as a way to create services, opposite to SOAP / WS-* lineage. 
Ryan told me there is always a possibility to create a REST-only channel, but he thinks that would be a narrow one, with not so many readers. The expectation is then that REST readers will come just to learn how to create easy services not using SOAP. > > Now, Jack Vaughan from SearchSOA was in a fireside chat with me, at the Java Symposium from TheServerSide, where I was to talk about REST APIs, and their real meaning. He told me his idea of REST was similar to that one of the new way of doing services for SOA. We got a full room, and I asked if someone has ever read Roy's dissertation. None, and some with faces of "who the hell is that Roy guy?". The question of how many thought REST was a new way of doing services yielded several hands up. Also the one about REST as an HTTP-driven RPC. > > So, all in all, it seems the idea of REST as an Easy Services Creation technique is strong, even influencing the InfoQ categorization of articles and news. > > I know some of you do post on InfoQ, and are even editors. > What are your thoughts? Do you think it is good to keep that idea? > Do you think that is actually the idea? > What do you think of posting REST as an architectural style? > > I want to hear your opinions, since I feel that would be an interesting discussion. > > William Martinez Pomares. >
Then we may need to start discussing the pros and cons of using a concept. The problem is that many use a name detached from its meaning, which leads to people understanding different things. For instance, Service is not RPC. People started using them as synonyms. When I came to do a Service, all the tools could offer was RPC, under the service name, and I could not do my work! What do you get? JAX-RS, a REST framework, that allows you to put an annotation before your method and some other annotations before your parameters, to map to a URI context and query section. That is REST services support in Java!??? Evil. William Martinez. --- In rest-discuss@yahoogroups.com, Jan Algermissen <algermissen1971@...> wrote: > > > On Apr 6, 2010, at 2:58 PM, William Martinez Pomares wrote: > > > Is the REST community interested in keeping this view of REST under SOA? > > Incidentally, I am personally in the process of slowly developing the idea that the notion of 'service' is[1] actually harmful to REST-oriented thinking. > > I think 'service' is commonly perceived in the context of 'service layer', of exposing business functionality as a set of operations, and operation-oriented thinking is somewhat contrary to REST. > > Maybe it is nit-picking, but maybe it is necessary to re-think networked systems development from the ground up to overcome the many apparent misconceptions about REST (and the associated dangers of making things worse, as Stu mentioned). > > Jan > > [1] or 'might be' because I have not yet made up my mind :-) >
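For readers unfamiliar with the annotation-driven style William is criticizing, here is a rough Python analogue of what JAX-RS-style routing does (annotation maps method to URI template). Everything below — the decorator, the route table, the handler — is a hypothetical toy, not JAX-RS itself.

```python
# A toy analogue of annotation-driven URI mapping: a decorator registers
# a function under a (method, URI-template) key, much as JAX-RS-style
# annotations map a Java method to a path. All names are invented.
routes = {}

def path(template, method="GET"):
    """Register the decorated function as the handler for this route."""
    def register(fn):
        routes[(method, template)] = fn
        return fn
    return register

@path("/orders/{id}")
def get_order(id):
    return {"order": id}

# Dispatch is by (method, template): the framework's route table,
# not hypermedia in representations, decides what each URI means --
# which is the heart of William's complaint.
handler = routes[("GET", "/orders/{id}")]
print(handler(42))  # {'order': 42}
```

Whether this counts as "REST support" is exactly the dispute: the mechanism maps operations onto URIs, and nothing in it involves resources, representations, or links.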
--- In rest-discuss@yahoogroups.com, "Eric J. Bowman" <eric@...> wrote: > > "wahbedahbe" wrote: > > > > I wouldn't call VoiceXML machine-oriented -- it describes a > > voice-driven UI similar to how HTML describes a visual, text-based > > UI. CCXML, its sister language, is machine-oriented though. > > > > As VoiceXML is to HTML, so CCXML is to PHP. To users in the system, > any PHP and/or CCXML code driving resource state is opaque behind the > Uniform Interface. CCXML is no more relevant to application state on > the client than PHP is, so I don't think it solves for the problem of > RESTful m2m media types for representing resource state to user-agents. > > -Eric I couldn't disagree more. A CCXML client makes HTTP requests for CCXML pages that run on a platform that controls client-side resources. Many people write static CCXML pages, but that doesn't make it any less of a representation language -- each page still has a URI. CCXML is NOT used to generate VoiceXML or any other markup language (not in any system I've seen). I'm puzzled as to why you'd have this view. Can you point me to a system where CCXML is used like PHP? Regards, Andrew
On 6 April 2010 14:18, Jan Algermissen <algermissen1971@...> wrote: > > Incidentally, I am personally in the process of slowly developing the idea > that the notion of 'service' is[1] actually harmful to REST-oriented > thinking. > > That is my opinion too, fwiw; the terms (and notions) of "service" and "API" should not be used in REST...
A quick errata notice.
Whenever you see "Components" mentioned regarding architectural styles, please read "Elements" instead. Yep, they are different things/concepts, and being as fond of concepts as I am, you bet I reread the message in horror.
Sorry about that one.
William Martinez Pomares.
--- In rest-discuss@yahoogroups.com, "William Martinez Pomares" <wmartinez@...> wrote:
>
> Great input, Stefan!
> Actually, Bryan used similar words to explain to me what SOA was for him. He said: "my take: REST and WS-* are the main choices for an SOA, so REST news is under SOA (where SOA != WS-*, but holistic arch choice)".
> That is interesting, since it is the first time I see it that way.
>
> Sorry, I'm more academic than practical, while still being pragmatic. That means I do differentiate both concepts based on the concepts themselves rather than just on how things are done in a de facto way.
>
> Since my evil part is on, please excuse me if the following analysis sounds like breaking your logic. (It is clearly trying to do so but for academic purposes.)
>
> Any reader can jump to the summary part, if not compelled to read much blah, blah :D
>
> Ok. First the style concept. At any time, when building a system, you have the architecture (the real thing, the instance) of the work already done (meaning, the first line of code starts creating an architecture). You can also have your architecture design, which is the architecture-to-be, which many Agile people confuse with the architecture itself. And you can have a style, which is the set of architectural element types, relations and principles that, if you follow them, will give some benefits after some trade-offs. You can even have several styles, applied together to form a bigger one.
>
> So, from your take, there is also the "high-level approach to an organization's IT holistically", which, if I understand correctly, is the actual "way of doing things" for the complete IT department. That means all individual systems should be created and integrated that way, following those rules and generic goals. That is of course not the architecture of one system, but the organization of all systems as a whole. A system of systems? In the end, all that is an organization of elements and interactions, where an element can be a complete system by itself. To me that fits the architecture definition, just at a higher level. And as such, we can talk about an underlying style. See, no difference to me if I tweak it a little bit.
>
> The other part is also interesting. REST is indeed an architectural style that may be a great fit for some "way of doing things", but not for all. It is specific to a particular type of system, with particular needs in transfer, with a particular workflow technique.
> From your take, that style is the best to achieve the "way of doing things". Good. From mine, that is a style used to implement another style. Humm.
>
> Finally, the concept of SOA as a WS-* style. You are right about how other people see it. Let me tell you how do I see it to be clear. SOA is an architectural style that uses a component called Service, which behaves like a business service metaphor. There are other components as well, and interactions defined (using documents as data elements and messaging as a transport). The idea of this style is to get the architecture as close to business as it can.
>
> Now, we have the Web Services Architecture, WSA, and that is a standard, and that is not SOA. The WSA also has components based on services that live on the web (that restriction is very important!). It defines some standards like WSDL and SOAP (worst choice ever!), with a resource view and all.
>
> Now, I know that in practice a service in WSA is no more than a decorated RPC, that people believe SOA is WS-*, and that people like "REST" because they can use URLs and HTTP to create services (many times just RPC) without SOAP. This is what these kinds of discussions try to clarify.
>
> So, there is a difference: SOA is a style with architectural elements and interactions based on business and the service metaphor, with an underlying messaging transport and documents as data elements, while REST is a mashup of styles based on the resource concept as data element, optimized for large hypermedia transfers on top of a networked system, with a state-machine-based workflow. Humm.
>
> In Summary:
> Your take is that SOA is not an architectural style but a "high-level approach to an organization's IT holistically" ("way of doing things") for the whole IT department, and that the best style to do that is REST. From that, I take that either SOA is service oriented and thus REST is a style based on services or made to create services, or SOA is not service related anymore, and REST is the underlying concept.
>
> My gut feeling is the first one is the one people accept the most. I've been asking around, outside this forum, and you see that REST is a synonym of service to many.
>
> Yep. I am complicated, I know. But it is interesting to compare how people see from outside and from inside. Your take at InfoQ is very interesting, but from my outsider point of view you were just publishing REST under SOA because REST is to create services. That is why I asked, and now I see I was wrong. Still, there are many out there with the same point of view of mine.
>
> Next question would be: Should we change that perception? If no one writes about SOA anymore, but only about REST, should we think about renaming SOA channel to REST channel? Is REST community interested in keeping this view of REST under SOA?
>
> Cheers!
>
> William Martinez.
>
> --- In rest-discuss@yahoogroups.com, Stefan Tilkov <stefan.tilkov@> wrote:
> >
> > On Apr 2, 2010, at 1:08 AM, William Martinez Pomares wrote:
> >
> > > The question is still the same. REST discussions are held in the SOA arena, as if REST is a SOA sub-product. If SOA is dead, is REST then dead as well? Does REST live only in the SOA realm? Is REST just a web services alternative? Are services the only topic we can talk about in REST?
> > >
> > > I wonder.
> >
> >
> > As you can define SOA to mean whatever suits your purposes, it all depends. I happened to be InfoQ's SOA lead editor for a long time, and I personally define SOA to be a high-level approach to an organization's IT holistically, not as an architectural style. I've personally found REST to be the most compelling architectural style, and RESTful HTTP as the most useful technology stack, to achieve those high-level goals (much more so than SOAP/WSDL/WS-* and whatever you'd like to call the architectural style it embodies, if you happen to believe that it actually does so). Only if you define SOA (as I believe Roy and many other REST folks do) as the unnamed architectural style underlying WSDL, SOAP & Co., this seems a conflict.
> >
> > One of the more interesting experiences I had at InfoQ (I'm no longer the lead editor and only loosely associated) was that it got harder and harder to find anyone willing to write a useful technical article that was not REST-related. Of course this may have been selection bias; I really tried, but at some point 90% of the people I respected from the WS-* side of things had become RESTafarians.
> >
> > Best,
> > Stefan
> >
>
I've posted a blog entry labeled "A RESTful Hypermedia API in Three Easy Steps"[1]. I used Fielding's "REST APIs must be hypertext-driven"[2] as a reference. I'd appreciate all the feedback anyone would like to offer regarding the concepts, terminology, and implementation details described there. If you prefer not to clutter this list, feel free to comment on the blog or email me directly. I also hang out in the #rest IRC channel on freenode if you'd like to carry on there. Thanks in advance. [1] http://amundsen.com/blog/archives/1041 [2] http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven mca http://amundsen.com/blog/
You're going to be pissed with me because I didn't even read the entire article (I'll do it at home tonight), but after reading this sentence "this API will define a simple list management service." and since I just said in another post that IMO "API" and "service" should not be used to describe REST, I tried this simple semantic analysis: CTRL+F resource — and not a single result was found! Now, should a description of a REST-something put more emphasis on "resources" rather than "services"? It may be just a question of "naming", but isn't it an important one? Please forgive me if this somewhat sounds like "flame", that is not my intention... On 6 April 2010 16:53, mike amundsen <mamund@...> wrote: > > > I've posted a blog entry labeled "A RESTful Hypermedia API in Three > Easy Steps"[1]. I used Fielding's "REST APIs must be > hypertext-driven"[2] as a reference. > > I'd appreciate all the feedback anyone would like to offer regarding > the concepts, terminology, and implementation details described there. > If you prefer not to clutter this list, feel free to comment on the > blog or email me directly. I also hang out in the #rest IRC channel on > freenode if you'd like to carry on there. > > Thanks in advance. > > [1] http://amundsen.com/blog/archives/1041 > [2] http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven > > mca > http://amundsen.com/blog/ > >
2010/4/6 António Mota <amsmota@...> > > > You're going to be pissed with me because I didn't even read the entire article (I'll do it at home > tonight) but after reading this sentence > > "this API will define a simple list management service." > > and since I just said in another post that IMO "API" and "service" should not be used to describe > REST, I tried this simple semantic analysis > > CTRL+F resource > > and not a single result was found! > > Now, should a description of a REST-something put more emphasis on "resources" rather than > "services"? It may be just a question of "naming", but isn't it an important one? I'd extend this comment to say it'd be better if it stuck to resources/representations instead of using 'data structure', which sort of blurs the two. This too may seem nit-picky, but using imprecise terminology just raises the communication overhead, I think. Thanks, --tim
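António's and Tim's point — that a REST description should be in terms of resources and representations — can be illustrated with a small sketch of a list-management representation that carries its own transitions as links. The JSON shape, the link-relation names, and the URIs below are all invented for illustration; they are not from Mike's article.

```python
# Sketch: a representation of a "list" resource that advertises its
# next legal transitions as typed links, so a client follows link
# relations instead of hardcoding URIs. All names are hypothetical.
representation = {
    "items": ["milk", "eggs"],
    "links": [
        {"rel": "add-item", "href": "/lists/7/items", "method": "POST"},
        {"rel": "self", "href": "/lists/7", "method": "GET"},
    ],
}

def follow(rep, rel):
    """Pick the transition the representation advertises under `rel`."""
    for link in rep["links"]:
        if link["rel"] == rel:
            return link
    raise LookupError(f"representation offers no '{rel}' transition")

print(follow(representation, "add-item")["href"])  # /lists/7/items
```

Described this way, the vocabulary is resources ("the list"), representations (the JSON document), and link relations — "service" and "API" never need to appear.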
Comments inline
Sent from my iPad
On 2010-04-06, at 6:21 AM, "William Martinez Pomares" <wmartinez@...> wrote:
2. Making SOAP Services (read RPC) is the easiest thing to do. Making them work together is another issue. Making real Web Services (using documents and messaging) is a pain in your little finger since there is no top-down approach. Your system may be a fit for WSA, or a fit for SOA (slightly different benefits and requirements), but the implementation side is failing terribly.
I tend to believe this is because "real services" are a concept created by SOA consultants (I was one of them, sorry).... it is hard for vendors to apply to their products.
That, and my sense is that the adoption of WS technology peaked in the WSDL 1.1 days, doing "just enough" for most cases, so that the truly advanced cases with WS-AT or WS-RM only have relatively small investments behind them. There needs to be $$$ to drive vendors to improve these stacks, and it's clearly not there to the extent they had hoped.
4. The "SOA verbiage" you mention is not actually from SOA; sadly, those concepts were made popular by SOA, and from there people get confused. Governance and contracts came from the business world, the first one to increase the interest of business people in investing in the technology (yep, a worm to catch the fish), and the second one came from WSA. Interoperability and loose coupling came from the OO world, and thus created the illusion that SOA was an OO system in disguise. Actually, many SOA implementations out there are Distributed OO in disguise. A pity, since pure SOA has nothing to do with it.
It's a bit of a stretch to say pure SOA has nothing to do with OO; design by contract was a big innovation in a couple of OO languages, as were many of the tenets such as separation of interface from implementation, composition, etc. SOA eschews object state and identity, which is ironic, because that's exactly what REST embraces, if you look at it through an object-oriented lens.
Otherwise, again speaking from my former life as a SOA consultant with BEA, things like governance and contracts weren't just sell jobs, they were an attempt to separate and save the valuable ideas in SOA from the atrocities of the WSA...
My broader point is that REST as an architectural style to enable SOA has been an idea I have heard and supported dating back to 2003 or so... the problem is that the REST community has limited interest in SOA, and the SOA community thinks in different terms from architectural constraints and styles. So if there is going to be bridge building, there has to be some terminology agreement, but I haven't really seen much effort there for several years. After the W3C workshop on the "Web of Services", agreement was declared, but the reality was that most advocates moved on from bridge-building and stayed in their own world.
6. Same, I agree about the descendants part. Still, the SOA idea is different from the implementation. I'll be naive enough to think the SOA and Services ideas were born pure, and when implementation started, tool vendors took what they had at that time (SOAP as the next CORBA/DCOM generation, and message queues) and did some rebranding to sell SOA tools! So, I still think we can save the pure ideals of SOA (but not with REST, which is a different kind of beast).
Now, question, what should we do about it?
Well, it's not clear what can be done. Enterprises clearly don't have money to replace yet another generation of middleware, so there's little incentive for investment to learn or adopt a different approach (by customers or vendors). Most of the REST improvements and understandings are happening on the Web and in consulting engagements, not by traditional vendors and their patrons. That may change, but it will require some economic shifts and/or killer product breakthroughs. Cloud computing, for example, seems to be very REST-infected, for better or worse.
Stu
Hi Stuart. > Sent from my iPad Cool! Quick adoption, that is. > It's a bit of a stretch to say pure SOA has nothing to do with OO; I didn't really mean that. I actually meant what you said: SOA concepts were created somewhere else. Governance and Contracts are pure business ones; although OO may have used the contract metaphor in some languages, the concept is not from there, as OO didn't have it originally. It became a core concept in WSA. I meant to say SOA is different from Distributed OO. They may share some concepts, but they get to them in a different way. You know, that is actually another discussion in the SOA forums: how much a Service resembles an Object, whether there is any difference in properties, etc. That is off-topic in this forum, I guess, but my take is that SOA and OO share quality properties, but they differ in the core metaphor. Nice discussion! William Martinez. --- In rest-discuss@yahoogroups.com, Stuart Charlton <stuartcharlton@...> wrote: > > Comments inline > > Sent from my iPad > > On 2010-04-06, at 6:21 AM, "William Martinez Pomares" <wmartinez@...> wrote: > > 2. Making SOAP Services (read RPC) is the easiest thing to do. Making them work together is another issue. Making real Web Services (using documents and messaging) is a pain in your little finger since there is no top-down approach. Your system may be a fit for WSA, or a fit for SOA (slightly different benefits and requirements), but the implementation side is failing terribly. > > I tend to believe this is because "real services" are a concept created by SOA consultants (I was one of them, sorry).... it is hard for vendors to apply to their products. > > That, and my sense is that the adoption of WS technology peaked in the WSDL 1.1 days, doing "just enough" for most cases, so that the truly advanced cases with WS-AT or WS-RM only have relatively small investments behind them.
There needs to be $$$ to drive vendors to improve these stacks, and it's clearly not there to the extent they had hoped. > > 4. The "SOA verbiage" you mention is not actually from SOA; sadly, those concepts were made popular by SOA, and from there people get confused. Governance and contracts came from the business world, the first one to increase the interest of business people in investing in the technology (yep, a worm to catch the fish), and the second one came from WSA. Interoperability and loose coupling came from the OO world, and thus created the illusion that SOA was an OO system in disguise. Actually, many SOA implementations out there are Distributed OO in disguise. A pity, since pure SOA has nothing to do with it. > > It's a bit of a stretch to say pure SOA has nothing to do with OO; design by contract was a big innovation in a couple of OO languages, as were many of the tenets such as separation of interface from implementation, composition, etc. SOA eschews object state and identity, which is ironic, because that's exactly what REST embraces, if you look at it through an object-oriented lens. > > Otherwise, again speaking from my former life as a SOA consultant with BEA, things like governance and contracts weren't just sell jobs, they were an attempt to separate and save the valuable ideas in SOA from the atrocities of the WSA... > > My broader point is that REST as an architectural style to enable SOA has been an idea I have heard and supported dating back to 2003 or so... the problem is that the REST community has limited interest in SOA, and the SOA community thinks in different terms from architectural constraints and styles. So if there is going to be bridge building, there has to be some terminology agreement, but I haven't really seen much effort there for several years. After the W3C workshop on the "Web of Services", agreement was declared, but the reality was that most advocates moved on from bridge-building and stayed in their own world. > > 6.
Same, I agree about the descendants part. Still, the SOA idea is different from the implementation. I'll be naive enough to think the SOA and Services ideas were born pure, and when implementation started, tool vendors took what they had at that time (SOAP as the next CORBA/DCOM generation, and message queues) and did some rebranding to sell SOA tools! So, I still think we can save the pure ideals of SOA (but not with REST, which is a different kind of beast). > > Now, question, what should we do about it? > > Well, it's not clear what can be done. Enterprises clearly don't have money to replace yet another generation of middleware, so there's little incentive for investment to learn or adopt a different approach (by customers or vendors). Most of the REST improvements and understandings are happening on the Web and in consulting engagements, not by traditional vendors and their patrons. That may change, but it will require some economic shifts and/or killer product breakthroughs. Cloud computing, for example, seems to be very REST-infected, for better or worse. > > Stu >
Hi William,
Frankly, I'm not interested in discussing what SOA is or is not, as there's nobody who can decide who's right. It suits my purpose to define SOA as something that's high-level enough (or vague enough, if you prefer); the reason is that I see many people who want something from SOA (easy interoperability, loose coupling, wide support on different platforms, …) that they can achieve more easily with RESTful HTTP than with SOAP/WSDL/WS-*. I can give them REST and they can still have their SOA cake.
Related to that, I define "service" more as a mini-application than as an individual interface. When implemented using RESTful HTTP, it becomes a set of related resources (more commonly called a Web app) instead of a Web service (a set of WSDL-described individual SOAP interfaces).
If you define SOA differently, this logic is flawed. Which is OK.
Best,
Stefan
On Apr 6, 2010, at 2:58 PM, William Martinez Pomares wrote:
> Great input, Stefan!
> Actually, Bryan used similar words to explain to me what SOA was for him. He said: "my take: REST and WS-* are the main choices for an SOA, so REST news is under SOA (where SOA != WS-*, but holistic arch choice)".
> That is interesting, since it is the first time I see it that way.
>
> Sorry, I'm more academic than practical, while still being pragmatic. That means I do differentiate both concepts based on the concepts themselves rather than just on how things are done in a de facto way.
>
> Since my evil part is on, please excuse me if the following analysis sounds like breaking your logic. (It is clearly trying to do so but for academic purposes.)
>
> Any reader can jump to the summary part, if not compelled to read much blah, blah :D
>
> Ok. First the style concept. At any time, when building a system, you have the architecture (the real thing, the instance) of the work already done (meaning, the first line of code starts creating an architecture). You can also have your architecture design, which is the architecture-to-be, which many Agile people confuse with the architecture itself. And you can have a style, which is the set of architectural element types, relations and principles that, if you follow them, will give some benefits after some trade-offs. You can even have several styles, applied together to form a bigger one.
>
> So, from your take, there is also the "high-level approach to an organization's IT holistically", which, if I understand correctly, is the actual "way of doing things" for the complete IT department. That means all individual systems should be created and integrated that way, following those rules and generic goals. That is of course not the architecture of one system, but the organization of all systems as a whole. A system of systems? In the end, all that is an organization of elements and interactions, where an element can be a complete system by itself. To me that fits the architecture definition, just at a higher level. And as such, we can talk about an underlying style. See, no difference to me if I tweak it a little bit.
>
> The other part is also interesting. REST is indeed an architectural style that may be a great fit for some "way of doing things", but not for all. It is specific to a particular type of system, with particular needs in transfer, with a particular workflow technique.
> From your take, that style is the best to achieve the "way of doing things". Good. From mine, that is a style used to implement another style. Humm.
>
> Finally, the concept of SOA as a WS-* style. You are right about how other people see it. Let me tell you how do I see it to be clear. SOA is an architectural style that uses a component called Service, which behaves like a business service metaphor. There are other components as well, and interactions defined (using documents as data elements and messaging as a transport). The idea of this style is to get the architecture as close to business as it can.
>
> Now, we have the Web Services Architecture, WSA, and that is a standard, and that is not SOA. The WSA also has components based on services that live on the web (that restriction is very important!). It defines some standards like WSDL and SOAP (worst choice ever!), with a resource view and all.
>
> Now, I know that in practice a service in WSA is no more than a decorated RPC, that people believe SOA is WS-*, and that people like "REST" because they can use URLs and HTTP to create services (many times just RPC) without SOAP. This is what these kinds of discussions try to clarify.
>
> So, there is a difference: SOA is a style with architectural elements and interactions based on business and the service metaphor, with an underlying messaging transport and documents as data elements, while REST is a mashup of styles based on the resource concept as data element, optimized for large hypermedia transfers on top of a networked system, with a state-machine-based workflow. Humm.
>
> In Summary:
> Your take is that SOA is not an architectural style but a "high-level approach to an organization's IT holistically" ("way of doing things") for the whole IT department, and that the best style to do that is REST. From that, I take that either SOA is service oriented and thus REST is a style based on services or made to create services, or SOA is not service related anymore, and REST is the underlying concept.
>
> My gut feeling is the first one is the one people accept the most. I've been asking around, outside this forum, and you see that REST is a synonym of service to many.
>
> Yep. I am complicated, I know. But it is interesting to compare how people see from outside and from inside. Your take at InfoQ is very interesting, but from my outsider point of view you were just publishing REST under SOA because REST is to create services. That is why I asked, and now I see I was wrong. Still, there are many out there with the same point of view of mine.
>
> Next question would be: Should we change that perception? If no one writes about SOA anymore, but only about REST, should we think about renaming SOA channel to REST channel? Is REST community interested in keeping this view of REST under SOA?
>
> Cheers!
>
> William Martinez.
>
> --- In rest-discuss@yahoogroups.com, Stefan Tilkov <stefan.tilkov@...> wrote:
> >
> > On Apr 2, 2010, at 1:08 AM, William Martinez Pomares wrote:
> >
> > > The question is still the same. REST discussions are held in the SOA arena, as if REST is a SOA sub-product. If SOA is dead, is REST then dead as well? Does REST live only in the SOA realm? Is REST just a web services alternative? Are services the only topic we can talk about in REST?
> > >
> > > I wonder.
> >
> >
> > As you can define SOA to mean whatever suits your purposes, it all depends. I happened to be InfoQ's SOA lead editor for a long time, and I personally define SOA to be a high-level approach to an organization's IT holistically, not as an architectural style. I've personally found REST to be the most compelling architectural style, and RESTful HTTP as the most useful technology stack, to achieve those high-level goals (much more so than SOAP/WSDL/WS-* and whatever you'd like to call the architectural style it embodies, if you happen to believe that it actually does so). Only if you define SOA (as I believe Roy and many other REST folks do) as the unnamed architectural style underlying WSDL, SOAP & Co., this seems a conflict.
> >
> > One of the more interesting experiences I had at InfoQ (I'm no longer the lead editor and only loosely associated) was that it got harder and harder to find anyone willing to write a useful technical article that was not REST-related. Of course this may have been selection bias; I really tried, but at some point 90% of the people I respected from the WS-* side of things had become RESTafarians.
> >
> > Best,
> > Stefan
> >
>
>
Nick Gall wrote: > > Eric J. Bowman wrote: > > > Guilherme Silveira wrote: > > > > > > How should a client decide its next step? > > > > > > > This thread needs a terminology scrub. In REST, the application is > > defined as "what the user is trying to accomplish", and the user's > > agent is the "client". The client never decides what to do next, > > only the user does. The client is there to carry out the user's > > orders. > > > > The "user" of course, is not required to be human. But, this > > thread is as clear as mud, because "user" and "user-agent" are > > being combined into "client". When I as a human am driving a REST > > application, I am not the "client" nor am I part of the "client > > component". > > > > Eric, I agree with, and like, your distinctions among "application," > "user", and "user agent". But what needs the terminology scrub is not > just this thread, but (at least) the 5.3.3 Data View section > <http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm#sec_5_3_3> > of Roy's thesis. > Yeah, that paragraph is poorly worded, and is inconsistent with 5.2.3: "A user agent uses a client connector to initiate a request and becomes the ultimate recipient of the response. The most common example is a Web browser...". I give the thesis an A- ;-). Roy has stated that we're allowed to reasonably disagree with his thesis, yet all the same I somewhat expect to be clobbered on this... Roy, I'm not attempting to re-write your thesis; the suggested edits which follow are meant to illustrate the problems we're having with discussing m2m in terms of the REST style. My edits are off-the-cuff, not the product of a doctoral effort, of course. > > Two observations: > > 1. Saying that a "user agent" can be a bot, a personal agent > (presumably an automated one, not a human one), or a spider suggests > that Roy is NOT making your distinction between user (human) and user > agent (automaton).
> I agree with Roy entirely that REST ends at the user agent -- m2m presents a problem that's tangential to REST. But, REST is a style, not a blueprint. What I'm trying to accomplish is to apply REST's terminology to this problem area, for the reasons Stuart pointed out: "Implementers clearly are curious how to retain the constraints of the architecture and build m2m agents. While the techniques for building goal-directed agents aren't particular to REST, they're certainly of interest to this audience, and it's been a sorely lacking area of exploration, IMO." I start by altering Table 5-2, "resolver = DNS interface" because components don't care what the underlying resolver library is, they all have the same generic interface, that to me is the resolver connector. Then I alter Table 5-3, "dns = BIND (or other library) component" and "user = machine or human decider". If I'm wrong that BIND is a component beyond the resolver connector, then the basis of my thinking about "user component" is misguided. > > 2. Roy appears to use "application" in two different > (inconsistent?) ways. First, to refer to "a cohesive structure of > information and control alternatives through which a user can perform > a desired task" (earlier in section 5.3.3). Second, to refer to a > software component. For example, "does not assume that all > applications are browsers." > I've always had a problem with the last paragraph in 5.3.3, and I agree that it's a hindrance to learning REST. Elsewhere the term "browser application" is used, and it would be nice if application without such prefix only referred to "REST application" in Chapter 5. This holds true, except for the paragraph in question, but I don't recall Roy being called on it before... 
:-) > > I'd be curious to know if you agree with my observations, and if so, > would you be willing to edit section 5.3.3 to clear up the confusion > between user and user agent and between application as "information > and control alternatives" vs application as informal expression for a > software component such as a browser. > Here goes: " The model user agent is therefore an engine that moves from one state to the next by examining the alternative state transitions in the current set of representations, presenting these options to the user, and following the user's chosen transition. Not surprisingly, this exactly matches the user interface of a hypermedia browser. However, the style does not assume that all user agents are browsers. In fact, user agent details are hidden from the server by the generic connector interface, and thus a user agent could equally be an automated robot performing information retrieval for an indexing service, a personal agent looking for data that matches certain criteria, or a maintenance spider busy patrolling the information for broken references or modified content. " Googlebot is dispatched by different Google "machine users", so it's a user agent for an m2m process -- one which works by crawling human- oriented APIs. A personal agent resupplying my toilet paper stocks is a user agent for an m2m process on behalf of a human user. A link- checker is a user agent for an m2m or m2h(uman) process, depending on whether it's reporting broken links to a human user or reporting broken links to an automated repair process. So I think my approach is a valid application of the REST style, even if REST's wording isn't an exact match. This approach lines up with my worldview that REST is a process, not a result. I'm using REST as a tool to understand and evaluate m2m development issues that are beyond the scope of REST, by introducing the notion of a "user component" to help such development interoperate within the constraints of REST. 
My devious agenda is to nip in the bud the notion that a RESTful system needs separate APIs for humans and machines. I feel it's important to point out that a REST API designed to enable a task to be completed over the wire, can cater to both. Common sense dictates that one API is easier to maintain than two APIs, particularly when they are both designed to enable the same tasks to be completed over the wire. What makes no sense, is a m2m API that can't be debugged by a human, because it's a different API than the one meant for humans. Having two parallel APIs, one for machines and one for humans, doesn't violate any constraint of REST. I argue that it violates the spirit and intent of REST, again by introducing the notion of a "user component" into the discussion as a layer worthy of consideration despite being opaque behind the generic connector interface. So to clarify, my position that having an API instruct a client how to proceed violates the layered-system constraint, is based on the acceptance of a "user component" as a layer in the system. While that isn't strictly REST, I believe it's a valid extension of REST terminology to encompass a specific problem REST isn't meant to solve. -Eric
"wahbedahbe" wrote: > >"Eric J. Bowman" wrote: > > > > "wahbedahbe" wrote: > > > > > > I wouldn't call VoiceXML machine-oriented -- it describes a > > > voice-driven UI similar to how HTML describes a visual, text-based > > > UI. CCXML, its sister language, is machine-oriented though. > > > > > > > As VoiceXML is to HTML, so CCXML is to PHP. To users in the system, > > any PHP and/or CCXML code driving resource state is opaque behind > > the Uniform Interface. CCXML is no more relevant to application > > state on the client than PHP is, so I don't think it solves for the > > problem of RESTful m2m media types for representing resource state > > to user-agents. > > > > -Eric > > I couldn't disagree more. A CCXML client makes HTTP requests for > CCXML pages that run on a platform that controls client-side > resources. Many people write static CCXML pages, but that doesn't > make it any less of a representation language -- each page still has > a URI. CCXML is NOT used to generate VoiceXML or any other markup > language (not in any system I've seen). I'm puzzled as to why you'd > have this view. Can you point me to a system where CCXML is used like > PHP? Regards, > I stand corrected. My PHP analogy wasn't meant to suggest that CCXML generates VoiceXML, but it does route calls to VoiceXML apps for handling, right? VoiceXML doesn't need CCXML any more than HTML needs PHP. But I thought CCXML mostly needed VoiceXML, just as PHP mostly needs HTML. The similarity I was going for was server-side language, I wasn't aware that CCXML is also intended for clients. -Eric
On Tue, Apr 6, 2010 at 4:59 PM, Eric J. Bowman <eric@...>wrote: > "wahbedahbe" wrote: > > > >"Eric J. Bowman" wrote: > > > > > > "wahbedahbe" wrote: > > > > > > > > I wouldn't call VoiceXML machine-oriented -- it describes a > > > > voice-driven UI similar to how HTML describes a visual, text-based > > > > UI. CCXML, its sister language, is machine-oriented though. > > > > > > > > > > As VoiceXML is to HTML, so CCXML is to PHP. To users in the system, > > > any PHP and/or CCXML code driving resource state is opaque behind > > > the Uniform Interface. CCXML is no more relevant to application > > > state on the client than PHP is, so I don't think it solves for the > > > problem of RESTful m2m media types for representing resource state > > > to user-agents. > > > > > > -Eric > > > > I couldn't disagree more. A CCXML client makes HTTP requests for > > CCXML pages that run on a platform that controls client-side > > resources. Many people write static CCXML pages, but that doesn't > > make it any less of a representation language -- each page still has > > a URI. CCXML is NOT used to generate VoiceXML or any other markup > > language (not in any system I've seen). I'm puzzled as to why you'd > > have this view. Can you point me to a system where CCXML is used like > > PHP? Regards, > > > > I stand corrected. My PHP analogy wasn't meant to suggest that CCXML > generates VoiceXML, but it does route calls to VoiceXML apps for > handling, right? VoiceXML doesn't need CCXML any more than HTML needs > PHP. But I thought CCXML mostly needed VoiceXML, just as PHP mostly > needs HTML. The similarity I was going for was server-side language, I > wasn't aware that CCXML is also intended for clients. > > -Eric > No problem. I think the best way to think about CCXML and VoiceXML is as cooperating markup languages. The CCXML processor can invoke the VoiceXML processor to handle a portion of the overall interaction. 
A good analogy might be an Atom client that interacts with or invokes an HTML client to display the content portion of the feed. CCXML doesn't typically embed the VoiceXML though -- usually it just contains a URI for an initial VoiceXML page. The processors could be part of the same client or distributed. RFC 5552 would be one way that distributed processors could interact: http://tools.ietf.org/html/rfc5552#section-1.1.4 But from a REST perspective, you could think of them being part of a single distributed client (well, more or less -- cookies can make that a bit challenging). Regards, Andrew
Hi Again, Stefan > Frankly, I'm not interested in discussing what SOA is or is not, as there's nobody who can decide who's right. It suits my purpose to define SOA as something that's high-level enough (or vague enough, if you prefer); That is totally fair! My idea of pinning it down is to be clear upfront what the named concept means for each of us. Now I know yours and you know mine. :D Actually, I agree completely with your point of view about what people want and how they should get it. What I dislike a little bit is the naming confusion. I came up with a great idea. Let's create a new style called Ice Cream Oriented Architecture (ICOA). It has restrictions A and B, which will give you benefits X and Y. Someone reads about the benefits, ignores the restrictions, and starts doing a flawed ICOA. Even worse: there is no Ice Cream, only nuts. After some years, someone else discovers NOA (I'll let you figure out the acronym), whereby using nuts alone can get you benefits W, X and Y. All is fine, you try to sell NOA and nobody buys it. Then you rename NOA as the next generation of ICOA (ICOA-NG), which uses the revolutionary nut-flavored HARD ice cream that does not melt and has no milk (and it doesn't even need cooling anymore), and voila! Everybody loves it. Branding, that is. And nobody, ever, enjoyed an Ice Cream in the process. So we agree on the core; I'm just not sure we agree that the names can lead to confusion, and may cause trouble thereafter. William. --- In rest-discuss@yahoogroups.com, Stefan Tilkov <stefan.tilkov@...> wrote: > > Hi William, > > Frankly, I'm not interested in discussing what SOA is or is not, as there's nobody who can decide who's right. 
It suits my purpose to define SOA as something that's high-level enough (or vague enough, if you prefer); the reason is that I see many people who want something from SOA (easy interoperability, loose coupling, wide support in different platforms, ) that they can achieve more easily with RESTful HTTP than with SOAP/WSDL/WS-*. I can give them REST and they can still have their SOA cake. > > Related to that, I define "service" more as a mini-application than as an individual interface. When implemented using RESTful HTTP, it becomes a set of related resources (more commonly called a Web app) instead of a Web service (a set of WSDL-described individual SOAP interfaces). > > If you define SOA differently, this logic is flawed. Which is OK. > > Best, > Stefan > > On Apr 6, 2010, at 2:58 PM, William Martinez Pomares wrote: >
Andrew Wahbe wrote: > > But from a REST perspective, you could think of them being part of a > single distributed client... > Not sure what you mean. In REST, "client" specifically means "client connector", so do you mean a single distributed client connector, or a single distributed user agent? Or is it a single distributed user, driving numerous user agents (like Google driving googlebot)? Actually, at second glance, CCXML seems more akin to Xforms -- is it an MVC application the server transfers to the user agent? MVC on the user agent is a powerful REST design pattern that can be adapted to m2m. -Eric
On Tue, Apr 6, 2010 at 6:07 PM, Eric J. Bowman <eric@...>wrote: > Andrew Wahbe wrote: > > > > But from a REST perspective, you could think of them being part of a > > single distributed client... > > > > Not sure what you mean. In REST, "client" specifically means "client > connector", so do you mean a single distributed client connector, or a > single distributed user agent? Or is it a single distributed user, > driving numerous user agents (like Google driving googlebot)? > > Yes I see how that's confusing. By "client" I mean the "thing running the application" -- perhaps "distributed user-agent" is the right terminology here. Consider an application that consists of multiple hypermedia formats, could be VoiceXML + CCXML or Atom + HTML. It could be the case that the markup is processed by a single process or it could be that different processes are handling the individual markup languages and coordinating somehow. The server is just seeing the HTTP requests and shouldn't really care how the user agent is internally constructed. Of course as I mentioned cookies break this -- it's another way that they are not ideal. VoiceXML/CCXML systems can sometimes be broken into as many as 3 separate components all making requests related to a single application session: the CCXML processor, the VoiceXML processor and a speech processor (performing speech recognition and fetching grammar files). Some of the related protocols have mechanisms to try and coordinate cookies: e.g. http://tools.ietf.org/html/draft-ietf-speechsc-mrcpv2-20#section-6.2.15 <http://tools.ietf.org/html/draft-ietf-speechsc-mrcpv2-20#section-6.2.15>Anyways, it's just food for thought. Actually, at second glance, CCXML seems more akin to Xforms -- is it an > MVC application the server transfers to the user agent? MVC on the > user agent is a powerful REST design pattern that can be adapted to > m2m. > > -Eric > That's maybe one way to think about it. 
It is a finite state machine that communicates via messages/events to resources in an underlying client platform. Events cause state transitions, transitions handlers can send messages back to the platform or place HTTP requests to transition to a new page (or do various other things). I see parallels between this model and an Ajax application -- which can be thought of as a state machine: each "view" is a different state often labelled with a URI fragment (e.g. #inbox in Gmail) Regards, Andrew
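Andrew's finite-state-machine framing can be sketched in code. This is only an illustrative sketch: the state names, events, and URI fragments below are invented (loosely echoing his Gmail `#inbox` example), not taken from any CCXML processor or real application.

```python
# Minimal sketch of a hypermedia client modelled as a finite state
# machine, per Andrew's description. States are "views", each labelled
# with a URI fragment; events fire transitions. All names are
# hypothetical, for illustration only.

class StateMachine:
    def __init__(self, transitions, start):
        self.transitions = transitions  # (state, event) -> next state
        self.state = start

    def handle(self, event):
        # An unhandled event leaves the state unchanged, mirroring how
        # a processor can simply discard events it has no handler for.
        self.state = self.transitions.get((self.state, event), self.state)
        return self.state

# Ajax-style view states, each labelled with a URI fragment.
transitions = {
    ("#inbox", "open_message"): "#message",
    ("#message", "back"): "#inbox",
    ("#inbox", "compose"): "#compose",
    ("#compose", "send"): "#inbox",
}

ui = StateMachine(transitions, start="#inbox")
print(ui.handle("open_message"))  # -> #message
print(ui.handle("back"))          # -> #inbox
```

A transition handler could equally place an HTTP request to fetch the next "page" instead of just switching a local label, which is where the parallel to CCXML's event-driven page transitions comes in.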
While reading through section 5.3.3[1] I am wondering, whether my understanding of "Application" actually matches Roy's. He writes: "A data view of an architecture reveals the application state as information flows through the components. Since REST is specifically targeted at distributed information systems, it views an application as a cohesive structure of information and control alternatives through which a user can perform a desired task. For example, looking-up a word in an on-line dictionary is one application, as is touring through a virtual museum, or reviewing a set of class notes to study for an exam. Each application defines goals for the underlying system, against which the system's performance can be measured." Thinking through this (and the following paragraphs) I get the impression that a specific application is 'created' only when a user[2] chooses a goal it intends to pursue and turns to the RESTful system (the Web) to start pursuing it. The application thereby brought to life might span several, unrelated 'services'. Another way one might say this is 'The application is defined by the current use of the system (the Web) for the given user intention' (and the current application state is "defined by its pending requests, the topology of connected components (some of which may be filtering buffered data), the active requests on those connectors, the data flow of representations in response to those requests, and the processing of those representations as they are received by the user agent."[1] If that understanding makes sense at all, it has the consequence, that application design is actually done on the client side and *not* on the server side. In the context of machine clients this would mean that applications are defined by the client side developer's interpretations of and assumptions about the envisioned media types (and link relations) and rules for choosing transitions. Comments most welcome... 
Jan [1] http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm#sec_5_3_3 [2] 'User' in this context would be a human user or someone who prepares (codes or configures) a client component to pursue a certain goal
Besides the "denomination" issues, where I agree with Tim, I also noticed
something that "bothers" me. I actually very much liked the fact that you
wrote the media type with a "protocol-agnostic" style in mind, which is my
real use-case.
However, in Step Three you use as examples
HTTP GET {collection-uri} returns a valid list manager document with
multiple existing items.
...
FTP RETR {collection-uri} returns a list manager document with multiple
existing items.
Does this imply that your application has, in fact, not a Uniform Interface
but "n" Uniform Interfaces, one per protocol it uses?
HTTP: GET, POST, PUT, DELETE
FTP: RETR, STOR, DELE
I hope I have made myself clear....
2010/4/6 Tim Williams <williamstw@...>
> 2010/4/6 António Mota <amsmota@...>
> >
> >
> > You're going to be pissed with me because I didn't even read the entire
> article (I'll do it at home
> tonight) but after reading this sentence
> >
> > "this API will define a simple list management service."
> >
> > and since I just said in another post that IMO "API" and "service" should
> not be used to describe
> > REST, I tried this simple semantic analysis
> >
> > CTRL+F resource
> >
> > and no single result was found!
> >
> > Now, should a description of a REST-something have more emphasis in
> "resources" rather than
> "services"? It can be just a question of "naming", but is it not an
> important one?
>
> I'd extend this comment to say it'd be better if it stuck to
> resource/representations instead of using 'data structure' that sort
> of blurs the two. This too may seem nit-picky but using imprecise
> terminology just raises the communication overhead I think.
> Thanks,
> --tim
>
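The "n Uniform Interfaces" question above can be made concrete with a sketch: a client that speaks only in generic actions, with one verb table per protocol. The action names and verb tables below are a hypothetical illustration, not part of the media type under discussion.

```python
# Hypothetical sketch: one generic interface (read/store/delete) mapped
# onto per-protocol verb vocabularies, as in the HTTP vs. FTP example.
# Whether this counts as one uniform interface or "n" of them is
# exactly the question being asked.

VERBS = {
    "http": {"read": "GET", "store": "PUT", "delete": "DELETE"},
    "ftp":  {"read": "RETR", "store": "STOR", "delete": "DELE"},
}

def verb_for(uri, action):
    # Pick the protocol's verb from the URI scheme; client code above
    # this line never sees protocol-specific verbs.
    scheme = uri.split(":", 1)[0]
    return VERBS[scheme][action]

print(verb_for("http://example.org/lists/1", "read"))  # -> GET
print(verb_for("ftp://example.org/lists/1", "read"))   # -> RETR
```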
Hello Jan, let me try. We are talking about the data view here, as a way to define the state of an app by looking at the information flows. I would define an application as the full set of transitions between states that result in a particular goal. Depending on what the initial state is, those transitions and intermediate states may differ while still achieving the same goal. Now, be careful not to confuse the application state with the particular state the application is in. That is explained with the steady states, meaning you may have an app state that is you waiting for some request to end. The requests are actually information transfers, but the info does not move; it is "copied and converted" into the format the requestor needs. That means my state changes depending on the information that I as a client have, my world view. So, in Roy's examples: looking up a word in a dictionary is either an easy two-state app (one state of the app waiting for the word to look up, and the next one of the app showing the meaning) or a complex state machine (several ways of looking up, intermediate states to refine the search, etc.). Now, one of REST's underlying styles is client-server. This is required here to separate concerns, allowing each component to evolve independently. That means the server is not worried about clients, and clients are not worried about servers. Each has too much on its hands to worry about: clients should control app state, and servers the data concerns. The important thing here is that, for an app to actually work, we need both worlds. You cannot create an app solely in the client, nor completely in the server. The app as a set of states is made possible by the combination of data in servers and the definition of state in the clients, based on the information the client has at one moment in time. Thus, the design of the app is a multilevel thing. One goal can be achieved using my system only, or using an information mashup from several systems. 
The design of the server-side information allows all the states to be created. The information that allows state transitions is also there. What is very complicated, then, is to provide all the paths needed for foreseen apps and for the ones the client may want to pursue. The server-level system design allows the same client to achieve the same goal with different states, and thus different performance. For instance, in the word look-up, a two-state approach is faster (fewer network interactions) but may not yield the best result. Having the client go through many more states will give better results, but will impact performance more. A server-side design that allows the client to choose either of those two paths is a better option. On another level, we have the user agent implementation, with the ability to hide from the user the complex interaction, the information retrieval from multiple sources, and the rendering of it all. One important thing to note here is that the user interacts with the user agent only when it is needed (when there is a need for the user to make a decision). That means if the next step is clearly the only one, the user agent should not wait until the user hits a button to proceed, unless that button is a confirmation. Roy mentions the drawback of this client-server dichotomy by indicating that we can have problems with clients that do not share the same semantics for the app, because the server cannot retain control over the app's consistent behavior. That is easy to understand, and means clients should work independently, but following the same semantics. With all this blah, I came to a similar conclusion: apps are there, with paths the user must discover. The system designer should provide all the info, state possibilities and transitions needed to obtain certain goals, thus providing some possible apps. Apps will not be "instantiated" unless a client tries to achieve that goal. 
Many clients achieving the same goal may be at different states in one particular moment, and may get to the goal using different paths and states! Modifications of paths and additions of states may allow new apps to be created, and those should not affect the ones already in place, even if there are several instances of those apps running. It may be that a client achieves a goal that was not intended in the first place, but was possible given the states. Etc. So it is not all on the client side. A good initial state could contain a list of supported goals, with step-by-step instructions throughout each state. Think of entering one big Las Vegas hotel and asking for the suites. Someone will show you a sign and you go there; from there you see another sign and you follow it, and so on. Goals are provided; the client doesn't have to "create" them and then try to use the system to achieve them. The client may also turn left because he saw something interesting on the way, and may indeed create a new path with a whole new goal that was not there from the beginning! Maybe even going to another hotel through the passageways! This means the server can show the path; the client has all the free will to follow it, change it, or even go somewhere else. Ok, too much blah. These are my 2 cents. Cheers! William Martinez Pomares. --- In rest-discuss@yahoogroups.com, Jan Algermissen <algermissen1971@...> wrote: > > While reading through section 5.3.3[1] I am wondering, whether my understanding of "Application" actually matches Roy's. He writes: > > "A data view of an architecture reveals the application state as information flows through the components. Since REST is specifically targeted at distributed information systems, it views an application as a cohesive structure of information and control alternatives through which a user can perform a desired task. 
For example, looking-up a word in an on-line dictionary is one application, as is touring through a virtual museum, or reviewing a set of class notes to study for an exam. Each application defines goals for the underlying system, against which the system's performance can be measured." > > Thinking through this (and the following paragraphs) I get the impression that a specific application is 'created' only when a user[2] chooses a goal it intends to pursue and turns to the RESTful system (the Web) to start pursuing it. The application thereby brought to life might span several, unrelated 'services'. > > Another way one might say this is 'The application is defined by the current use of the system (the Web) for the given user intention' (and the current application state is "defined by its pending requests, the topology of connected components (some of which may be filtering buffered data), the active requests on those connectors, the data flow of representations in response to those requests, and the processing of those representations as they are received by the user agent."[1] > > If that understanding makes sense at all, it has the consequence, that application design is actually done on the client side and *not* on the server side. > > In the context of machine clients this would mean that applications are defined by the client side developer's interpretations of and assumptions about the envisioned media types (and link relations) and rules for choosing transitions. > > > Comments most welcome... > > Jan > > > [1] http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm#sec_5_3_3 > > [2] 'User' in this context would be a human user or someone who prepares (codes or configures) a client component to persue a certain goal >
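William's word look-up example above (a quick two-state path versus a longer, more refined path to the same goal, with different network cost) can be sketched as a search over server-advertised transitions. The state names and transition table below are invented for illustration, under the assumption that each transition followed costs one request on the wire.

```python
# Hypothetical sketch: the server-side design exposes several state
# transitions; different clients may reach the same goal by different
# paths, with a different number of network interactions each.

TRANSITIONS = {                 # state -> reachable next states
    "home":       ["lookup", "browse"],
    "lookup":     ["definition"],       # the quick two-hop path
    "browse":     ["refine"],           # the longer, refined path
    "refine":     ["definition"],
    "definition": [],
}

def reachable(start, goal):
    # Breadth-first search over the advertised transitions; each edge
    # followed corresponds to one request/response pair.
    frontier, seen = [[start]], {start}
    while frontier:
        path = frontier.pop(0)
        if path[-1] == goal:
            return path, len(path) - 1  # path taken and request count
        for nxt in TRANSITIONS[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None, None

path, cost = reachable("home", "definition")
print(path, cost)  # the cheapest advertised path, 2 requests
```

The point mirrors William's: the server designs the transition space, but which path gets "instantiated" (cheap or thorough) is the client's choice.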
I partially agree with the naming issues. But I see it under a different light. See, this post is developer driven, thought to get the dev into code in three simple steps. Haven't read it in deep, just over read, and I see first that we try the API as a normal RPC API to begin with. Then we go into defining what we need, for instance a data structure, that is not more that a resource actual representation. All that is fine for me, but I feel it needs the mapping explanation. Since I come from the architectural side, I would start by defining why we need the API, and how the API is RESTFull in the REST sense. Then I may come with this regular API and explain what changes should I make to create a RESTFull one. Then I explain what the resources would be (a data dictionary fits fine), and then the possible representations (structure) and then etc. This is more work, I know, and probably a developer wants to have the code to just copy paste, but if we intend to teach about REST I would suggest to include the what (concept) and then the how (in dev terms). In other words, do not replace, add. Will read it in calm and post more about it later. Cheers! William Martinez Pomares --- In rest-discuss@yahoogroups.com, Tim Williams <williamstw@...> wrote: > > 2010/4/6 Antnio Mota <amsmota@...> > > > > > > You're going to be pissed with me because I didn't even read the entire article (I'll do it at home > > tonigth) but after reading this sentence > > > > "this API will define a simple list management service." > > > > and since I just said in another post that IMO "API" and "service" should not be used to describe > > REST, I tried this simple semantic analysis > > > > CTRL+F resource > > > > and no single result was found! > > > > Now, should a description of a REST-something have more emphasis in "resources" rather than > > "services"? It can be just a question of "naming", but is not a important one? 
> > I'd extend this comment to say it'd be better if it stuck to > resource/representations instead of using 'data structure' that sort > of blurs the two. This too may seem nit-picky but using imprecise > terminology just raises the communication overhead I think. > Thanks, > --tim >
On Wed, Apr 7, 2010 at 8:10 AM, William Martinez Pomares <wmartinez@...> wrote: > With all this blah, I came to a similar conclusion: apps are there, with paths the user must discover. > System designer should provide all info, state possibilities and transitions to obtain certain goals, > and thus providing some possible apps. I can see that for apps that somebody designed, e.g. shopping at Amazon. But what about serendipitous apps, e.g. mashups? Google Wave?
António:
First, I understand the comments about the terminology used here. I
purposely did not focus on the terms "resource" and "representation" here as
I was aiming at the notion of media-type design. But I see how leaving these
out might be a problem.
Second, the example implementations are there to "illustrate" the
protocol-agnostic nature of the media type definition and nothing else.
IOW, if your actual implementation of this media type were to be using FTP
as your primary protocol, I offer a mapping of the FTP uniform interface to
the media type's hypermedia links. Same for the HTTP protocol
implementation.
Does that answer your question?
mca
http://amundsen.com/blog/
2010/4/7 António Mota <amsmota@...>
>
>
> Besides the "denomination" issues, where I agree with Tim, I also noticed
> something that "bothers" me. I actually enjoyed a lot the fact that you
> wrote the media-type with a "protocol agnostic" style in mind, which is my
> real use-case.
>
> However, in Step Three you use as examples
>
> HTTP GET {collection-uri} returns a valid list manager document with
> multiple existing items.
> ...
> FTP RETR {collection-uri} returns a list manager document with multiple
> existing items.
>
>
> Does this imply that your application has, in fact, not a Uniform Interface
> but "n" Uniform Interfaces, one per protocol it uses?
>
> HTTP: GET, POST, PUT, DELETE
> FTP: RETR, STOR, DELE
>
>
> I hope I have made myself clear....
>
>
>
>
> 2010/4/6 Tim Williams <williamstw@...>
>
> 2010/4/6 António Mota <amsmota@...>
>> >
>> >
>> > You're going to be pissed with me because I didn't even read the entire
>> article (I'll do it at home
>> tonight) but after reading this sentence
>> >
>> > "this API will define a simple list management service."
>> >
>> > and since I just said in another post that IMO "API" and "service"
>> should not be used to describe
>> > REST, I tried this simple semantic analysis
>> >
>> > CTRL+F resource
>> >
>> > and no single result was found!
>> >
>> > Now, should a description of a REST-something have more emphasis in
>> "resources" rather than
>> "services"? It can be just a question of "naming", but is it not an
>> important one?
>>
>> I'd extend this comment to say it'd be better if it stuck to
>> resource/representations instead of using 'data structure' that sort
>> of blurs the two. This too may seem nit-picky but using imprecise
>> terminology just raises the communication overhead I think.
>> Thanks,
>> --tim
>>
>
>
>
>
But isn't that the wrong approach to REST, and one of the reasons it is so wrongly interpreted and, worse, implemented? (And I'm not implying that I know the correct one, far from it; actually, the more I read this list the more I think I don't know anything about it.) What I mean is, shouldn't REST be approached as a different paradigm than RPC, not only different but incompatible? It reminds me a lot of when I started using XSLT, the different "way of thought" you have to have if you come from a procedural or an OO field. At some point I acted almost as if I had a switch inside my head: turn on for XSLT, turn off for Java... From this point of view, the approach of the article is from the point of view of RPC all right, like "How I Explain REST To My RPC Friend", but from a REST point of view, shouldn't the approach be precisely the opposite sequence of the one taken in the article? First, decide your Uniform Interface. Second, define your media types. Third, define your Resources. And leave the APIs well hidden behind the server, as a mere implementation detail, and *never* show them to the clients/user-agents... _________________________________________________ Melhores cumprimentos / Beir beannacht / Best regards António Manuel dos Santos Mota http://card.ly/amsmota _________________________________________________ On 7 April 2010 14:28, William Martinez Pomares <wmartinez@...> wrote: > > > > I partially agree with the naming issues. > But I see it under a different light. > > See, this post is developer driven, thought to get the dev into code in > three simple steps. Haven't read it in deep, just over read, and I see first > that we try the API as a normal RPC API to begin with. Then we go into > defining what we need, for instance a data structure, that is not more that > a resource actual representation. > > All that is fine for me, but I feel it needs the mapping explanation. 
Since > I come from the architectural side, I would start by defining why we need > the API, and how the API is RESTFull in the REST sense. Then I may come with > this regular API and explain what changes should I make to create a RESTFull > one. Then I explain what the resources would be (a data dictionary fits > fine), and then the possible representations (structure) and then etc. > > This is more work, I know, and probably a developer wants to have the code > to just copy paste, but if we intend to teach about REST I would suggest to > include the what (concept) and then the how (in dev terms). In other words, > do not replace, add. > > Will read it in calm and post more about it later. > > Cheers! > > William Martinez Pomares > > --- In rest-discuss@yahoogroups.com <rest-discuss%40yahoogroups.com>, Tim > Williams <williamstw@...> wrote: > > > > 2010/4/6 Antnio Mota <amsmota@...> > > > > > > > > > > You're going to be pissed with me because I didn't even read the entire > article (I'll do it at home > > > tonigth) but after reading this sentence > > > > > > "this API will define a simple list management service." > > > > > > and since I just said in another post that IMO "API" and "service" > should not be used to describe > > > REST, I tried this simple semantic analysis > > > > > > CTRL+F resource > > > > > > and no single result was found! > > > > > > Now, should a description of a REST-something have more emphasis in > "resources" rather than > > > "services"? It can be just a question of "naming", but is not a > important one? > > > > I'd extend this comment to say it'd be better if it stuck to > > resource/representations instead of using 'data structure' that sort > > of blurs the two. This too may seem nit-picky but using imprecise > > terminology just raises the communication overhead I think. > > Thanks, > > --tim > > > > >
Jan,
I tend to agree: the basic way to get to "standard media types" for a variety of application domains is to see the client-side as driving the application definition. On the other hand, the server side (some combination of servers with resources) has to actually provide the necessary capabilities.
The analogy I use is supply & demand in microeconomics. A consumer-centered view of an application is "demand-driven". But you need a provider and a form of standard interchange to make transactions possible.
Sometimes I can "single-source" the application, as when I purchase a book from Amazon; other times I have to access several different trust domains. Naturally, a web business is incentivized to provide you all the information and state transitions needed to fill a market need. (Wow, I feel dorky writing it in those words ;)
Stu
Sent from my iPad
On 2010-04-07, at 4:40 AM, Jan Algermissen <algermissen1971@...> wrote:
While reading through section 5.3.3[1] I am wondering whether my understanding of "Application" actually matches Roy's. He writes:
"A data view of an architecture reveals the application state as information flows through the components. Since REST is specifically targeted at distributed information systems, it views an application as a cohesive structure of information and control alternatives through which a user can perform a desired task. For example, looking-up a word in an on-line dictionary is one application, as is touring through a virtual museum, or reviewing a set of class notes to study for an exam. Each application defines goals for the underlying system, against which the system's performance can be measured."
Thinking through this (and the following paragraphs) I get the impression that a specific application is 'created' only when a user[2] chooses a goal it intends to pursue and turns to the RESTful system (the Web) to start pursuing it. The application thereby brought to life might span several, unrelated 'services'.
Another way one might say this is 'The application is defined by the current use of the system (the Web) for the given user intention' (and the current application state is "defined by its pending requests, the topology of connected components (some of which may be filtering buffered data), the active requests on those connectors, the data flow of representations in response to those requests, and the processing of those representations as they are received by the user agent."[1]
If that understanding makes sense at all, it has the consequence that application design is actually done on the client side and *not* on the server side.
In the context of machine clients this would mean that applications are defined by the client side developer's interpretations of and assumptions about the envisioned media types (and link relations) and rules for choosing transitions.
Comments most welcome...
Jan
[1] http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm#sec_5_3_3
[2] 'User' in this context would be a human user or someone who prepares (codes or configures) a client component to pursue a certain goal
The uniform interface belongs to the protocol, not the media type. This post is about designing a media type. It is my assertion that mapping the media type's hypermedia links to a particular protocol is an implementation detail. I agree that some media types assume a particular primary protocol (e.g. HTML assumes HTTP) and that the media type's documentation reflects this assumption [1]. However, I do not think all media types must claim a primary protocol in order to be valid.

I'd like to hear from others on this idea.

- MUST a media type assume a primary protocol and reflect that in the documentation?
- SHOULD a media type identify a primary protocol and reflect that in the documentation?

More to the point, is the idea of a truly protocol-agnostic media type meaningless? Unhelpful?

Thanks.

[1] http://www.whatwg.org/specs/web-apps/current-work/multipage/association-of-controls-and-forms.html#constructing-form-data-set (see step 15)

mca
http://amundsen.com/blog/
On Apr 7, 2010, at 3:50 PM, António Mota wrote:

> But isn't that the wrong approach to REST, and one of the reasons it is so
> wrongly interpreted and worse implemented?
>
> What I mean is, shouldn't REST be approached as a different paradigm than
> RPC, not only different but incompatible?

But what's RPCish about this example (which I consider excellent, BTW)?

I think that starting with the media type is a very good way (if not the only one) to come up with a RESTful design. The uniform meaning of the verbs is defined by the respective protocols; the actual resource URIs are handed to the client dynamically; the whole thing is self-descriptive because the media type name is non-generic. Maybe the difference from the RPC approach would have been even more obvious if Mike had used different host names in the example URIs?

Stefan
--
Stefan Tilkov, http://www.innoq.com/blog/st/
On Apr 7, 2010, at 4:14 PM, mike amundsen wrote:

> I'd like to hear from others on this idea.
> - MUST a media-type assume a primary protocol and reflect that in the documentation?
> - SHOULD a media-type identify a primary protocol and reflect that in the documentation?
>
> More to the point, is the idea of a true protocol-agnostic media-type meaningless? unhelpful?

I believe in the general case, a media type should identify the protocol(s) it's supposed to work with (and how it does so) for simple practical reasons, even though I tend to believe that this is somewhat unRESTful.

> [1] http://www.whatwg.org/specs/web-apps/current-work/multipage/association-of-controls-and-forms.html#constructing-form-data-set (see step 15)

Excuse me while I cry for an hour about this style of spec writing.

Stefan
--
Stefan Tilkov, http://www.innoq.com/blog/st/
On Wed, Apr 7, 2010 at 7:40 AM, Jan Algermissen <algermissen1971@...> wrote:

> Thinking through this (and the following paragraphs) I get the impression
> that a specific application is 'created' only when a user[2] chooses a goal
> it intends to pursue and turns to the RESTful system (the Web) to start
> pursuing it. The application thereby brought to life might span several,
> unrelated 'services'.
>
> Another way one might say this is 'The application is defined by the
> current use of the system (the Web) for the given user intention' (and the
> current application state is "defined by its pending requests, the topology
> of connected components (some of which may be filtering buffered data), the
> active requests on those connectors, the data flow of representations in
> response to those requests, and the processing of those representations as
> they are received by the user agent."[1]

I think you are on the right track, so let me add my observations of how I use the term "application" these days.

Have you ever seen ads for craft products, e.g. glue or fasteners, where they say "thousands of applications!"? That's how I interpret "application" these days. Thus, properly speaking, an "application" isn't a thing; it's a use of a thing to get something done, i.e. achieve a goal. To put it into IT jargon, an "application" is a "use case" or, in some contexts regarding state, "an instance of a use case". To see how well this fits, let's try to substitute the latter for the former in Roy's description:

*Since REST is specifically targeted at distributed information systems, it views **a use case** as a cohesive structure of information and control alternatives through which a user can perform a desired task. For example, looking-up a word in an on-line dictionary is one **use case**, as is touring through a virtual museum, or reviewing a set of class notes to study for an exam. Each **use case** defines goals for the underlying system, against which the system's performance can be measured.*

Works for me!

One of the worst things to happen to IT was to use the label "application" for the thing being applied, i.e. a software system. This is the root of all the confusion. Other words for the software system would have been far better (and are sometimes actually used): tool, utility, program, service, site, widget, etc. Even the word "appliance" would have been better. Why? Because it makes perfect sense to say "this appliance has hundreds of *applications*" (think of a food processor). In contrast, "this application has hundreds of applications" is confusing to the point of meaninglessness.

I think one of the reasons that the label "application" was slapped onto software systems (instead of being reserved to refer to the use of such systems) is that so many software systems are so specialized that they have only a single use case; e.g. an expense report program has only a single *application* (i.e. use case): submitting expense reports.

But the bad news is that the (mis)label is probably here to stay. People (especially people in IT) are going to continue to say things like "build an application", "deploy an application", "use an application", "the application crashed", etc. We can try to avoid the term in our personal conversations and writings, but that's unlikely to eliminate the problem any time soon.

One possibility for reducing the confusion struck me as I wrote this. Given the popularity of the slang "app" as shorthand for "application", we could use "app" to refer to the software system and reserve "application" to mean "use case for the app" or "an instance of using the app". That way, we can say "this app has hundreds of applications" without nearly as much confusion. I think that's what I'm going to do from now on!

So does clarifying the distinction between an "application" (an instance of using the app) and an "app" (the software system being used) help much? Not too much, IMO, because it doesn't really address the orthogonal issue of how the software components that constitute an "app" (in this case a distributed software system) are distributed. Some of the software components may run on a given client and some may run on the server (or worse, various servers), so we still have to make decisions about where to store "state", i.e. information about where this particular user is in her "instance of using the system". Clarifying "app" vs. "application" doesn't tell us anything about where to store state. Only an architectural style can tell us that. And REST tells us to store state on the many client "apps" (aka client-based software components) sharing a common server "app" (aka a server-based software component).

However, it does help us say something like "the *application* state should be stored in the client components of an *app*, not the server components" and be a little less confused.

-- Nick

Nick Gall
Phone: +1.781.608.5871
Twitter: ironick
AOL IM: Nicholas Gall
Yahoo IM: nick_gall_1117
MSN IM: (same as email)
Google Talk: (same as email)
Email: nick.gall AT-SIGN gmail DOT com
Weblog: http://ironick.typepad.com/ironick/
On 7 April 2010 15:14, mike amundsen <mamund@...> wrote:

> The uniform interface belongs to the protocol, not the media type.

Well, from my point of view, based only on practical experience we had building a "middleware" on a REST approach, that can't be the case. Basically, when I referred to "protocol agnostic" I was thinking of an architecture that is not only agnostic but "multi-protocol".

If the same application (meaning the same set of resources) is to be referenced by both HTTP and FTP, how can you then have two different Uniform Interfaces for that same application? So what I was saying is, in a situation like that, you should have an application Uniform Interface (any names you want, these are just invented):

FETCH, ATTACK, KILL

In HTTP you translate that to GET, POST, DELETE; in FTP, to RETR, STOR, DELE; the translations being made by the specific connector: HTTPConnector, FTPConnector, etc.

Otherwise you can't have a multi-protocol application, or having it you break the Uniform Interface constraint...
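[Editor's note: a minimal sketch of the connector idea described above. The verb names (FETCH, ATTACK, KILL) are the invented ones from the post; the connector classes and `dispatch` helper are hypothetical illustrations, not any real library.]

```python
class HttpConnector:
    """Translates application-level verbs to HTTP methods."""
    VERBS = {"FETCH": "GET", "ATTACK": "POST", "KILL": "DELETE"}

    def translate(self, verb: str) -> str:
        return self.VERBS[verb]


class FtpConnector:
    """Translates the same application-level verbs to FTP commands."""
    VERBS = {"FETCH": "RETR", "ATTACK": "STOR", "KILL": "DELE"}

    def translate(self, verb: str) -> str:
        return self.VERBS[verb]


def dispatch(connector, verb: str, resource: str) -> str:
    """Resolve an application-level request to a protocol-level one.

    The application speaks one uniform interface; only the connector
    knows the wire protocol's verbs.
    """
    return f"{connector.translate(verb)} {resource}"


print(dispatch(HttpConnector(), "FETCH", "/items/1"))  # GET /items/1
print(dispatch(FtpConnector(), "FETCH", "/items/1"))   # RETR /items/1
```

The point of contention in the thread is visible here: the mapping table lives in the connector, so the "uniform" verbs the application sees are no longer the protocol's own.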
On Apr 7, 2010, at 7:20 AM, Stefan Tilkov wrote: >> >> Thanks. >> >> [1] http://www.whatwg.org/specs/web-apps/current-work/multipage/association-of-controls-and-forms.html#constructing-form-data-set (see step 15) > > Excuse me while I cry for an hour about this style of spec writing. LOL. It is like C code explained in English. Subbu
I was referring to the first part of the article, where Mike says:

"if I was coding a local application, I might use a set of function signatures that look like this:

GetList()
GetItem(id)
AddItem(name, description, date-due, completed)
UpdateItem(id, name, description, date-due, completed)
DeleteItem(id)
GetOpenItems()
GetTodaysItems()
GetItemsByDate(date-start, date-stop)"

and then goes on to explain how he would achieve the same thing as invoking methods with these signatures by defining media types and applying the other REST principles. And this seems to me to follow an RPC style of thought: "How do I do with REST what I would do in RPC like this?"

I'm not saying, however, that the result he gets is RPC hidden behind REST, though...

On 7 April 2010 15:15, Stefan Tilkov <stefan.tilkov@...> wrote:

> But what's RPCish about this example (which I consider excellent, BTW)?
>
> I think that starting with the media type is a very good way (if not the
> only one) to come up with a RESTful design.
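[Editor's note: a sketch of the contrast being debated above. Instead of calling `GetItem(id)` directly (the RPC style), a hypermedia client starts from a list representation and follows links the server hands it. The document structure and URIs below are invented for illustration; they are not the actual media type from Mike's article.]

```python
# What a GET on the list resource might return, as parsed data.
list_doc = {
    "items": [
        {"name": "buy milk", "href": "http://example.org/items/1"},
        {"name": "file taxes", "href": "http://example.org/items/2"},
    ]
}


def find_item(doc, name):
    """RPC thinking: GetItem(id). Hypermedia thinking: pick a link out
    of the current representation and dereference it."""
    for item in doc["items"]:
        if item["name"] == name:
            return item["href"]  # the client never constructs this URI itself
    return None


print(find_item(list_doc, "file taxes"))  # http://example.org/items/2
```

The design point: the client holds no function signatures and no URI templates, only knowledge of the media type's structure; the server remains free to change URIs at will.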
How is a client meant to attribute semantics to your query elements? Given that URIs are meant to be opaque, I'd expect some kind of additional identifier, either as the QName of an element or as something like a rel attribute. E.g.:
<query href="{query-uri?today}" rel="http://.../todays"/>
<query href="{query-uri?open}" rel="http://.../open"/>
<query href="{query-uri}" rel="http://.../date-range">
<data name="date-start"></data>
<data name="date-stop"></data>
</query>
or
<today href="{query-uri?today}" />
<open href="{query-uri?open}" />
<range href="{query-uri}">
<data name="date-start"></data>
<data name="date-stop"></data>
</range>
Without this a client can't identify the correct query URI to use for a particular purpose without needing knowledge of the URI structure.
Marc.
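[Editor's note: a sketch of the selection step Marc describes: with a rel attribute present, a client picks the right query URI by link relation instead of parsing URI structure. The concrete URIs and rel values below are invented stand-ins for the `{query-uri?...}` templates in his example.]

```python
import xml.etree.ElementTree as ET

# A representation advertising two queries, each tagged with a rel.
doc = ET.fromstring("""
<queries>
  <query href="http://example.org/items?today" rel="http://example.org/rels/todays"/>
  <query href="http://example.org/items?open" rel="http://example.org/rels/open"/>
</queries>
""")


def link_for(root, rel):
    """Return the query URI advertised for a given link relation.

    The client treats href as opaque; only the rel value carries meaning.
    """
    for q in root.findall("query"):
        if q.get("rel") == rel:
            return q.get("href")
    return None


print(link_for(doc, "http://example.org/rels/open"))  # http://example.org/items?open
```

Without the rel attribute, `link_for` would have no choice but to inspect the query string, which is exactly the URI-structure coupling Marc objects to.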
--- In rest-discuss@yahoogroups.com, mike amundsen <mamund@...> wrote:
>
> I've posted a blog entry labeled "A RESTful Hypermedia API in Three
> Easy Steps"[1]. I used Fielding's "REST APIs must be
> hypertext-driven"[2] as a reference.
>
> I'd appreciate all the feedback anyone would like to offer regarding
> the concepts, terminology, and implementation details described there.
> If you prefer not to clutter this list, feel free to comment on the
> blog or email me directly. I also hang out in the #rest IRC channel on
> freenode if you'd like to carry on there.
>
> Thanks in advance.
>
> [1] http://amundsen.com/blog/archives/1041
> [2] http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven
>
> mca
> http://amundsen.com/blog/
>
Totally Agree! I didn't mean the article should be RPC to please the developer. I meant I over read it and the first thing I saw was the procedure list. That is fine to explain to an RPC developer how to convert that to a REST API, meaning the REST is different approach and required a different design approach. Not sure if the article does that, will check while riding the bus. Cheers! William Martinez --- In rest-discuss@yahoogroups.com, Antnio Mota <amsmota@...> wrote: > > But isn't that the wrong approach to REST and one of the reasons that is so > wrongly interpreted and worse implemented? (and I'm not implying that I know > the correct one, far from it, actually the more I read this list the more I > think I don;t know nothing about it) > > What I mean is, shouldn't REST be approached as a different paradigm than > RPC, not only different but incompatible? It reminds me a lot when I started > using XSLT, the different "way of thought" you have to have if you come from > a procedural or a oo field? At some point I acted almost if I have a switch > inside my head, turn on for XSLT, turn off for Java... > > From this point of view, the approach of the article is from the point of > view of RPC all right, like "How I Explain REST To My RPC Friend", but from > a REST point of view, shouldn't the approach be precisely the opposite > sequence of that taken in the article? > > First, decide your Uniform Interface > Second, define your media-types > Third, define your Resources > > And leave the API's well hidden behind the server, as a mere implementation > detail, and *never* show it to the clients/user-agents... 
> > _________________________________________________ > > Melhores cumprimentos / Beir beannacht / Best regards > > Antnio Manuel dos Santos Mota > > http://card.ly/amsmota > _________________________________________________ > > > > On 7 April 2010 14:28, William Martinez Pomares <wmartinez@...>wrote: > > > > > > > > > I partially agree with the naming issues. > > But I see it under a different light. > > > > See, this post is developer driven, thought to get the dev into code in > > three simple steps. Haven't read it in deep, just over read, and I see first > > that we try the API as a normal RPC API to begin with. Then we go into > > defining what we need, for instance a data structure, that is not more that > > a resource actual representation. > > > > All that is fine for me, but I feel it needs the mapping explanation. Since > > I come from the architectural side, I would start by defining why we need > > the API, and how the API is RESTFull in the REST sense. Then I may come with > > this regular API and explain what changes should I make to create a RESTFull > > one. Then I explain what the resources would be (a data dictionary fits > > fine), and then the possible representations (structure) and then etc. > > > > This is more work, I know, and probably a developer wants to have the code > > to just copy paste, but if we intend to teach about REST I would suggest to > > include the what (concept) and then the how (in dev terms). In other words, > > do not replace, add. > > > > Will read it in calm and post more about it later. > > > > Cheers! 
> > > > William Martinez Pomares > > > > --- In rest-discuss@yahoogroups.com <rest-discuss%40yahoogroups.com>, Tim > > Williams <williamstw@> wrote: > > > > > > 2010/4/6 António Mota <amsmota@> > > > > > > > > > > > You're going to be pissed with me because I didn't even read the entire > > article (I'll do it at home > > tonight) but after reading this sentence > > > > "this API will define a simple list management service." > > > > and since I just said in another post that IMO "API" and "service" > > should not be used to describe > > REST, I tried this simple semantic analysis > > > > CTRL+F resource > > > > and not a single result was found! > > > > Now, shouldn't a description of a REST-something put more emphasis on > > "resources" rather than > > "services"? It can be just a question of "naming", but isn't it an > > important one? > > > I'd extend this comment to say it'd be better if it stuck to > > > resources/representations instead of using 'data structure', which sort > > > of blurs the two. This too may seem nit-picky, but using imprecise > > > terminology just raises the communication overhead, I think. > > > Thanks, > > > --tim
Nick Gall wrote: > Jan Algermissen <algermissen1971@...> wrote: > > Thinking through this (and the following paragraphs) > > I get the impression that a specific application is > > 'created' only when a user[2] chooses a goal it > > intends to pursue and turns to the RESTful system > > (the Web) to start pursuing it. The application > > thereby brought to life might span several, > > unrelated 'services'. > > Have you ever seen ads for craft products, eg glue > or fasteners, where they say "thousands of applications!"? > That's how I interpret "application" these days. Thus, > properly speaking, an "application" isn't a thing, > it's a use of a thing to get something done, ie achieve > a goal. To put it into IT jargon, an "application" is a > "use case" or, in some contexts regarding state, "an > instance of a use case". Correct, but... > One of the worst things to happen to IT was to use > the label "application" for the thing being applied, > ie a software system. This is the root of all the > confusion. Other words for the software system would > have been far better (and are sometimes actually used): > tool, utility, program, service, site, widget, etc. > Even the word "appliance" would have been better. Why? > Because it makes perfect sense to say "this appliance > has hundreds of applications" (think of a food processor). > In contrast, "this application has hundreds of > applications" is confusing to the point of meaninglessness. > > I think one of the reasons that the label "application" > was slapped onto software systems (instead of being > reserved to refer to the use of such systems), is that > so many software systems are SO specialized that they > have only a single use case, eg an expense report > program has only a single application (ie use case): > submitting expense reports. I think you're missing a bit of history, and consequently missing a logic loop. 
The reason that the label "application" was (correctly) slapped onto software systems was because they were use cases for "the computer". The software was an application of the hardware+OS. And you're right to notice that many of these were specialized dead-ends. But for many systems, each new subsystem implements a use case for lower layers, and yet may become a lower layer itself for further use cases. Fasteners, for example, serve applications like bushings (and many others), which serve applications like axles (and many others), which serve applications like vehicles (and many others), which serve applications like human transportation (and many others), which serve applications like getting to the Joe to see the Red Wings. Modern software mashups obviously have the same problem, but so do stacks like IP -> TCP -> HTTP -> XML -> XHTML -> expense report app -> my flight to London. Each is an application of the layer(s) below it. Some might be considered "appliances" but merely switching names doesn't obviate the layered structure. Robert Brewer fumanchu@...
Marc:
Good point.
In cases where the media-type defines specific queries, I agree that a
rel value should be used as you suggest.
As a follow up to this, are there query cases where a rel is not
needed (or should simply be set to rel="query")? For example, when the
query is an open template or where there are no specifications on the
query parameters.
In asking this question, I'm trying to see where over-specification of
relation links can hinder the value of the media-type.
Also, is the use of rel values important for all user-agent types? For
example, bots or other machine clients will most likely rely solely on
relation values for links. But clients that rely on human interaction
may not need relation values, but instead require text descriptions.
IOW, is the expected client type (agent? what is the right word here)
also an important factor in media-type design?
mca
http://amundsen.com/blog/
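Mike's question about rel values can be made concrete with a small sketch. This is a hedged illustration, not code from either post: the XML, the rel URIs, and the element names below are invented, in the spirit of Marc's example quoted underneath, to show how a machine client selects a query by its rel value while treating the URI itself as opaque:

```python
import xml.etree.ElementTree as ET

# Hypothetical response body using rel-annotated query links;
# all URIs and names here are illustrative only.
DOC = """
<queries>
  <query href="http://example.org/tasks?today" rel="http://example.org/rels/todays"/>
  <query href="http://example.org/tasks?open" rel="http://example.org/rels/open"/>
  <query href="http://example.org/tasks" rel="http://example.org/rels/date-range">
    <data name="date-start"/>
    <data name="date-stop"/>
  </query>
</queries>
"""

def find_query(doc_xml, rel):
    """Select a query link by its rel value; the href stays opaque."""
    root = ET.fromstring(doc_xml)
    for q in root.findall("query"):
        if q.get("rel") == rel:
            # Required parameters, if any, are advertised as <data> elements.
            params = [d.get("name") for d in q.findall("data")]
            return q.get("href"), params
    return None, []
```

Nothing in the client keys off the URI structure; only the rel value carries meaning, which is exactly the property Marc argues for below.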
On Wed, Apr 7, 2010 at 10:52, marc_hadley <hadley@...> wrote:
> How is a client meant to attribute semantics to your query elements ? Given that URIs are meant to be opaque I'd expect some kind of additional identifier either as the QName of an element or as something like a rel attribute. E.g.:
>
> <query href="{query-uri?today}" rel="http://.../todays"/>
> <query href="{query-uri?open}" rel="http://.../open"/>
> <query href="{query-uri}" rel="http://.../date-range">
> <data name="date-start"></data>
> <data name="date-stop"></data>
> </query>
>
> or
>
> <today href="{query-uri?today}" />
> <open href="{query-uri?open}" />
> <range href="{query-uri}">
> <data name="date-start"></data>
> <data name="date-stop"></data>
> </range>
>
> Without this a client can't identify the correct query URI to use for a particular purpose without needing knowledge of the URI structure.
>
> Marc.
>
>
> --- In rest-discuss@yahoogroups.com, mike amundsen <mamund@...> wrote:
>>
>> I've posted a blog entry labeled "A RESTful Hypermedia API in Three
>> Easy Steps"[1]. I used Fielding's "REST APIs must be
>> hypertext-driven"[2] as a reference.
>>
>> I'd appreciate all the feedback anyone would like to offer regarding
>> the concepts, terminology, and implementation details described there.
>> If you prefer not to clutter this list, feel free to comment on the
>> blog or email me directly. I also hang out in the #rest IRC channel on
>> freenode if you'd like to carry on there.
>>
>> Thanks in advance.
>>
>> [1] http://amundsen.com/blog/archives/1041
>> [2] http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven
>>
>> mca
>> http://amundsen.com/blog/
>>
--- In rest-discuss@yahoogroups.com, Jan Algermissen <algermissen1971@...> wrote: > > > Thinking through this (and the following paragraphs) I get the impression that a specific application is 'created' only when a user[2] chooses a goal it intends to pursue and turns to the RESTful system (the Web) to start pursuing it. The application thereby brought to life might span several, unrelated 'services'. > > Another way one might say this is 'The application is defined by the current use of the system (the Web) for the given user intention' (and the current application state is "defined by its pending requests, the topology of connected components (some of which may be filtering buffered data), the active requests on those connectors, the data flow of representations in response to those requests, and the processing of those representations as they are received by the user agent."[1]) > > If that understanding makes sense at all, it has the consequence that application design is actually done on the client side and *not* on the server side. > > In the context of machine clients this would mean that applications are defined by the client-side developer's interpretations of and assumptions about the envisioned media types (and link relations) and rules for choosing transitions. > > > Comments most welcome... > > Jan > > > [1] http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm#sec_5_3_3 > > [2] 'User' in this context would be a human user or someone who prepares (codes or configures) a client component to pursue a certain goal > I agree with a lot of this but not all. Your definition of user ([2] above) is flawed in a very subtle way. The user is someone who takes a client (maybe they wrote it and maybe not) and points it at one or more URIs to accomplish their goal, ie. they configure the system at run-time. The client might then provide a UI for them to use, or it might not. 
The lack of a UI might not imply that the goals are completely encoded in the client -- the URIs (and hence servers) selected by the user are just as important as the client used. ie. My goal could be to build a search index of my corporate intranet. The spider I write/choose to build indexes is one part of the system, but the corporate intranet is another important part. So I disagree that application design is all client side -- both the client and the servers (most importantly the representations they serve) are important to achieving the users' goals. Andrew
On Wed, Apr 7, 2010 at 11:12 AM, Robert Brewer <fumanchu@...> wrote:
> Nick Gall wrote:> I think one of the reasons that the label "application"
> > was slapped onto software systems (instead of being
> > reserved to refer to the use of such systems), is that
> > so many software systems are SO specialized that they
> > have only a single use case, eg an expense report
> > program has only a single application (ie use case):
> > submitting expense reports.
>
> I think you're missing a bit of history, and consequently missing a
> logic loop. The reason that the label "application" was (correctly)
> slapped onto software systems was because they were use cases for "the
> computer". The software was an application of the hardware+OS. And
> you're right to notice that many of these were specialized dead-ends.
>
You're absolutely right that the noun, "application", derived from its
original use as merely an adjective in "application program". If we'd only kept
the full phrase, things would be fine. We could say "this application
program has hundreds of applications". Admittedly a bit odd sounding and a
bit confusing, but still much better than "this application has hundreds of
applications". I suppose that if we wanted to use a different distinction
than my suggested "this app has hundreds of applications", we could go with
"this program has hundreds of applications."
In contrast, consider what happened with home appliances. The phrase "electrical
appliance", later shortened to just "appliance" was derived from the concept
of applying electricity to accomplish a (household) task. In theory, they
could have used the phrase "electrical application" and we would have been
stuck calling fans, refrigerators, and washing machines "applications". But
fortunately, they didn't. I wish the computer pioneers had followed in their
footsteps and called the early software systems "computer appliances" so we
could have avoided the confusing overloading of the term "application" to
mean both the use of a thing and the thing being used. Oh well, language has
a mind of its own.
But for many systems, each new subsystem implements a use case for lower
> layers, and yet may become a lower layer itself for further use cases.
> Fasteners, for example, serve applications like bushings (and many
> others), which serve applications like axles (and many others), which
> serve applications like vehicles (and many others), which serve
> applications like human transportation (and many others), which serve
> applications like getting to the Joe to see the Red Wings. Modern
> software mashups obviously have the same problem, but so do stacks like
> IP -> TCP -> HTTP -> XML -> XHTML -> expense report app -> my flight to
> London. Each is an application of the layer(s) below it. Some might be
> considered "appliances" but merely switching names doesn't obviate the
> layered structure.
>
Yes, the uses relationship is highly layered, as David Parnas pointed out in
the 1970s. And I agree that distinguishing the use of a thing
("application") from the thing being used (what I am now calling "app")
doesn't change the layered structure. But it does clarify which layering one
is talking about: the layering of the "uses relationship" (a la Parnas) vs.
the physical layering of the programs being used.
-- Nick
I'm going to stick with "A REST system of any significant size will have an incalculable number of applications." (Or apps, if I want to abbreviate -- since when does abbreviating a word change its meaning? Talk about confusing...) I don't see any reason to change the existing terminology, that a REST system includes all its components. One may deem REST a layered-system architecture, just as one may deem it a client-server architecture. The term "system" makes a lot more sense than "app" or "application" when discussing what a REST developer actually develops. -Eric
"Eric J. Bowman" wrote: > > I'm going to stick with "A REST system of any significant size will > have an incalculable number of applications." (Or apps, if I want to > abbreviate -- since when does abbreviating a word change its meaning? > Talk about confusing...) > > I don't see any reason to change the existing terminology, that a REST > system includes all its components. One may deem REST a > layered-system architecture, just as one may deem it a client-server > architecture. The term "system" makes a lot more sense than "app" or > "application" when discussing what a REST developer actually develops. > The term I use interchangeably with "REST system", is "REST API", usually dropping "REST" and just saying "API" because REST is assumed in the context of this list. -Eric
My understanding is: you have an architectural style, you design an architecture that conforms to that style, you implement that design, and you write applications that run over that architecture. So, REST imposes a Uniform Interface, and I build an architecture which I define as the Uniform Interface: GET, POST, PUT, DELETE, LISTEN (this is based on a real use case). Note this interface is defined in the architecture, independent of any protocol, because this architecture will support multiple protocols, meaning the same resources will be addressed by several protocols via their specific connectors. It's based on the HTTP protocol because that is the protocol we foresee being used most (and it also has the most support and tools available), but we also foresee other use cases that will use LISTEN, like support for the JMS protocol via a JMS connector. And I can extend it if needed by adding new verbs, without breaking existing users. Then we build applications on this architecture. We define media-types, relations, resources. One resource of the application can make use of the GET, POST, PUT, DELETE verbs, another one GET and LISTEN only. None can use verbs that don't exist in the Uniform Interface, at least until we deploy Architecture 2.0, which can extend it with ERASE, for example. Note that I use the word Architecture to refer to an actual implementation of an architecture, not only its design. (And I use the term "I" loosely, as all this was/is a collective effort.) This is not only my understanding but the way we built our infrastructure, or middleware, which has now been in production for some good months. And the middleware based on this is probably the most robust piece of software we use, including several 3rd-party frameworks, with exactly 0 bugs until now (even if the fact that it runs, for now, only on the intranet helps). And it supports the HTTP, IMAP, JMS, intra-VM, and JCR protocols. If I had the time I would also have built a SOAP connector. 
Now from what I understood, you say that the architectural style imposes a Uniform Interface, but the actual architecture does not? It will have as many Uniform Interfaces as protocols it uses? And the Uniform Interface you apply to your media-types is the one mandated by the Architectural Style, and so we have two different levels of Uniform Interface, at the protocol level and at the media-type level, and none at the architecture/application level? I fully admit my "misinterpretation" of REST if that's the case, and that what I described is wrong from a REST perspective. After all, I read Fielding less than 2 years ago... But somehow what I describe seems more "neat" to me, whatever the name. I don't know anything about English poetry, but I remember (from a book by Gerald Weinberg, best wishes for him in his health) something like "a rose by any other name smells as sweet" :) On 7 Apr 2010 19:14, "mike amundsen" <mamund@...> wrote: António: <snip> If the same application (meaning the same set of resources) are to be referenced by both HTTP and FT... </snip> First, this is not an application description here, but a media-type description. One can use the media-type to *implement* an application, but I did not go that far here. Possibly I've not made that as clear as it should be in this short blog post. Second, the media-type description I've described here does, in effect, have a uniform interface. There are three (and only three) hypermedia links described, and the meanings for these links (including the possible operations) are detailed as well. Third, I think I hear you saying that your *application* has a uniform interface. That is fine, but not the same as saying an architectural style has a uniform interface. I read Fielding's Sec 5.1.5 [1] as referring to "a uniform interface between components." 
For example, in a layered system some components may interact using the HTTP protocol on one layer, and another set of components at some other layer may interact using FTP, BEEP, or some other protocol. The fact that multiple protocols are used in the completion of a request does not, as I understand Fielding, reduce the RESTful nature of the architecture. As an aside, the use of your application-level interface terms (FETCH, ATTACK, KILL) strikes me as a solid candidate for relation link values. IOW, FETCH, ATTACK, KILL may be the equivalent of DSL reserved words for your application which, in a RESTful hypermedia implementation, need to be expressed as links w/ relation value annotations. mca http://amundsen.com/blog/ 2010/4/7 António Mota <amsmota@gmail.com>: > > On 7 April 2010 15:14, mike amundsen <mamund@...> wrote: >> >> >> >> The uniform interface...
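Mike's aside about FETCH, ATTACK, KILL as relation values can be sketched briefly. The link format, URIs, methods, and rel names below are purely illustrative assumptions, not from any actual system; the point is only that application verbs become rel-annotated links activated through the ordinary uniform interface:

```python
# Application-level "verbs" expressed as rel-annotated hypermedia links.
# The server chooses href and uniform-interface method; the client keys
# only off the rel value. Everything here is hypothetical.
links = [
    {"rel": "http://example.org/rels/fetch", "href": "/items/7", "method": "GET"},
    {"rel": "http://example.org/rels/attack", "href": "/monsters/3/hits", "method": "POST"},
    {"rel": "http://example.org/rels/kill", "href": "/monsters/3", "method": "DELETE"},
]

def link_for(links, rel):
    """Find the link advertising a given application semantic (rel)."""
    return next((l for l in links if l["rel"] == rel), None)
```

A client that wants to "attack" looks up the attack rel and issues whatever method and href the representation advertised, so the DSL word never leaks into the protocol.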
On Apr 7, 2010, at 5:53 PM, wahbedahbe wrote: > So I disagree that application design is all client side -- both the client and the servers (most importantly the representations they serve) are important to achieving the users' goals. Yes, that sounds better, I agree. Thanks. Jan
António: Possibly we've gotten off the track a bit. Whether the uniform interface you are working with is the same for all layers, all connectors, etc. is not interesting to me while I am designing the media-type. What *is* interesting to me is whether the media-type design I employ is free of protocol-specific requirements. It is my assertion that, if you want to use the media-type I designed in my blog post, you can use that media-type with any uniform interface you wish. While I will accept that my blog post may be unclear on this point (I have some lazy language in the section regarding mixing protocols), that is one of the primary messages I tried to convey in my example. mca http://amundsen.com/blog/ 2010/4/7 António Mota <amsmota@...> > My understanding is: you have a architectural style, you design a architecture that conforms to that style, you implement that design, you write applications that run over that architecture. [...]
Thanks for all the thoughtful replies - and the much better expressions for what I had on my mind. .. still digesting .. Jan On Apr 7, 2010, at 1:40 PM, Jan Algermissen wrote: > While reading through section 5.3.3[1] I am wondering, whether my understanding of "Application" actually matches Roy's. He writes: > [...]
mike amundsen wrote: > MUST a media-type assume a primary protocol and > reflect that in the documentation? No. > SHOULD a media-type identify a primary protocol > and reflect that in the documentation? No. > More to the point, is the idea of a true protocol-agnostic > media-type meaningless? unhelpful? Not meaningless at all. HTML documents have meaning even when they're sitting around on disk, not being transferred over a network protocol. This meaning and the syntax that communicates it belong in the media-type specification. Network protocol constraints belong in protocol specifications. For example, AtomPub (RFC 5023) defines two new media-types, but it is not itself a media type, it's a publishing protocol. Perhaps AtomPub is doing itself a disservice defining new types in the same draft wherein it defines a new application protocol? Hmm. Maybe I should split Shoji's media type into a separate document... Robert Brewer fumanchu@...
On Apr 7, 2010, at 1:51 PM, mike amundsen wrote: > It is my assertion that, if you want to use the media-type I designed in my blog post, you can use that media-type with any uniform interface you wish. While I will accept that my blog post may be unclear on this point (I have some lazy language in the section regarding mixing protocols), that is one of the primary messages I tried to convey in my example. > IMO, that is the right approach. Media types can remain independent of how they are used over a protocol, and such protocol-level semantics can be described by link relation types. Subbu
Serendipitous re-use, folks! I can't stress that enough. > > In the context of machine clients this would mean that applications > are defined by the client side developer's interpretations of and > assumptions about the envisioned media types (and link relations) and > rules for choosing transitions. > A REST application may be defined as, "What the user is trying to do." The user can only "do" what the origin server says (or implies) it may do. A third-party developer's assumptions don't mean squat to me -- they'll do things as the responses from my system constrain them to do things -- only my own assumptions (or abuses) define my API. The representations making up a self-documenting REST API, are *not* the API. They only describe it. (I say self-documenting to avoid any confusion with self-describing.) The purpose of providing a hypertext application is not to constrain anyone to follow it, its purpose is to expose the API (instead of relying on out-of-band documentation to describe the API) as standard methods, media types and link relations. What Googlebot is trying to do, is follow and index all links on my site, within whatever bounds Google sets for it (Googlebot won't follow all possible GET requests generated by some form, for example). I don't need to code a "index all links" API (although I may use Google sitemaps to provide this very API), Googlebot will do fine without my help. This application, "index all links", is constrained by the responses from my system -- with or without a robots.txt file. 
These two resources share the same definition (a weblog entry), yet their representations have different link-relation metadata: http://charger.bisonsystems.net/xmltest/2006/aug/09/11.xht http://charger.bisonsystems.net/xmltest/tags/pci-x/2006-aug-09.11.xht (you'll have to use your imagination there until the demo fleshes out) You'll have to imagine a "post new comment" form on each, with method=POST action=/xmltest/2006/aug/09/11/index.atom, which self-documents an Atom Protocol-based API. But, I couldn't care less whether a user agent is POSTing to that Atom feed using *any* of my HTML forms, or following any sort of service document, for that matter. All I do care about is that the client is POSTing a valid application/atom+xml document containing valid, allowable XHTML -- this is reflected in the response codes a third-party developer sees, whereas someone POSTing through my HTML representations won't see these failure codes, since my code won't allow invalid, or unallowable, content to be submitted and will always use the proper media type. IOW, I've self-documented my API in my representations, so ignore those restrictions at your own peril, third-party developers. A REST system exposes these API capabilities and restrictions using hypertext, to allow for serendipitous re-use by Googlebot or what-have-you. If "What the user is trying to do" is to post a comment on my weblog, then "post a comment" is the application. Regardless of whether the users are using the native hypertext client, or their user agent is some sort of Atom Protocol client, the system's responses indicate success or failure of the operation the same way -- the nature of the user agent is opaque to the API. "Each application defines goals for the underlying system, against which the system's performance can be measured." 
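The enforcement described in that paragraph can be sketched in a few lines. This is a hedged illustration, not Eric's actual server code: the function name, the status-code choices, and the stand-in validation are all assumptions, showing only that third-party clients learn the API's restrictions from response codes rather than out-of-band documentation.

```python
def accept_comment(method, content_type, body):
    """Accept a comment only as a POST of application/atom+xml.
    Status codes are illustrative of the self-documenting responses."""
    if method != "POST":
        return 405  # method not allowed on this resource
    if content_type != "application/atom+xml":
        return 415  # unsupported media type
    if "<entry" not in body:  # stand-in for real Atom/XHTML validation
        return 400  # malformed entry
    return 201  # comment created
```

A client POSTing through the HTML form never sees 415 or 400, because the server-side form handling always submits valid Atom; a third-party Atom client sees exactly which rule it broke.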
The goal of the underlying system, as defined by the "post a comment" application, is "accept a comment" and the system's performance in this regard may be easily measured. Regardless of user-agent, or path followed through the application to get to this point, what the API developer is interested in is the latency measured from the point in time the POST was fully received, to the point in time when the system starts responding 201 (if we're only measuring success performance). The average latency time of the aggregated 201 responses to POST requests across all comment threads, allows the REST developer to benchmark that aspect of the API. So performance of the "accept a comment" process is measured each time the "post a comment" application is executed, regardless of how it's executed, or which state transition option it came from (the form on the first URL vs. the form on the second URL for the same weblog entry vs. one from some third-party client directly manipulating my Atom representations and bypassing my HTML representations entirely). Serendipitous re-use. REST developers don't care who (assuming no security restrictions) is executing the "post a comment" application or how, only that the REST system is properly executing its "accept a comment" process whenever it's called. -Eric
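The benchmark Eric describes (average latency of 201 responses to POST requests, regardless of which client or representation triggered them) might be computed like this; the log format is invented for illustration, and latencies are in milliseconds:

```python
def mean_post_201_latency(log_entries):
    """Average latency over successful comment submissions.
    log_entries: iterable of (method, status, latency_ms) tuples."""
    samples = [lat for method, status, lat in log_entries
               if method == "POST" and status == 201]
    return sum(samples) / len(samples) if samples else None

# Hypothetical access-log excerpt: only the two 201 POSTs count.
log = [("GET", 200, 20), ("POST", 201, 100),
       ("POST", 422, 50), ("POST", 201, 300)]
```

The measurement deliberately ignores the user agent and the path taken through the application, which is the point: performance is measured against the "accept a comment" process, not against any particular client.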
Where my understanding differs from what you say is that, for me, the Uniform Interface from the REST architectural style is to be defined at the Architecture level, while you say, probably with more insight than me, that the Uniform Interface *is* the Protocol level. I do see the logic of it, and you're probably right. However, if that is so, an Architecture, and consequently the apps that run on it, can have several "regions" with different Uniform Interfaces according to the protocol they use. And it seems OK from the quote you mentioned. But it still seems odd to me that a Uniform Interface is not uniform across all the components of an architecture. (They will be the same, of course, if the architecture only supports one protocol.) And as a consequence, the server-side code you write to implement the resources must recognize different verbs that potentially mean the same thing (GET, RETR), meaning if you add support for another protocol you'll have to recode the resources. Or else write resources specific to each protocol. With "my" kind of implementation, adding support for a new protocol means only writing a new connector; everything else stays the same, because it is the new connector that "translates" the protocol-specific verbs to my architecture-wide Uniform Interface. That is why we support so many protocols with close to no effort, and it will take very little effort to write new ones (most of the code is already in our Abstract Connector). On 7 Apr 2010 21:52, "mike amundsen" <mamund@...> wrote: António: Possibly we've gotten off the track a bit. Whether the uniform interface you are working with is the same for all layers, all connectors, etc. is not interesting to me while I am designing the media-type. What *is* interesting to me is whether the media-type design I employ is free of protocol-specific requirements. 
It is my assertion that, if you want to use the media-type I designed in my blog post, you can use that media-type with any uniform interface you wish. While I will accept that my blog post may be unclear on this point (I have some lazy language in the section regarding mixing protocols), that is one of the primary messages I tried to convey in my example. mca http://amundsen.com/blog/ 2010/4/7 António Mota <amsmota@gmail.com> > > > > My understandin...
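António's connector design — each protocol connector translating its own verbs (GET, RETR, ...) into one architecture-wide uniform interface, so the resources never see a protocol-specific verb — might be sketched like this. All class and method names here are illustrative guesses, not anything from his actual system:

```python
# Sketch of the connector idea: protocol connectors translate protocol-
# specific verbs into one architecture-wide uniform interface; the
# Resource knows nothing about any transport protocol.

class Resource:
    """Knows only the architecture-wide interface, never protocol verbs."""
    def __init__(self, state):
        self.state = state

    def retrieve(self):
        return self.state

class Connector:
    """Abstract connector: maps a protocol's verbs to the uniform interface."""
    verb_map = {}  # protocol verb -> uniform-interface method name

    def handle(self, verb, resource):
        return getattr(resource, self.verb_map[verb])()

class HttpConnector(Connector):
    verb_map = {"GET": "retrieve"}

class FtpConnector(Connector):
    verb_map = {"RETR": "retrieve"}

# Both connectors reach the same resource through the same uniform
# interface; adding a protocol means adding a connector, never recoding
# the resource.
resource = Resource("hello")
```

This matches his claim that most of the work lives in the abstract connector: a new protocol is just a new `verb_map`.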
Saying it in other words, my view of "agnostic, multi protocol" is "all at once" (in Portuguese I would say "tudo ao molho", which is a somewhat humorous expression) and your view is "one at a time" (if I understood you correctly, of course). On 7 Apr 2010 22:59, "António Mota" <amsmota@...> wrote: I think where my understanding differs from what you say is that, for me, the Uniform Interface from the REST architectural style is to be defined at the Architecture level, while you say, probably with more insight than me, that the Uniform Interface *is* the Protocol level. I do see the logic of it, and you are probably right. However, if it is so, an Architecture, and consequently the apps that run on it, can have several "regions" with different Uniform Interfaces according to the protocol they use. And it seems OK from the quote you mentioned. But it still seems odd to me that a Uniform Interface is not uniform across all the components of an architecture. (They will be the same, of course, if the architecture only supports one protocol.) And as a consequence, the server-side code you write to implement the resources must recognize different verbs that potentially mean the same thing (GET, RETR), meaning if you add support for another protocol you'll have to recode the resources. Or else write resources specific to each protocol. With "my" kind of implementation, adding support for a new protocol means only writing a new connector; everything else will stay the same, because it will be the new connector that "translates" the protocol-specific verbs to my architecture-wide, specific Uniform Interface. It is why we support so many protocols with close to no effort, and it will take very little effort to write new ones (most of the code is already in our Abstract Connector). > On 7 Apr 2010 21:52, "mike amundsen" <mamund@...> wrote: > António: > Possibly we've gotten off the track a bit. > > Whether the uniform interface you are working with ...
> > > > mca > http://amundsen.com/blog/ > > > > 2010/4/7 António Mota <amsmota@...> > > > > >... > My understandin... > >
As regards the multi-protocol idea, there could be: - mixed (some actions w/ one protocol, some actions with another protocol) - side-by-side (all actions supported by all adopted protocols) - and possibly other ways to view this. I will say again that the notion of "protocol agnostic" is my point here. I meant to make no claim of the need or value of a "multi-protocol" implementation when designing a media-type, just that it was a possibility left open to *implementors* using the media-type. mca http://amundsen.com/blog/ 2010/4/7 António Mota <amsmota@...> > Saying it in other words, my view of "agnostic, multi protocol" is "all at > once" (in Portuguese I would say "tudo ao molho", which is a somewhat humorous > expression) and your view is "one at a time" (if I understood you correctly, > of course). > > On 7 Apr 2010 22:59, "António Mota" <amsmota@...> wrote: > > I think where my understanding differs from what you say is that, for me, > the Uniform Interface from the REST architectural style is to be defined at > the Architecture level, while you say, probably with more insight than me, > that the Uniform Interface *is* the Protocol level. I do see the logic of > it, and you are probably right. > > However, if it is so, an Architecture, and consequently the apps that run on > it, can have several "regions" with different Uniform Interfaces > according to the protocol they use. And it seems OK from the quote you > mentioned. But it still seems odd to me that a Uniform Interface is not > uniform across all the components of an architecture. (They will be the same, > of course, if the architecture only supports one protocol.) > > And as a consequence, the server-side code you write to implement the > resources must recognize different verbs that potentially mean the same thing (GET, > RETR), meaning if you add support for another protocol you'll have to recode > the resources. Or else write resources specific to each protocol.
> > With "my" kind of implementation, adding support for a new protocol means > only write a new connector, every else will stay the same, because will be > the new connector that will "translate" the protocol specific verbs to the > my architecture wide, specific Uniform Interface. It is why we support so > many protocols with close to none effort and it will take ver little effort > to write new ones (most of the code is already on our Abstract Connector). > > > On 7 Apr 2010 21:52, "mike amundsen" <mamund@...> wrote: > > > > > Antnio: > > > Possibly we've gotten off the track a bit. > > > > Whether the uniform interface you are working with ... > > > > > > > > > > mca > > http://amundsen.com/blog/ > > > > > > > > 2010/4/7 Antnio Mota <amsmota@...> > > > > > > > >... > > My understandin... > > > > > > > >
On Wed, Apr 7, 2010 at 3:47 PM, Jan Algermissen <algermissen1971@...> wrote: > On Apr 7, 2010, at 5:53 PM, wahbedahbe wrote: > > So I disagree that application design is all client side -- both the client and the servers (most importantly the representations they serve) are important to achieving the users' goals. > > Yes, that sounds better, I agree. Thanks. Should distinctions be made about types of systems that live on the Web (or wherever else RESTful systems live)? I mean, between the kinds of systems that clearly do assume a particular set of use cases, and want to lead users to a goal (e.g. Amazon shopping) and systems that may provide individual resources that might or might not lead to a goal (e.g. Wikipedia) and systems that aim at mashing up other resources (e.g. Google Wave) and probably some other systems that I did not mention? Some of these lead to serendipitous re-use better than others, and some (like Wave) are aimed at serendipitous re-use of other systems but themselves cannot be serendipitously re-used.
> > "Each application defines goals for the underlying system, against > which the system's performance can be measured." > > The goal of the underlying system, as defined by the "post a comment" > application, is "accept a comment" and the system's performance in > this regard may be easily measured... > Actually, my example showed how to measure the performance of the underlying origin server component, which is a useful component topology to measure. The performance of a larger set of the underlying component topology could be measured at the user agent, from the intiation of the POST request until the next steady-state is achieved, i.e. whatever the user agent does with the 201 response, like proceed to reload and render the comment thread. That way, the "in-circuit" component topology would include everything that affects the user-perceived performance of a REST application being executed within, say, a Web browser. The goal of the underlying system in this case, would be "post a comment and proceed to the next steady- state", every time it executes the "post a comment" application (still assuming the 201 case). -Eric
On Apr 6, 2010, at 3:18 PM, Jan Algermissen wrote: > > On Apr 6, 2010, at 2:58 PM, William Martinez Pomares wrote: > >> Is the REST community interested in keeping this view of REST under SOA? > > Incidentally, I am personally in the process of slowly developing the idea that the notion of 'service' is[1] actually harmful to REST-oriented thinking. The notion of 'service' is indeed harmful, IMHO, because it emphasizes interfaces as the 'point of contract' between client and server. REST, on the other hand, emphasizes representation semantics (media types, link relations, etc.) as establishing that contract. I have a hunch that this difference creates a tension that is the reason for much of the apparent misconceptions about REST. Jan > > I think 'service' is commonly perceived in the context of 'service layer', of exposing business functionality as a set of operations, and operation-oriented thinking is somewhat contrary to REST. > > Maybe it is nit-picking, but maybe it is necessary to re-think networked systems development from the ground up to overcome the many apparent misconceptions about REST (and the associated dangers in making things worse as Stu mentioned). > > Jan > > [1] or 'might be' because I have not yet made up my mind :-)
2010/4/7 mike amundsen <mamund@...> > As regards the multi-protocol idea, there could be: > - mixed (some actions w/ one protocol, some actions with another protocol) > - side-by-side (all actions supported by all adopted protocols) > - and possibly other ways to view this. > > I will say again that the notion of "protocol agnostic" is my point here. I > meant to make no claim of the need or value of a "multi-protocol" > implementation when designing a media-type, just that it was a possibility > left open to *implementors* using the media-type. > > Yes, I understand that. And I guess I'm digressing to other areas besides your blog post. But then again... You wrote: "when i need to come up w/ a protocol-specific implementation of my media type." So your media type *definition* is protocol agnostic, but not your actual, usable media types - the implementations you mention. They are indeed protocol-specific. Meaning an effort in implementation and, worse, a maintenance effort that may be negligible with 2 protocols, but imagine with 4 or 5 or 6, multiplied by even only a half-dozen media-types... And what I argue is, if we take the task of mapping the "application uniform interface" to the "protocol interface" away from the media-types and put it directly at the architecture definition, we avoid all that effort, and all the implementation work will be concentrated in the specific protocol connectors (when I say connectors I am always referring to server-side connectors, not client-side). The effort will be the same for 1, 2, 10, 100 media-types... And I promise not to digress again...
Ok Mike. I've read it and it seems a very simple example that shows the important, core aspects of media type definition. I also noticed you were following Roy's rant guide (I mean, the guide in the rant :D ). BTW, Antonio's point about multi protocols should be discussed a little bit in another comment. But now on this one: I read it and was able to understand the intention, although it was not explicit. If I change my REST knowledge, reducing it to that of an RPC developer who knows REST is good and whose boss told him to implement, quick and dirty, a REST API, I may have problems. I will probably be driven to think I'm defining a mapping for my RPC calls. So, my suggestion (bear with me as I treat all people as my university students) is to introduce the article by explaining that a RESTful API is designed not by defining the procedures to call on a data element, but by defining a data element representation that has such and such attributes to allow such and such REST constraints. Then go on saying that the regular RPC approach would be to define that list of operations you have there. Next, explain that your first step would be to define the resource (entity and semantics), and then define a representation (a media type) that will help user-agents work with that resource. Make clear that the actual List (resource) is implemented in the server in whatever form the designer wants: it may be a table in a database, an in-memory array, a Vector, even a HashTable of some sort, or a hierarchical filesystem structure, whatever you imagine. The client does not care; it receives the representation/media type you are defining. That is because a regular developer may think the list SHOULD be stored in the server as an XML document like the one you post, in other words, that you are defining the actual storage format. Again, foresee confusion and be defensive. I won't go on with the rest of the article, but that would be my style.
Long, slow description making special emphasis on some concepts that may be confusing to the reader. For instance, when mapping the protocol, be clear you are not mapping the operations in the beginning to those of the protocol (making the protocol your RPC implementor), but using the protocol semantics to manipulate the resource, getting similar results as the operations above, and stating that the actual manipulation may be different altogether. Of course, you have your style, and the approach of the article is great (show by doing), but our discussion about perceptions of the meaning of some concepts makes me careful to try not to confuse the reader. Cheers! William Martinez. --- In rest-discuss@yahoogroups.com, mike amundsen <mamund@...> wrote: > > I've posted a blog entry labeled "A RESTful Hypermedia API in Three > Easy Steps"[1]. I used Fielding's "REST APIs must be > hypertext-driven"[2] as a reference. > > I'd appreciate all the feedback anyone would like to offer regarding > the concepts, terminology, and implementation details described there. > If you prefer not to clutter this list, feel free to comment on the > blog or email me directly. I also hang out in the #rest IRC channel on > freenode if you'd like to carry on there. > > Thanks in advance. > > [1] http://amundsen.com/blog/archives/1041 > [2] http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven > > mca > http://amundsen.com/blog/ >
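William's point that the storage form of the List resource is the server's private business — the client only ever receives the representation, rendered in the media type — can be shown in a few lines. A minimal sketch; the XML shape and function names are invented for illustration, not Mike's actual media type:

```python
# Sketch of William's point: the resource's storage form (here a plain
# Python list) is invisible to clients; they only ever receive the
# representation rendered in the media type.

from xml.sax.saxutils import escape  # escape XML-special characters

def render_list_representation(items):
    """Render the stored list in a hypothetical XML media type."""
    entries = "".join(f"<item>{escape(i)}</item>" for i in items)
    return f"<list>{entries}</list>"

# The server could swap this list for a database table, a Vector, or a
# filesystem tree tomorrow; the representation, and thus the client,
# would be unaffected.
stored = ["milk", "bread"]
representation = render_list_representation(stored)
```

The point of the sketch is the separation: `stored` is the storage format, `representation` is the contract, and only the latter crosses the wire.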
Antonio, Mike. One thing is the architectural style, which imposes restrictions on interactions and SOMETIMES may impose technology, given that the imposition is architecturally significant (critical to the architecture's success). In this case, that is not the case. The concept of a Uniform Interface is not a REST-only property. The implementation of it defined in REST includes the four constraints: - identification of resources - manipulation of resources through representations - self-descriptive messages - hypermedia as the engine of application state Mainly, all of these are designed from the resources' point of view, not the protocols'. The last one imposes technology, to support the REST final implementation, that is the Web. Nothing there about protocols, just about the resource. Fielding states this in the rant: "A REST API should not be dependent on any single communication protocol, though its successful mapping to a given protocol may be dependent on the availability of metadata, choice of methods, etc." So, the API is defined with the resource. The user agent may select any protocol to use, depending on the protocols supported by servers and the protocol semantics in the operations. You should not change the protocol semantics to fit your needs. You may have protocols that are not suitable for what you need; then you should either adjust your resource semantics, or choose other protocols. In that case, Mike is right in terms of having different protocols for one client, where that client may require some operation not possible with another protocol. Antonio's approach is one happy solution to expandability. Your client may not know or cannot use one protocol, or your resource semantics are a perfect fit for one particular protocol; any other combination of protocols may require mapping. Thus, you create one resource definition for one protocol, and then create adapters for other clients using other protocols.
That is a great idea, but be clear on a couple of things: 1. The REST API ends in the main protocol. The adapter layer is not part of the API. The direct clients of the REST API are the adapters. 2. You may be introducing much impedance mismatch, since the protocol semantics are so different. Or you are just tweaking the semantics of the secondary protocol to mean something different, which is possible but may not be as good. Cheers! William Martinez Pomares --- In rest-discuss@yahoogroups.com, António Mota <amsmota@...> wrote: > > 2010/4/7 mike amundsen <mamund@...> > > > As regards the multi-protocol idea, there could be: > > - mixed (some actions w/ one protocol, some actions with another protocol) > > - side-by-side (all actions supported by all adopted protocols) > > - and possibly other ways to view this. > > > > I will say again that the notion of "protocol agnostic" is my point here. I > > meant to make no claim of the need or value of a "multi-protocol" > > implementation when designing a media-type, just that it was a possibility > > left open to *implementors* using the media-type. > > > > > Yes, I understand that. And I guess I'm digressing to other areas besides > your blog post. But then again... > > You wrote: > "when i need to come up w/ a protocol-specific implementation of my media > type." > > So your media type *definition* is protocol agnostic, but not your actual, > usable media types - the implementations you mention. They are indeed > protocol-specific. Meaning an effort in implementation and, worse, a > maintenance effort that may be negligible with 2 protocols, but imagine with > 4 or 5 or 6, multiplied by even only a half-dozen media-types...
> > And what I argue is, if we take the task of mapping the "application > uniform interface" to the "protocol interface" away from the media-types and > put it directly at the architecture definition, we avoid all that effort, > and all the implementation work will be concentrated in the specific > protocol connectors (when I say connectors I am always referring to server-side > connectors, not client-side). The effort will be the same for 1, 2, 10, 100 > media-types... > > And I promise not to digress again... >
Hello Bob. Not sure about your question. I mean, a designer may design for users to obtain their goals, but I do not mean you have to provide everything! Actually, there are some resources created to be part of something else, not a standalone thing. The goal may perfectly well be to work as a mashup component! William Martinez. --- In rest-discuss@yahoogroups.com, Bob Haugen <bob.haugen@...> wrote: > > On Wed, Apr 7, 2010 at 8:10 AM, William Martinez Pomares > <wmartinez@...> wrote: > > With all this blah, I came to a similar conclusion: apps are there, with paths the user must discover. > > The system designer should provide all info, state possibilities and transitions to obtain certain goals, > > and thus provide some possible apps. > > I can see that for apps that somebody designed, e.g. shopping at > Amazon. But what about serendipitous apps, e.g. mashups? Google > Waves? >
Eric. I do differentiate the concepts. A system is that collection of components, organized and with particular interactions. Those may support different applications, which are no more than a set of tasks to produce a goal. A system is my computer, and applications use some of the components to achieve a goal. So, you may create a system that supports several apps, even some you didn't think of! Now, a System is not an API. I actually argue about the term REST API, since it comes, I think, from the idea that REST is a Service or RPC replacement. An API is a layer between your system and an external client. Of course, that client is not part of your system. The API may be. The API may be a facade, or an adapter. If a facade, and the system is REST, the API is a REST-constrained simplification of the system. If an adapter, the API is something that will convert, say, RPC interactions into REST interactions. The most typical case is you have a non-REST system (SOA maybe, or a plain old OO one) and you want to be REST. You create an API, an Adapter, that will allow REST-like clients to use your OO system as a REST one. The API will accept HTTP operations on one face, and on the other one will invoke object methods. See how I see it? William Martinez Pomares. --- In rest-discuss@yahoogroups.com, "Eric J. Bowman" <eric@...> wrote: > > "Eric J. Bowman" wrote: > > > > I'm going to stick with "A REST system of any significant size will > > have an incalculable number of applications." (Or apps, if I want to > > abbreviate -- since when does abbreviating a word change its meaning? > > Talk about confusing...) > > > > I don't see any reason to change the existing terminology, that a REST > > system includes all its components. One may deem REST a > > layered-system architecture, just as one may deem it a client-server > > architecture. The term "system" makes a lot more sense than "app" or > > "application" when discussing what a REST developer actually develops.
> > > > The term I use interchangeably with "REST system", is "REST API", > usually dropping "REST" and just saying "API" because REST is assumed > in the context of this list. > > -Eric >
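William's adapter idea — HTTP operations on one face, object-method invocations on the other, so REST-like clients can use a plain OO system — might look like this minimal sketch. The legacy class, URI scheme, and method mapping are all hypothetical:

```python
# Sketch of William's adapter API: accepts HTTP operations on one face
# and invokes plain object methods on the other, so REST-like clients
# can use a non-REST (OO) system. All names are hypothetical.

class LegacyOrderSystem:
    """A plain OO system that knows nothing about HTTP."""
    def __init__(self):
        self._orders = {}

    def fetch_order(self, order_id):
        return self._orders.get(order_id)

    def store_order(self, order_id, data):
        self._orders[order_id] = data

class RestAdapter:
    """Translates HTTP method + URI into method calls on the OO system."""
    def __init__(self, system):
        self.system = system

    def handle(self, method, path, body=None):
        order_id = path.rsplit("/", 1)[-1]  # e.g. "/orders/1" -> "1"
        if method == "GET":
            order = self.system.fetch_order(order_id)
            return (200, order) if order is not None else (404, None)
        if method == "PUT":
            self.system.store_order(order_id, body)
            return (200, body)
        return (405, None)  # method not supported by this face

adapter = RestAdapter(LegacyOrderSystem())
```

As William notes, the adapter is the API here; the OO system behind it is untouched.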
> Thanks for the feedback. I appreciate the time you spent to provide me > with good suggestions. > > I see your point about the overall presentation. This short blog post > skips quite a bit when attempting to educate users on the basics of > REST. Also, I received feedback from others on the possibility that > this particular posting doesn't do enough to clear up the issue of > RPC-like approaches to implementing web interfaces. In fact, a longer > draft of this material included an additional section directly > addressing typical RPC ways to express the service. I actually have an > example SOAP implementation and an example RPC-over-HTTP > implementation, but skipped that in the blog as it was too long for this > medium. > > Again, thanks for the helpful feedback and suggestions. > > mca > http://amundsen.com/blog/ > > > > > On Thu, Apr 8, 2010 at 09:05, William Martinez Pomares > <wmartinez@...> wrote: >> Ok Mike. >> I've read it and it seems a very simple example that shows the important, core aspects of media type definition. I also noticed you were following Roy's rant guide (I mean, the guide in the rant :D ). >> BTW, Antonio's point about multi protocols should be discussed a little bit in another comment. >> >> But now on this one: I read it and was able to understand the intention, although it was not explicit. If I change my REST knowledge, reducing it to that of an RPC developer who knows REST is good and whose boss told him to implement, quick and dirty, a REST API, I may have problems. I will probably be driven to think I'm defining a mapping for my RPC calls. >> >> So, my suggestion (bear with me as I treat all people as my university students) is to introduce the article by explaining that a RESTful API is designed not by defining the procedures to call on a data element, but by defining a data element representation that has such and such attributes to allow such and such REST constraints.
>> Then go on saying that the regular RPC approach would be to define that list of operations you have there. Next, explain that your first step would be to define the resource (entity and semantics), and then define a representation (a media type) that will help user-agents work with that resource. Make clear that the actual List (resource) is implemented in the server in whatever form the designer wants: it may be a table in a database, an in-memory array, a Vector, even a HashTable of some sort, or a hierarchical filesystem structure, whatever you imagine. The client does not care; it receives the representation/media type you are defining. That is because a regular developer may think the list SHOULD be stored in the server as an XML document like the one you post, in other words, that you are defining the actual storage format. Again, foresee confusion and be defensive. >> >> I won't go on with the rest of the article, but that would be my style. Long, slow description making special emphasis on some concepts that may be confusing to the reader. For instance, when mapping the protocol, be clear you are not mapping the operations in the beginning to those of the protocol (making the protocol your RPC implementor), but using the protocol semantics to manipulate the resource, getting similar results as the operations above, and stating that the actual manipulation may be different altogether. >> >> Of course, you have your style, and the approach of the article is great (show by doing), but our discussion about perceptions of the meaning of some concepts makes me careful to try not to confuse the reader. >> >> Cheers! >> >> William Martinez. >> >> >> --- In rest-discuss@yahoogroups.com, mike amundsen <mamund@...> wrote: >>> >>> I've posted a blog entry labeled "A RESTful Hypermedia API in Three >>> Easy Steps"[1]. I used Fielding's "REST APIs must be >>> hypertext-driven"[2] as a reference.
>>> >>> I'd appreciate all the feedback anyone would like to offer regarding >>> the concepts, terminology, and implementation details described there. >>> If you prefer not to clutter this list, feel free to comment on the >>> blog or email me directly. I also hang out in the #rest IRC channel on >>> freenode if you'd like to carry on there. >>> >>> Thanks in advance. >>> >>> [1] http://amundsen.com/blog/archives/1041 >>> [2] http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven >>> >>> mca >>> http://amundsen.com/blog/ >>> >> >> >> >> >> ------------------------------------ >> >> Yahoo! Groups Links >> >> >> >> > >
On 8 April 2010 14:25, William Martinez Pomares <wmartinez@...> wrote: > Antonio's approach is one happy solution to expandability. Your client > may not know or cannot use one protocol, or your resource semantics are a > perfect fit for one particular protocol; any other combination of protocols > may require mapping. Thus, you create one resource definition for one > protocol, > No, no, I did nothing of that. The resources (which are all instances of a single Resource) know nothing about any protocol. I did no such thing as "create one resource definition for one protocol". The only "protocol" the resource knows about is my architecture- or application-specific protocol that happens to follow the HTTP protocol (GET...) but *only* for convenience. I could define that also as FETCH, ATTACK, KILL, or something like XPTA, XPTB, XPTC. > and then create adapters for other clients using other protocols. > I also did no such thing as "create adapters for other clients using other protocols". I created Server-Side Connectors, which are an architecture component, independent of one another, one for each protocol supported. I did *not* create an adapter to adapt JMS to HTTP, for instance. All the connectors are at the same level; the HTTP one is at the same level as the JMS one. They don't communicate with each other; they don't even know the others exist. Each one communicates only with the Resource. I could drop the HTTP connector and the application would continue to work with JMS only, for example. > That is a great idea, but be clear on a couple of things: > 1. The REST API ends in the main protocol. The adapter layer is not part of > the API. The direct clients of the REST API are the adapters. > As exposed, I don't have a main protocol and I don't have adapters; I have connectors, which are an architecture component and so are part of the Architecture. I don't quite understand what you mean by "The REST API ends in the main protocol."
- but then again I don't even know the exact meaning of REST API :) > 2. You may be introducing much impedance mismatch, since the protocol > semantics are so different. Or you are just tweaking the semantics of the > secondary protocol to mean something different, which is possible but may > not be as good. > > There is no secondary protocol, so all protocols make the same kind of "adaptation" between the transport protocol interface and my application interface... Not only the Verb but also the headers, parameters and other data and metadata in the message. Hope this clarifies, but this should probably be another thread?
Hi Antonio. Sorry I misunderstood, and yes, this is a nice topic to be in a different thread! William. --- In rest-discuss@yahoogroups.com, António Mota <amsmota@...> wrote: > > On 8 April 2010 14:25, William Martinez Pomares <wmartinez@...> wrote: > > > Antonio's approach is one happy solution to expandability. Your client > > may not know or cannot use one protocol, or your resource semantics are a > > perfect fit for one particular protocol; any other combination of protocols > > may require mapping. Thus, you create one resource definition for one > > protocol, > > > > No, no, I did nothing of that. The resources (which are all instances of a > single Resource) know nothing about any protocol. I did no such thing > as "create one resource definition for one protocol". The only "protocol" the > resource knows about is my architecture- or application-specific protocol > that happens to follow the HTTP protocol (GET...) but *only* for convenience. I > could define that also as FETCH, ATTACK, KILL, or something like XPTA, XPTB, > XPTC. > > > > and then create adapters for other clients using other protocols. > > > > I also did no such thing as "create adapters for other clients using other > protocols". I created Server-Side Connectors, which are an architecture > component, independent of one another, one for each protocol supported. > I did *not* create an adapter to adapt JMS to HTTP, for instance. All the > connectors are at the same level; the HTTP one is at the same level as the > JMS one. They don't communicate with each other; they don't even know the > others exist. Each one communicates only with the Resource. I could drop the > HTTP connector and the application would continue to work with JMS only, for > example. > > > That is a great idea, but be clear on a couple of things: > > 1. The REST API ends in the main protocol. The adapter layer is not part of > > the API. The direct clients of the REST API are the adapters.
> > > As exposed, I don't have a main protocol and I don't have adapters; I have > connectors, which are an architecture component and so are part of the > Architecture. I don't quite understand what you mean by "The REST API ends in > the main protocol." - but then again I don't even know the exact meaning of > REST API :) > > > 2. You may be introducing much impedance mismatch, since the protocol > > semantics are so different. Or you are just tweaking the semantics of the > > secondary protocol to mean something different, which is possible but may > > not be as good. > > > > There is no secondary protocol, so all protocols make the same kind of > "adaptation" between the transport protocol interface and my application > interface... Not only the Verb but also the headers, parameters and other data > and metadata in the message. > > Hope this clarifies, but this should probably be another thread? >
On Apr 8, 2010, at 3:43 PM, William Martinez Pomares wrote: > Now, a System is not an API. I actually argue about the term REST API, since it comes, I think, from the idea that REST is a Service or RPC replacement. I think the whole notion of 'API' is hindering RESTful thinking. The notions of 'service', 'interface' and 'API' suggest that the contract that informs and guides client-side development is located at the server, that the client developer is coding against the interface provided by the server. With REST, however, the contract is provided by the media types (and link rels) only. The client developer cannot even anticipate to what servers and services the interaction will lead the application. I think it is helpful to view server-side development as exposing server-component-managed (business) state for serendipitous consumption instead of viewing it as providing an 'application programmer's interface'. A nice side effect of this view is that it immediately becomes evident that there cannot be any client-side coding before specific media types have been decided upon. Jan ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
On Apr 9, 2010, at 11:26 AM, Jan Algermissen wrote: > > I think it is helpful to view server side development as exposing server-component managed (business-) state for serendipitous consumptions instead of viewing it as providing an 'application programmer's interface'. Server side development should not be driven by the question 'how should clients interact with this service?' but focus on exposing the server-side system in a way that maximizes the serendipitous reuse of its state and capabilities. And (maybe) server side maintenance should explicitly include the activity of constantly aiming to refine the server to enhance its possible reuse. Jan > > A nice side effect of this view is that it immediately becomes evident that there cannot be any client side coding before specific media types have been decided upon. > > Jan > > ----------------------------------- > Jan Algermissen, Consultant > NORD Software Consulting > > Mail: algermissen@... > Blog: http://www.nordsc.com/blog/ > Work: http://www.nordsc.com/ > ----------------------------------- ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
I would come at this from a different angle and note that client features tend to drive the creation and evolution of markup languages. Client requirements drive markup languages which drive client code. The existence of clients that support a markup language drives the development of services that represent their resources in that language in order to reach those clients. Serendipitous re-use tends to come from the declarative nature of markup languages that, via the principle of least power, enables secondary clients (e.g. spiders) to also be driven by the markup language designed around the primary client. The primary client can also drive the behavior of the secondary clients. For example, Google's spider isn't just about indexing the data in web pages. It must prioritize the data presented to users when they view a page in the browser. If "invisible" data (e.g. meta-data or even data hidden via CSS) was given the same priority as visible data, then the service would be less valuable to its users. Google isn't indexing the browser input -- it is indexing the browser output (to your screen). The declarative nature of HTML makes it easy to derive output from input. The single biggest flaw in most REST system design today is that markup languages are being designed around the service's view of the data. The markup should be a declarative program for the client -- saying "what" the client does with the data rather than "how" it does it. Folks are using markup to denote "what" the data means, but to the service rather than the client. The clients then have to be designed around the service semantics -- this binds them to the service just as much as RPC does. I've been saying for years ( http://tech.groups.yahoo.com/group/rest-discuss/message/8411 ) that with REST, the API is in the client. The markup is a declarative program written to that API -- and being declarative means it can be adapted to other APIs. 
The scripts are optional imperative programs to fill in the feature gaps of the markup language. Content negotiation allows a single service to use other markup languages to target other sets of APIs. If you design your markup languages around this line of reasoning, you will get a lot more mileage out of them. Andrew --- In rest-discuss@yahoogroups.com, Jan Algermissen <algermissen1971@...> wrote: > > On Apr 9, 2010, at 11:26 AM, Jan Algermissen wrote: > > > I think it is helpful to view server side development as exposing server-component managed (business-) state for serendipitous consumptions instead of viewing it as providing an 'application programmer's interface'. > > Server side development should not be driven by the question 'how should clients interact with this service?' but focus on exposing the server-side system in a way that maximizes the serendipitous reuse of its state and capabilities. > > And (maybe) server side maintenance should explicitly included the activity of constantly aiming to refine the server to enhance its possible reuse. > > Jan
On Fri, Apr 9, 2010 at 5:39 AM, Jan Algermissen <algermissen1971@...> wrote: > > On Apr 9, 2010, at 11:26 AM, Jan Algermissen wrote: > > > > > I think it is helpful to view server side development as exposing > server-component managed (business-) state for serendipitous consumptions > instead of viewing it as providing an 'application programmer's interface'. > > Server side development should not be driven by the question 'how should > clients interact with this service?' but focus on exposing the server-side > system in a way that maximizes the serendipitous reuse of its state and > capabilities. > > And (maybe) server side maintenance should explicitly include the activity > of constantly aiming to refine the server to enhance its possible reuse. Perhaps we should rename REST API to REST GPI (General Purpose Interface). After all, it is (the principle of) generality that leads to serendipitous (re)use. This would serve as a constant reminder to interface/media type designers to focus on generality, generality, generality. -- Nick
Hey all, A few days ago I wrote a blog post on why I think understanding REST is hard and what we can and *should* do about it - http://wp.me/poYaf-34. Since a lot of what I wrote was inspired by following this group and since this group is the most relevant (only?) place for discussing REST - I'd like to know what you think. Short version (since the post is really long): Although there are a lot of great blog posts, papers and mailing list discussions, the current material on REST is a mess, which makes REST hard to understand and confusing to discuss: * there is no agreed-upon and widely used terminology, but a lot of unexplained and overlapping terms, * discussions are fragmented all over the web and often unnecessarily repeat previous discussions, * there are no (formal or semi-formal) models of (important) concepts. Therefore, I think people involved in and enthusiastic about REST (mostly people on this group) should: 1. Agree that there is a problem worth fixing – do we think that we can create a better, clearer and more systematized way of understanding and discussing REST? 2. Express interest in fixing it – is this something people want to contribute their time to? 3. Agree on how to fix it – what should our output be (a RESTopedia, a document, video tutorials) and how would we all contribute to and moderate the process? 4. Do it – spend time discussing and developing the output. 5. Eat our own dogfood – use whatever we produce. If we don’t use the terminology and models we agree upon, the mess has only gotten bigger. Cheers, Ivan (hoping that all of this doesn't sound like the babbling of an overeager, naive megalomaniac)
Ivan Žužak wrote: > > Therefore, I think people involved in and enthusiastic about REST > (mostly people on this group) should: > I don't think it's up to us, and I don't think you're taking the actual problem into account -- REST being a buzzword. There is no motivation for, say, REST-* to have a damn thing to do with REST because they're applying the term due to profit motive, not REST enthusiasm. Same goes for most of the contradictory crap out there that has absolutely no basis in REST-the-thesis -- most of those folks only care about REST-the-buzzword. Once the buzzword makes for stale marketing, general discussions about REST will recover their sanity, and those not-REST APIs will call themselves whatever the next buzzword turns out to be. -Eric
To clarify what I mean by an "incalculable number" of REST applications
for any given REST system, an analogy is that the 26 letters of the
alphabet lead to an incalculable number of words. Some words are
incredibly common -- conjunctions come to mind. Same with REST
applications; there will be common usage patterns, of varying length,
and these may be optimized.
"William Martinez Pomares" wrote:
>
> I do differentiate the concepts. A system is that collection of
> components organized and with particular interactions. Those may
> support different applications, which are no more than a set of tasks
> to produce a goal. A system is my computer, and applications use
> some of the components to achieve a goal.
>
I'm only interested in using the term "system" in the REST sense so
clearly evident in the thesis. I may, in other contexts, use the term
"system" in the sysadmin sense, to refer to a physical system. I don't
like to talk about system in the sysadmin sense on rest-discuss, even
as an analogy, to avoid confusion and because hardware setup isn't
relevant to REST.
We don't call REST systems "applications" because that takes a whole
slew of components and connectors for granted, in order for it to
work. So I don't like the analogy -- a browser application uses other
components resident on the same physical system, but others are located
across the network. REST applications are distributed.
>
> So, you may create a system that supports several apps, even some you
> didn't think of!
>
Exactly what's meant by "serendipitous re-use". ;-) This is exactly
what I mean when I say a REST system has an "incalculable number" of
applications. The developer can't possibly imagine the possible use
cases, more on that in my next response to Jan...
>
> Now, System is not an API. I actually argue about the term REST API
> since it comes, I think, from the idea that REST is a Service or RPC
> replacement.
>
You're right, the system encompasses many components, but these things
are out of the REST developer's hands. Everything that is in the REST
developer's hands, is the API for making the overall system do work.
I think the term REST API was derived from the fundamental nature of
REST itself. This is the basis for my interchangeable usage of "REST
system" and "REST API"...
"The resource implementation details are hidden behind the interface."
While the connection to and nature of the backend are certainly parts
of your overall system architecture, they've nothing to do with REST
architecture. REST is all about the communication between connectors
over the wire -- REST applications are distributed.
So we're only concerned with the self-documenting interfaces that
describe a REST system to the world, and the response codes a system
generates to various methods invoked on various resources. You know --
the API... ;-).
>
> An API is a layer between your system and an external client. Of
> course, that client is not part of your system.
>
User agent is a better term than client. The user agent is absolutely a
part of the system. In fact, REST applications are executed within
user agents -- the application state isn't always held entirely within
the representation of a resource; it usually must be rendered to obtain
an application steady-state. The combination of representation plus
user agent *is* the REST application. If HTML + browser, it's a browser
application (with, of course, a client connector).
"[T]he interaction of a real system usually involves an extensive number
of components, resulting in an overall view that is obscured by the
details."
An API is the layer between your system and external components, like
caches. The user agent may not ever contact the origin server directly,
and a cache may be able to serve all requests without contacting the
origin server either. That cache becomes part of your system -- one of
those components that we take for granted -- it's executing a REST
application against the origin server, to determine resource state
using conditional requests.
So a REST API is truly a programming interface for applications,
whether they're executing on the requesting user-agent or on some
intermediary. How would a cache know how to service requests on behalf
of my origin server, if it can't be programmed to understand my service
parameters? I program intermediaries the same way I program user
agents, with a REST API.
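To make the point above concrete, here is a minimal, hypothetical sketch (not taken from any real cache implementation) of how an intermediary can be "programmed" by the origin server's validators to service a conditional GET without contacting the origin at all. All names and values are illustrative.

```python
# Sketch: a cache answering a conditional GET on the origin's behalf.
# The origin "programmed" this behavior by attaching an entity tag.

def serve_from_cache(request_headers, cached_etag, cached_body):
    """Return (status, body) for a GET, honoring If-None-Match."""
    client_etag = request_headers.get("If-None-Match")
    if client_etag is not None and client_etag == cached_etag:
        # The client's copy is current: no representation crosses the wire.
        return 304, b""
    # Otherwise transfer the cached representation of resource state.
    return 200, cached_body

# A fresh client receives the full representation...
status, body = serve_from_cache({}, '"v1"', b"<entry>...</entry>")
# ...while a revalidating client gets 304 Not Modified from the cache.
status_2, body_2 = serve_from_cache(
    {"If-None-Match": '"v1"'}, '"v1"', b"<entry>...</entry>")
```

The cache never needed to know anything about the backend; the validator exchanged through the generic interface was enough.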
>
> API may be a facade, or an adapter. If a facade, if the system is
> REST, the API is a REST constrained simplification of the system. If
> an adapter, the API is something that will convert, say, RPC
> interactions into REST interactions.
>
I don't see the distinction, and I don't see how such a distinction can
be made. Give me a bunch of RPC endpoints, and I can code you a REST
system to drive them. Since the implementation details are hidden
behind the interface, the API is something that will transfer
representations of application state to user agents, and instruct them
how to transfer representations of desired application state to the
origin server.
REST is a layered-system architecture. I apply REST constraints to the
communication between the connectors of my back-end system, but this is
not part of my API since it's an implementation detail hidden behind
the interface to the world. That interface to the world is my API.
>
> The most typical case is you have a non-REST system (SOA maybe, or a plain
> old OO) and you want to be REST. You create an API, Adapter, that
> will allow REST like clients to use your OO system as a REST one. The
> API will accept HTTP operations in one face, and in the other one
> will invoke object methods.
>
Bear with me...
In my case, the non-REST system is WordPress. I create a REST layer to
encapsulate this legacy system, just like the thesis says. WP only
generates Atom Entry, Atom Feed and Atom Category documents. WP creates
the URI allocation scheme you see in my demo via a module with a couple
of support functions for my template, which generates Atom instead of
HTML. But, WP is incapable of responding to that URI allocation scheme
directly without extensive modification.
Which is OK by me, since I don't want to expose WP to the world; it
runs on its own IP and rejects requests from any IP that isn't the
encapsulation layer's or mine (initially, I'll use the WP interface
until development is completed on the Xforms interface, then I'll cut
out the CP, and have the main page return a configuration form to the
admin user when an OPTIONS request is made with proper credentials).
The encapsulation layer converts all requests to /index.php?p={x}, and
acts as a reverse-proxy/cache for the response. A request to post a
comment to a thread is handled by the encapsulation layer; once handled
(validated and such), the request is passed on to WP using Atom
Protocol. The encapsulation layer creates separate Atom Feeds for each
thread, whereas WP's Atom Protocol capability considers the entire
weblog as a single collection.
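The rewrite step described above can be sketched roughly as follows. This is a hypothetical illustration, not the actual layer's code: the paths, mapping table, and post IDs are all invented.

```python
# Sketch of the encapsulation layer's URI translation: the public
# hierarchical URI space is mapped onto WordPress's internal
# /index.php?p={x} scheme, hiding the backend's URI allocation.

PATH_TO_POST_ID = {
    "/2010/04/wrapping-wordpress": 42,           # a weblog entry (invented)
    "/2010/04/wrapping-wordpress/comments": 42,  # its comment feed
}

def rewrite(path):
    """Map a public path to the backend URI, or None for a 404."""
    post_id = PATH_TO_POST_ID.get(path)
    if post_id is None:
        return None  # the encapsulation layer answers 404 itself
    return "/index.php?p=%d" % post_id
```

Because only the encapsulation layer knows this mapping, the backend's URI scheme can change (WP today, an Atom Store tomorrow) without the public API moving at all.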
The future of my project consists of replacing WP with an Atom Store.
The encapsulation layer (REST wrapper) has a modular design, to support
interfacing with multiple source generators. So a new module is needed
which, instead of abstracting out /index.php?p={x} into a hierarchical
URI allocation scheme, abstracts out the URI allocation scheme of the
Atom Store (which is the same as that used in my atom:id's, btw) into my
site's URI allocation scheme.
So, I can swap out a MySQL-driven WP content generation layer for a
simple Atom Store XMLDB-driven content generation layer that's not in
any way similar except for its ability to generate Atom content that
exactly matches what WP generates, and to accept posts via Atom Protocol.
The REST wrapper completely isolates this madness from the outside
world. Commenters on my weblog won't notice a thing -- because I
haven't changed the API in the slightest. (My back-end API changes a
bit, but it's using Atom Protocol either way.)
By sharing the self-documenting interfaces to my system with the world,
I am providing a stable API which is unaffected by the nature of my
back-end implementation. The API is the interface behind which the
implementation details are hidden. REST API "Implementations are
decoupled from the services they provide, which encourages independent
evolvability."
Do you see how I just described a REST API which evolves from your
"adapter" into your "facade" over time? WP isn't RESTful, but the Atom
Store certainly will be. This distinction has no impact whatsoever on
any application running against my system, because it's hidden behind
the generic connector interface, which is the point of a REST API.
>
> See how I see it?
>
Nope, sorry. You're trying to make a distinction which assumes the API
is coupled to the back-end implementation, when such coupling doesn't
exist in a REST API.
-Eric
Jan Algermissen wrote:
>
> William Martinez Pomares wrote:
>
> > Now, System is not an API. I actually argue about the term REST API
> > since it comes, I think, from the idea that REST is a Service or
> > RPC replacement.
>
>
> I think the whole notion of 'API' is hindering RESTful thinking. The
> notions of 'service', 'interface' and 'API' suggest that the contract
> that informs and guides client side development is located at the
> server, that the client developer is coding against the interface
> provided by the server.
>
I think trying to fit the notion of "contract" into REST is hindering
RESTful thinking. ;-) What is a client developer? Someone building a
user agent? Or a third party writing their own REST application against
my REST (API | system | service)? "Client-side development" makes no
sense -- when I do client-side development I'm not building user agents,
I'm explicitly writing a self-documenting API of the interfaces
provided by the server. Building user agents is client-side
development of a different sort.
The only contract in REST, is that REST constrains me to use standard
methods, media types and link relations to describe this API to user
agents. If you want a contract for your API it needs to be contained
within representations of standard media types. To me a contract would
be declared in domain-specific vocabulary beyond the scope of REST.
>
> I think it is helpful to view server side development as exposing
> server-component managed (business-) state...
>
Do you mean resource state? I expose resource state through
representations of requested resources. Resource state is independent
of application state. If an image is dereferenced, the representation
of resource state is returned, but that isn't an application state if
the image was dereferenced as part of the process of rendering some
other representation into an application steady-state.
If by server-side development, you mean figuring out what resources
combine into what application steady-states in what media types and
naming them, then the goal of server-side development goes beyond
exposing resource state and carries into developing application steady-
states for the native hypertext, i.e. explicitly writing a self-
documenting API of the interfaces provided by the server. Even though
they execute on the client, you can't develop on the server without a
knowledge of your intended application steady-states.
"An origin server uses a server connector to govern the namespace for a
requested resource. It is the definitive source for representations of
its resources and must be the ultimate recipient of any request that
intends to modify the value of its resources. Each origin server
provides a generic interface to its services as a resource hierarchy.
The resource implementation details are hidden behind the interface."
On my demo, application steady-state requires XSLT processing which in
turn requires /date service translation. So /date is both part of my
weblog API and an API in its own right. As an API in its own right,
it's pretty useless, but it's still (mostly, atm) RESTful. Another
service is the Atom-Protocol-based API, also part of my weblog API and
a (mostly) REST API in its own right. The weblog API described by my
hypertext (my "REST API") instructs user agents how to combine these
two services into an incalculable number of REST applications.
So my origin server provides a generic connector interface to a
resource hierarchy of RESTful services. This resource hierarchy, what
REST applications are programmed against, is the API. This REST API
uses self-descriptive messaging and self-documenting representations
(i.e. a generic connector interface) to instruct user agents how to
manipulate resource state. Application steady-state isn't the same
thing as the state of any given resource it's composed of.
The steady-states are what allow for serendipitous re-use. The API
described by these steady-states can be referenced by other user agents
than the ones the API developer intended. The more resources and the
more metadata exposed by these steady-states, the greater the
opportunities for serendipitous re-use.
>
> With REST, however, the contract is provided by the media types (and
> link rels) only. The client developer cannot even anticipate to what
> servers and services the interaction will lead the application.
>
By client developer, do you mean someone writing a client connector for
an intermediary component? Someone writing the API to a REST system?
A third-party REST application developer? While it's true that a
third-party REST application can't anticipate what targets and methods
will be used, the developer can know (by looking at my API) where to
*look* for this information.
There is plenty of metadata to choose from in the steady-states of my
REST API, which may be used for any purpose a third-party developer
sees fit, based on whatever their assumptions may be (constrained by
the actual responses of my origin server). I make no promises about
how stable any of it is, unless it's part of the API described by the
sum of the actual responses of my origin server and my hypertext.
If a third-party REST application is based on the API I've described in
my hypertext through an understanding of its domain-specific vocabulary,
then that domain-specific vocabulary is probably the "contract" you're
looking for. Any deviation from the API I've described is unsupported,
even if it does work for now, whereas I'll be supporting the contract.
>
> A nice side effect of this view is that it immediately becomes
> evident that there cannot be any client side coding before specific
> media types have been decided upon.
>
Not really, no. In many cases, the data may be represented as Atom and
interacted with using Atom Protocol, as a prototyping exercise. Thus
the cost-benefit analysis vs. Just Use Atom/AtomPub can be done, and a
baseline exists for performance analysis of further system development.
Consider this case not only as an example of design following the
above paragraph, but also as a result that machine users can contractually
follow to execute tasks. (It's similar to my demo and the process I went
through, but not exactly so, and I'm riffing on that to boot.)
Let's say I'm at the blank-sheet phase of designing the system my demo
represents. I have some idea how I want to present my weblog, i.e.
following a link to an original post contains a link to that post with
its comment thread contains links to those comments as standalone
documents. On the homepage, I want to present only a summary of the
content, with a "more" link which when clicked updates the page in-
place with the content from the standalone source.
From this vague idea, I can create the "outline" of a single document
containing the entire content of the weblog API I'm creating. I can now
create a filesystem hierarchy with folders titled as per the outline,
i.e. Roman numeral 'I' is a folder name, containing folders named
capital 'A, B etc.', containing folders '1, 2 etc.' containing folders
'a, b etc.' and so on and so forth.
Into these folders I can place numbered text files containing random
gibberish. I've now modeled my main resources and their relationships
to one another. I can now create Atom representations for each snippet
of gibberish, still as text, and start naming directories and files and
assigning them titles, then tie them together with standard link
relations. I can now set up eXist and manipulate things with Atom
Protocol -- there's my initial prototype, based on text/plain with no
domain specific vocabulary.
Now I can proceed to think about what my media types are. Atom works
for me, to some extent, so I'm going to build onto this core using
XHTML and Xforms. That I'm using HTML and forms is kind of a foregone
conclusion, but even if it weren't, what follows would be the same, and
it doesn't really lock you into anything, only fleshes out the
architectural model you're developing.
What I need now are RDF triples to describe the interfaces in terms of
mapping my domain-specific vocabulary to standard link relations. For
example, the root-level resource has an index.rdf file which uses the
following pattern to *locate* interfaces:
<!-- I'm uncertain about RDF syntax but think I'm on the right track -->
<rdf:Description about="{//*[@instanceof='wiski:weblog-entry']}">
<!-- in my demo the particular element is li but this isn't required -->
<link rel='edit' href=
"{document(./@about)//*[@rel='edit']/@href}"/>
<!-- in my demo the particular element is a but doesn't have @rel -->
<link rel='replies' href=
"{document(./@about)//*[@rel='replies']/@href}"/>
</rdf:Description>
My documentation states that //*[@instanceof='wiski:weblog-entry' and
@id='post-0'] identifies the interface for me to make a new entry on my
weblog, if it's at root level, otherwise it identifies the interface to
make a new comment somewhere deeper in my hierarchy (even if I use CSS
to make it appear at the bottom of the Web page, but HTML/CSS is
getting ahead of myself here). And, collections don't have rel='edit',
comments don't have rel='replies', but weblog entries have both.
Otherwise, without documentation, my RDF describes the interfaces to
existing resources located at {./@about}, which is also where to find
the locations for editing and replying (based on link relation). I
suppose this does require using XML, but the media type is up in the
air still -- I can choose anything from which I can point rel=
'transform' to an XSLT file which outputs index.rdf (GRDDL) in the
above pattern.
Of course, at this point I can swap eXist for CouchDB if I want, using
different URIs to define Views. But the RDF pattern would still hold
equally true. The rel='replies' target may change, and so may its media
type, and so may the representation from which the RDF is derived (and
its media type). What wouldn't change is my API, i.e. the pattern the
RDF describes, if my Atom <link/>s are redone as HTTP Links seeing as
how the JSON media type CouchDB uses doesn't define links.
The RDF is generated from the steady-states using a domain-specific
machine-targeted vocabulary. So the RDF always tells a user agent
capable of GRDDL, where to look in the source document in terms of
Xpath, to find URIs of interest. The user agent follows its nose to
figure out *how* to post a new entry, post a new comment or edit an
entry or comment, from the representations it receives.
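The "follow your nose" step amounts to locating interfaces by standard link relation rather than by hard-coded URI. A minimal sketch, assuming a representation like the one below (the document is invented; only the rel values 'edit' and 'replies' mirror this post):

```python
# Locate an interface by its link relation, never by its URI shape.
import xml.etree.ElementTree as ET

REPRESENTATION = """
<entry>
  <title>A weblog entry</title>
  <link rel="edit" href="/entries/7/edit"/>
  <link rel="replies" href="/entries/7/comments"/>
</entry>
"""

def find_href(doc, rel):
    """Return the href of the first element carrying the given @rel."""
    element = ET.fromstring(doc).find(".//*[@rel='%s']" % rel)
    return None if element is None else element.get("href")

# A user agent (or spambot) needs only the relation; the server is free
# to rename /entries/7/comments at any time.
replies_uri = find_href(REPRESENTATION, "replies")
```

The same lookup works whether the rel lives on an Atom <link/>, an HTML element, or anything the GRDDL transform can point at.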
A third-party spambot developer can script together some standard
libraries, to crawl my site and report back the @href for every rel=
'replies' by deducing the RDF pattern above from the XSLT GRDDL
transformation (if not just running the XSLT transformation) and
POSTing some Atom-wrapped spam to each one.
Since each rel='replies' responds that it will Accept POSTs made in
application/atom+xml, unless some mechanism gets in the way the spambot
will be incredibly successful on my weblog. If I change eXist to
CouchDB the spambot breaks because the developer didn't write it to
follow its nose through any REST application, so the spambot tries to
POST Atom to a resource that now only Accepts application/json.
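The contrast can be sketched as follows: a nose-following client consults what the resource currently advertises it will accept before POSTing, instead of assuming Atom forever. The header values and function are invented for illustration.

```python
# Sketch: pick a request media type from what the resource advertises,
# rather than from a third-party developer's frozen assumption.

def choose_post_type(advertised, supported):
    """Return the first advertised media type the client can produce."""
    for media_type in (t.strip() for t in advertised.split(",")):
        if media_type in supported:
            return media_type
    return None  # cannot satisfy the interface; don't POST blindly

client_types = {"application/atom+xml"}
# Before the eXist-to-CouchDB swap the resource accepts Atom...
before = choose_post_type("application/atom+xml", client_types)
# ...afterwards only JSON is accepted, so this client backs off
# instead of POSTing Atom at a resource that will reject it.
after = choose_post_type("application/json", client_types)
```

The hard-wired spambot is the client that skips this check: it keeps POSTing application/atom+xml and breaks the moment the advertisement changes.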
If, instead of a spambot, we consider my XHTML representations being
used to drive the application, the user agent is following its nose and
all it requires to display JSON as text/plain inside a <textarea>
instead of HTML as text/plain inside a <textarea> is how to render a
<textarea>. The API is no different, really, only a media type has
been changed or added. What method to use on what URI and what media
type to send may have changed, but any user agent following hypertext
instead of third-party developer assumptions will be automatically
updated when my representations change.
The contract in REST is that the API be spelled out using standard
methods, media types and link relations. Any contract you want beyond
that, has to do with domain-specific vocabulary. My RDF guarantees
that the steady-state from which it was derived will instruct the user
agent where and how to make and edit posts, while deliberately saying
nothing about media types or methods or URI patterns (only metadata).
If your spambot GRDDL-crawls my site and learns where and how to post a
new comment to a thread by following my RDF and RDFa, then I change my
site all around but still base it on the same RDF and RDFa vocabulary,
your spambot won't break over time because it's sticking to the third-
party developer contract elucidated through domain-specific metadata
residing in application steady-states (with a smidgen of documentation,
i.e. wiski:weblog-entry is a nodeset corresponding to an entry that
isn't a comment -- wiski:weblog-comment does that -- to explain the API
in terms of domain-specific vocabulary exposed in steady-states).
IOW, if your spambot groks RDF and the media types I'm using to
describe the API, it can spam me with text sent as any media type. No
service document needed. But, I can also write a killbot which knows
how to crawl my site, detect and remove spam using the same API.
-Eric
Hi Eric, > The only contract in REST, is that REST constrains me to use standard > methods, media types and link relations to describe this API to user > agents. That actually sounds like a pretty good definition of a contract to me. If you gave me that information, I could bind to your service without a priori knowledge of its implementation. And no hint of W*DL required, which is nice. Jim
Nick Gall wrote: > > Perhaps we should rename REST API to REST GPI (General Purpose > Interface). After all, it is (the principle of) generality that leads > to serendipitous (re)use. This would serve as a constant reminder to > interface/media type designers to focus on generality, generality, > generality. > Or, we could say a REST API describes the parameters and limits of a system's generic connector interface. My API isn't the GCI itself, just something that describes it for a particular implementation -- not meant to be general purpose. -Eric
Hello Eric.
I see the discussion is getting interesting.
All your points make sense and I do agree with most of them. Still, I think due to my quick writing, I was not clear and you understood some concepts a little differently from what I was trying to say. Also, you take some other concepts in a light that I think may lead to some confusion. So, quickly, let me clarify.
1. I'm not comparing REST to a computer, nor to hardware. I was giving an example of a system, any system. Sorry about that.
2. To me, any interface is at the limits of a system. What is outside the limits is the client of the system. And here is a confusion many engineering students have when we look at the client-server style. They assume the server is a system and the client is the one that uses it, but in that style both elements are part of the system!
Same in REST. When I mention a client of an API, I mean the client that will use the system (and is thus not part of it) through an interface. REST uses the client-server style, in which the client is part of the system, not outside of it. It may need an interface to communicate with the server, but this interface is not an external one.
This is a little hard to explain. We have a REST system, which includes the clients in the terms of the client-server style. The BIG REST system is the Web, as a whole. Clients are part of that system, and the particular thing about the API those in-system clients use is that it is a network-based API (as opposed to a library-based one).
3. So... All you say is totally correct for the clients (user agents) in the REST system (your REST system, not the Web). Now, on the Web, you also have clients of your REST system, which is also on the Web (we assume you can have a private REST system in your own cloud; that is not a sin). Your system may not be a REST system (as you mention of WP), and you made that REST wrapper (an API) so it can be used on the Web (the big system), so other people on the Web can use your system. You are building an API that starts as an adapter and (if you change WP to become RESTful) ends up as a facade.
You are building, then, an API for your WP system. Two things. That was my point: what I actually see is people having a non-REST system who want to build a REST API. An adapter.
So, I guess we are on the same page, only that you use the terms API and System as similar/the same thing (blurring the limit between the API and the encompassed system, as if they were parts of a whole, which is not bad), while I actually want to state the difference. When a client comes to me saying: "We want to move on and become REST", I ask: Do you want your old system to be re-architected using REST, or do you want to build an API around it?
When a client comes saying they want an API, I surely know there is a system (certainly not REST) on the back that wants to be exposed to the Web.
William Martinez Pomares
--- In rest-discuss@yahoogroups.com, "Eric J. Bowman" <eric@...> wrote:
>
> To clarify what I mean by an "incalculable number" of REST applications
> for any given REST system, an analogy is that the 26 letters of the
> alphabet lead to an incalculable number of words. Some words are
> incredibly common -- conjunctions come to mind. Same with REST
> applications; there will be common usage patterns, of varying length,
> and these may be optimized.
>
> "William Martinez Pomares" wrote:
> >
> > I do differentiate the concepts. A system is that collection of
> > components organized and with particular interactions. Those may
> > support different applications, that are not more than a set of task
> > to produce a goal. A system is my computer, and applications uses
> > some of the components to achieve a goal.
> >
>
> I'm only interested in using the term "system" in the REST sense so
> clearly evident in the thesis. I may, in other contexts, use the term
> "system" in the sysadmin sense, to refer to a physical system. I don't
> like to talk about system in the sysadmin sense on rest-discuss, even
> as an analogy, to avoid confusion and because hardware setup isn't
> relevant to REST.
>
> We don't call REST systems "applications" because that takes a whole
> slew of components and connectors for granted, in order for it to
> work. So I don't like the analogy -- a browser application uses other
> components resident on the same physical system, but others are located
> across the network. REST applications are distributed.
>
> >
> > So, you may create a system that supports several apps, even some you
> > didn't think of!
> >
>
> Exactly what's meant by "serendipitous re-use". ;-) This is exactly
> what I mean when I say a REST system has an "incalculable number" of
> applications. The developer can't possibly imagine the possible use
> cases, more on that in my next response to Jan...
>
> >
> > Now, System is not an API. I actually argue about the term REST API
> > since it comes, I think, from the idea that REST is a Service or RPC
> > replacement.
> >
>
> You're right, the system encompasses many components, but these things
> are out of the REST developer's hands. Everything that is in the REST
> developer's hands, is the API for making the overall system do work.
>
> I think the term REST API was derived from the fundamental nature of
> REST itself. This is the basis for my interchangeable usage of "REST
> system" and "REST API"...
>
> "The resource implementation details are hidden behind the interface."
>
> While the connection to and nature of the backend are certainly parts
> of your overall system architecture, they've nothing to do with REST
> architecture. REST is all about the communication between connectors
> over the wire -- REST applications are distributed.
>
> So we're only concerned with the self-documenting interfaces that
> describe a REST system to the world, and the response codes a system
> generates to various methods invoked on various resources. You know --
> the API... ;-).
>
> >
> > An API is a layer between your system and an external client. Of
> > course, that client is not part of your system.
> >
>
> User agent is a better term than client. The user agent is absolutely a
> part of the system. In fact, REST applications are executed within
> user agents -- the application state isn't always held entirely within
> the representation of a resource, it usually must be rendered to obtain
> an application steady-state. The combination of representation plus
> user agent *is* the REST application. If HTML + browser, it's a browser
> application (with, of course, a client connector).
>
> "[T]he interaction of a real system usually involves an extensive number
> of components, resulting in an overall view that is obscured by the
> details."
>
> An API is the layer between your system and external components, like
> caches. The user agent may not ever contact the origin server directly,
> and a cache may be able to serve all requests without contacting the
> origin server either. That cache becomes part of your system -- one of
> those components that we take for granted -- it's executing a REST
> application against the origin server, to determine resource state
> using conditional requests.
>
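[Editor's note: the cache behavior described above -- an intermediary determining resource state on the origin server's behalf via conditional requests -- can be sketched as follows. This is a minimal illustration of ETag revalidation, not code from any real cache; the function name is invented.]

```python
# Sketch: how a cache "executes a REST application against the origin
# server" using a conditional request. The cache holds a validator
# (ETag) for its stored representation; the origin server compares it
# against the If-None-Match header and answers 304 (stored copy is
# still good) or 200 (send a fresh representation).

def check_conditional(stored_etag, if_none_match):
    """Decide the status code for a conditional GET.

    stored_etag   -- the ETag the origin server currently associates
                     with the resource's representation
    if_none_match -- the validator the cache presented, or None for an
                     unconditional request
    """
    if if_none_match is not None and if_none_match == stored_etag:
        return 304  # Not Modified: cache may serve its stored copy
    return 200      # full response: representation has changed


# A revalidating cache hit costs only headers, not the entity body:
print(check_conditional('"v1"', '"v1"'))  # 304
print(check_conditional('"v1"', '"v2"'))  # 200
```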
> So a REST API is truly a programming interface for applications,
> whether they're executing on the requesting user-agent or on some
> intermediary. How would a cache know how to service requests on behalf
> of my origin server, if it can't be programmed to understand my service
> parameters? I program intermediaries the same way I program user
> agents, with a REST API.
>
> >
> > API may be a facade, or an adapter. If a facade, if the system is
> > REST, the API is a REST constrained simplification of the system. If
> > an adapter, the API is something that will convert, say, RPC
> > interactions into REST interactions.
> >
>
> I don't see the distinction, and I don't see how such a distinction can
> be made. Give me a bunch of RPC endpoints, and I can code you a REST
> system to drive them. Since the implementation details are hidden
> behind the interface, the API is something that will transfer
> representations of application state to user agents, and instruct them
> how to transfer representations of desired application state to the
> origin server.
>
> REST is a layered-system architecture. I apply REST constraints to the
> communication between the connectors of my back-end system, but this is
> not part of my API since it's an implementation detail hidden behind
> the interface to the world. That interface to the world is my API.
>
> >
> > The most typical is you have non-REST system (SOA maybe, or a plain
> > old OO) and you want to be REST. You create an API, Adapter, that
> > will allow REST like clients to use your OO system as a REST one. The
> > API will accept HTTP operations in one face, and in the other one
> > will invoke object methods.
> >
>
> Bear with me...
>
> In my case, the non-REST system is WordPress. I create a REST layer to
> encapsulate this legacy system, just like the thesis says. WP only
> generates Atom Entry, Atom Feed and Atom Category documents. WP creates
> the URI allocation scheme you see in my demo via a module with a couple
> of support functions for my template, which generates Atom instead of
> HTML. But, WP is incapable of responding to that URI allocation scheme
> directly without extensive modification.
>
> Which is OK by me, since I don't want to expose WP to the world; it
> runs on its own IP and rejects requests from any IP that isn't the
> encapsulation layer's or mine (initially, I'll use the WP interface
> until development is completed on the Xforms interface, then I'll cut
> out the CP, and have the main page return a configuration form to the
> admin user when an OPTIONS request is made with proper credentials).
>
> The encapsulation layer converts all requests to /index.php?p={x}, and
> acts as a reverse-proxy/cache for the response. A request to post a
> comment to a thread is handled by the encapsulation layer; once handled
> (validated and such), the request is passed on to WP using Atom
> Protocol. The encapsulation layer creates separate Atom Feeds for each
> thread, whereas WP's Atom Protocol capability considers the entire
> weblog as a single collection.
>
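[Editor's note: the URI rewriting the encapsulation layer performs -- mapping a public hierarchical URI scheme onto WordPress's /index.php?p={x} form -- might look like the sketch below. The route table, paths and post IDs are invented for illustration; Eric's actual module is not shown in the thread.]

```python
# Hypothetical gateway rewrite rule: public hierarchical URIs are the
# API; the backend's query-string scheme is an implementation detail
# hidden behind the interface.

ROUTES = {
    "/weblog/2009/rest-and-wordpress": 42,  # fabricated examples
    "/weblog/2010/atom-store-plans": 57,
}

def rewrite(public_path):
    """Translate a public URI path into the backend request, or None
    if the gateway should answer 404 itself (WP is never contacted)."""
    post_id = ROUTES.get(public_path)
    if post_id is None:
        return None
    return "/index.php?p=%d" % post_id
```

Swapping WordPress for an Atom Store then means replacing only this mapping module; the public URI scheme, and hence the API, is unchanged.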
> The future of my project consists of replacing WP with an Atom Store.
> The encapsulation layer (REST wrapper) has a modular design, to support
> interfacing with multiple source generators. So a new module is needed
> which, instead of abstracting out /index.php?p={x} into a hierarchical
> URI allocation scheme, abstracts out the URI allocation scheme of the
> Atom Store (which is the same as that used in my atom:id's, btw) into my
> site's URI allocation scheme.
>
> So, I can swap out a MySQL-driven WP content generation layer for a
> simple Atom Store XMLDB-driven content generation layer that's not in
> any way similar except for its abilities to generate Atom content that
> exactly matches what WP generates, and accept posts via Atom Protocol.
> The REST wrapper completely isolates this madness from the outside
> world. Commenters on my weblog won't notice a thing -- because I
> haven't changed the API in the slightest. (My back-end API changes a
> bit, but it's using Atom Protocol either way.)
>
> By sharing the self-documenting interfaces to my system with the world,
> I am providing a stable API which is unaffected by the nature of my
> back-end implementation. The API is the interface behind which the
> implementation details are hidden. REST API "Implementations are
> decoupled from the services they provide, which encourages independent
> evolvability."
>
> Do you see how I just described a REST API which evolves from your
> "adapter" into your "facade" over time? WP isn't RESTful, but the Atom
> Store certainly will be. This distinction has no impact whatsoever on
> any application running against my system, because it's hidden behind
> the generic connector interface, which is the point of a REST API.
>
> >
> > See how I see it?
> >
>
> Nope, sorry. You're trying to make a distinction which assumes the API
> is coupled to the back-end implementation, when such coupling doesn't
> exist in a REST API.
>
> -Eric
>
I think we both fundamentally agree on REST, however I'm going to argue every point you just made, anyway. REST is technology, but discussing REST crosses into the philosophical, so I believe there's a happy medium between how you are saying things and how I am saying things. We just have to find it.

"wahbedahbe" wrote:
> I would come at this from a different angle and note that client
> features tend to drive the creation and evolution of markup languages.

What client? Are you referring to a user agent or its client connector? Sometimes, end-user features (accessibility) drive development of markup languages, which adopt a compatible accessibility mechanism and incorporate the new markup as an evolution, which user agents then implement. Evolution is circular and not coupled to the needs of any specific component or connector or hypertext language.

> Client requirements drive markup languages which drive client code.
> The existence of clients that support a markup language drives the
> development of services that represent their resources in that
> language in order to reach those clients.

Or, user requirements drive markup languages which drive user agents. The existence of user agents that support a markup language drives the adoption of the vocabulary the markup encompasses, by other markup languages and user agents, increasing the likelihood that services will be developed for user agents which consume the vocabulary, but some services were created as specific cases of the vocabulary to drive its development... IMNSHO, evolution is entirely circular, and this is a feature of the Web in general and REST specifically. Markup languages drive (or rather, guide) user agents, not user components. Only user components drive user agents, by choosing state transitions which manipulate either application state or representations of resource state, or both.
> Serendipitous re-use tends to come from the declarative nature of
> markup languages that, via the principle of least power, enables
> secondary clients (e.g. spiders) to also be driven by the markup
> language designed around the primary client.

Serendipitous re-use comes from applying the constraints of the uniform interface; in particular self-descriptive stateless messaging, which increases the likelihood of re-use even without a self-documenting API. It isn't just a function of media type and domain-specific vocabulary. Googlebot is driven by Google's indexing/search service instructing it to follow its nose, guided by hypertext, from an entry point (bookmark).

> The primary client can also drive the behavior of the secondary
> clients.

Gaaah! I don't understand these terms. Do you mean that a REST system's native application (self-documenting hypertext API) can drive the behavior of a third-party application? If so, I agree, as I explained in my response to Jan about how a third-party application can use RDF to learn how to accomplish a goal by following the native hypertext, and elaborate on below.

> For example, Google's spider isn't just about indexing the data in
> web pages. It must prioritize the data presented to users when they
> view a page in the browser. If "invisible" data (e.g. meta-data or
> even data hidden via CSS) was given the same priority as visible
> data, then the service would be less valuable to its users. Google
> isn't indexing the browser input -- it is indexing the browser
> output (to your screen). The declarative nature of HTML makes it
> easy to derive output from input.

Really? You may know more about it than I do, but I've never noticed Google index any image that I'm linking to from CSS, or miss indexing an image linked to from HTML. Screen readers are supposed to speak anything with a CSS property of visibility:hidden, but not speak anything with a CSS property of display:none.
I also argue that Googlebot has nothing to do with indexing -- that's done by the search/indexing service acting as the user component driving the Googlebot user agent. Googlebot, though, doesn't render and parse steady-states -- my demo app would need a server-side transformation representation for each resource in order to support Googlebot. So I'd say Google is indexing representations, not steady-states. What prioritizes results is the markup: //head/title is the most relevant, followed by //body/h1, and the CSS is never checked to see if that h1 is display:none or not.

> The single biggest flaw in most REST system design today is that
> markup languages are being designed around the service's view of the
> data. The markup should be a declarative program for the client --
> saying "what" the client does with the data rather than "how" it does
> it. Folks are using markup to denote "what" the data means, but to
> the service rather than the client. The clients then have to be
> designed around the service semantics -- this binds them to the
> service just as much as RPC does.

Again, what do you mean by client? User agent, or perhaps client connector on a cache? Specific terminology exists for us to use, and I think we'd all confuse each other a lot less by sticking with it more. I do agree that most REST API claimants are flawed by the creation of a markup language that's then declared an (unregistered) media type, despite lacking a media type definition that encompasses methods.

The user component tells the user agent *what* to do, not the markup. If the user component wants to post a comment to my weblog, it relies on the markup to tell the user agent *how* to carry out its instructions. The "what" is self-implied; the "how" is the problem markup addresses. My weblog's origin server provides an XHTML representation to a user agent, and the user agent renders it into a steady-state which provides the user with a choice of possible state transitions.
The user knows *what* it wants to do, but not *how* to do it; that's where the user agent comes in. The user agent, when informed of the user's choice to post a comment to my weblog as the next state transition, discovers *how* to carry out this transition by following its nose to the rel='replies' link, which leads it to discover what URI to target with what method and what media type to send. IOW, *how* to post a comment is contained in the hypertext, which is why the hypertext is a self-documenting API. Most details required by a third-party developer to figure out my weblog API are explained by HTTP, Atom, Atom Protocol and the IANA link-relation registry documentation.

The markup can't tell the user agent *what* to do; that would be nothing less than the origin server leading the user agent around by the nose, instead of having the user agent follow its nose in response to user action. The markup instructs the user agent *how* to execute the state transitions it contains, i.e. what URI to target with what method and what, if any, query string and/or media type to send.

> I've been saying for years
> ( http://tech.groups.yahoo.com/group/rest-discuss/message/8411 ) that
> with REST, the API is in the client. The markup is a declarative
> program written to that API -- and being declarative means it can be
> adapted to other APIs. The scripts are optional imperative programs
> to fill in the feature gaps of the markup language. Content
> negotiation allows a single service to use other markup languages to
> target other sets of APIs. If you design your markup languages around
> this line of reasoning, you will get a lot more mileage out of them.

I'd say the API *executes* in the client, not *is* in the client. The API is *in* the representations and response codes sent from the origin server. IOW, the API is *in* the origin server(s).
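[Editor's note: the follow-your-nose discovery of rel='replies' can be sketched as below. The Atom entry and URIs are fabricated for illustration; only the standard Atom link markup is assumed.]

```python
# A user agent discovers *how* to post a comment from the hypertext it
# already holds: it finds the link with rel="replies" in the Atom entry
# and reads off the target URI and media type. No out-of-band service
# description is consulted.
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"

ENTRY = """<entry xmlns="http://www.w3.org/2005/Atom">
  <title>A weblog entry</title>
  <link rel="alternate" href="http://example.org/weblog/1"/>
  <link rel="replies" href="http://example.org/weblog/1/comments"
        type="application/atom+xml"/>
</entry>"""

def discover_replies(atom_entry):
    """Return (URI, media type) for the comment-posting transition
    advertised by the representation itself, or None if absent."""
    root = ET.fromstring(atom_entry)
    for link in root.findall(ATOM + "link"):
        if link.get("rel") == "replies":
            return link.get("href"), link.get("type")
    return None
```

The user agent would then POST its comment, in the advertised media type, to the discovered URI.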
What's on the client are either resource states or application steady-states, depending on whether by client you mean the client connector on an intermediary, or the user agent... Or you can say the REST API is *in* the steady-states of the native hypertext application.

A Web browser and an Atom Reader are different user agents which use my API differently. How a Web browser is to interact with my system is laid out explicitly, and the domain-specific vocabulary used may be the subject of a contract. How an Atom Reader uses my API is implicit in the media type, link relations and response codes (if not by service document). But that's unsupported and not part of any such contract. A third-party developer extending an Atom Protocol client to work without a service document, do my PATCH thingie, and otherwise interact with my system and its services my way, can come up with a nifty application with its own steady-states that aren't anything like my own. It's unsupported, though, because it wouldn't be following the hypertext constraint (it's ad-hoc) and therefore couldn't _be_ covered by any contract, unless I make such an API explicit, which I won't do if it isn't REST. That third-party application would be using its own API to my system, not my API to my system.

Unless... the user agent compiles the site-wide GRDDL XSLT file, content-negotiates for the default representation and applies the transformation whenever it dereferences a resource, in order to obtain the RDF it can then use as the pattern for discovering and following state transitions carried out on Atom representations -- if this third-party application is reading my Xforms model, then it's also following my API to my system, not its own. Better to follow the RDF instead of the Xforms model, though, and be able to discover the proper interface wherever it appears in whatever media type and forms system I go with, to decouple and allow independent evolution.
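[Editor's note: the content negotiation mentioned above -- a user agent asking for the representation it prefers, the origin server falling back to its default -- can be reduced to a very small sketch. This ignores q-values and wildcards, and the function name is invented.]

```python
# Minimal server-side content-negotiation sketch: pick the first media
# type in the Accept header that the origin server can serve; otherwise
# fall back to the server's default (first entry in `available`).

def negotiate(accept_header, available):
    for part in accept_header.split(","):
        media = part.split(";")[0].strip()  # drop parameters like q=
        if media in available:
            return media
    return available[0]
```

A real implementation would honor q-values and `*/*` per HTTP's Accept header rules; the point here is only that the same resource can serve different user agents different representations without changing the API.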
Remember, I can expose any number of non-RESTful services with a RESTful API, just as I can expose any number of perfectly RESTful services with a non-RESTful API, and anything in between. Trust me, I'm completely capable, and quite experienced, at exposing my perfectly REST-capable services with a non-RESTful weblog API. ;-)

The trick is to create a REST API regardless of the underlying implementation, or even your flavor-of-the-eon choice of media type -- what if HTML 5 requires some parameter to text/html to "version" it, perish the thought being debated seriously? What if HTML 5 doesn't include RDF attributes? Then my RDF will have to map to different metadata than it does now, but it still will expose how to find the rel='edit' and rel='replies' interfaces -- the third-party user agent will then need to know HTML 5 to understand how Atom is manipulated by my native hypertext application when it's updated. Or, the user agent can negotiate for a different media type, and reference the RDF associated with that media type, as my system will maintain backwards compatibility so as to allow user agents to evolve independently to support HTML 5, or not, as they see fit. I have no crystal ball, but I'm prepared to fly whichever way the wind eventually blows by using this RDF approach to defining my application for machine users (especially the ones I intend to code for site moderation and maintenance; I'm maintaining backwards compatibility primarily out of self-interest, so my own applications may independently evolve).

Thanks for everyone's patience who's actually been reading through my waxing philosophical about REST the last few days. I'm putting my mind to the machine-user problem I hadn't given much thought to before, and really fleshing out my ideas through prosaic debate... not to mention grasping the notion that m2m is the principal principle of the Semantic Web and adopting it, finally, into my plans and talking-points for m2m.

-Eric
OK, this is now my biggest REST pet peeve. We'll never get anywhere in discussions if we keep insisting on using undefined terms like "REST client." You can say "client component" if you are talking about the client side in general. You can say "client connector" of course. But you can't say "REST client" and expect anyone to know what you're talking about, because that term ambiguously also applies to the client connector on a cache component, user agents and REST applications. You can, of course, say "user agent" when discussing that specific type of client component.

Sometimes, some people seem to mean "user agent" when they say "REST client" but other times those same people mean "REST application," as that term refers to the API described by your hypertext, which executes in the user agent -- the executing app is a "REST client" just like a user agent or client connector. Damn confusing, it is! I hereby ban the term from rest-discuss, for whatever good that'll do. :-)

-Eric
I'm not trying to be a jerk with this. What I'm saying is that it's too difficult to deduce the meaning of "REST client" from context from one person to the next. REST development involves adapting the terms of Roy's thesis to your problem area. It's much harder for me to know what people mean when their meaning must be taken from context, than when their meaning is clear due to their use of precise terms. Have a nice day, Eric
"William Martinez Pomares" wrote:
> 2. To me, any interface is at the limits of a system. What is outside
> the limits is the client of the system. And here is a confusion many
> engineering students have when we look at the client-server style.
> They assume the server is a system and the client is the one that
> uses it, but in that style both elements are part of the system!

I don't know in what sense you mean the term "client of the system". REST encompasses the constraints of the client-server style, so the client connectors of all the components "in-circuit" for the request/response (and their corresponding client and intermediary components) interaction with the origin server's cache or server connector are indeed part of the system, for whatever application is executing. Since a discussion of what components are "in-circuit" really serves absolutely no purpose (other than facilitating philosophical debates), it may be safely avoided (taken for granted) by shorthanding the REST system as the REST API or using these terms interchangeably. For all intents and purposes.

> Same in REST. When I mention a client of an API, I mean the client
> that will use the system (and is thus not part of it) through an
> interface. REST uses the client-server style, in which the client is
> part of the system, not outside of it. It may need an interface to
> communicate with the server, but this interface is not an external
> one.

I don't know what a "client of an API" is, either. The user agent actually serves the API to the user as a series of steady-states. The user component's client connector to the user agent may be a human's eyes, ears and fingers, and the user agent's server connector to the client component may be KVM/audio. On VOIP, the user component's client connector and the user agent's server connector are both acoustic couplers which implement a generic telephonic interface which sounds a tone for each key on the pad, plus natural language.
What natural-language description corresponds to what key on the pad I should press is conveyed to me by my user agent (telephone) as a choice of state transitions. That domain-specific vocabulary could be a self-documenting REST API, as it does the same thing -- it instructs the user agent *how* to use the generic telephonic connector interface to carry out the user's instructions via hypertext (CCXML + VoiceXML): Press '1' for English, '2' for Spanish on one domain may be Press '1' for Spanish, '2' for English on another domain -- the API carries domain-specific instructions on how to use the generic connector interface to transition to the next steady-state (no HTTP required, but I can say '1 for English' is equivalent to GET, entering an Rx# is equivalent to POST, and 'press # when finished' is a submit button).

Any VOIP telephone executing my application is, at that time, a part of my system while other telephones are not. It doesn't matter if that telephone is on a public or private exchange, landline or IP; the API (press '1' for English, '#' when finished) isn't affected at all -- it doesn't make sense to make any distinction between API and system, because the only parts of the system the developer cares about are the user agent executing the application and the server component.

> This is a little hard to explain. We have a REST system, which
> includes the clients in the terms of the client-server style. The BIG
> REST system is the web, as a whole. Clients are part of that system,
> and the particular thing about the API those in-system clients use is
> that it is a network-based API (as opposed to a library-based one).

The Web, as a whole, is not a REST system in any way (otherwise there'd be no such thing as a not-REST API). The thesis clearly describes the native architecture of the Web as the client-cache-stateless-server style. All REST constraints build from here, i.e.
REST constrains the Web architecture as a whole (while first requiring that the constraints of client-cache-stateless-server be adhered to -- plenty of APIs don't meet the constraints of the Web's native architectural style either), down to a set of best practices for systems desiring the benefits of REST, i.e. a sweet spot. You're correct that a REST API is distributed -- my application steady-states are derived from a variety of sources which aren't constrained to being from a single domain or server (although in reality, user agents frown on cross-site architectures, so I use multiple gateway components on a single domain, just not by choice, and pray for the day I can dismantle them as the relics they are).

> 3. So... All you say is totally correct for the clients (user agents)
> in the REST system (your REST system, not the Web). Now, on the web,
> you also have clients of your REST system, which is also on the web
> (we assume you can have a private REST system in your own cloud; that
> is not a sin). Your system may not be a REST system (as you mention
> of WP), and you made that REST wrapper (an API) so it can be used on
> the web (the big system), so other people on the web can use your
> system. You are building an API that starts as an adapter and (if
> you change WP to become RESTful) ends up as a facade.

The only user agents I consider as part of my system are those executing applications against it, RESTfully or not. I promise you I am developing a REST system which has a REST API, and these system vs. wrapper vs. facade distinctions just don't exist or matter. They're nothing but implementation details, hidden behind the generic connector interface. The API is public, and does not vary based on the topology of my backend system. The topology of my backend is *irrelevant* to REST, as it's part of the system but not the API. The distinction of where the user is located, inside or outside a firewall, is likewise irrelevant to REST.
What's relevant are their authentication headers, and perhaps IP address (or incoming phone #), but the nature of that IP address (public or private) has no bearing on anything. Most intranets allow authorized access over the Web, REST API or not. That certain requests are only allowed from certain IPs is an aspect of my system that has nothing to do with its API -- I publish no list of (un)allowable IP ranges to the world as part of the API; I leave it to my response codes to inform the user agent why a request failed (operation not allowed from your location, unauthorized user).

I use REST API and REST system interchangeably, because the only place it matters is philosophical discussions about the difference between the two, like we're having here. Any user agent executing an application against my API is in-circuit with my system. For all intents and purposes this makes no difference to anything, since REST developers are concerned with how their system responds to requests, i.e. how it implements the generic connector interface -- not how it implements resources behind the generic connector interface.

As I said before, we take so many system components (caches) for granted when discussing what the system entails for any given request/response that going into detail isn't really of much use... it amounts to obscuring the system's design. This debate is going into exactly that sort of detail to prove that API and system aren't technically interchangeable, which I agree with, but for any discussion but this one there's really no point making the distinction, because such distinction only obscures what's being discussed. For all intents and purposes.

> You are building, then, an API. For your WP system. Two things. That
> was my point: what I actually see is people having a non-REST system
> who want to build a REST API. An adapter.
> No, I'm not making a REST API for WordPress, bearing in mind that implementation detail has nothing to do with REST and I could just as easily be modifying WordPress to achieve my objective as encapsulating it with a REST layer. It wouldn't appear any different to the world I'm exposing the API to. Implementation details are hidden behind the generic connector interface, REST APIs are only concerned with instructing user agents how to manipulate representations of resource or application state over the generic connector interface. I'm not creating two things, I'm creating one gateway layer. It has client and server connectors, but only the server connector is part of the API. What I'm making is a REST API for my overall system, made up of separate wiki, weblog, forum, blogroll and tagging APIs plus the /date service. The API I'm developing is a frontend to whatever totally obscured implementation details make up the backend. Could be WP, could be Drupal, could be my native Atom Store, could be a combination of any of the above, and can change at my whim without affecting the API I've exposed to the world. Only the client connector of the gateway layer needs changing, in the form of adding a module for each source generator, but the purpose of the layer is to provide the REST API and it does that without skipping a beat, because implementation details are hidden behind the generic connector interface. > > So, I guess we are on the same page, only that you use the term API > and System as similar/the same thing (blurring the limit between the > API and the encompassed system, as if they were parts of a whole, > which is not bad), while I actually want to state the difference. > When a client comes to me saying: "We want to move on and become > REST", I ask : Do you want your old system to be re-architected using > REST, or do you want to build an API around it? 
When a client comes > saying they want an API, I surely know there is a system (certainly > not REST) on the back, that wants to be exposed to the web. > Not quite on the same page, but some of what we're debating we do agree on, so when you see a response that looks contradictory to what you're saying, it's more likely to mean "your terminology is imprecise" rather than "you're wrong". When a client comes to you for REST development, you should start by modeling resources. Once that's done and you've gotten a feel for the existing system, it's up to you as the architect to present your client with implementation options on the backend, i.e. encapsulate or replace the existing system? That isn't the sort of up-front question you can expect someone who's not a REST expert to be capable of answering, in fact they're probably coming to you for *your* answer to that question, or at least your input. But it isn't a starting point, and the distinction you're trying to make has nothing to do with the REST API you're developing for your client, it's an implementation detail hidden behind the generic connector interface having no bearing on REST. Make your backend generate your REST resources' representations however you see fit, using REST behind the firewall or not. The generated representations are part of the REST API, how they are generated is not, so how they are generated is out-of-scope to any discussion of the REST API, meaning we can go ahead and call it a REST system without causing any harm because we just don't care about implementation details. Sorry for all the repetition, I just think these are important points. -Eric
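Eric's gateway argument above (one fixed public server connector, interchangeable backends hidden behind the generic connector interface, failures reported via status codes rather than published IP lists) can be sketched roughly as follows. This is a minimal illustration under invented names, not anyone's actual code; every class, path, and IP address here is hypothetical:

```python
# A rough sketch of the gateway idea described above: one fixed server
# connector (the public API) in front of interchangeable backends. All
# class names, paths, and IP addresses are invented for illustration.

class Backend:
    """The generic interface the gateway expects of any backend."""
    def representation(self, path):
        raise NotImplementedError

class WordPressBackend(Backend):
    def representation(self, path):
        return "<entry>rendered by WordPress</entry>"

class AtomStoreBackend(Backend):
    def representation(self, path):
        return "<entry>rendered by native Atom store</entry>"

class Gateway:
    """Public API layer; the backend behind it is an implementation
    detail and can change without the API changing."""

    def __init__(self, backend, allowed_ips=None):
        self.backend = backend
        self.allowed_ips = allowed_ips  # never published as part of the API

    def get(self, path, client_ip, authorized=True):
        # Failures are reported to the user agent via status codes,
        # not by publishing IP ranges or backend details.
        if self.allowed_ips is not None and client_ip not in self.allowed_ips:
            return 403, "operation not allowed from your location"
        if not authorized:
            return 401, "unauthorized user"
        return 200, self.backend.representation(path)

# Swapping backends changes nothing a user agent can observe:
for backend in (WordPressBackend(), AtomStoreBackend()):
    status, body = Gateway(backend).get("/weblog/1", client_ip="203.0.113.7")
    assert status == 200
```

The point of the sketch is the shape, not the code: only `Gateway.get` is visible to the world, so the WordPress-vs-Atom-Store question stays behind the generic connector interface.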
Hi Eric.

1. I understand all that you say. I feel I'm the one who didn't make myself understandable, but that is ok.

2. For instance, I tried telling you I see two different clients, different kinds of systems, and you came back telling me the only one you see is the client in terms of REST, and only REST systems with non-REST background implementations (which are the hidden part on the servers). That means we are looking at different places but talking about the same thing.

3. I try to tell you that I feel differentiating them is important, to avoid our customers thinking they got a REST system when they actually got some other kind of system with a REST wrapper, and you think it is not important because the clients using it will see REST anyway. I see we can go on saying the same things and will not agree. That happens often in this IT world, but I'm confident we are facing opposite directions.

4. Lastly, about how to approach the incoming client: of course the example was an extreme simplification of the process, to exemplify the API vs. system debate. Still, for sure, I won't start modeling the resources right away; I will start by studying the stakeholders' business contexts and domains, and evolve from that, since it may be that REST is not what the client actually needs or the constraints allow. Architecting in REST is far more than modeling resources; that actually goes into tactical design.

But no worries, I understand your idea, it is clear and good, and all this discussion is about using one term or the other, or both, so it may not have a great deal of impact between you and me.

Cheers
William Martinez Pomares

--- In rest-discuss@yahoogroups.com, "Eric J. Bowman" <eric@...> wrote: > > "William Martinez Pomares" wrote: > > > > 2. To me, any interface is in the limits of a system. What is outside > > the limits is the client of the system. And here is a confusion many > > engineering students have when we look at client-server style. 
They > > assume the server is a system and the client the one that uses it, > > but in that style both elements are part of the system! > > > > I don't know in what sense you mean the term "client of the system". > REST encompasses the constraints of the client-server style, so the > client connectors of all the components "in-circuit" for the request/response (and their corresponding client and intermediary components) > interaction with the origin server's cache or server connector are > indeed part of the system, for whatever application is executing. > > Since a discussion of what components are "in-circuit" really serves > absolutely no purpose (other than facilitating philosophical debates), > it may be safely avoided (taken for granted) by shorthanding the REST > system as the REST API or using these terms interchangeably. For all > intents and purposes. > > > > > Same in REST. When I mention client of an API, I mean the client that > > will use the system (and thus, not part of it) through an interface. > > REST uses the client-server style, in which the client is part of the > > system, not outside of it. It may need an interface to communicate > > with the server, but this interface is not an external one. > > > > I don't know what a "client of an API" is, either. The user agent > actually serves the API to the user as a series of steady-states. The > user component's client connector to the user agent may be a human's > eyes, ears and fingers, and the user agent's server connector to the > client component may be KVM/audio. On VOIP, the user component's client > connector and the user agent's server connector are both acoustic > couplers which implement a generic telephonic interface which sounds a > tone for each key on the pad, plus natural language. > > What natural-language description corresponds to what key on the pad I > should press is conveyed to me by my user agent (telephone) as a choice > of state transitions. 
That domain-specific vocabulary could be a > self-documenting REST API, as it does the same thing -- instructs the > user agent *how* to use the generic telephonic connector interface to > carry out the user's instructions via hypertext (CCXML + VoiceXML): > Press '1' for English, '2' for Spanish on one domain may be Press '1' > for Spanish, '2' for English on another domain -- the API carries > domain-specific instructions on how to use the generic connector > interface to transition to the next steady-state (no HTTP required, but > I can say '1 for English' is equivalent to GET, and entering an Rx# is > equivalent to POST, 'press # when finished' is a submit button). > > Any VOIP telephone executing my application is, at that time, a part of > my system while other telephones are not. It doesn't matter if that > telephone is on a public or private exchange, landline or IP, the API > (press '1' for English, '#' when finished) isn't affected at all -- it > doesn't make sense to make any distinction between API and system, > because the only parts of the system the developer cares about are the > user agent executing the application, and the server component. > > > > > This is a little hard to explain. We have a REST system, which > > includes the clients in the terms of the client-server style. The BIG > > REST system is the web, as a whole. Clients are part of that system, > > and the particular thing about the API those In-system client use, is > > that API is a network based API (as opposed as a library based one). > > > > The Web, as a whole, is not a REST system in any way (otherwise there'd > be no such thing as a not-REST API). The thesis clearly describes the > native architecture of the Web as the client-cache-stateless-server > style. All REST constraints build from here, i.e. 
REST constrains the > Web architecture as a whole (while first requiring that the constraints > of client-cache-stateless-server be adhered to -- plenty of APIs don't > meet the constraints of the Web's native architectural style either), > down to a set of best practices for systems desiring the benefits of > REST, i.e. a sweet spot. > > [remainder of quoted message snipped; Eric's post appears in full earlier in the thread]
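The telephone-menu analogy in the quoted message (press '1' for English as a choice of state transitions served with each steady-state) can be reduced to a toy model: the user agent knows only a generic interface, and every domain-specific instruction travels in the response, hypertext-style. The menus, prompts, and key mappings below are invented purely for illustration:

```python
# Toy model of the telephone-menu analogy: the "API" is the hypertext
# (the menu choices served with each steady-state), while the user
# agent needs only the generic connector interface (here, `press`) to
# drive any application. All menus and labels are invented.

MENUS = {
    "start":   {"prompt": "Press 1 for English, 2 for Spanish",
                "transitions": {"1": "english", "2": "spanish"}},
    "english": {"prompt": "Press # when finished",
                "transitions": {"#": "done"}},
    "spanish": {"prompt": "Presione # cuando termine",
                "transitions": {"#": "done"}},
    "done":    {"prompt": "Goodbye", "transitions": {}},
}

def press(state, key):
    """Generic connector interface: identical on every 'domain'; only
    the hypertext (the menu served back) is domain-specific. Unknown
    keys leave the application in the same steady-state."""
    return MENUS[state]["transitions"].get(key, state)

# Another domain may map the same keys differently without changing
# the user agent at all -- the instructions travel with each response:
state = press("start", "1")   # -> "english"
state = press(state, "#")     # -> "done"
```

A second "domain" swapping the key-to-language mapping would only change the `MENUS` data, never the `press` interface, which is what the quoted post means by domain-specific instructions riding on a generic connector.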
I agree with that. Problem is who is going to define the meaning first, and then how to make everyone who tends to use a different meaning adopt the one specified as "valid". Cheers. William Martinez. --- In rest-discuss@yahoogroups.com, "Eric J. Bowman" <eric@...> wrote: > > I'm not trying to be a jerk with this. What I'm saying is that it's > too difficult to deduce the meaning of "REST client" from context from > one person to the next. REST development involves adapting the terms > of Roy's thesis to your problem area. It's much harder for me to know > what people mean when their meaning must be taken from context, than > when their meaning is clear due to their use of precise terms. > > Have a nice day, > Eric >
If I look at a REST service like I do an SQL server then Eric's point makes more sense. Who defines the meaning of an SQL database? It is open to meaning for many clients. Furthermore, a REST client seems equivalent to the functionality of a SQL client. Why don't we demand meaning from SQL clients? Mark W. William Martinez Pomares <wmartinez@acoscomp.com> wrote: >I agree with that. >Problem is who is going to define the meaning, first, and then how to make all others that tend to use different meanings to use the one specified as "valid". > >Cheers. > >William Martinez. >--- In rest-discuss@yahoogroups.com, "Eric J. Bowman" <eric@...> wrote: >> >> I'm not trying to be a jerk with this. What I'm saying is that it's >> too difficult to deduce the meaning of "REST client" from context from >> one person to the next. REST development involves adapting the terms >> of Roy's thesis to your problem area. It's much harder for me to know >> what people mean when their meaning must be taken from context, than >> when their meaning is clear due to their use of precise terms. >> >> Have a nice day, >> Eric >> > > > > >------------------------------------ > >Yahoo! Groups Links > > >
Hello Ivan. I agree with you, but:

1. In this group you have many people who actually do that: they use terms as they understand them, while others understand different things. There are discussions that focus on exactly that, defining the terms. So we would need to get the group to agree, and that may be problematic, since disagreement actually leads to healthy discussions. Some people have even left the group because the majority did not agree with them, which turns out badly.

2. The other problem is we all have different levels of understanding of REST. There are many who work with this every day, and thus there are some things that are a de facto reality, although academically they may not be totally congruent with the dissertation. Moreover, as someone proposed, the dissertation may need an update, since lots of things have happened since then.

So, what can we do? The very existence of this group is a step forward. Actively promoting the group so it is available to more people is necessary. The "This week in REST" wiki is another great idea. Opening a site with the "official" definitions and tutorials at all levels (not just development) could be another one. The problem is we may have trouble getting all people, or a majority, to agree on some definitions. Anyway, the solution may come out of this group, or even from another one that may start somewhere else. I'm not sure whether the future of the REST concept will be closer to the original idea or a derivation from actual practice, but at some point there must be an agreement, or the concept will be diluted.

William Martinez.

--- In rest-discuss@yahoogroups.com, Ivan Žužak <izuzak@...> wrote: > > Hey all, > > A few days ago I wrote a blog post on why I think understanding REST > is hard and what we can and *should* do about it - > http://wp.me/poYaf-34. Since a lot of what I wrote was inspired by > following this group and since this group is the most relevant (only?) > place for discussing REST - I'd like to know what you think. 
Short > version (since the post is really long): > > Although there's a lot of great blog posts, papers and mailing list > discussions, the current material on REST is a mess which makes REST > hard to understand and confusing to discuss: > * there is no agreed upon and widely used terminology, but a lot of > unexplained and overlapping terms, > * discussions are fragmented all over the web and often unnecessarily > repeat previous discussions, > * there are no (formal or semi-formal) models of (important) concepts. > > Therefore, I think people involved in and enthusiastic about REST > (mostly people on this group) should: > 1. Agree that there is a problem worth fixing -- do we think that we > can create a better, clearer and more systematized way of > understanding and discussing about REST? > 2. Express interest in fixing it -- is this something people want to > contribute their time to? > 3. Agree on how to fix it -- what should be our output (a RESTopedia, a > document, video tutorials) and how would we all contribute to and > moderate the process? > 4. Do it -- spend time discussing and developing the output. > 5. Eat our dogfood -- use whatever we produce. If we don't use the > terminology and models we agree upon, then the mess has only gotten > bigger. > > Cheers, > Ivan (hoping that all of this doesn't sound as babbling of an > overeagerly naive megalomaniac) >
"William Martinez Pomares" wrote: > > I agree with that. > Problem is who is going to define the meaning, first, and then how to > make all others that tend to use different meanings to use the one > specified as "valid". > I didn't mean to suggest that the term "REST client" needs a new definition. If it means anything, it means "client connector". What I'm suggesting is that people read the thesis and use the precise terms that *are* defined, instead of making up our own terminology to discuss that thesis. Such a terminology becomes limited to this group, while only introducing more confusion. Remember when "noun vs. verb" came along? Did that ultimately help or hinder? My opinion is that it hindered, and we should remember that any time we consider defining new terminology that isn't covered in the thesis we're all trying to learn and explain. @Bob: I disagree that I'm being pedantic. When discussing client-server the meaning of client is clear; less so with REST where it could be a component or a connector. Lumping component and connector into one is, to me, the primary issue hindering m2m discussion. -Eric
I respectfully disagree. I don't think any confusion exists outside this list. When people say REST client, they mean a client application or a component that is talking to a server. User agents, connectors etc are not the typical terms that most developers use in their daily life. Subbu On Apr 10, 2010, at 6:43 PM, Eric J. Bowman wrote: > OK, this is now my biggest REST pet peeve. We'll never get anywhere in > discussions if we keep insisting on using undefined terms like "REST > client." > > You can say "client component" if you are talking about the client side > in general. You can say "client connector" of course. But you can't > say "REST client" and expect anyone to know what you're talking about, > because that term ambiguously also applies to the client connector on a > cache component, user agents and REST applications. > > You can, of course, say "user agent" when discussing that specific > type of a client component. Sometimes, some people seem to mean "user > agent" when they say "REST client" but other times those same people > mean "REST application," as that term refers to the API described by > your hypertext, which executes in the user agent -- the executing app > is a "REST client" just like a user agent or client connector. > > Damn confusing, it is! I hereby ban the term from rest-discuss, for > whatever good that'll do. :-) > > -Eric
Mark, Eric: Sorry, I was not talking about the meaning of REST Client, my bad. I was thinking of something else, of another thread maybe. My answer is about the idea of defining meanings for concepts and making all people follow them, in general, not only Client. Sorry again. Still, I foresee that many others who will join this group in the future, and possibly some of those already in the group, will use the word client. So, we are not going to escape the fate of asking: what do you mean by client? Furthermore, REST is not the only thing out there; you can expect the same thing to happen again when somebody says client meaning something completely different from user-agent. What does that mean? We will always need to read in context, there is no escape from that, and we will always have to expect that the word client may not mean user-agent. William. --- In rest-discuss@yahoogroups.com, Mark Wonsil <mark_wonsil@...> wrote: > > If I look at a REST service like I do an SQL server then Eric's point makes more sense. Who defines the meaning of an SQL database? It is open to meaning for many clients. Furthermore, a REST client seems equivalent to the functionality of a SQL client. Why don't we demand meaning from SQL clients? > > Mark W. > > William Martinez Pomares <wmartinez@...> wrote: > > >I agree with that. > >Problem is who is going to define the meaning, first, and then how to make all others that tend to use different meanings to use the one specified as "valid". > > > >Cheers. > > > >William Martinez. > >--- In rest-discuss@yahoogroups.com, "Eric J. Bowman" <eric@> wrote: > >> > >> I'm not trying to be a jerk with this. What I'm saying is that it's > >> too difficult to deduce the meaning of "REST client" from context from > >> one person to the next. REST development involves adapting the terms > >> of Roy's thesis to your problem area. 
It's much harder for me to know > >> what people mean when their meaning must be taken from context, than > >> when their meaning is clear due to their use of precise terms. > >> > >> Have a nice day, > >> Eric > >>
Volume 11 of This week in REST is up on the blog - http://wp.me/pMXr1-1q. Also, as I wrote on the blog, from this week there will be a change in the way links for the blog are collected. Since no one except me was contributing links to the REST wiki (http://rest.blueoxen.net/cgi-bin/wiki.pl?RESTWeekly) for the last couple of weeks, from now on I'll be collecting links myself and if you want a REST-related link included - just e-mail me, tweet or leave a comment on the latest blog post. Ivan
On Sat, Apr 10, 2010 at 6:28 AM, Eric J. Bowman <eric@...> wrote: > I don't think it's up to us, and I don't think you're taking the actual > problem into account -- REST being a buzzword. I agree with Eric. The other problem is simply that everyone "already knows about REST". It's HTTP POST, right? How hard can it be. And off they go. Sending crap over HTTP == REST. Thankfully, REST is easier to pronounce than HTTP. So all my ad hoc HTTP RPC interfaces now become REST interfaces. Simple. It just rolls off the tongue. So, the problem is that REST is already understood, WELL understood. It just happens to be pretty much completely wrong. Then you have to go into the whole thesis, the concepts behind it, the vocabulary, etc. etc. etc. "Why can't I just POST whatever I want?" arguments, etc. It's hardly worth fighting any more. It's exhausting. If folks want to know about REST, they can look it up. If you think that REST is misunderstood, then rewrite the Wikipedia article until it's clear, so that rather than getting into exhausting discussions, advocates can just point to that and the thesis and tell folks to come back later if they're interested. They won't; HTTP RPC solves 99% of the use cases that they're REALLY trying to solve, so they won't make the leap to rearchitecting their system. But at least it makes it easier for folks using it to make the material more approachable.
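Will's contrast between "ad hoc HTTP RPC" and a resource-oriented design can be made concrete with two hypothetical request shapes. The paths and fields below are invented; neither describes a real API:

```python
# Hypothetical request shapes contrasting the "HTTP RPC over POST"
# habit described above with a resource-oriented design. All paths
# and fields are invented for illustration.

# RPC style: one endpoint, the operation tunneled in the body, POST
# for everything -- opaque to generic intermediaries like caches.
rpc_request = {
    "method": "POST",
    "path": "/api",
    "body": {"action": "getUserName", "userId": 42},
}

# Resource style: the resource is named by the URI and the uniform
# interface (GET) carries the semantics, so caches can participate.
rest_request = {
    "method": "GET",
    "path": "/users/42",
}

def cacheable(request):
    # A generic cache component can decide this from the uniform
    # interface alone; it never has to understand "action" fields.
    return request["method"] == "GET"
```

The `cacheable` check is the crux: a shared cache can act on `GET /users/42` without knowing anything about users, but it can never safely act on a POST whose meaning hides in an `action` field.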
First, thanks to you, Ivan, for writing that blog post and for your work to chronicle REST activities on the Web. I think you make several very good points. It's true that, after ten years, there is still not much accessible material on this very important architectural style. One of the problems I see is that some very good scholarship on architectural styles is bottled up behind IEEE and ACM pay walls. This makes it more difficult to build a large collection of information from which others can easily draw. My own feeling is that, at this point in time, collecting as many of the varied and disparate links, references, and examples as possible and offering them in a targeted search would go a long way to exposing a general theme and common ground that has developed around Roy's initial work over the years. That includes making rest-discuss and other email archives (HTTP, etc.) searchable. I also think that many people active on this list are the type to keep themselves quite busy. I see very few folks here volunteering to take up new tasks. That's a bummer since so many on this list have been here quite a while and have so much to offer the general community and probably would offer their time if we could come up with ways to make it easy to contribute regularly. I, too, admit I'm pretty tied up. But I'm ready to offer assistance where possible. mca http://amundsen.com/blog/ On Sat, Apr 10, 2010 at 08:59, Ivan Žužak <izuzak@...> wrote: > Hey all, > > A few days ago I wrote a blog post on why I think understanding REST > is hard and what we can and *should* do about it - > http://wp.me/poYaf-34. Since a lot of what I wrote was inspired by > following this group and since this group is the most relevant (only?) > place for discussing REST - I'd like to know what you think. 
Short > version (since the post is really long): > > [remainder of quoted message snipped; Ivan's post appears in full earlier in the thread]
I haven't thought about Wikipedia! Hummm.... wikipedia.... Maybe it is time to have a look. Is anyone on this list responsible for what's in the Wikipedia article about REST? William. --- In rest-discuss@yahoogroups.com, Will Hartung <willh@...> wrote: > > On Sat, Apr 10, 2010 at 6:28 AM, Eric J. Bowman <eric@...> wrote: > > > I don't think it's up to us, and I don't think you're taking the actual > > problem into account -- REST being a buzzword. > > I agree with Eric. > > The other problem is simply that everyone "already knows about REST". > It's HTTP POST, right? How hard can it be. > > And off they go. > > Sending crap over HTTP == REST. Thankfully, REST is easier to > pronounce than HTTP. So all my ad hoc HTTP RPC interfaces now become > REST interfaces. Simple. Just rolls off the tongue. > > So, the problem is that REST is already understood, WELL understood. > It just happens to be pretty much completely wrong. > > Then you have to go into the whole thesis, the concepts behind it, > the vocabulary, etc. etc. etc. "Why can't I just POST whatever I > want?" arguments, etc. > > It's hardly worth fighting any more. It's exhausting. If folks want to > know about REST, they can look it up. > > If you think that REST is misunderstood, then rewrite the wikipedia > article until it's clear, so that rather than getting into exhausting > discussions, advocates can just point to that and the thesis and tell > folks to come back later if they're interested. They won't, HTTP RPC > solves 99% of the use cases that they're REALLY trying to solve, so > they won't make the leap to rearchitecting their system. But at least > it makes it easier for folks using it to make the material more > approachable. >
Hi,
I was just wondering if HTTP is an exact implementation of REST principles, or whether any concessions/trade-offs were made?
Thanks,
Sean.
I actually found this group in a link from a tweet, not sure if from Mike or from Dnene. I recall the link was to show a remark from Roy. Anyway, not sure how hard it is to find it. Hey, guys, how hard is it to create a magazine (a digital one, of course)? I still dislike the idea that REST is discussed in the SOA Magazine as a web service technique. I think it should have its own place as an architectural style. Last time I looked around, there was none. I'll check around again. William Martinez. --- In rest-discuss@yahoogroups.com, mike amundsen <mamund@...> wrote: > > First, thanks to you, Ivan, for writing that blog post and for your > work to chronicle REST activities on the Web. I think you make several > very good points. It's true that, after ten years, there is still not > much accessible material on this very important architectural style. > > One of the problems I see is that some very good scholarship on > architectural styles is bottled up behind IEEE and ACM pay walls. This > makes it more difficult to build a large collection of information > from which others can easily draw. > > My own feeling is that, at this point in time, collecting as many of > the varied and disparate links, references, and examples as possible > and offering them in a targeted search would go a long way to exposing > a general theme and common ground that has developed around Roy's > initial work over the years. That includes making rest-discuss and > other email archives (HTTP, etc.) searchable. > > I also think that many people active on this list are the type to keep > themselves quite busy. I see very few folks here volunteering to take > up new tasks. That's a bummer since so many on this list have been > here quite a while and have so much to offer the general community and > probably would offer their time if we could come up with ways to make > it easy to contribute regularly. > > I, too, admit I'm pretty tied up. But I'm ready to offer assistance > where possible. 
> > mca > http://amundsen.com/blog/ > > > > > On Sat, Apr 10, 2010 at 08:59, Ivan Žužak <izuzak@...> wrote: > > Hey all, > > > > A few days ago I wrote a blog post on why I think understanding REST > > is hard and what we can and *should* do about it - > > http://wp.me/poYaf-34. Since a lot of what I wrote was inspired by > > following this group and since this group is the most relevant (only?) > > place for discussing REST - I'd like to know what you think. Short > > version (since the post is really long): > > > > Although there's a lot of great blog posts, papers and mailing list > > discussions, the current material on REST is a mess which makes REST > > hard to understand and confusing to discuss: > > * there is no agreed upon and widely used terminology, but a lot of > > unexplained and overlapping terms, > > * discussions are fragmented all over the web and often unnecessarily > > repeat previous discussions, > > * there are no (formal or semi-formal) models of (important) concepts. > > > > Therefore, I think people involved in and enthusiastic about REST > > (mostly people on this group) should: > > 1. Agree that there is a problem worth fixing – do we think that we > > can create a better, clearer and more systematized way of > > understanding and discussing about REST? > > 2. Express interest in fixing it – is this something people want to > > contribute their time to? > > 3. Agree on how to fix it – what should be our output (a RESTopedia, a > > document, video tutorials) and how would we all contribute to and > > moderate the process? > > 4. Do it – spend time discussing and developing the output. > > 5. Eat our dogfood – use whatever we produce. If we don’t use the > > terminology and models we agree upon, the mess has only gotten > > bigger. > > > > Cheers, > > Ivan (hoping that all of this doesn't sound as babbling of an > > overeagerly naive megalomaniac) > > > > > > ------------------------------------ > > > > Yahoo! 
Groups Links > > > > > > > > >
On Apr 13, 2010, at 2:28 PM, Sean Kennedy wrote: > > > Hi, > I was just wondering if HTTP is an exact implementation of REST principles or were any concessions/trade-offs made? > Read the section http://www.ics.uci.edu/~fielding/pubs/dissertation/evaluation.htm#sec_6_3_4 .. and related sections of course Jan > Thanks, > Sean. > > > > ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
Hello guys, I've made a video [1] and a description [2] on why REST != HTTP and how this is reflected in our applications. I believe we went in the wrong direction at some point, thinking that HTTP was REST and that an easy way to make people understand more REST concepts was Richardson's web usage maturity model. Although that model points in the direction of REST, I believe it does not imply it, which is why I recorded the video. Regards [1] http://guilhermesilveira.wordpress.com/2010/04/13/buying-through-rest-applying-rest-to-the-enterprise/ [2] http://guilhermesilveira.wordpress.com/2010/04/13/rest-maturity-model/ Guilherme Silveira Caelum | Ensino e Inovação http://www.caelum.com.br/ 2010/4/12 mike amundsen <mamund@...> > > > First, thanks to you, Ivan, for writing that blog post and for your > work to chronicle REST activities on the Web. I think you make several > very good points. It's true that, after ten years, there is still not > much accessible material on this very important architectural style. > > One of the problems I see is that some very good scholarship on > architectural styles is bottled up behind IEEE and ACM pay walls. This > makes it more difficult to build a large collection of information > from which others can easily draw. > > My own feeling is that, at this point in time, collecting as many of > the varied and disparate links, references, and examples as possible > and offering them in a targeted search would go a long way to exposing > a general theme and common ground that has developed around Roy's > initial work over the years. That includes making rest-discuss and > other email archives (HTTP, etc.) searchable. > > I also think that many people active on this list are the type to keep > themselves quite busy. I see very few folks here volunteering to take > up new tasks. 
That's a bummer since so many on this list have been > here quite a while and have so much to offer the general community and > probably would offer their time if we could come up with ways to make > it easy to contribute regularly. > > I, too, admit I'm pretty tied up. But I'm ready to offer assistance > where possible. > > mca > http://amundsen.com/blog/ > > > On Sat, Apr 10, 2010 at 08:59, Ivan Žužak <izuzak@...<izuzak%40gmail.com>> > wrote: > > Hey all, > > > > A few days ago I wrote a blog post on why I think understanding REST > > is hard and what we can and *should* do about it - > > http://wp.me/poYaf-34. Since a lot of what I wrote was inspired by > > following this group and since this group is the most relevant (only?) > > place for discussing REST - I'd like to know what you think. Short > > version (since the post is really long): > > > > Although there's a lot of great blog posts, papers and mailing list > > discussions, the current material on REST is a mess which makes REST > > hard to understand and confusing to discuss: > > * there is no agreed upon and widely used terminology, but a lot of > > unexplained and overlapping terms, > > * discussions are fragmented all over the web and often unnecessarily > > repeat previous discussions, > > * there are no (formal or semi-formal) models of (important) concepts. > > > > Therefore, I think people involved in and enthusiastic about REST > > (mostly people on this group) should: > > 1. Agree that there is a problem worth fixing – do we think that we > > can create a better, clearer and more systematized way of > > understanding and discussing about REST? > > 2. Express interest in fixing it – is this something people want to > > contribute their time to? > > 3. Agree on how to fix it – what should be our output (a RESTopedia, a > > document, video tutorials) and how would we all contribute to and > > moderate the process? > > 4. Do it – spend time discussing and developing the output. > > 5. 
Eat our dogfood – use whatever we produce. If we don’t use the > > terminology and models we agree upon, the the mess has only gotten > > bigger. > > > > Cheers, > > Ivan (hoping that all of this doesn't sound as babbling of an > > overeagerly naive megalomaniac) > > > > > > ------------------------------------ > > > > Yahoo! Groups Links > > > > > > > > > > >
Eric, Will, Thanks for the comments. I definitely agree with you and do think that it's a big problem - REST is being misused as a marketing term and as a substitute term for HTTP. I myself am closer to thinking that this problem is currently unsolvable and this makes the effort to sort out the "mess" less attractive (since it would probably not have as big an impact). So, just to pick your brain a bit more, let's say/pretend that we don't want to solve that problem but rather sort out the problems targeting people that do have an understanding of REST beyond that buzzword level. What other problems do you see, would that be a worthwhile effort and what would it take to make it happen? I saw the other thread on "REST clients" which Eric started -- that's exactly what I'm talking about. In other words, there might be multiple problems caused by the whole mess, some of which might be worth solving and some not. So I'm just interested in seeing if there is a problem worth solving by asking what people would be interested in contributing to. Thanks, Ivan On Tue, Apr 13, 2010 at 02:20, Will Hartung <willh@...> wrote: > On Sat, Apr 10, 2010 at 6:28 AM, Eric J. Bowman <eric@...> wrote: > >> I don't think it's up to us, and I don't think you're taking the actual >> problem into account -- REST being a buzzword. > > I agree with Eric. > > The other problem is simply that everyone "already knows about REST". > It's HTTP POST, right? How hard can it be. > > And off they go. > > Sending crap over HTTP == REST. Thankfully, REST is easier to > pronounce than HTTP. So all my ad hoc HTTP RPC interfaces now become > REST interfaces. Simple. Just rolls of the tounge. > > So, the problem is that REST is already understood, WELL understood. > It just happens to be pretty much completely wrong. > > Then you have to go in to the whole thesis, the concepts behind it, > the vocabulary, etc. etc. etc. "Why can't I just POST whatever I > want?" arguments, etc. 
> > It's hardly worth fighting any more. It's exhausting. If folks want to > know about REST, they can look it up. > > If you think that REST is misunderstood, then rewrite the wikipedia > article until it's clear, so that rather than getting in to exhausting > discussions, advocates can just point to that and the thesis and tell > folks to come back later if they're interested. They won't, HTTP RPC > solves 99% of the use cases that they're REALLY trying to solve, so > they won't make the leap to rearchitecting their system. But at least > it makes it easier for folks using it to make the material more > approachable. >
Hello William, I agree, people on this group mostly do understand REST on a level deeper than the Buzzword. However, I've seen lots of discussion on the group which have diverged due to misunderstandings. And yes, disagreement is healthy, but not infinite disagreement as you noticed at the end. At some point, you have to call it what it is, agree, and move on. And sometimes, you can't see where you can move on to until you've agreed on the past. The Web is moving on, and I think we're lagging behind due to various issues including this one. Maybe creating a wiki with definitions or collecting various material created recently would be of benefit, as you say. That's actually what I wanted to find out - would it be helpful to you? What would be helpful to you? Thanks, Ivan On Sun, Apr 11, 2010 at 21:26, William Martinez Pomares <wmartinez@...> wrote: > > > > Hello Ivan. > I agree with you, but: > > 1. In this group you have many people that actually do that, uses terms of what we understand, while other understand different things. There are discussions that focus on that, defining the terms. So, we would need to put the group to agree, and that may be problematic, since actually disagreement lead to healthy discussions. > Even, some people just left the group because majority did not agree with them, which turns out bad. > > 2. The other problem is we all have different levels of understanding of REST. There are many that work with this everyday, and thus there are some things that are a de facto reality, although academically may not be totally congruent with the dissertation. Far more, the dissertation, some one proposed, may need an update since lots of things had happened since then. > > So, what can we do? The actually existence of this group is a step forward. Actual promotion of the group so it is available to more people is necessary. > The This week on REST wiki is another great idea. 
> Opening a site with the "official" definitions and tutorial from all levels (not just development) could be another one. Problem is we may have some trouble getting all people, or majority, to agree on some definitions. > > Any way, the solution may come out from this group, or even from another one that may start somewhere else. Not sure if the future of REST concept will be closer to original idea or a derivation from the actual practices, but at some point there must be an agreement, or the concept will be diluted. > > William Martinez.
Thanks for commenting, Mike. Access to academic papers is a problem. I hate the whole monopoly more than most people and that's why I'm publishing everything on my blog first. Since this problem won't be going away any time soon -- maybe we can just ignore it and work with what we have? Excellent blog posts are being published, there are several people active on the Web that are currently working at universities and do have access to that material, the WWW conference proceedings are available for free online, and so on. So, just collecting and organizing available material/references in some way is something you think would be useful to you and others? I definitely think that it's valuable, both on its own and as a first step towards something more. It could also be something that wouldn't require as much effort as organizing the terminology, coming up with models and other stuff I mentioned, so more people would potentially chip in and not lose half of the next year over it. Lastly, I think that a lot depends on coordination of activities -- no coordination/planning = nothing happens. This is the hardest part and where I see the least people volunteering. What do you think? Ivan On Tue, Apr 13, 2010 at 02:38, mike amundsen <mamund@...> wrote: > First, thanks to you, Ivan, for writing that blog post and for your > work to chronicle REST activities on the Web. I think you make several > very good points. It's true that, after ten years, there is still not > much accessible material on this very important architectural style. > > One of the problems I see is that some very good scholarship on > architectural styles is bottled up behind IEEE and ACM pay walls. This > makes it more difficult to build a large collection of information > from which others can easily draw. 
> > My own feeling is that, at this point in time, collecting as many of > the varied and disparate links, references, and examples as possible > and offering them in a targeted search would go a long way to exposing > a general theme and common ground that has developed around Roy's > initial work over the years. That includes making rest-discuss and > other email archives (HTTP, etc.) searchable. > > I also think that many people active on this list are type to keep > themselves quite busy. I see very few folks here volunteering to take > up new tasks. That's a bummer since so many on this list have been > here quite a while and have so much to offer the general community and > probably would offer their time if we could come up with ways to make > it easy to contribute regularly. > > I, too, admit I'm pretty tied up. But I'm ready to offer assistance > where possible. > > mca > http://amundsen.com/blog/ > > > > > On Sat, Apr 10, 2010 at 08:59, Ivan Žužak <izuzak@...> wrote: >> Hey all, >> >> A few days ago I wrote a blog post on why I think understanding REST >> is hard and what we can and *should* do about it - >> http://wp.me/poYaf-34. Since a lot of what I wrote was inspired by >> following this group and since this group is the most relevant (only?) >> place for discussing REST - I'd like to know what you think. Short >> version (since the post is really long): >> >> Although there's a lot of great blog posts, papers and mailing list >> discussions, the current material on REST is a mess which makes REST >> hard to understand and confusing to discuss: >> * there is no agreed upon and widely used terminology, but a lot of >> unexplained and overlapping terms, >> * discussions are fragmented all over the web and often unnecessarily >> repeat previous discussions, >> * there are no (formal or semi-formal) models of (important) concepts. >> >> Therefore, I think people involved in and enthusiastic about REST >> (mostly people on this group) should: >> 1. 
Agree that there is a problem worth fixing – do we think that we >> can create a better, clearer and more systematized way of >> understanding and discussing about REST? >> 2. Express interest in fixing it – is this something people want to >> contribute their time to? >> 3. Agree on how to fix it – what should be our output (a RESTopedia, a >> document, video tutorials) and how would we all contribute to and >> moderate the process? >> 4. Do it – spend time discussing and developing the output. >> 5. Eat our dogfood – use whatever we produce. If we don’t use the >> terminology and models we agree upon, the the mess has only gotten >> bigger. >> >> Cheers, >> Ivan (hoping that all of this doesn't sound as babbling of an >> overeagerly naive megalomaniac) >> >> >> ------------------------------------ >> >> Yahoo! Groups Links >> >> >> >> >
Not sure if this has been referenced before: http://tech.groups.yahoo.com/group/rest-discuss/message/10256 "An application is something a user wants to do with computers. "Writing a memo" is an application. Microsoft Word is a software system for writing memos (among other things). Google Docs is a network-based software system for writing memos (among other things). "Buying stuff" is another application. Amazon is a network-based software system for buying stuff. You get the idea ... a network-based application is something a user wants to do with computers that needs (for whatever reason) to be network-based. It can be as simple as reading a book, or as complex as ordering parts for a Boeing 767. And let's not forget that there are many network-based applications for which REST is not likely to be a good design (an alarm monitoring system would be one example -- it may have RESTful components, such as a status view, but it would be silly to restrict the sensors to a pull-based interaction just out of architectural purity)." Cheers, Dong On Wed, Apr 7, 2010 at 5:40 AM, Jan Algermissen <algermissen1971@...> wrote: > > > While reading through section 5.3.3[1] I am wondering, whether my > understanding of "Application" actually matches Roy's. He writes: > > "A data view of an architecture reveals the application state as > information flows through the components. Since REST is specifically > targeted at distributed information systems, it views an application as a > cohesive structure of information and control alternatives through which a > user can perform a desired task. For example, looking-up a word in an > on-line dictionary is one application, as is touring through a virtual > museum, or reviewing a set of class notes to study for an exam. Each > application defines goals for the underlying system, against which the > system's performance can be measured." 
> > Thinking through this (and the following paragraphs) I get the impression > that a specific application is 'created' only when a user[2] chooses a goal > it intends to pursue and turns to the RESTful system (the Web) to start > pursuing it. The application thereby brought to life might span several, > unrelated 'services'. > > Another way one might say this is 'The application is defined by the > current use of the system (the Web) for the given user intention' (and the > current application state is "defined by its pending requests, the topology > of connected components (some of which may be filtering buffered data), the > active requests on those connectors, the data flow of representations in > response to those requests, and the processing of those representations as > they are received by the user agent."[1] > > If that understanding makes sense at all, it has the consequence, that > application design is actually done on the client side and *not* on the > server side. > > In the context of machine clients this would mean that applications are > defined by the client side developer's interpretations of and assumptions > about the envisioned media types (and link relations) and rules for choosing > transitions. > > Comments most welcome... > > Jan > > [1] > http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm#sec_5_3_3 > > [2] 'User' in this context would be a human user or someone who prepares > (codes or configures) a client component to persue a certain goal > > >
Hi, I am a PhD student and I am trying to find out the level of shift in the enterprise-space from SOAP/POX WS to RESTful WS. I get a strong feeling that there is a substantial shift in this direction but I have no examples or figures that bear it out... Thanks, Sean.
Sean, On Apr 14, 2010, at 12:27 PM, Sean Kennedy wrote: > > > Hi, > I am a PhD student and I am trying to find out the level of shift in the enterprise-space from SOAP/POX WS to RESTful WS. I get a strong feeling that there is a substantial shift in this direction The shift is happening, but it appears that it is mostly a shift of hyped terms only and not a real architectural shift. The true application of REST would be a radical shift towards simplicity and decoupling and as far as my experience goes, enterprises are not yet willing to let go of complexity :-) I have recently persuaded myself that the ubiquitousness of object-oriented thinking is a major obstacle. We have all been so immersed in OOA, OOD, classes, interfaces, etc. that it is really hard to think differently about software systems. Maybe the apparent rise of functional languages helps to cure that situation... > but I have no examples or figures that bear it out... I think many enterprises understand that there is a benefit induced by REST and start trying. But it is extremely rare, for example, that people understand that media type design is at the heart of all design activity (as opposed to service interface design, which is an implementation detail in REST). Jan > > Thanks, > Sean. > > > > ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
Hi,
I am getting conflicting information on the max length of a URL that I can submit from a programmatic client e.g. RESTlet (not a browser). For example, the minimum according to http://www.boutell.com/newfaq/misc/urllength.html is around 4000 chars. However, the Apache docs suggest 8190 http://httpd.apache.org/docs/2.2/mod/core.html#limitrequestline . Is it the case that administrators adjust it down to 4k due to security concerns? Can it be configured upward if required?
Thanks,
Sean.
Sean, On Apr 14, 2010, at 5:38 PM, Sean Kennedy wrote: > > > Hi, > I am getting conflicting information on the max length of a URL that I can submit from a programmatic client e.g. RESTlet (not a browser). This recent thread should be helpful: http://lists.w3.org/Archives/Public/uri/2010Apr/0003.html Jan > For example, the minimum according to http://www.boutell.com/newfaq/misc/urllength.html is around 4000 chars. However, the Apache docs suggest 8190 http://httpd.apache.org/docs/2.2/mod/core.html#limitrequestline . Is it the case that administrators adjust it down to 4k due to security concerns? Can it be configured upward if required? > > Thanks, > Sean. > > > > ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
See http://lists.w3.org/Archives/Public/uri/2010Apr/0003.html for a recent discussion. Subbu On Apr 14, 2010, at 8:38 AM, Sean Kennedy wrote: > > > Hi, > I am getting conflicting information on the max length of a URL that I can submit from a programmatic client e.g. RESTlet (not a browser). For example, the minimum according to http://www.boutell.com/newfaq/misc/urllength.html is around 4000 chars. However, the Apache docs suggest 8190 http://httpd.apache.org/docs/2.2/mod/core.html#limitrequestline . Is it the case that administrators adjust it down to 4k due to security concerns? Can it be configured upward if required? > > Thanks, > Sean. > > > >
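[To answer the "can it be configured upward" part of the question: according to the Apache documentation linked above, the cap is set by the LimitRequestLine core directive and can be raised in the server config. A sketch (the 16380 value is purely illustrative):

```apache
# httpd.conf -- raise the request-line cap from its 8190-byte default.
# The limit covers the whole request line: method, URI and protocol version.
LimitRequestLine 16380

# Header fields have their own, separate cap (LimitRequestFieldSize),
# which may also need raising if large data is moved into headers instead.
LimitRequestFieldSize 16380
```

Note that this only moves the limit on one server; intermediaries (proxies, gateways) on the path enforce their own limits, which is presumably why conservative figures like 4000 characters circulate. -ed.]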
Hi Jan/Subbu,
Thanks for that. I am not sure if it suits my needs but will look at it closer. I am thinking of taking XML data and encoding it in the URL and was wondering what limits I am up against and if the limits are configurable...
Sean.
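[A quick sketch of the arithmetic involved in encoding XML into a URL, since percent-encoding inflates the payload before any server limit applies. The endpoint below is made up, and the two limits are simply the figures cited earlier in this thread, not guarantees for any particular server. -ed.]

```python
from urllib.parse import urlencode

# Illustrative only: example.com is a placeholder, and these limits are
# the two figures mentioned in the thread (Apache default / boutell FAQ).
APACHE_DEFAULT_LIMIT = 8190  # bytes; Apache's LimitRequestLine default
CONSERVATIVE_LIMIT = 4000    # lower figure from the boutell.com FAQ

xml_payload = "<booking><venue>1</venue><room>2</room></booking>"
url = "http://example.com/search?" + urlencode({"q": xml_payload})

# Percent-encoding turns each reserved byte ('<', '>', '/', ...) into
# three characters, so check the *encoded* length, not the raw XML length.
print(url)
print(len(url), len(url) <= CONSERVATIVE_LIMIT)
```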
________________________________
From: Jan Algermissen <algermissen1971@...>
To: Sean Kennedy <seandkennedy@....uk>
Cc: Rest Discussion Group <rest-discuss@yahoogroups.com>
Sent: Wed, 14 April, 2010 16:45:58
Subject: Re: [rest-discuss] server URL length
Sean,
On Apr 14, 2010, at 5:38 PM, Sean Kennedy wrote:
>
>
> Hi,
> I am getting conflicting information on the max length of a URL that I can submit from a programmatic client e.g. RESTlet (not a browser).
This recent thread should be helpful: http://lists.w3.org/Archives/Public/uri/2010Apr/0003.html
Jan
> For example, the minimum according to http://www.boutell.com/newfaq/misc/urllength.html is around 4000 chars. However, the Apache docs suggest 8190 http://httpd.apache.org/docs/2.2/mod/core.html#limitrequestline . Is it the case that administrators adjust it down to 4k due to security concerns? Can it be configured upward if required?
>
> Thanks,
> Sean.
>
>
>
>
-----------------------------------
Jan Algermissen, Consultant
NORD Software Consulting
Mail: algermissen@...
Blog: http://www.nordsc.com/blog/
Work: http://www.nordsc.com/
-----------------------------------
Hi Sean, I am not sure what your application is, but this, to me, sounds like a bad idea. URL is for identifying and locating not for sending messages to the server, although you can do that if you like. Cheers, Dong On Wed, Apr 14, 2010 at 10:25 AM, Sean Kennedy <seandkennedy@...>wrote: > > > Hi Jan/Subbu, > Thanks for that. I am not sure if it suits my needs but will look at it > closer. I am thinking of taking XML data and encoding it in the URL and was > wondering what limits I am up against and if the limits are configurable... > > Sean. > > ------------------------------ > *From:* Jan Algermissen <algermissen1971@...> > *To:* Sean Kennedy <seandkennedy@...> > *Cc:* Rest Discussion Group <rest-discuss@yahoogroups.com> > *Sent:* Wed, 14 April, 2010 16:45:58 > *Subject:* Re: [rest-discuss] server URL length > > Sean, > > On Apr 14, 2010, at 5:38 PM, Sean Kennedy wrote: > > > > > > > Hi, > > I am getting conflicting information on the max length of a URL that I > can submit from a programmatic client e.g. RESTlet (not a browser). > > This recent thread should be helpful: > http://lists.w3.org/Archives/Public/uri/2010Apr/0003.html > > Jan > > > > For example, the minimum according to > http://www.boutell.com/newfaq/misc/urllength.html is around 4000 chars. > However, the Apache docs suggest 8190 > http://httpd.apache.org/docs/2.2/mod/core.html#limitrequestline . Is it > the case that administrators adjust it down to 4k due to security concerns? > Can it be configured upward if required? > > > > Thanks, > > Sean. > > > > > > > > > > ----------------------------------- > Jan Algermissen, Consultant > NORD Software Consulting > > Mail: algermissen@... > Blog: http://www.nordsc.com/blog/ > Work: http://www.nordsc.com/ > ----------------------------------- > > > > > > >
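[For payloads of that size, the usual alternative to packing XML into the URI is to send it in the request body, as Dong suggests. A sketch of what such a request might look like -- the host, path, and payload are made up for illustration, and request bodies are not subject to LimitRequestLine:

```http
POST /venues/search HTTP/1.1
Host: ws.example.com
Content-Type: application/xml
Content-Length: 57

<query><city>Oslo</city><capacity>5000</capacity></query>
```

Whether POST is appropriate here (versus keeping the query in the URI so the result stays addressable and cacheable) is exactly the trade-off under discussion. -ed.]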
Hi.
I was wondering if there is a standard way of referencing a part of an
atom entry, like an anchor in HTML.
Consider the following:
<entry xmlns="http://www.w3.org/2005/Atom" xmlns:app="
http://www.w3.org/2007/app" xmlns:gd="http://schemas.google.com/g/2005">
<id>urn:uuid:60a76c80-d399-11d9-b91C-0003939e0af6</id>
<app:edited>2003-12-13T18:30:02Z</app:edited>
<updated>2003-12-13T18:30:02Z</updated>
<published>2003-12-13T18:30:02Z</published>
<link rel="self" href="http://ws.example.com/venues/1"/>
<link rel="edit" href="http://ws.example.com/venues/1"/>
<category scheme='http://schemas.google.com/g/2005#kind'
term='http://schemas.google.com/contact/2008#contact'/
>
<title>Oslo Spektrum</title>
<content>The big one...</content>
<link rel="anchor" href="room1" title="Room 1"/>
<link rel="anchor" href="room2" title="Room 2"/>
<link rel="anchor" href="room3" title="Room 3"/>
</entry>
Assuming that this is available at http://ws.example.com/venues/1
Is this possible, or even allowed? I haven't found anything about that in my
google searches, hence the question.
The other solution would be that I have a link with a relation type and
embed something within that, like a feed.
<entry xmlns="http://www.w3.org/2005/Atom" xmlns:app="
http://www.w3.org/2007/app" xmlns:gd="http://schemas.google.com/g/2005">
<id>urn:uuid:60a76c80-d399-11d9-b91C-0003939e0af6</id>
<app:edited>2003-12-13T18:30:02Z</app:edited>
<updated>2003-12-13T18:30:02Z</updated>
<published>2003-12-13T18:30:02Z</published>
<link rel="self" href="http://ws.example.com/venues/1"/>
<link rel="edit" href="http://ws.example.com/venues/1"/>
<gd:feedLink href="http://ws.example.com/venues/1/rooms">
<feed>
...
<entry>
<title>Room1</title>
<link href="http://ws.example.com/venues/1/rooms/1" rel="edit"/>
....
</entry>
</feed>
</gd:feedLink>
<category scheme='http://schemas.google.com/g/2005#kind'
term='http://schemas.google.com/contact/2008#contact'/>
<title>Oslo Spektrum</title>
<content>The big one...</content>
</entry>
Any suggestions for how to proceed?
Thanks
--
Erlend Hamnaberg
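Note that "anchor" is not a registered Atom link relation, so the first proposal would be a private convention. One way a client could interpret such links, mirroring how #fragments work on an HTML page, is to treat each href as a fragment on the entry's own URI (a sketch only; the relation name and fragment semantics are assumptions, not anything Atom defines):

```python
# Sketch: resolve hypothetical rel="anchor" links as HTML-style fragments
# on the entry's self URI. "anchor" is not a registered Atom link relation.
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"

ENTRY = """<entry xmlns="http://www.w3.org/2005/Atom">
  <link rel="self" href="http://ws.example.com/venues/1"/>
  <link rel="anchor" href="room1" title="Room 1"/>
  <link rel="anchor" href="room2" title="Room 2"/>
</entry>"""

root = ET.fromstring(ENTRY)
links = root.findall(ATOM + "link")
base = next(l.get("href") for l in links if l.get("rel") == "self")
# Each anchor href becomes a fragment on the entry's own URI.
anchors = {l.get("title"): f"{base}#{l.get('href')}"
           for l in links if l.get("rel") == "anchor"}
print(anchors)
```

The second proposal (linking to rooms as first-class resources with their own URIs) avoids inventing fragment semantics altogether, which is why it tends to be the safer design.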
On Wed, Apr 14, 2010 at 3:40 AM, Jan Algermissen <algermissen1971@...> wrote:
> Sean,
>
> On Apr 14, 2010, at 12:27 PM, Sean Kennedy wrote:
>
> > Hi,
> > I am a PhD student and I am trying to find out the level of shift in the
> > enterprise-space from SOAP/POX WS to RESTful WS. I get a strong feeling
> > that there is a substantial shift in this direction
>
> The shift is happening, but it appears that it is mostly a shift of hyped
> terms only and not a real architectural shift.
>
> The true application of REST would be a radical shift towards simplicity
> and decoupling, and as far as my experience goes, enterprises are not yet
> willing to let go of complexity :-) I have recently persuaded myself that
> the ubiquitousness of object-oriented thinking is a major obstacle. We
> have all been so immersed in OOA, OOD, classes, interfaces, etc. that it
> is really hard to think differently about software systems. Maybe the
> apparent rise of functional languages helps to cure that situation...

I would argue about REST being simpler. I think it is certainly more elegant, and with that elegance comes implicit power, power that may not be readily apparent, particularly in smaller applications. Combined with the paradigm shift in application design, a design not well understood, and one that does not map well onto existing infrastructure, REST is rather disruptive to many existing applications, making adoption even more difficult.

The "rise" of REST is, like you said, in the "hyped" terms. Basically little more than RPC over HTTP with ad hoc, under-documented payloads (not even necessarily media types). This is arguably no different than SOAP or XML-RPC. But it's REST because people just think of REST as "stuff over HTTP" and aren't "burdened" by following any existing standards. I think there are far fewer practitioners of REST than there are folks "making stuff up" and calling it REST.

Finally, using SOAP and WS web services today IS "simple". It's a drag-and-drop, button-click affair in an IDE or tool set. The underlying details are complicated and mired in thick standards documents, but connecting up to a SOAP endpoint today is pretty simple.

> I think many enterprises understand that there is a benefit induced by
> REST and start trying. But it is extremely rare, for example, that people
> understand that media type design is at the heart of all design activity
> (as opposed to service interface design, which is an implementation
> detail in REST).

Exactly.

Regards,

Will Hartung
(willh@...)
As someone who works at creating software that enterprises run, I'd say there isn't that much of a shift. Many of my colleagues recognize the value of REST, but it generally is an aspirational endpoint rather than an actual endpoint. There might be a shift to recognize the aspiration, but certainly not the full execution.

As an example of one of the difficulties, we're typically having machines communicate with machines. Yes, in theory we could invest the time to define a media type that embodies all the semantics we need to capture. In practice, that is more work up front than if we follow the "web services" path, define a WSDL interface, and generate the code we need. Especially when we know it is almost a certainty that the client and server in these cases will change together, the extra work of figuring out a fully REST approach is actually unnecessary, and consequently inappropriate.

Of course, in aspiring to REST, we do use tools like JAX-RS, which of course don't magically transform our code to "REST", but they do start us down the path of "REST". Whether we get there or not depends on the use-cases and requirements for the software. Which I think is as it should be.

-Eric.

On 04/14/2010 03:27 AM, Sean Kennedy wrote:
> Hi,
> I am a PhD student and I am trying to find out the level of shift
> in the enterprise-space from SOAP/POX WS to RESTful WS. I get a strong
> feeling that there is a substantial shift in this direction but I have
> no examples or figures that bear it out...
>
> Thanks,
> Sean.
Hello Sean,

As with Jan, I believe there is a huge difference between typical SOAP/POX WS and REST systems. First of all, clients work in a different way, and you should code both systems thinking about the media type and its semantics: something completely nonexistent in SOAP/POX WS. I recently posted an example here on the list where I mention how different typical RPC (and in this case WS) is from REST; you can check it at vimeo [1]. If for you REST = cute tunneling through HTTP, then the change is not that big, but REST is not that.

Regards

[1] http://guilhermesilveira.wordpress.com/

Guilherme Silveira
Caelum | Ensino e Inovação
http://www.caelum.com.br/

On Apr 14, 2010 at 07:27:10 UTC-3, Sean Kennedy <seandkennedy@...> wrote:
> Hi,
> I am a PhD student and I am trying to find out the level of shift in the
> enterprise-space from SOAP/POX WS to RESTful WS. I get a strong feeling
> that there is a substantial shift in this direction but I have no examples
> or figures that bear it out...
>
> Thanks,
> Sean.
On Apr 14, 2010, at 8:09 PM, Will Hartung wrote: > I would argue about REST being simpler. To clarify: I was referring to the simplicity induced into the resulting system, not to the simplicity of the task of creating the system. (Though I think that the process of system creation is actually also simpler with REST - once you grok it...) Jan
On Wed, Apr 14, 2010 at 12:13 PM, Jan Algermissen <algermissen1971@...> wrote: > > On Apr 14, 2010, at 8:09 PM, Will Hartung wrote: > >> I would argue about REST being simpler. > > To clarify: I was referring to the simplicity induced into the resulting system, not to the simplicity of the task of creating the system. > > (Though I think that the process of system creation is actually also simpler with REST - once you grok it...) That may well be true. I'm still on the "G" of Grokking it myself :) And I agree with Eric in that, apparently in spite of how much or well grok'd REST is, for many transactions the effort involved isn't necessarily worth it. As has been said, REST is designed for large, long living systems. Most systems don't even fall under that banner. Or, typically, you don't recognise the new system as being large and long lived until 5 years later where it has morphed and snowballed in to some monstrosity underneath the mutating forces of business needs with very short schedules. But even then, it's not necessarily worth the up front effort to do a full boat REST system for such an, initially, small system. When REST is better grok'd, when the patterns of implementation and solutions of business cases get better documented and propagated, then REST systems will be, ideally, easier to develop. I still need to get the Cookbook, but I think efforts like that are a good start. Regards, Will Hartung (willh@...)
I bet this has been asked many times before, but here goes:

I have a web application which is like a workflow. It goes from StateA to StateB and so on. I want to be sure that a user who is accessing the application at StateB has completed StateA. If I cannot use a sessionId stored at the server somewhere, how can I do it in a REST-compliant way?

Any help is greatly appreciated.

--Aiman
kurtrips wrote:
> I bet this has been asked many times before, but here goes:
>
> I have a web application which is like a workflow. It goes from StateA
> to StateB and so on.
> I want to be sure that a user who is accessing the application at
> StateB has completed StateA.
> If I cannot use a sessionId stored at the server somewhere, how can I
> do it in a REST-compliant way?

Return links to state B only in the responses from state A, not from other states.

Robert Brewer
fumanchu@...
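Robert's suggestion (hypermedia as the engine of application state) can be sketched as a representation builder that only ever emits the StateB link from a completed StateA. The paths, field names, and `state_a_complete` flag are invented for illustration:

```python
# Sketch: the server advertises the transition to StateB only in
# representations of StateA, so a well-behaved client can only discover
# StateB by completing StateA. Resource paths here are made up.

def represent(state, state_a_complete):
    """Build a representation whose links encode the allowed transitions."""
    doc = {"state": state, "links": {"self": f"/workflow/{state}"}}
    if state == "stateA" and state_a_complete:
        # The only place the StateB link ever appears.
        doc["links"]["next"] = "/workflow/stateB"
    return doc

print(represent("stateA", state_a_complete=True)["links"])
print("next" in represent("stateC", state_a_complete=True)["links"])  # False
```

Note this controls discovery, not access: a client that guesses the StateB URI can still request it, which is the objection raised later in the thread.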
Eric Johnson wrote:
> As an example of one of the difficulties, we're typically having machines
> communicate with machines. Yes, in theory we could invest the time to
> define a media type that embodies all the semantics we need to capture.
> In practice, that is more work up front than if we follow the "web
> services" path, define a WSDL interface, and generate the code we need.
> Especially when we know it is almost a certainty that the client and
> server in these cases will change together, the extra work of figuring
> out a fully REST approach is actually unnecessary, and consequently
> inappropriate.

Well, the value you get from REST depends on how distributed the system is, e.g. the above approach wouldn't work for the web; but that's not because the clients are human rather than machine.

> Of course, in aspiring to REST, we do use tools like JAX-RS, which of
> course don't magically transform our code to "REST", but they do start
> us down the path of "REST". Whether we get there or not depends on
> the use-cases and requirements for the software. Which I think is as
> it should be.

Ok, but this might present a problem if the 'easiest up front' solutions to short-term requirements (e.g. "web services") become costly in the long term? I agree there's a 'middle way' that presents the best of both worlds; perhaps this is Level 2 RMM?

http://martinfowler.com/articles/richardsonMaturityModel.html#level2

Cheers,
Mike
Is caching a necessary constraint from an arch style pov? It seems like a consequence/benefit of the other constraints (cs/layered/stateless/uniform interface) rather than one in its own right Cheers, Mike
This might be worth looking at: http://bit.ly/cKewo3 -- Erlend On Thu, Apr 15, 2010 at 1:28 PM, Mike Kelly <mike@...> wrote: > > > Is caching a necessary constraint from an arch style pov? > > It seems like a consequence/benefit of the other constraints > (cs/layered/stateless/uniform interface) rather than one in its own right > > Cheers, > Mike > >
Mike, Within REST, Caching is both an optional and a highly desirable constraint. It is optional in that it is easy to imagine a RESTful implementation that never actually expects caching by User-Agent or any intermediaries (this would be the same as implicitly labelling everything as non-cacheable). Such an implementation would, however, be inefficient to some degree which is why adding the constraint is desirable. Regards, Alan Dean On Thu, Apr 15, 2010 at 12:28, Mike Kelly <mike@...> wrote: > > > Is caching a necessary constraint from an arch style pov? > > It seems like a consequence/benefit of the other constraints > (cs/layered/stateless/uniform interface) rather than one in its own right > > Cheers, > Mike > >
Well,

Reading Wikipedia, I found out the English version works more on the thesis/academic part of REST (I like it overall). On the other hand, the Spanish version says something like: "Even when the REST term refers originally to a set of architectural principles -described below-, currently it is used in a broader sense to describe any simple web interface using XML and HTTP, without the additional abstractions of protocols based on message interchange patterns like the SOAP web services protocol".

In other words, it is taken for granted that the original REST meaning is no more, and that simplification is the new valid REST meaning.

I guess I can enter and do some changes...

William Martinez.

--- In rest-discuss@yahoogroups.com, "William Martinez Pomares" <wmartinez@...> wrote:
>
> I haven't thought about Wikipedia! Hummm.... wikipedia....
> Maybe it is time to have a look. Is anyone on the list responsible for
> what's in the Wikipedia article about REST?
> William.
>
> --- In rest-discuss@yahoogroups.com, Will Hartung <willh@> wrote:
> >
> > On Sat, Apr 10, 2010 at 6:28 AM, Eric J. Bowman <eric@> wrote:
> >
> > > I don't think it's up to us, and I don't think you're taking the actual
> > > problem into account -- REST being a buzzword.
> >
> > I agree with Eric.
> >
> > The other problem is simply that everyone "already knows about REST".
> > It's HTTP POST, right? How hard can it be.
> >
> > And off they go.
> >
> > Sending crap over HTTP == REST. Thankfully, REST is easier to
> > pronounce than HTTP. So all my ad hoc HTTP RPC interfaces now become
> > REST interfaces. Simple. Just rolls off the tongue.
> >
> > So, the problem is that REST is already understood, WELL understood.
> > It just happens to be pretty much completely wrong.
> >
> > Then you have to go into the whole thesis, the concepts behind it,
> > the vocabulary, etc. etc. etc. "Why can't I just POST whatever I
> > want?" arguments, etc.
> >
> > It's hardly worth fighting any more. It's exhausting. If folks want to
> > know about REST, they can look it up.
> >
> > If you think that REST is misunderstood, then rewrite the Wikipedia
> > article until it's clear, so that rather than getting into exhausting
> > discussions, advocates can just point to that and the thesis and tell
> > folks to come back later if they're interested. They won't; HTTP RPC
> > solves 99% of the use cases that they're REALLY trying to solve, so
> > they won't make the leap to rearchitecting their system. But at least
> > it makes it easier for folks using it to make the material more
> > approachable.
Totally agree with Jan.

The other important issue to ponder is the why. Many people think about "easy" and think that means simple in terms of creating services. You see, there are two layers there: business and developers. Business guys were sold the idea of SOA as a business-oriented architecture. But then developers were told to use SOAP. Then someone came and said REST was better for doing services, simpler. And thus REST is the easy way of doing services nowadays.

All that I said is flawed! SOA has nothing to do with REST (sorry guys, I know many do think it does, but I mean SOA as Service Oriented Architecture, the hard definition); one focuses on businesses and the other is for networked systems and transfers (the web). SOAP is not the only option for SOA, and REST should not be a developer's only tool. So, there are many trying to get REST into their systems, but what they mean is simply replacing the SOAP calls. The people that actually think of doing a complete architectural refactoring are not that many.

Lastly, the other component is the adoption percentage of the style. I mean by this, how much of the style is being adopted. You see, REST has many constraints; some are required, some others are optional. Each constraint is there to get a benefit. People usually skim the dissertation and implement using a couple of constraints, leaving out some others. The most problematic is hypertext as the engine of application state. It is what everybody checks first to see if you have REST, and says is totally required, but it is the one where most people fail in implementing. So, you may have companies with 50% REST adoption, 80% REST adoption and so on (some adopt that percentage of the style's constraints, while not being RESTful yet!).

Checking on what is on paper and what is actually in practice, and checking if the benefits are real, thus providing an evaluation of the theory, is good research.

Cheers!
William Martinez

--- In rest-discuss@yahoogroups.com, Jan Algermissen <algermissen1971@...> wrote:
>
> Sean,
>
> On Apr 14, 2010, at 12:27 PM, Sean Kennedy wrote:
>
> > Hi,
> > I am a PhD student and I am trying to find out the level of shift in the
> > enterprise-space from SOAP/POX WS to RESTful WS. I get a strong feeling
> > that there is a substantial shift in this direction
>
> The shift is happening, but it appears that it is mostly a shift of hyped
> terms only and not a real architectural shift.
>
> The true application of REST would be a radical shift towards simplicity
> and decoupling, and as far as my experience goes, enterprises are not yet
> willing to let go of complexity :-) I have recently persuaded myself that
> the ubiquitousness of object-oriented thinking is a major obstacle. We
> have all been so immersed in OOA, OOD, classes, interfaces, etc. that it
> is really hard to think differently about software systems. Maybe the
> apparent rise of functional languages helps to cure that situation...
>
> > but I have no examples or figures that bear it out...
>
> I think many enterprises understand that there is a benefit induced by
> REST and start trying. But it is extremely rare, for example, that people
> understand that media type design is at the heart of all design activity
> (as opposed to service interface design, which is an implementation
> detail in REST).
>
> Jan
>
> > Thanks,
> > Sean.
>
> -----------------------------------
> Jan Algermissen, Consultant
> NORD Software Consulting
>
> Mail: algermissen@...
> Blog: http://www.nordsc.com/blog/
> Work: http://www.nordsc.com/
> -----------------------------------
Hi Alan, Sorry, I hadn't intended to imply that caching was undesirable, but I might say something like this instead of identifying it as a constraint: "The constraints of REST induce cacheability, which is a highly beneficial system property. This can be used to offset the style's costs in efficiency." In a similar way you wouldn't call evolvability a constraint Cheers, Mike Alan Dean wrote: > > > Mike, > > Within REST, Caching is both an optional and a highly desirable > constraint. > > It is optional in that it is easy to imagine a RESTful implementation > that never actually expects caching by User-Agent or any > intermediaries (this would be the same as implicitly labelling > everything as non-cacheable). Such an implementation would, however, > be inefficient to some degree which is why adding the constraint is > desirable. > > Regards, > Alan Dean > > On Thu, Apr 15, 2010 at 12:28, Mike Kelly <mike@... > <mailto:mike@...>> wrote: > > > > Is caching a necessary constraint from an arch style pov? > > It seems like a consequence/benefit of the other constraints > (cs/layered/stateless/uniform interface) rather than one in its > own right > > Cheers, > Mike > > > > >
But the link of stateB is going to be a fixed link, right? So how would this work if somehow user knew the link of StateB (say by previous experience) and then tried to directly access link of StateB without ever going to StateA? --kurtrips On Wed, Apr 14, 2010 at 9:19 PM, Robert Brewer <fumanchu@...> wrote: > kurtrips wrote: > > I bet this has been asked many time before, but here goes: > > > > I have an web application which is like a workflow. It goes from > StateA > > to StateB and so on. > > I want to be sure that a user who is accessing the application at > > StateB has completed StateA. > > If I cannot use a sessionId stored at the server somewhere, how can I > > do it in a REST compliant way? > > Return links to state B only in the responses from state A, not from > other states. > > > Robert Brewer > fumanchu@... >
If "visiting" stateA results in some change in the application state for
that client (results in computing some value, storing some data, etc.), then
it is reasonable for the server, upon receiving a request for StateB to
check for the proper application state caused by activating StateA (does
this make sense the way I'm stating it?).
For example, if a client activates the "CheckOut" link (sends the request to
the server), the server can check the application state for that client and
- if there are no items in the shopping cart - respond appropriately ("You
didn't put anything into your cart yet", etc.).
mca
http://amundsen.com/blog/
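Mike's "CheckOut" example can be sketched as a server-side check of per-client application state before honoring the transition. The cart store, status codes, and client IDs below are invented for illustration:

```python
# Sketch: on a "CheckOut" request, the server validates the application
# state (the cart) for that client before proceeding. Status codes and
# the cart store are illustrative, not from the thread.

carts = {"client-1": ["book"], "client-2": []}

def checkout(client_id):
    cart = carts.get(client_id, [])
    if not cart:
        return 409, "You didn't put anything into your cart yet"
    return 200, f"Checked out {len(cart)} item(s)"

print(checkout("client-1"))  # (200, 'Checked out 1 item(s)')
print(checkout("client-2"))  # (409, "You didn't put anything into your cart yet")
```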
On Thu, Apr 15, 2010 at 10:13, Aiman Ashraf <kurtrips@...> wrote:
>
>
> But the link of stateB is going to be a fixed link, right?
> So how would this work if somehow user knew the link of StateB (say by
> previous experience) and then tried to directly access link of StateB
> without ever going to StateA?
>
> --kurtrips
>
>
> If "visiting" stateA results in some change in the application state for
> that client (results in computing some value, storing some data, etc.),
> then it is reasonable for the server, upon receiving a request for StateB
> to check for the proper application state caused by activating StateA
> (does this make sense the way I'm stating it?).

Yes, it does make sense.
But how does the server check for the "proper application state caused by
activating StateA" when it is not allowed to store anything?

--kurtrips
On Thu, Apr 15, 2010 at 9:20 AM, mike amundsen <mamund@...> wrote:
> If "visiting" stateA results in some change in the application state for
> that client (results in computing some value, storing some data, etc.), then
> it is reasonable for the server, upon receiving a request for StateB to
> check for the proper application state caused by activating StateA (does
> this make sense the way I'm stating it?).
>
> For example, if a client activates the "CheckOut" link (sends the request
> to the server), the server can check the application state for that client
> and - if there are no items in the shopping cart - respond appropriately
> ("You didn't put anything into your cart yet", etc.).
>
> mca
> http://amundsen.com/blog/
<snip>
But how does server check for "proper application state caused by activating
StateA", when it is not allowed to store anything?
</snip>
First, servers are allowed to store lots of things. Second, if it turns out
that a particular application state change (StateA) happens on the client
(not the server) and the server must use information from that state change
when evaluating another request (StateB), then the client must ship that
state along with the StateB request.
To use my previous example, if the user activates the "CheckOut" link and
fails to send along the client's shopping cart with the request, the server
can respond appropriately ("You failed to include your shopping cart with
the checkout request", etc.).
mca
http://amundsen.com/blog/
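When the client ships its application state back, the server still needs a way to tell genuine state from made-up state. One standard technique (not from this thread, sketched here as a hedged illustration) is for the server to sign the state when it issues it and verify the signature on the next request; key handling is simplified:

```python
# Sketch: sign client-shipped state with an HMAC when issuing it, and
# verify the signature on the next request, so the server stays stateless
# about the conversation but can still detect tampering. The secret-key
# handling here is deliberately simplified.
import hashlib
import hmac
import json

SECRET = b"server-side-secret"  # illustrative; real keys need proper management

def issue_state(state):
    payload = json.dumps(state, sort_keys=True).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return {"state": state, "sig": sig}

def verify_state(token):
    payload = json.dumps(token["state"], sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["sig"])

token = issue_state({"completed": ["stateA"]})
print(verify_state(token))                    # True: untampered
token["state"]["completed"].append("stateB")  # client "makes up" progress
print(verify_state(token))                    # False: signature no longer matches
```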
On Thu, Apr 15, 2010 at 10:26, Aiman Ashraf <kurtrips@...> wrote:
> > If "visiting" stateA results in some change in the application state for
> > that client (results in computing some value, storing some data, etc.),
> > then it is reasonable for the server, upon receiving a request for StateB
> > to check for the proper application state caused by activating StateA
> > (does this make sense the way I'm stating it?).
>
> Yes, it does make sense.
> But how does the server check for the "proper application state caused by
> activating StateA" when it is not allowed to store anything?
>
> --kurtrips
Well, following Roy Fielding's dissertation, it is a constraint. Now as to whether it is a necessary constraint (meaning optional or required), the way I see it, it is required; but since responses can be implicitly labeled as non-cacheable, if you implicitly label *all* responses as non-cacheable then in practice that makes it an optional constraint...

5.1.4 Cache

In order to improve network efficiency, we add cache constraints to form the client-cache-stateless-server style of Section 3.4.4 <http://www.ics.uci.edu/%7Efielding/pubs/dissertation/net_arch_styles.htm#sec_3_4_4> (Figure 5-4 <http://www.ics.uci.edu/%7Efielding/pubs/dissertation/rest_arch_style.htm#fig_5_4>). Cache constraints require that the data within a response to a request be implicitly or explicitly labeled as cacheable or non-cacheable.

_________________________________________________
Melhores cumprimentos / Beir beannacht / Best regards
António Manuel dos Santos Mota
http://card.ly/amsmota
_________________________________________________

On 15 April 2010 14:45, Mike Kelly <mike@...> wrote:
> Hi Alan,
>
> Sorry, I hadn't intended to imply that caching was undesirable, but I
> might say something like this instead of identifying it as a constraint:
>
> "The constraints of REST induce cacheability, which is a highly
> beneficial system property. This can be used to offset the style's costs
> in efficiency."
>
> In a similar way you wouldn't call evolvability a constraint.
>
> Cheers,
> Mike
>
> Alan Dean wrote:
> > Mike,
> >
> > Within REST, Caching is both an optional and a highly desirable
> > constraint.
> >
> > It is optional in that it is easy to imagine a RESTful implementation
> > that never actually expects caching by User-Agent or any
> > intermediaries (this would be the same as implicitly labelling
> > everything as non-cacheable). Such an implementation would, however,
> > be inefficient to some degree which is why adding the constraint is
> > desirable.
> >
> > Regards,
> > Alan Dean
> >
> > On Thu, Apr 15, 2010 at 12:28, Mike Kelly <mike@...> wrote:
> > > Is caching a necessary constraint from an arch style pov?
> > >
> > > It seems like a consequence/benefit of the other constraints
> > > (cs/layered/stateless/uniform interface) rather than one in its
> > > own right
> > >
> > > Cheers,
> > > Mike
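In HTTP terms, the "implicitly or explicitly labeled as cacheable or non-cacheable" language usually comes down to setting `Cache-Control` on each response. A minimal sketch (the helper function and the one-hour max-age are illustrative choices, not anything from the thread):

```python
# Sketch: labeling responses cacheable or non-cacheable via Cache-Control.
# Explicitly sending "no-store" on every response is what "implicitly
# labelling everything as non-cacheable" amounts to in practice.

def cache_headers(cacheable, max_age=3600):
    if cacheable:
        return {"Cache-Control": f"public, max-age={max_age}"}
    return {"Cache-Control": "no-store"}

print(cache_headers(True))   # {'Cache-Control': 'public, max-age=3600'}
print(cache_headers(False))  # {'Cache-Control': 'no-store'}
```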
On 15 April 2010 15:26, Aiman Ashraf <kurtrips@...> wrote: > > Yes it does make sense. > But how does server check for "proper application state caused by > activating StateA", when it is not allowed to store anything? > > What the server is not allowed to maintain is "conversational state", not application state. Otherwise it won't be of much use... Also, check this thread where I asked a similar question http://tech.groups.yahoo.com/group/rest-discuss/message/15028 and the answers from Subbu and Roy http://tech.groups.yahoo.com/group/rest-discuss/message/15029 http://tech.groups.yahoo.com/group/rest-discuss/message/15032
> First, servers are allowed to store lots of things.

In the database? I assume that would work and is better than storing stuff
on the server (from a scalability point of view), but I'm not so sure that
it doesn't violate the 'stateless server' constraint.

> Second, if it turns out that a particular application state change
> (StateA) happens on the client (not the server)...

The problem is: what if the client just makes up a request saying "I want
StateB and here's the StateA included in my request" when the client
actually never visited the URL for StateA but just made that up? I hope I
am making at least some sense!!
--kurtrips
On Thu, Apr 15, 2010 at 9:35 AM, mike amundsen <mamund@...> wrote:
> <snip>
> But how does server check for "proper application state caused by
> activating StateA", when it is not allowed to store anything?
> </snip>
>
> First, servers are allowed to store lots of things. Second, if it turns out
> that a particular application state change (StateA) happens on the client
> (not the server) and the server must use information from that state change
> when evaluating another request (StateB), then the client must ship that
> state along with the StateB request.
>
> To use my previous example, if the user activates the "CheckOut" link and
> fails to send along the client's shopping cart with the request, the server
> can respond appropriately ("You failed to include your shopping cart with
> the checkout request", etc.).
>
> mca
> http://amundsen.com/blog/
>
>
>
> On Thu, Apr 15, 2010 at 10:26, Aiman Ashraf <kurtrips@...> wrote:
>
>> > If "visiting" stateA results in some change in the application state for
>> > that client (results in computing some value, storing some data, etc.),
>> > then it is reasonable for the server, upon receiving a request for StateB
>> > to check for the proper application state caused by activating StateA
>> > (does this make sense the way I'm stating it?).
>>
>> Yes it does make sense.
>> But how does server check for "proper application state caused by
>> activating StateA", when it is not allowed to store anything?
>>
>> --kurtrips
>>
>>
>> On Thu, Apr 15, 2010 at 9:20 AM, mike amundsen <mamund@...> wrote:
>>
>>> If "visiting" stateA results in some change in the application state for
>>> that client (results in computing some value, storing some data, etc.), then
>>> it is reasonable for the server, upon receiving a request for StateB to
>>> check for the proper application state caused by activating StateA (does
>>> this make sense the way I'm stating it?).
>>>
>>> For example, if a client activates the "CheckOut" link (sends the request
>>> to the server), the server can check the application state for that client
>>> and - if there are no items in the shopping cart - respond appropriately
>>> ("You didn't put anything into your cart yet", etc.).
>>>
>>> mca
>>> http://amundsen.com/blog/
>>>
>>>
>>>
>>> On Thu, Apr 15, 2010 at 10:13, Aiman Ashraf <kurtrips@...> wrote:
>>>
>>>>
>>>>
>>>> But the link of stateB is going to be a fixed link, right?
>>>> So how would this work if somehow user knew the link of StateB (say by
>>>> previous experience) and then tried to directly access link of StateB
>>>> without ever going to StateA?
>>>>
>>>> --kurtrips
>>>>
>>>>
>>>> On Wed, Apr 14, 2010 at 9:19 PM, Robert Brewer <fumanchu@...>wrote:
>>>>
>>>>> kurtrips wrote:
>>>>> > I bet this has been asked many time before, but here goes:
>>>>> >
>>>>> > I have an web application which is like a workflow. It goes from
>>>>> StateA
>>>>> > to StateB and so on.
>>>>> > I want to be sure that a user who is accessing the application at
>>>>> > StateB has completed StateA.
>>>>> > If I cannot use a sessionId stored at the server somewhere, how can I
>>>>> > do it in a REST compliant way?
>>>>>
>>>>> Return links to state B only in the responses from state A, not from
>>>>> other states.
>>>>>
>>>>>
>>>>> Robert Brewer
>>>>> fumanchu@...
>>>>>
>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>>
>>
>
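mca's checkout example above can be sketched in code. The following is a hypothetical illustration (the handler name, request format, and messages are invented, not from the thread): the server holds no session, so the client must ship its application state (the cart) along with the StateB request, and the server rejects the request when that state is missing.

```python
# Hypothetical sketch of the stateless checkout example: the server keeps
# no per-client session, so the client ships its application state (the
# shopping cart) inside the request itself. Names are illustrative only.

def handle_checkout(request_body):
    """Handle a StateB ("CheckOut") request without server-side sessions."""
    cart = request_body.get("cart")
    if not cart:
        # No application state shipped along: respond appropriately.
        return 400, "You failed to include your shopping cart with the checkout request"
    return 200, "Order placed for %d item(s)" % len(cart)
```

The same handler serves every client identically, since everything it needs arrives in the request.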
Aiman Ashraf wrote:
>
> >>First, servers are allowed to store lots of things.
> In the database? I assume that would work and is better than storing
> stuff on server (from a scalability point of view), but I'm not so
> sure if that doesn't violate the 'stateless server' constraint.

The stateless constraint refers to client-server interaction.

Cheers,
Mike
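Robert Brewer's suggestion earlier in the thread ("Return links to state B only in the responses from state A") can also be sketched as a hypermedia representation. This is a hypothetical sketch (function name, link relations, and URIs are invented): the transition to StateB is advertised only when the current application state permits it.

```python
# Hypothetical sketch of "return links to StateB only in responses from
# StateA": the checkout transition is offered only from a non-empty cart.
# Function name, link format, and URIs are illustrative, not from the thread.

def render_cart(cart_items):
    """Build a hypermedia representation of the cart resource."""
    links = [{"rel": "self", "href": "/cart"}]
    if cart_items:
        # Offer the StateB ("CheckOut") transition only from this state.
        links.append({"rel": "checkout", "href": "/checkout"})
    return {"items": cart_items, "links": links}
```

A client that follows links rather than constructing URIs by "previous experience" will then only reach checkout through a valid cart state.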
As for clients that might "lie" about their own application state, this
problem is not unique to the REST arch style. Any server handling client
requests will need to verify the validity of application state by any means
necessary. If you suspect clients will lie to your server, you can require
them to collect a unique, server-generated value along the way (e.g. at
StateA) and validate that token later on (e.g. at StateB).
mca
http://amundsen.com/blog/
On Thu, Apr 15, 2010 at 10:58, Aiman Ashraf <kurtrips@...> wrote:
> >>First, servers are allowed to store lots of things.
> In the database? I assume that would work and is better than storing stuff
> on server (from a scalability point of view), but I'm not so sure if that
> doesn't violate the 'stateless server' constraint.
>
> >>Second, if it turns out that a particular application state change
> (StateA) happens on the client (not the server)..
> The problem is what if client just makes up a request saying "I want StateB
> and here's the StateA included in my request" when client actually never
> visited URL for StateA but just made that up. I hope I am making at least
> some sense!!
>
> --kurtrips
>
>
> On Thu, Apr 15, 2010 at 9:35 AM, mike amundsen <mamund@...> wrote:
>
>> [...]
>
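The server-generated value mca describes can be sketched with a keyed hash, so the server can verify at StateB that the claimed StateA state is one it actually issued, without storing anything per client. This is a hypothetical sketch (the secret, function names, and state encoding are invented for illustration).

```python
# Hypothetical sketch: at StateA the server issues a token bound to the
# client's state; at StateB it recomputes and compares. A fabricated or
# tampered state fails verification. Secret and names are illustrative.
import hashlib
import hmac

SERVER_SECRET = b"example-secret"  # in practice, a private server-side key

def issue_token(client_state):
    """Issued with the StateA response; binds the server to this state."""
    return hmac.new(SERVER_SECRET, client_state.encode(), hashlib.sha256).hexdigest()

def verify_token(client_state, token):
    """Checked when the StateB request (carrying state + token) arrives."""
    return hmac.compare_digest(issue_token(client_state), token)
```

The server remains stateless: the token travels with the client, and only the secret stays on the server.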
António Mota wrote:
> Well, following Roy Fielding's dissertation, it is a constraint. Now if
> it is a necessary constraint (meaning optional or required), the way I
> see it is required, but since the responses can be implicitly labeled
> as non-cacheable, if you implicitly label *all* responses as
> non-cacheable then in practice that makes it an optional constraint...

Is caching not just a specific type of layering, and therefore accounted
for by the layered constraint?

This might explain why the cache constraint appears 'optional'.

Cheers,
Mike
I don't think so; I think "cache" is a constraint orthogonal to all layers
of the system (you can have cache on the client, on the intermediaries and
even on the server).

_________________________________________________

Melhores cumprimentos / Beir beannacht / Best regards

António Manuel dos Santos Mota

http://card.ly/amsmota
_________________________________________________

2010/4/15 Mike Kelly <mike@...>

> [...]
António Mota wrote:
> you can have cache on the client, on the intermediaries and even on
> the server

Yes - each of those caches would be a layer at different points in the
client-server interaction.

Cheers,
Mike
Well, I think yours is a point of view as valid as the other. Not that
that changes anything anyhow...

Nevertheless, and taking into account that *the* dissertation is the
starting point of all this, I wonder why Fielding explicitly separates
those two elements:

5.1.4 Cache: In order to improve network efficiency, we add cache
constraints (...)

5.1.6 Layered System: In order to further improve behavior for
Internet-scale requirements, we add layered system constraints (...)

_________________________________________________

Melhores cumprimentos / Beir beannacht / Best regards

António Manuel dos Santos Mota

http://card.ly/amsmota
_________________________________________________

2010/4/15 Mike Kelly <mike@...>

> [...]
2010/4/15 Mike Kelly <mike@...>:
> António Mota wrote:
>
>> [...]
>
> Is caching not just a specific type of layering, and therefore accounted
> for by the layered constraint?

It is a specific type of layering. Specifically introduced to induce
efficiency, scalability, and user-perceived performance. "Layering" can't
account for it because it isn't specific enough to evoke those properties
alone.

--tim
To my understanding, a constraint means something you need to consider
working with when you design on the basis of an architectural style. So it
translates to: try to make it cacheable. Different constraints might
"cross-cut" each other.

Cheers,
Dong

2010/4/15 António Mota <amsmota@...>

> [...]
Tim Williams wrote:
> 2010/4/15 Mike Kelly <mike@...>:
>
>> [...]
>
> It is a specific type of layering. Specifically introduced to induce
> efficiency, scalability, and user-perceived performance. "Layering"
> can't account for it because it isn't specific enough to evoke those
> properties alone.

Agreed - one requires the other constraints for this, i.e. uniform
interface, statelessness.

I don't see the requirement for a specific 'caching' constraint.

Cheers,
Mike
On Thu, Apr 15, 2010 at 1:18 PM, Mike Kelly <mike@...> wrote:
> Tim Williams wrote:
>
>> [...]
>
> Agreed - one requires the other constraints for this i.e. uniform
> interface, statelessness
>
> I don't see the requirement for a specific 'caching' constraint

Doesn't the dissertation answer that question with the reasoning below?
Oddly enough, it's needed, in part, to compensate for the negative effect
that the layered system constraint itself has on those same properties.
Or, maybe I don't understand your question?

Thanks,
--tim

"The advantage of adding cache constraints is that they have the potential
to partially or completely eliminate some interactions, improving
efficiency, scalability, and user-perceived performance by reducing the
average latency of a series of interactions."
On Thu, Apr 15, 2010 at 4:32 AM, Erlend Hamnaberg <ngarthl@...> wrote:
>
> This might be worth looking at:
>
> http://bit.ly/cKewo3

This leads to some password-protected site, so it's not really useful.

Regards,

Will Hartung
(willh@...)
Tim Williams wrote:
> On Thu, Apr 15, 2010 at 1:18 PM, Mike Kelly <mike@...> wrote:
>
>> [...]
>
> Doesn't the dissertation answer that question with the reasoning
> below? Oddly enough, it's needed, in part, to compensate for the
> negative effect that the layered system constraint itself has on those
> same properties. Or, maybe I don't understand your question?
> Thanks,
> --tim
>
> "The advantage of adding cache constraints is that they have the
> potential to partially or completely eliminate some interactions,
> improving efficiency, scalability, and user-perceived performance by
> reducing the average latency of a series of interactions."

That seems to answer why caching is a good idea, but it doesn't really
address why the addition of the constraint is necessary, given that the
other constraints appear to generate cacheability anyway.

Cheers,
Mike
Tim Williams wrote:
> On Thu, Apr 15, 2010 at 2:01 PM, Mike Kelly <mike@...> wrote:
>
>> [...]
>>
>> That seems to answer why caching is a good idea, but it doesn't really
>> address why the addition of the constraint is necessary; given that the
>> other constraints appear to generate cacheability anyway.
>
> The other constraints could facilitate cache but they don't
> necessarily control it to anyone's advantage. Beyond cacheability,
> you want consistency. To say, for example, that a resource is *not*
> cacheable or is cacheable for how long, right? The constraint is to
> label the resource's cacheability - to allow for control/consistency.
>
> --tim

As I understand it, that kind of control data would form part of a uniform
interface, and is therefore covered by that constraint.

Cheers,
Mike
On Thu, Apr 15, 2010 at 2:29 PM, Mike Kelly <mike@...> wrote:
>
>> [...]
>
> As I understand it, that kind of control data would form part of a
> uniform interface, and is therefore covered by that constraint.

It's true that it goes in the control data but there's nothing in the
uniform interface that says *what* control data is required. The cache
constraint exists to say that this specific cache labeling is required.
Maybe it'd help to suppose you *didn't* have a cache constraint - how
would you let origin servers have consistency/control of their data?
You'd likely have a caching wild west with wildly inconsistent
representations of resources, right?

--tim
Tim Williams wrote:
> On Thu, Apr 15, 2010 at 2:29 PM, Mike Kelly <mike@...> wrote:
>
>> [...]
>>
>> As I understand it, that kind of control data would form part of a
>> uniform interface, and is therefore covered by that constraint.
>
> It's true that it goes in the control data but there's nothing in the
> uniform interface that says *what* control data is required. The cache
> constraint exists to say that this specific cache labeling is required.
> Maybe it'd help to suppose you *didn't* have a cache constraint - how
> would you let origin servers have consistency/control of their data?
> You'd likely have a caching wild west with wildly inconsistent
> representations of resources, right?

Removing the cache constraint doesn't have any effect on my ability to
leverage the cacheability of the style or define a uniform interface with
caching mechanisms in it, so it would make no difference at all.

Cheers,
Mike
Tim Williams wrote:
> It's true that it goes in the control data but there's nothing in the
> uniform interface that says *what* control data is required. The cache
> constraint exists to say that this specific cache labeling is required.
> Maybe it'd help to suppose you *didn't* have a cache constraint - how
> would you let origin servers have consistency/control of their data?
> You'd likely have a caching wild west with wildly inconsistent
> representations of resources, right?

Exactly. The "caching constraint" is an architectural constraint, not an
operational one; that is, you're not required to cache, but you are
required to explicitly declare what is cacheable and what isn't. Part of
the uniform interface might do this for you; for example, the response to
a POST is not cacheable.

Robert Brewer
fumanchu@aminus.org
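Robert Brewer's point (you're not required to cache, only to explicitly declare cacheability) can be sketched as a small labeling function. This is a hypothetical illustration: the function name and the specific policy values (one-minute lifetime, etc.) are invented; only the principle of never leaving cacheability implicit comes from the thread.

```python
# Hypothetical sketch of "explicitly declare what is cacheable and what
# isn't": every response carries a cacheability label. Policy values here
# are invented for illustration.

def cache_label(method, status):
    """Pick a Cache-Control value for a response; never leave it implicit."""
    if method != "GET":
        return "no-store"            # e.g. responses to POST are not cached
    if status == 200:
        return "public, max-age=60"  # cacheable for one minute
    return "no-cache"                # reusable only after revalidation
```

Intermediary and client caches at every layer can then act consistently, because the origin server has labeled each response rather than leaving its cacheability to guesswork.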
On Thu, Apr 15, 2010 at 3:05 PM, Mike Kelly <mike@...> wrote:
>
>> [...]
>
> Removing the cache constraint doesn't have any effect on my ability to
> leverage the cacheability of the style or define a uniform interface
> with caching mechanisms in it, so it would make no difference at all.

This is all very confusing. It seems that the difference here is "can vs.
should". Sure, via the other constraints you *can* facilitate cache, but
not necessarily consistently across a networked system. But by adding it
as an explicit constraint we're going further and saying that you *should*
facilitate cache (because some reasoning leads us to believe that by doing
so you'll get a bunch of cool benefits).

In any case, once you "define a uniform interface with caching mechanisms
in it" - you have an implementation that adheres to the cache control
constraint anyway, right?

--tim
Tim Williams wrote:
> On Thu, Apr 15, 2010 at 3:05 PM, Mike Kelly <mike@...> wrote:
>
>> [...]
>
> This is all very confusing. It seems that the difference here is "can
> vs. should". Sure, via the other constraints you *can* facilitate
> cache, but not necessarily consistently across a networked system.

You're right, this is confusing - if it's defined as part of the uniform
interface then it will be consistent across the system.

> In any case, once you "define a uniform interface with caching
> mechanisms in it" - you have an implementation that adheres to the
> cache control constraint anyway, right?

Ok, but that capability doesn't exist *because* of a cache constraint -
which is why caching is equally possible even if you remove the constraint.

Cheers,
Mike
On Fri, Apr 16, 2010 at 3:49 AM, Mike Kelly <mike@...> wrote:
> Tim Williams wrote:
>> On Thu, Apr 15, 2010 at 3:05 PM, Mike Kelly <mike@mykanjo.co.uk> wrote:
>>> Tim Williams wrote:
>>>> It's true that it goes in the control data but there's nothing in
>>>> the uniform interface that says *what* control data is required. The
>>>> cache constraint exists to say that this specific cache labeling is
>>>> required. Maybe it'd help to suppose you *didn't* have a cache
>>>> constraint - how would you let origin servers have
>>>> consistency/control over their data? You'd likely have a caching
>>>> wild west with wildly inconsistent representations of resources,
>>>> right?
>>>
>>> Removing the cache constraint doesn't have any effect on my ability
>>> to leverage the cacheability of the style or define a uniform
>>> interface with caching mechanisms in it, so it would make no
>>> difference at all
>>
>> This is all very confusing. It seems that the difference here is "can
>> vs. should". Sure, via the other constraints you *can* facilitate
>> caching, but not necessarily consistently across a networked system.
>
> You're right, this is confusing - if it's defined as part of the
> uniform interface then it will be consistent across the system.
>
>> In any case, once you "define a uniform interface with caching
>> mechanisms in it" you have an implementation that adheres to the cache
>> control constraint anyway, right?
>
> Ok, but that capability doesn't exist *because* of a cache constraint -
> which is why caching is equally possible even if you remove the
> constraint.

I'll give it one more try and then hopefully someone more capable than I
can bail me out here :) It is true that caching is *possible* if you
remove the cache constraint, but when Roy did the architectural analysis
he apparently reasoned that cache was important in evoking the desired
properties - so he went beyond possible.
I suppose, as you seem to suggest, he could have added it as a sub-clause
of the uniform interface constraint, but he didn't. If we don't have a
cache constraint, then it would also be *possible* for systems that don't
implement cache labeling at all to claim to be RESTful, and they wouldn't
be getting all of the benefits of the style. So again, "possible to do it
right" isn't enough; the desire is to also remove the "possibility of
doing it wrong".

--tim
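[Editor's note: the "cache labeling" the thread keeps returning to is, in
HTTP, carried by response headers such as Cache-Control. The sketch below
is mine, not from the thread - a minimal illustration of how an
intermediary could honour an origin server's labeling, covering only a few
common directives.]

```python
# Minimal sketch of HTTP cache labeling: the origin server marks each
# response as reusable (or not) via Cache-Control, and an intermediary
# only caches responses that are explicitly labeled as cacheable.

def parse_cache_control(header: str) -> dict:
    """Parse a Cache-Control header value into a {directive: value} dict."""
    directives = {}
    for part in header.split(","):
        part = part.strip()
        if not part:
            continue
        name, _, value = part.partition("=")
        directives[name.lower()] = value or True
    return directives

def is_cacheable(headers: dict) -> bool:
    """True if the response labels itself as reusable by a shared cache.
    (Simplified: real HTTP caching has many more rules.)"""
    cc = parse_cache_control(headers.get("Cache-Control", ""))
    if "no-store" in cc or "private" in cc:
        return False
    return "max-age" in cc or "s-maxage" in cc or "public" in cc

print(is_cacheable({"Cache-Control": "public, max-age=3600"}))  # True
print(is_cacheable({"Cache-Control": "no-store"}))              # False
```

The point of the constraint, in this framing, is that *every* response
carries such a label, so intermediaries across the whole system can act on
it consistently.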
From the responses it seems that RESTful HTTP has its work cut out for it
to be adopted within the enterprise. This appears to be due to the
thinking of either "yes, we do REST" (when in fact it's tunnelled HTTP,
e.g. POX), or SOAP being in situ where, with the click of a button, one is
abstracted from the verbose, complex XML going on underneath...
RESTful systems are easier to integrate with (uniform interface), more
flexible/easier to extend (well-known MIME types) and have lower
maintenance/upgrade costs (server decoupled from client). RESTful HTTP's
distributed advantages are unquestioned (e.g. caching).
The question appears to be: how to sell REST within the enterprise? I
would be surprised if the reduced long-term costs highlighted above are
not of interest... Are the perceived costs of a migration from SOAP/POX to
RESTful HTTP resulting in a lack of adoption?
Sean.
PS Thanks for the replies..
________________________________
From: Eric Johnson <eric@tibco.com>
To: Sean Kennedy <seandkennedy@...>
Cc: Rest Discussion Group <rest-discuss@yahoogroups.com>
Sent: Wed, 14 April, 2010 19:15:36
Subject: Re: [rest-discuss] migration toward REST in the enterprise
As someone who works at creating software that enterprises run, I'd say
there isn't that much of a shift.
Many of my colleagues recognize the value of REST, but it generally is
an aspirational endpoint rather than an actual endpoint. There might
be a shift to recognize the aspiration, but certainly not the full
execution.
As an example of one of the difficulties, we're typically having machines
communicate with machines. Yes, in theory we could invest the time to
define a media type that embodies all the semantics we need to
capture. In practice, that is more work up front than if we follow the
"web services" path, define a WSDL interface, and generate the code we
need. Especially when we know it is almost a certainty that the client
and server in these cases will change together, the extra work of
figuring out a fully REST approach is actually unnecessary, and
consequently inappropriate.
Of course, in aspiring to REST, we do use tools like JAX-RS, which of
course don't magically transform our code to "REST", but they do start
us down the path of "REST". Whether we get there or not depends on the
use-cases and requirements for the software. Which I think is as it
should be.
-Eric.
On 04/14/2010 03:27 AM, Sean Kennedy wrote:
>
> Hi,
> I am a PhD student and I am trying to find out the level of shift in
> the enterprise-space from SOAP/POX WS to RESTful WS. I get a strong
> feeling that there is a substantial shift in this direction but I have
> no examples or figures that bear it out...
>
> Thanks,
> Sean.
>
On Apr 16, 2010, at 1:49 PM, Sean Kennedy wrote:

> The question appears to be: how to sell REST within the enterprise? I
> would be surprised if the reduced long-term costs highlighted above are
> not of interest.

Unfortunately, cost, especially long-term cost, is usually not the
determining aspect when it comes to enterprise IT decisions.

Jan

-----------------------------------
Jan Algermissen, Consultant
NORD Software Consulting

Mail: algermissen@...
Blog: http://www.nordsc.com/blog/
Work: http://www.nordsc.com/
-----------------------------------
So if SOAP or POX is in situ in an enterprise what approach/arguments are best to encourage a migration?
Sean.
________________________________
From: Jan Algermissen <algermissen1971@...>
To: Sean Kennedy <seandkennedy@...>
Cc: Eric Johnson <eric@...>; Rest Discussion Group <rest-discuss@yahoogroups.com>
Sent: Fri, 16 April, 2010 13:07:38
Subject: Re: [rest-discuss] migration toward REST in the enterprise
On Apr 16, 2010, at 1:49 PM, Sean Kennedy wrote:
> The question appears to be: how to sell REST within the enterprise? I would be surprised if the reduced long-term costs highlighted above are not of interest.
Unfortunately, cost, especially long-term cost, is usually not the determining aspect when it comes to enterprise IT decisions.
Jan
-----------------------------------
Jan Algermissen, Consultant
NORD Software Consulting
Mail: algermissen@...
Blog: http://www.nordsc.com/blog/
Work: http://www.nordsc.com/
-----------------------------------
On Apr 16, 2010, at 2:39 PM, Sean Kennedy wrote:

> So if SOAP or POX is in situ in an enterprise what approach/arguments
> are best to encourage a migration?

IMO:

- simplicity, simplicity, simplicity :-)
- protection of the investment in HTTP made by millions of Web sites and
  billions of users (HTTP is here to stay)
- HTTP has been around and tested for over a decade
- evolvability/decentralization (no need for service and client owners to
  communicate in order to evolve the system)

These are the ones I usually use - and in that order.

Jan

> Sean.
>
> From: Jan Algermissen <algermissen1971@...>
> To: Sean Kennedy <seandkennedy@...>
> Cc: Eric Johnson <eric@...>; Rest Discussion Group
> <rest-discuss@yahoogroups.com>
> Sent: Fri, 16 April, 2010 13:07:38
> Subject: Re: [rest-discuss] migration toward REST in the enterprise
>
> On Apr 16, 2010, at 1:49 PM, Sean Kennedy wrote:
>
>> The question appears to be: how to sell REST within the enterprise? I
>> would be surprised if the reduced long-term costs highlighted above
>> are not of interest.
>
> Unfortunately, cost, especially long-term cost, is usually not the
> determining aspect when it comes to enterprise IT decisions.
>
> Jan

-----------------------------------
Jan Algermissen, Consultant
NORD Software Consulting

Mail: algermissen@...
Blog: http://www.nordsc.com/blog/
Work: http://www.nordsc.com/
-----------------------------------
On Fri, Apr 16, 2010 at 7:39 AM, Sean Kennedy <seandkennedy@...> wrote:
> So if SOAP or POX is in situ in an enterprise what approach/arguments
> are best to encourage a migration?

Another way to look at this whole topic might be: what kinds of
enterprises would benefit from the advantages of REST? That is, does your
enterprise require that kind of scalability, evolvability, global
connectedness, serendipitous connectedness, etc.?

I don't think I am doing a good job of explaining this yet, but I do
think that if your enterprise does not require those traits, REST will
probably not be interesting. And if your enterprise does not require
those traits, it may also be obsolete without knowing so yet.
On Apr 16, 2010, at 3:13 PM, Bob Haugen wrote:
> On Fri, Apr 16, 2010 at 7:39 AM, Sean Kennedy <seandkennedy@...> wrote:
>> So if SOAP or POX is in situ in an enterprise what
>> approach/arguments are best to encourage a migration?
>
> Another way to look at this whole topic might be, what kinds of
> enterprises would benefit from the advantages of REST?

IMHO, enterprise integration is actually the same problem space as that
of the Web. The scale is different, but the complexity issues are pretty
much the same.

From a technology investment POV, HTTP will beat any vendor-specific
stack any time, especially with regard to protection of investment,
quality of products and developer availability.

Jan

> That is, if your enterprise requires that kind of scalability,
> evolvability, global connectedness, serendipitous connectedness, etc.
>
> I don't think I am doing a good job of explaining this yet, but I do
> think that if your enterprise does not require those traits, REST will
> probably not be interesting.
>
> And if your enterprise does not require those traits, it may also be
> obsolete without knowing so yet.

-----------------------------------
Jan Algermissen, Consultant
NORD Software Consulting

Mail: algermissen@...
Blog: http://www.nordsc.com/blog/
Work: http://www.nordsc.com/
-----------------------------------
> IMHO, enterprise integration is actually the same problem space as
> that of the Web.
>
> The scale is different, but the complexity issues are pretty much the
> same.
>
> From a technology investment POV, HTTP will beat any vendor-specific
> stack any time, especially with regard to protection of investment,
> quality of products and developer availability.
>
> Jan

Convincing enterprises that their problem is like that of the Web is the
biggest challenge to me - at least if you believe that it is.

Eb
On 16 April 2010 14:42, Jan Algermissen <algermissen1971@...> wrote:
> IMHO, enterprise integration is actually the same problem space as
> that of the Web.
>
> The scale is different, but the complexity issues are pretty much the
> same.

If you develop only within the boundaries of an intranet there are lots
of constraints you can relax (like the layered intermediaries and
HATEOAS) because you have much more control over the infrastructure. Of
course, if you decide later to open the infrastructure to the outside,
you'll then have to put in extra effort to constrain what you relaxed in
the first place...

On the other hand, in enterprise integration I don't see how one can
live without a multi-protocol infrastructure, so that has to be taken
into account from the architecture design phase - and that is a thing
that is not frequently addressed, from what I see.

But to the point: for me the main argument to introduce REST (even
considering the relaxed constraints) in an enterprise infrastructure is
clearly time-to-market. Time-to-market can be dramatically reduced using
a REST-like infrastructure compared to a "legacy" MVC-like one. And
time-to-market is a concept that management understands all too well...
In my experience, the RESTful arch model provides the following
real-world values to any size operation:

- low-cost tools (most of them free)
- ubiquity (even the expensive tools can do HTTP programming)
- low-cost scalability (scale out w/ commodity hardware/software)
- high degree of agility and speed (mods/fixes are, by design, isolated
  in a stateless arch model)
- applicability to more than the HTTP interface (REST principles for the
  back-end data layer are a *huge* scalability/reliability win)
- wide understanding (everyone 'knows HTTP' even if that knowledge is
  not deep)

I've found the biggest barriers to adopting REST principles in
development are human and social, not financial or technical. And these
human factors are more prevalent/powerful in large organizations. IMO,
that's the biggest reason this arch style is harder to deploy in
"enterprise" communities. It's basically the "herding cats" problem [1].

[1] http://en.wikipedia.org/wiki/Herding_cats

mca
http://amundsen.com/blog/

On Fri, Apr 16, 2010 at 10:35, Eb <amaeze@...> wrote:
>> IMHO, enterprise integration is actually the same problem space as
>> that of the Web.
>>
>> The scale is different, but the complexity issues are pretty much the
>> same.
>>
>> From a technology investment POV, HTTP will beat any vendor-specific
>> stack any time, especially with regard to protection of investment,
>> quality of products and developer availability.
>>
>> Jan
>
> Convincing enterprises that their problem is like that of the Web is
> the biggest challenge to me - at least if you believe that it is.
>
> Eb
I've got a couple of cases where important links that need to be machine
readable don't necessarily have a relationship that is easily qualified
relative to the "current" resource - orthogonal links. Since my
representations of choice (e.g. XHTML) have a 'class' attribute on a
link, I'm thinking of using that and having no [artificial] link
relation. The class attribute is intended to describe the "nature of the
content", which seems to fit nicely in my scenario, but since class
attributes are typically used to drive CSS I feel a bit wonky doing so
and wanted to get some validation. So, thoughts?

--tim
On Apr 16, 2010, at 6:47 PM, Tim Williams wrote:
> I've got a couple of cases where important links that need to be
> machine readable don't necessarily have a relationship that is easily
> qualified relative to the "current" resource - orthogonal links. Since
> my representations of choice (e.g. XHTML) have a 'class' attribute on
> a link, I'm thinking of using that and having no [artificial] link
> relation.

The interpretation of the link would still be that the requested
resource is the link source resource. Why not embed some RDF?

Jan

> The class attribute is intended to describe the "nature of the
> content", which seems to fit nicely in my scenario, but since class
> attributes are typically used to drive CSS I feel a bit wonky doing so
> and wanted to get some validation. So, thoughts?
>
> --tim

-----------------------------------
Jan Algermissen, Consultant
NORD Software Consulting

Mail: algermissen@...
Blog: http://www.nordsc.com/blog/
Work: http://www.nordsc.com/
-----------------------------------
I've found that doing Web-like things in the enterprise is met with
bafflement and skepticism. However, both of those can be addressed with
data.

My favourite case study used HTTP (Web-y, not RESTful, because there was
no hypermedia) for building scalable compute and storage capabilities.
My back-of-the-envelope calculations on cost/benefit are here:

http://jim.webber.name/2009/10/30/617410fc-7ec9-489f-a937-f50cf090bf48.aspx

The conclusion is that a Web-inspired solution cost approximately 1/20th
of what a vendor-inspired solution would (using informal data from an
Oracle VP). As Mike pointed out in his previous post, it also allowed us
to develop in a very rapid, agile, iterative way, where we could do
things like continuously performance-test our code as we went, because
the Web made it so isolated and easy to do so.

In general now my approach is to "show my working out" by going to
stakeholders with data rather than emotions or opinions (I love the Web,
but that's inadmissible). If I can show representative data that a
Web-inspired solution will work within the expectations of the business
problem, then there's less trouble delivering a solution with usually
free middleware.

Jim
Tim:

I've been working on using XHTML for data representations lately and
came upon the same choices. I decided to stick to using only the REL
attribute for semantic annotations. Additionally, I decided to take
advantage of the feature that supports multiple values for REL when
separated by a space [1]. Doing this means I can "train" clients to
sniff only for REL attributes when performing tasks.

I still leverage the CLASS attribute for MicroFormat-like constructions.
This means some elements sport both CLASS and REL values that contain
similar values.

In a related item, I am using the PROFILE attribute [2] (for HEAD or
META tags) to point to my documentation for the semantic meaning of my
custom REL values. This comes close to providing a custom media type for
XHTML clients and has (so far) been an acceptable compromise to allow
common browsers to understand the custom semantics of the
representation.

[1] http://www.w3.org/TR/html401/types.html#type-links
[2] http://www.w3.org/TR/html401/struct/global.html#h-7.4.4.3

mca
http://amundsen.com/blog/

On Fri, Apr 16, 2010 at 12:47, Tim Williams <williamstw@...> wrote:
> I've got a couple of cases where important links that need to be
> machine readable don't necessarily have a relationship that is easily
> qualified relative to the "current" resource - orthogonal links. Since
> my representations of choice (e.g. XHTML) have a 'class' attribute on
> a link, I'm thinking of using that and having no [artificial] link
> relation. The class attribute is intended to describe the "nature of
> the content", which seems to fit nicely in my scenario, but since
> class attributes are typically used to drive CSS I feel a bit wonky
> doing so and wanted to get some validation. So, thoughts?
>
> --tim
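[Editor's note: a rough sketch (mine, not from the thread) of the "train
clients to sniff only for REL attributes" idea: collect links whose rel
attribute contains a given token, honouring HTML's space-separated
multi-value rule that Mike cites in [1]. The document and rel values here
are made up for illustration.]

```python
# Sniff <a>/<link> elements whose rel attribute contains a wanted token.
from html.parser import HTMLParser

class RelSniffer(HTMLParser):
    def __init__(self, wanted_rel):
        super().__init__()
        self.wanted_rel = wanted_rel
        self.matches = []  # hrefs whose rel list contains the wanted token

    def handle_starttag(self, tag, attrs):
        if tag not in ("a", "link"):
            return
        a = dict(attrs)
        rels = (a.get("rel") or "").split()  # rel is a space-separated list
        if self.wanted_rel in rels and "href" in a:
            self.matches.append(a["href"])

doc = """<html><body>
<a href="/orders" rel="orders index">Orders</a>
<a href="/about" rel="help">About</a>
</body></html>"""

s = RelSniffer("orders")
s.feed(doc)
print(s.matches)  # ['/orders']
```

A client "trained" this way ignores CLASS entirely and keys only off the
REL vocabulary documented in the profile.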
On Fri, Apr 16, 2010 at 12:57 PM, Jan Algermissen
<algermissen1971@...> wrote:
> On Apr 16, 2010, at 6:47 PM, Tim Williams wrote:
>> I've got a couple of cases where important links that need to be
>> machine readable don't necessarily have a relationship that is easily
>> qualified relative to the "current" resource - orthogonal links.
>> Since my representations of choice (e.g. XHTML) have a 'class'
>> attribute on a link, I'm thinking of using that and having no
>> [artificial] link relation.
>
> The interpretation of the link would still be that the requested
> resource is the link source resource.

But in this case, clients wouldn't be expected to be selecting these
links by link relation (e.g. next, prev), but by the content itself. I
reckon it's analogous to a newspaper. If you look at an online
newspaper's home page, you may see links to, e.g., jobs, real estate,
etc. If machine clients were parsing it to index jobs, they'd be
selecting a jobs link based on the nature of the content rather than the
nature of the relationship between the homepage and that page - there
isn't a particularly good way to qualify the link relation, but a
class="jobs" could nicely qualify the content itself.

> Why not embed some RDF?

'cause it gives me the willies :)

--tim
Hi Sean,

On 04/16/2010 04:49 AM, Sean Kennedy wrote:
> From the responses it seems that RESTful HTTP has its work cut out to
> be adopted within the enterprise. This appears to be due to the
> thinking of either "yes, we do REST" (when in fact it's tunnelled
> HTTP, e.g. POX) or SOAP is in situ and with the click of a button, one
> is abstracted from the verbose complex XML that is going on
> underneath...

While I definitely want to see what my company does involve more
"RESTful" efforts, I think the above has two mis-characterizations:

1) Our developers are aware that they are not doing "REST" - they know
they're using SOAP, or JMX, to issue remote requests. I think the
confusion will start when developers build an app using Ruby on Rails,
or JAX-RS, and think they've made it RESTful. If they don't bother to
think about the details of the media types they're creating, or whether
they've coupled the clients to specific URIs, then it isn't RESTful.
Sure, it is closer, which is great, but it still isn't there.

2) The statement "complex XML that is going on underneath" makes no
sense to me. If I have to define my own media type - as I anticipate the
applications I will work on will need to do - then I have to spend just
as much time defining that media type as I would on a schema definition
to be used with SOAP messages. If my application needs end-to-end
encryption, whether I'm doing SOAP or REST, I may still need to use XML
Security, with all of its complexity. So the alleged complexity strikes
me as being about the problem being solved, not about the choice of
framework. Plain old SOAP over HTTPS (without WS-Addressing, WS-RM,
WS-Policy) is pretty darn straightforward, and secure.

> RESTful systems are easier to integrate with (Uniform Interface),

If a development team doesn't have the time to figure out how to do a
well-defined WSDL interface (known verbs, known data), why would you
think that getting the "REST" approach right is any more likely?
Sure, the interface is well known, but you have to think a lot more
about other problems (media-type definitions, steady state,
unanticipated uses). If you do have a useful interface, the tooling
around SOAP is much better than it used to be; so yes, it may be easier
to integrate RESTful systems with a browser, but not all clients are
browsers, and it may be just as easy to integrate a SOAP-based system
with a SOAP client.

> more flexible/easier to extend (well known MIME types)

Except that if I cannot use those well-known MIME types, or I have to
create a profile for them, then it is just shifting work from one place
to another. Reuse is always a good idea, so it seems like "well-known
MIME types" is just an argument based on reuse. That's fine, but it is
worth recognizing it for what it is, and that reuse isn't always
possible.

> and have lower maintenance/upgrade costs (server decoupled from
> client).

Decoupling software is a very hard problem. With a RESTful approach, you
still need to test and work with old clients. With SOAP, I need to test
and work with old clients - if all I do is extend the WSDL in the SOAP
world, why is that any more difficult? Also, there are other ways to
decouple - such as middle-tier messaging systems: JMS, AMQP, Jabber.

Maybe I'm mis-casting this, but I could theoretically build RESTful
(non-human) clients by "screen-scraping" HTML, which of course would
require frequent intervention by a human to update the client to reflect
the new state of the server. Until the servers offer up the data in a
predictable way (a custom media type, for example), the servers haven't
delivered on the promise of lower maintenance/upgrade costs.

> RESTful HTTP's distributed advantages are unquestioned (e.g. caching).

Well, except when they are questioned. If the key portion of my
application is processing data that is constantly changing, then what
advantage do I possibly get from caching?
> The question appears to be: how to sell REST within the enterprise?

That framing strikes me as part of the problem. I don't need to "sell"
developers on using REST - I need to get them to understand it, when it
makes sense to use, and when it makes sense to do something else, such
as one of those alternate messaging systems.

The people that need to be "sold" on REST are the IT purchasing
managers, and their frame is typically straightforward: how much does it
cost, and how well does it satisfy my requirements? REST is almost
completely orthogonal to that discussion, because they don't typically
care about serendipity to create new applications that they didn't
anticipate. That's a hidden opportunity to which they cannot assign
value. This is sort of the inverse of the discussion about security
(hidden costs that are difficult to assign value).

To that end, the "almost REST" approaches are actually much more useful
to this discussion - if I can get a developer to use something like Ruby
on Rails, or JAX-RS, or to notice that their application is entirely a
"CRUD" application that can fit into three or four verbs (GET, PUT,
POST, DELETE), then they can start down the path to using "full" REST. I
typically don't actually care if they ever get all the way to a REST
architecture, because within the enterprise the systems simply aren't
"internet scale"; there aren't millions of clients - maybe a few
hundred, or at most a few thousand - and the things that make REST a
requirement on the web are simply not requirements within the
enterprise.

> I would be surprised if the reduced long-term costs highlighted above
> are not of interest... Are the perceived costs of a migration from
> SOAP/POX to RESTful HTTP resulting in a lack of adoption?

Well, going back to the notion of decoupling clients and servers - I
can't simply "migrate" - I have to continue supporting old clients. So
REST becomes an additional interface, rather than the alternate
interface.
Typically it will take years to phase out the old APIs, and in the
meantime you've only added costs, added complexity, and split the client
base instead of unifying it. Eventually it will all be better, but in
the meantime it isn't simple.

I'm throwing out such an extensive reply because I think this is a
really interesting and important problem. Yet if you tackle REST as an
idea that just needs to be "sold", then your chances of success are low.
REST is useful, but from what I see it is easy to underestimate the
utility of the other approaches already in use. In the enterprise, REST
is not the only valid and useful architectural style. Rather, it is but
one of many, so people need to know more about it, know when to prefer
it over some other style, know when the clear technical benefits apply,
and have a clear picture of how to migrate from where they are to where
they might be.

Hope that helps.

-Eric.

> Sean.
>
> PS Thanks for the replies..
>
> ------------------------------------------------------------------------
> *From:* Eric Johnson <eric@...>
> *To:* Sean Kennedy <seandkennedy@...>
> *Cc:* Rest Discussion Group <rest-discuss@yahoogroups.com>
> *Sent:* Wed, 14 April, 2010 19:15:36
> *Subject:* Re: [rest-discuss] migration toward REST in the enterprise
>
> As someone who works at creating software that enterprises run, I'd
> say there isn't that much of a shift.
>
> Many of my colleagues recognize the value of REST, but it generally is
> an aspirational endpoint rather than an actual endpoint. There might
> be a shift to recognize the aspiration, but certainly not the full
> execution.
>
> As an example of one of the difficulties, we're typically having
> machines communicate with machines. Yes, in theory we could invest the
> time to define a media type that embodies all the semantics we need to
> capture. In practice, that is more work up front than if we follow the
> "web services" path, define a WSDL interface, and generate the code we
> need.
> Especially when we know it is almost a certainty that the client and
> server in these cases will change together, the extra work of figuring
> out a fully REST approach is actually unnecessary, and consequently
> inappropriate.
>
> Of course, in aspiring to REST, we do use tools like JAX-RS, which of
> course don't magically transform our code to "REST", but they do start
> us down the path of "REST". Whether we get there or not depends on the
> use-cases and requirements for the software. Which I think is as it
> should be.
>
> -Eric.
>
> On 04/14/2010 03:27 AM, Sean Kennedy wrote:
>> Hi,
>> I am a PhD student and I am trying to find out the level of shift in
>> the enterprise-space from SOAP/POX WS to RESTful WS. I get a strong
>> feeling that there is a substantial shift in this direction but I
>> have no examples or figures that bear it out...
>>
>> Thanks,
>> Sean.
On Fri, Apr 16, 2010 at 1:23 PM, mike amundsen <mamund@...> wrote:
> Tim:
>
> I've been working on using XHTML for data representations lately and
> came upon the same choices. I decided to stick to using only the REL
> attribute for semantic annotations.

Can you share more of your rationale? Mine is:

o) Use rel= when clients will be selecting a link based on relationship
semantics.
o) Use class= when clients will be selecting a link based on the nature
of the content itself.

I've gone this route (so far) to stay pure to my interpretation of the
intent of those attributes. I also figure that if any of our developers
actually read the spec they won't be confused that way. I'm mainly
interested in whether you see something "wrong" with this approach :)

> In a related item, I am using the PROFILE attribute [2] (for HEAD or
> META tags) to point to my documentation for the semantic meaning of my
> custom REL values. This comes close to providing a custom media type
> for XHTML clients and has (so far) been an acceptable compromise to
> allow common browsers to understand the custom semantics of the
> representation.

Thanks for the profile link - that's HUGE! I've secretly felt
uncomfortable about these "magic" strings - having them tied to a
"profile" helps. I've got an internal link relation registry so that
they can be "universal and independent of media type", but nothing for
the class values - I'll now be using the profile for that too :)

--tim
<snip>
> o) Use rel= when clients will be selecting a link based on
> relationship semantics.
> o) Use class= when clients will be selecting a link based on the
> nature of the content itself.
</snip>

It sounds like you're using the CLASS attribute as a way of "tagging"
content for others to use. It's an interesting approach. I assume you
are decorating DIV, P, SPAN with these CLASS attributes. Will this kind
of annotation confuse typical CSS rendering details?

In my work to this point, I've been focusing only on adding semantic
values to links to allow automated agents (who understand the link
semantics ahead of time) to make selections and follow their desired
path to some goal. I've not been operating on this difference in
selection. It's one I need to mull a bit.

<snip>
> Thanks for the profile link - that's HUGE! I've secretly felt
> uncomfortable about these "magic" strings - having them tied to a
> "profile". I've got an internal link relation registry so that they
> can be "universal and independent of media type" but nothing for the
> class values - I'll now be using the profile for that too :)
</snip>

Here's some additional info on the XHTML Meta Data Profiles (XMDP) from
Tantek Çelik. It's old (I think), but I still use this model for my
Profile documents.

http://gmpg.org/xmdp/

mca
http://amundsen.com/blog/

On Fri, Apr 16, 2010 at 13:44, Tim Williams <williamstw@...> wrote:
> On Fri, Apr 16, 2010 at 1:23 PM, mike amundsen <mamund@...> wrote:
>> Tim:
>>
>> I've been working on using XHTML for data representations lately and
>> came upon the same choices. I decided to stick to using only the REL
>> attribute for semantic annotations.
>
> Can you share more of your rationale? Mine is:
>
> o) Use rel= when clients will be selecting a link based on
> relationship semantics.
> o) Use class= when clients will be selecting a link based on the
> nature of the content itself.
> > I've gone this route (so far) to keep pure with my interpretation of > the intent of those attributes. I also figure if any of our > developers actually read the spec they won't be confused that way. > I'm mainly interested if you see something "wrong" with this > approach:) > >> In a related item, I am using the PROFILE attribute[2] (for HEAD or >> META tags) to point to my documentation for the semantic meaning of my >> custom REL values. This comes close to providing a custom media-type >> for XHTML clients and has (so far) been an acceptable compromise to >> allow common browsers to understand the custom semantics of the >> representation. > > Thanks for the profile link - that's HUGE! I've secretly felt > uncomfortable about these "magic" strings - having them tied to a > "profile". I've got an internal link relation registry so that they > can be "universal and independent of media type" but nothing for the > class values - I'll now be using the profile for that too:) > > --tim >
On Fri, Apr 16, 2010 at 2:07 PM, mike amundsen <mamund@...> wrote: > <snip> >> o) Use rel= when clients will be selecting a link based on >> relationship semantics. >> o) Use class= when clients will be selecting a link based on the >> nature of the content itself. > </snip> > > It sounds like you're using the CLASS attribute as a way of "tagging" > content for others to use. It's an interesting approach. I assume you > are decorating DIV, P, SPAN with these CLASS attributes. Will this > kind of annotation confuse typical CSS rendering details? I'm actually adding the class attribute directly on the link itself (e.g. <a href="..." class="jobs">Job Listing</a>). It doesn't affect CSS rendering at all. I can either take advantage of that class name in CSS, or add multiple classes (e.g. <a href="..." class="jobs important">Job Listing</a>). > Here's some additional info on the XHTML Meta Data Profiles (XMDP) > from Tantek Çelik. It's old (I think), but I still use this model for > my Profile documents. > http://gmpg.org/xmdp/ Perfect, thanks... --tim
<snip> > I'm actually adding the class attribute directly on the link itself > (e.g. <a href="..." class="jobs">Job Listing</a>). It doesn't affect > CSS rendering at all. I can either take advantage of that class name > in CSS, or add multiple classes (e.g. <a href="..." class="jobs > important">Job Listing</a>). </snip> I see. So you're decorating links in both cases, but using REL for imparting the meaning of the *link*, and CLASS for imparting the meaning of that *data* at the other end of the link arc. Is that really necessary, I mean to ask? Do you have several examples where these two bits of information (meaning of the link, meaning of the data at the other end) are particularly divergent? mca http://amundsen.com/blog/ On Fri, Apr 16, 2010 at 14:36, Tim Williams <williamstw@gmail.com> wrote: > On Fri, Apr 16, 2010 at 2:07 PM, mike amundsen <mamund@yahoo.com> wrote: >> <snip> >>> o) Use rel= when clients will be selecting a link based on >>> relationship semantics. >>> o) Use class= when clients will be selecting a link based on the >>> nature of the content itself. >> </snip> >> >> It sounds like you're using the CLASS attribute as a way of "tagging" >> content for others to use. It's an interesting approach. I assume you >> are decorating DIV, P, SPAN with these CLASS attributes. Will this >> kind of annotation confuse typical CSS rendering details? > > I'm actually adding the class attribute directly on the link itself > (e.g. <a href="..." class="jobs">Job Listing</a>). It doesn't affect > CSS rendering at all. I can either take advantage of that class name > in CSS, or add multiple classes (e.g. <a href="..." class="jobs > important">Job Listing</a>). > > >> Here's some additional info on the XHTML Meta Data Profiles (XMDP) >> from Tantek Çelik. It's old (I think), but I still use this model for >> my Profile documents. >> http://gmpg.org/xmdp/ > > Perfect, thanks... > --tim >
On Fri, Apr 16, 2010 at 2:46 PM, mike amundsen <mamund@...> wrote: > <snip> >> I'm actually adding the class attribute directly on the link itself >> (e.g. <a href="..." class="jobs">Job Listing</a>). It doesn't affect >> CSS rendering at all. I can either take advantage of that class name >> in CSS, or add multiple classes (e.g. <a href="..." class="jobs >> important">Job Listing</a>). > </snip> > > I see. So you're decorating links in both cases, but using REL for > imparting the meaning of the *link*, and CLASS for imparting the > meaning of that *data* at the other end of the link arc. > > Is that really necessary? I mean to ask. Can do you have several > examples where these two bits of information (meaning of the link, > meaning of the data at the other end) are particularly divergent? I reckon they aren't divergent, just different. I have links to resources whose relation to the current resource is somewhat arbitrary so it seems weird to have a REL qualification. It would naturally be selected because of the nature of the data itself. So, for example, where REL fits nicely: The first page of some search results with the following links: <link rel="self" href="..."/> <link rel="next" href="..."/> <link rel="first" href="..."/> <link rel="last" href="..."/> And now (a contrived example) of where REL seems awkward: The homepage of the NYTimes (as a neutral example of the dilemma): <a href="..." rel="jobs">Jobs</a> <a href="..." rel="real_estate">Real Estate</a> <a href="..." rel="classified">Classifieds</a> In this case, those REL values don't describe the *relationship* between the current and target resource at all. And, being somewhat arbitrary, I can't think of a good way to provide a meaningful qualification of that relationship. So, instead, I choose to qualify it based on the data itself making it: <a href="..." class="jobs">Jobs</a> <a href="..." class="real_estate">Real Estate</a> <a href="..." class="classified">Classifieds</a> Make sense? --tim
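[Editorial sketch] The two selection rules Tim and Mike settle on — select by rel= for relationship semantics, by class= for the nature of the target data — can be illustrated with a small stdlib-only client helper. The markup, helper names, and URIs below are invented for illustration:

```python
# Minimal sketch: collect <a> links and select them either by link
# relation (rel=) or by the "nature of the data" tag (class=).
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []  # list of (href, rels, classes)

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        a = dict(attrs)
        self.links.append((a.get("href"),
                           set((a.get("rel") or "").split()),
                           set((a.get("class") or "").split())))

def select(html, rel=None, cls=None):
    p = LinkCollector()
    p.feed(html)
    return [href for href, rels, classes in p.links
            if (rel is None or rel in rels)
            and (cls is None or cls in classes)]

page = '<a href="/p2" rel="next">More</a> <a href="/jobs" class="jobs">Jobs</a>'
select(page, rel="next")   # -> ['/p2']   (relationship semantics)
select(page, cls="jobs")   # -> ['/jobs'] (nature of the target data)
```

An agent that understands the semantics ahead of time can thus follow links by either axis without hard-coding URIs.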
Consider the situation where a client PUTs a new representation of some business object, for instance an update to an existing customer. On the server side the software detects that some mandatory fields are not completed. How should this information be returned to the client? I want to be able to say more than just "400 Fields X, Y and Z need to be completed" - the error information should be encoded in a structured way. My first shot is to define an error media-type like application/vnd.error+xml in which I can embed all sorts of error information. Then the HTTP response could be: 400 Fields X, Y and Z need to be completed Content-Type: application/vnd.error+xml <errors> <error>...</error> </errors> Is this on the right track? Are there any existing media types for this? Thanks, Jørn
Yes, it's a good idea to return error details in the body of the
response. You can use a custom media type just for errors (assuming
your client can handle that) or you can include an "error element" in
the media type generally used in your response (<div class="error" />,
<error />, {"error": "..."}, etc.). I do not know of any "error
media-type" but I usually return a short message, a longer
description, some type of internal identifier, and possibly a link to
some documentation or other reference that would help the client sort
things out.
I'll also say that, in his book "RESTful Web Services Cookbook", Subbu
Allamaraju has a nice section (3.13 & 3.14) on returning errors from
servers and accepting them on the client side, including typical items
to include in the body.
mca
http://amundsen.com/blog/
On Sat, Apr 17, 2010 at 15:22, Jørn Wildt <jw@...> wrote:
> Consider the situation where a client PUTs a new representation of some
> business object, for instance an update to an existing customer. On the
> server side the software detects that some mandatory fields are not
> completed. How should this information be returned to the client?
>
> I want to be able to say more than just "400 Fields X, Y and Z need to be
> completed" - the error information should be encoded in a structured way.
>
> My first shot is to define an error media-type like
> application/vnd.error+xml in which I can embed all sorts of error
> information. Then the HTTP response could be:
>
> 400 Fields X, Y and Z need to be completed
> Content-Type: application/vnd.error+xml
>
> <errors>
> <error>...</error>
> </errors>
>
>
> Is this on the right track? Are there any existing media types for this?
>
> Thanks, Jørn
>
>
>
> ------------------------------------
>
> Yahoo! Groups Links
>
>
>
>
On Apr 17, 2010, at 9:22 PM, Jørn Wildt wrote: > Consider the situation where a client PUTs a new representation of some > business object, for instance an update to an existing customer. On the > server side the software detects that some mandatory fields are not > completed. How should this information be returned to the client? > > I want to be able to say more than just "400 Fields X, Y and Z need to be > completed" - the error information should be encoded in a structured way. Beware that you cannot redefine the existing HTTP status codes. 400 means Bad Request and not something application specific. What you should do is choose the most appropriate HTTP error code and, as you suggest, include the details in the body. In your example, that would be 422 Unprocessable Entity (== syntactically correct, but semantic errors) Jan > > My first shot is to define an error media-type like > application/vnd.error+xml in which I can embed all sorts of error > information. Then the HTTP response could be: > > 400 Fields X, Y and Z need to be completed > Content-Type: application/vnd.error+xml > > <errors> > <error>...</error> > </errors> > > > Is this on the right track? Are there any existing media types for this? > > Thanks, Jørn ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
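[Editorial sketch] Jan's advice — keep the status code generic (422 Unprocessable Entity, defined by WebDAV, fits validation failures) and put the application-specific detail in a structured body — might look like this. The element names and the application/vnd.error+xml shape are Jørn's hypothetical media type, not an existing standard:

```python
# Sketch: build a 422 response whose body is a structured error document.
# The status stays a plain HTTP code; the specifics live in the entity.
import xml.etree.ElementTree as ET

def validation_error_response(missing_fields):
    errors = ET.Element("errors")
    for f in missing_fields:
        e = ET.SubElement(errors, "error")
        ET.SubElement(e, "code").text = "missing-field"   # machine-readable id
        ET.SubElement(e, "field").text = f
        ET.SubElement(e, "message").text = f"Field {f} must be completed"
    body = ET.tostring(errors, encoding="unicode")
    headers = {"Content-Type": "application/vnd.error+xml"}
    return 422, headers, body   # 422, not a redefined 400

status, headers, body = validation_error_response(["X", "Y"])
```

Per Mike's checklist above, each error entry could also carry a longer description and a link to documentation.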
A POST operation may yield a result with no body at all and a return code of 204 No Content. In that case, what is the response Content-Type header? Can it simply be left out? Thanks, Jørn
On 23.04.10 08:51, Jørn Wildt wrote: > A POST operation may yield a result with no body at all and a return > code of > 204 No Content. In that case, what is the response content type header? Can > it simply be left out? Yes, you can leave it out. If there is no content, then there is no type of content. -billy.
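[Editorial sketch] Billy's answer is easy to check with a throwaway stdlib server: a 204 carries no entity, so the handler simply never emits a Content-Type. The path and payload here are made up:

```python
# Sketch: answer POST with 204 No Content and no Content-Type header.
from http.server import BaseHTTPRequestHandler, HTTPServer
import threading, http.client

class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        # consume the request body, then reply with no entity at all
        self.rfile.read(int(self.headers.get("Content-Length", 0)))
        self.send_response(204)   # no body, hence no Content-Type
        self.end_headers()
    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("POST", "/things", body=b"data")
resp = conn.getresponse()
resp.status                     # -> 204
resp.getheader("Content-Type")  # -> None (header simply absent)
server.shutdown()
```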
If I want to add a user to a system, should I POST to <root>/users or should I POST to <root>/users/<userId>? To me, it seems that it should be the latter, but a guidelines document that I am reading suggests that it should be the former. What I am trying to do is add the user's XML to <root>/users/<userId>. What the guidelines suggest is POST <root>/users Content-type: text/uri-list <root>/someUriForUser Note that I already have the new userId before making the call.
Posting to <root>/users/<userId> assumes, as you note, that you already have a userId. If you do that, your client is assuming how the server operates, rather than letting the server decide for itself. Post to <root>/users, and let the response tell you what was assigned. -Eric. On 04/27/2010 11:19 AM, kurtrips wrote: > > > If I want add a user to a system, should I POST to <root>/users or > should I POST to <root>/users/<userId> > > To me, it seems that it should be the latter, but a guidelines > document that I am reading suggests that it should be the former. > > What I am trying to do is add user's xml to <root>/users/<userId> > > What the guidelines suggests is > POST <root>/users > Content-type: text/uri-list > <root>/someUriForUser > > Note that I already have the new userId before making the call. > >
I would choose PUT <root>/users/<userId> to create the user. Or POST <root>/users returns the user id. Cheers, Dong On Tue, Apr 27, 2010 at 12:37 PM, Eric Johnson <eric@...> wrote: > > > Posting to <root>/users/<userId> assumes, as you note, that you already > have a userId. If you do that, your client is assuming how the server > operates, rather than letting the server decide for itself. > > Post to <root>/users, and let the response tell you what was assigned. > > -Eric. > > > On 04/27/2010 11:19 AM, kurtrips wrote: > > > > If I want add a user to a system, should I POST to <root>/users or should I > POST to <root>/users/<userId> > > To me, it seems that it should be the latter, but a guidelines document > that I am reading suggests that it should be the former. > > What I am trying to do is add user's xml to <root>/users/<userId> > > What the guidelines suggests is > POST <root>/users > Content-type: text/uri-list > <root>/someUriForUser > > Note that I already have the new userId before making the call. > > >
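[Editorial sketch] The split in this thread comes down to who mints the identifier: if the server assigns ids, POST to the collection and read the new URI back from the Location header; if the client legitimately owns the id, PUT to the full URI (which is also idempotent, so retries are safe). A toy helper, with invented names and paths:

```python
# Sketch: choose the creation style based on who owns the identifier.
def create_request(root, user_id=None):
    if user_id is None:
        # server decides the URI; client reads it from Location afterwards
        return "POST", f"{root}/users"
    # client supplies the id; PUT is idempotent, so it retries safely
    return "PUT", f"{root}/users/{user_id}"

create_request("https://api.example.org")           # -> ('POST', 'https://api.example.org/users')
create_request("https://api.example.org", "u123")   # -> ('PUT', 'https://api.example.org/users/u123')
```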
With Paul Sandoz I have spent the last couple of weeks working on a client-side framework for Jersey that encourages RESTful design. After a period of frequent code changes, we now feel it can go public. There are two introductory blog posts (and more to come): http://www.nordsc.com/blog/?p=439 http://www.nordsc.com/blog/?p=484 Comments are (of course) welcome. Jan ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
Roy Fielding mistyped (ha!) a word in one of his playful
messages<http://tech.groups.yahoo.com/group/rest-discuss/message/15023>
:
Ah, yes, a similar thing happens all the time when flaying frisbee in the
> park with my dog.
>
Frisbee-brand flying discs <http://www.frisbeedisc.com/> have skin on them?
Wow! Thanks, Wham-o! ("Frisbee" is a registered trademark of Wham-o Inc. Or,
if one is to take the business's Web site at its word, "Frisbee® is a
Registered Trademark of © 2004 Wham-o Inc.".)
However, when the dog tries to throw the frisbee and I try to catch it in my
> teeth, it just doesn't seem to work well for either of us.
>
You're one sick puppy, Roy, and I suspect that your activities with your
canine companion quite perplex the other humans in the park.
P.S. Humans should catch the
reference<http://www.imdb.com/character/ch0010817/quotes>,
but not in their mouths. Please don't go barking up the wrong
tree<http://www.imdb.com/title/tt0302674/>.
(Or shrub. Trees don't much like deserts.)
I am working on a Blog project where I use ATOM for representing feeds and blog posts. I have a set of draft posts and published posts. In order to change a draft post to a published post I need to "publish" it. How should I represent one such command in REST? Here are my thoughts so far: The two sets are represented by two different ATOM resources, each having their own URL. Each individual post also has its own resource URL. Nothing special here. In order to publish a post I can do: 1) Use a new HTTP verb called "PUBLISH" and send that to the draft post I want to publish. 2) POST the complete blog post representation to the "published posts" URL, thereby adding the blog post to the published list, and implicitly removing it from the draft list. 3) Do an empty POST to a sub-resource of the blog post named "publish" indicating that I want to "publish" that resource. 4) Expose the single "published" state value as a sub-resource of the blog post and POST true/false to that resource. 5) Maintain a non-ATOM list of published blog post identifiers. When publishing a blog post I can POST that post's identifier to the "published" identifier list (instead of sending the complete set of blog post data). What are your suggestions? Thanks, Jørn
POST a text/uri-list with the blog posts you want to publish? On Mon, May 10, 2010 at 8:30 AM, Jorn Wildt <jw@...> wrote: > > > I am working on a Blog project where I use ATOM for representing feeds and > blog posts. I have a set of draft posts and published posts. In order to > change a draft post to a published post I need to "publish" it. How should I > represent one such command in REST? > > Here are my thoughts so far: > > The two sets are represented by two different ATOM resources, each having > their own URL. Each individual post also has its own resource URL. Nothing > special here. > > In order to publish a post I can do: > > 1) Use a new HTTP verb called "PUBLISH" and send that to the draft post I > want to publish. > > 2) POST the complete blog post representation to the "published posts" URL, > thereby adding the blog post to the published list, and implicitly removing it > from the draft list. > > 3) Do an empty POST to a sub-resource of the blog post named "publish" > indicating that I want to "publish" that resource. > > 4) Expose the single "published" state value as a sub-resource of the blog > post and POST true/false to that resource. > > 5) Maintain a non-ATOM list of published blog post identifiers. When > publishing a blog post I can POST that post's identifier to the "published" > identifier list (instead of sending the complete set of blog post data). > > What are your suggestions? > > Thanks, Jørn > > >
> POST a text/uri-list with the blog posts you want to publish? Yes. That's like my (5) except using complete URIs (which is better than my idea of using IDs). /Jørn --- In rest-discuss@yahoogroups.com, Erlend Hamnaberg <ngarthl@...> wrote: > > POST a text/uri-list with the blog posts you want to publish? > > On Mon, May 10, 2010 at 8:30 AM, Jorn Wildt <jw@...> wrote: > > > > > > > I am working on a Blog project where I use ATOM for representing feeds and > > blog posts. I have a set of draft posts and published posts. In order to > > change a draft post to a published post I need to "publish" it. How should I > > represent one such command in REST? > > > > Here are my thoughts so far: > > > > The two sets are represented by two different ATOM resources, each having > > their own URL. Each individual post also has its own resource URL. Nothing > > special here. > > > > In order to publish a post I can do: > > > > 1) Use a new HTTP verb called "PUBLISH" and send that to the draft post I > > want to publish. > > > > 2) POST the complete blog post representation to the "published posts" URL, > > thereby adding the blog post to the published list, and implicitly removing it > > from the draft list. > > > > 3) Do an empty POST to a sub-resource of the blog post named "publish" > > indicating that I want to "publish" that resource. > > > > 4) Expose the single "published" state value as a sub-resource of the blog > > post and POST true/false to that resource. > > > > 5) Maintain a non-ATOM list of published blog post identifiers. When > > publishing a blog post I can POST that post's identifier to the "published" > > identifier list (instead of sending the complete set of blog post data). > > > > What are your suggestions? > > > > Thanks, Jørn > > > > > > >
Hello Jorn, What do you think of? POST /drafts (+ post content) ---- returns 201 with Location header set to the draft location GET (previous location header) ---- returns the post with a link to a relation called "publish" POST (following the link 'publish') ---- returns 201, with Location header set to the post location So you do not need to send the entire representation at all, only follow the correct link. In other words, use hypermedia to navigate through your application process. All you have to do is give meaning to a publish relation. Regards Guilherme Silveira Caelum | Ensino e Inovação http://www.caelum.com.br/ 2010/5/10 Jorn Wildt <jw@...> > > > I am working on a Blog project where I use ATOM for representing feeds and > blog posts. I have a set of draft posts and published posts. In order to > change a draft post to a published post I need to "publish" it. How should I > represent one such command in REST? > > Here are my thoughts so far: > > The two sets are represented by two different ATOM resources, each having > their own URL. Each individual post also has its own resource URL. Nothing > special here. > > In order to publish a post I can do: > > 1) Use a new HTTP verb called "PUBLISH" and send that to the draft post I > want to publish. > > 2) POST the complete blog post representation to the "published posts" URL, > thereby adding the blog post to the published list, and implicitly removing it > from the draft list. > > 3) Do an empty POST to a sub-resource of the blog post named "publish" > indicating that I want to "publish" that resource. > > 4) Expose the single "published" state value as a sub-resource of the blog > post and POST true/false to that resource. > > 5) Maintain a non-ATOM list of published blog post identifiers. When > publishing a blog post I can POST that post's identifier to the "published" > identifier list (instead of sending the complete set of blog post data). > > What are your suggestions? > > Thanks, Jørn > > >
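[Editorial sketch] The link-following step in Guilherme's flow can be sketched as below: the client never constructs the publish URI itself, it reads it out of the draft's representation. The <link rel="publish"> markup and the URI are invented for illustration, not part of AtomPub:

```python
# Sketch: discover the "publish" URI from the draft's own representation.
import xml.etree.ElementTree as ET

def find_link(entry_xml, rel):
    root = ET.fromstring(entry_xml)
    for link in root.iter("link"):
        if link.get("rel") == rel:
            return link.get("href")
    return None

draft = '<entry><title>Hello</title><link rel="publish" href="/posts/42/publication"/></entry>'
find_link(draft, "publish")   # -> '/posts/42/publication'
# The client would then POST a (non-empty) entity to that href.
```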
On 10.05.10 13:33, Guilherme Silveira wrote: > POST /drafts (+ post content) > ---- returns 201 with Location header set to the draft location > > GET (previous location header) > ---- returns the post with a link to a relation called "publish" > > POST (following the link 'publish') > ---- returns 201, with Location header set to the post location > > So you do not need to send the entire representation at all, only by > following the correct link. In other words, use hypermedia to navigate > through your application process. I consider the 'empty' POST to the publish URL as a little too RPC-like. I would rather post a text/uri-list to a resource that is linked with relation "publish" (like Erlend suggested). -billy.
Yes, this was also one of my solutions: > 3) Do an empty POST to a sub-resource of the blog post named "publish" indicating that I want to "publish" that resource. Except that your introduction of a publish URL is one step better than assuming a specific sub-resource. Thanks! So a more general solution is: if you have a "command" (Publish! or MakePreferredCustomer! or whatever else) then we can use this pattern of an empty POST to a specific related URL. Looking forward to your response on Philipp's comment about being too RPC-like :-) /Jørn --- In rest-discuss@yahoogroups.com, Guilherme Silveira <guilherme.silveira@...> wrote: > > Hello Jorn, > > What do you think of? > > POST /drafts (+ post content) > ---- returns 201 with Location header set to the draft location > > GET (previous location header) > ---- returns the post with a link to a relation called "publish" > > POST (following the link 'publish') > ---- returns 201, with Location header set to the post location > > So you do not need to send the entire representation at all, only by > following the correct link. In other words, use hypermedia to navigate > through your application process. > > All you have to do is give meaning to a publish relation. > > Regards > > > > Guilherme Silveira > Caelum | Ensino e Inovação > http://www.caelum.com.br/ > > > 2010/5/10 Jorn Wildt <jw@...> > > > > > > > I am working on a Blog project where I use ATOM for representing feeds and > > blog posts. I have a set of draft posts and published posts. In order to > > change a draft post to a published post I need to "publish" it. How should I > > represent one such command in REST? > > > > Here are my thoughts so far: > > > > The two sets are represented by two different ATOM resources, each having > > their own URL. Each individual post also has its own resource URL. Nothing > > special here. 
> > > > In order to publish a post I can do: > > > > 1) Use a new HTTP verb called "PUBLISH" and send that to the draft post I > > want to publish. > > > > 2) POST the complete blog post representation to the "published posts" URL, > > thereby adding the blog post to the published list, and implicitly removing it > > from the draft list. > > > > 3) Do an empty POST to a sub-resource of the blog post named "publish" > > indicating that I want to "publish" that resource. > > > > 4) Expose the single "published" state value as a sub-resource of the blog > > post and POST true/false to that resource. > > > > 5) Maintain a non-ATOM list of published blog post identifiers. When > > publishing a blog post I can POST that post's identifier to the "published" > > identifier list (instead of sending the complete set of blog post data). > > > > What are your suggestions? > > > > Thanks, Jørn > > > > > > >
> I consider the 'empty' POST to the publish URL as a little too RPC-like. > I would rather post a text/uri-list to a resource that is linked with > relation "publish" (like Erlend suggested). Sorry, I'm pretty bad with the difference between verbs and nouns in English as it's my second language. The correct relation name would be "publication". Now you might play around with the publication related to this draft by working with this related resource. > 'empty' POST Right... there is no such thing as an empty POST - I would not recommend that. HTTP POSTing, by its definition, implies sending an enclosed entity, even if it is just a <publication></publication>. Regards > > -billy.
> The correct relation name would be "publication". Now you might play > around with the publication related to this draft by working with this > related resource. Aha ... this would make the "publication" (or rather "publicationS") resource a list of "transaction objects" - objects representing operations that have taken place. So by posting a new "publication" item to the "publications" resource, we add that publication to the resource as well as actually publishing the blog post. But I really dislike the naming. A publication is a complete book or paper in my dictionary. So I would rather call it a "PublishCommand". Now I can POST a new PublishCommand to the PublishCommands resource, thereby sticking to nouns while still having a verb in it. This would also reflect nicely how the backend works with one-way messages. /Jørn
> Aha ... this would make the "publication" (or rather "publicationS") resource a list of "transaction objects" - objects representing operations that have taken place. Actually, in a lot of systems, every resource list is a list of POSTed resources, objects that were created through a first POST. In some scenarios it might fit better, i.e. creating and removing a forum thread "observer", instead of "watch/unwatch". We just need to be careful to avoid what Philipp mentioned. regards Guilherme Silveira Caelum | Ensino e Inovação http://www.caelum.com.br/ 2010/5/10 Jorn Wildt <jw@fjeldgruppen.dk> > > > > The correct relation name would be "publication". Now you might play > > around with the publication related to this draft by working with this > > related resource. > > Aha ... this would make the "publication" (or rather "publicationS") > resource a list of "transaction objects" - objects representing operations > that have taken place. > > So by posting a new "publication" item to the "publications" resource, we > add that publication to the resource as well as actually publishing the blog > post. > > But I really dislike the naming. A publication is a complete book or paper > in my dictionary. So I would rather call it a "PublishCommand". Now I can > POST a new PublishCommand to the PublishCommands resource, thereby sticking > to nouns while still having a verb in it. This would also reflect nicely how > the backend works with one-way messages. > > /Jørn > > >
On May 10, 2010, at 8:30 AM, Jorn Wildt wrote: > I am working on a Blog project where I use ATOM for representing feeds and blog posts. I have a set of draft posts and published posts. In order to change a draft post to a published post I need to "publish" it. How should I represent one such command in REST? AtomPub has native support for that use case: http://tools.ietf.org/html/rfc5023#section-13.1.1 Use <app:draft/> to mark the posting as draft and simply update the posting (without <app:draft/>) to have it published. Jan > > Here are my thoughts so far: > > The two sets are represented by two different ATOM resources, each having their own URL. Each individual post also has its own resource URL. Nothing special here. > > In order to publish a post I can do: > > 1) Use a new HTTP verb called "PUBLISH" and send that to the draft post I want to publish. > > 2) POST the complete blog post representation to the "published posts" URL, thereby adding the blog post to the published list, and implicitly removing it from the draft list. > > 3) Do an empty POST to a sub-resource of the blog post named "publish" indicating that I want to "publish" that resource. > > 4) Expose the single "published" state value as a sub-resource of the blog post and POST true/false to that resource. > > 5) Maintain a non-ATOM list of published blog post identifiers. When publishing a blog post I can POST that post's identifier to the "published" identifier list (instead of sending the complete set of blog post data). > > What are your suggestions? > > Thanks, Jørn ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
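[Editorial sketch] For reference, RFC 5023 spells the flag as <app:control><app:draft>yes</app:draft></app:control> rather than a bare <app:draft/>. Flipping it before PUTting the entry back to its edit URI might look like this (the entry markup is illustrative, cut down to the relevant elements):

```python
# Sketch: toggle the AtomPub draft flag inside an entry document.
import xml.etree.ElementTree as ET

APP = "http://www.w3.org/2007/app"  # AtomPub namespace (RFC 5023)

def set_draft(entry_xml, is_draft):
    ET.register_namespace("app", APP)  # keep the "app" prefix on output
    entry = ET.fromstring(entry_xml)
    control = entry.find(f"{{{APP}}}control")
    if control is None:
        control = ET.SubElement(entry, f"{{{APP}}}control")
    draft = control.find(f"{{{APP}}}draft")
    if draft is None:
        draft = ET.SubElement(control, f"{{{APP}}}draft")
    draft.text = "yes" if is_draft else "no"
    return ET.tostring(entry, encoding="unicode")

entry = '<entry xmlns:app="http://www.w3.org/2007/app"><app:control><app:draft>yes</app:draft></app:control></entry>'
published = set_draft(entry, False)
# PUT `published` back to the member entry's edit URI to publish it.
```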
On May 10, 2010, at 1:38 PM, Philipp Meier wrote: > Am 10.05.10 13:33, schrieb Guilherme Silveira: > >> POST /drafts (+ post content) >> ---- returns 201 with Location header set to the draft location >> >> GET (previous location header) >> ---- returns the post with a link to a relation called "publish" >> >> POST (following the link 'publish') >> ---- returns 201, with Location header set to the post location >> >> So you do not need to send the entire representation at all, only by >> following the correct link. In other words, use hypermedia to navigate >> through your application process. > > I consider the 'empty' POST to the publish URL as a little too RPC like. Yes. It is an anti-pattern because the message is not self descriptive. See http://www.nordsc.com/blog/?p=414 Jan > I would rather post a text/uri-list to a resource that is linked with > relation "publish" (like Erlend suggested). > > -billy. > > > ------------------------------------ > > Yahoo! Groups Links > > > ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
On 10.05.10 16:12, Jan Algermissen wrote: >> I am working on a Blog project where I use ATOM for representing feeds and blog posts. I have a set of draft posts and published posts. In order to change a draft post to a published post I need to "publish" it. How should I represent one such command in REST? > > AtomPub has native support for that use case: http://tools.ietf.org/html/rfc5023#section-13.1.1 > > Use <app:draft/> to mark the posting as draft and simply update the posting (without <app:draft/>) to have it published. This would be a nice use case for the PATCH method. This way one could avoid the re-transmission of the whole posting. -billy.
> AtomPub has native support for that use case: http://tools.ietf.org/html/rfc5023#section-13.1.1

Oh, cool. Thanks.

But ... then we are back to the situation where I must POST my whole resource in order to perform one single operation (namely Publish). Besides using up more bandwidth (which I don't really care about), it also opens up the possibility of both changing the resource content AND publishing it at the same time. In which order should this be handled on the server side? Publish first and then update content - or vice versa?

The ordering may not be important in this use case, but I can easily imagine other use cases where the ordering is important. This is why I am looking for a solution where each operation is very explicit.

Maybe PATCH would be suitable here. I could PATCH the "app:draft" element and only that element. But that could still lead to situations where a PATCH included changes to different values where the ordering is important.

/Jørn
I'm sure this has been covered in the past, but I missed it... what are some general guidelines for giving a client hints about what sort of data to supply to a resource which responds to POST or PUT? I'm interested both in hints about general content-type and encoding, as well as more specific hints such as a RelaxNG schema for a resource which expects XML, or an expected JSON Schema.

For that matter, are there any good ways to give hints about the fields that are expected in application/x-www-form-urlencoded or multipart/form-data? It seems like there could be some tricky cases here for resources which can accept multiple content types.

Interested in any advice you folks might have.

Thanks,

--
Avdi

Home: http://avdi.org
Developer Blog: http://avdi.org/devblog/
York Coworking: http://www.meetup.com/york-coworking/
Twitter: http://twitter.com/avdi
Work: http://wearetitans.net
The Lazy Faire: http://thelazyfaire.org
Journal: http://avdi.livejournal.com
On Mon, May 10, 2010 at 10:24 AM, Jorn Wildt <jw@...> wrote: >> AtomPub has native support for that use case: > http://tools.ietf.org/html/rfc5023#section-13.1.1 > > Oh, cool. Thanks. > > But ... then we are back to the situation where I must POST my whole resource in order to > perform one single operation (namely Publish). Besides using up more bandwidth (which I > don't really care about), You've selected to build on top of a pretty inefficient style, so it's to be expected I suppose. > then it opens up for the possibility of both changing the resource content AND publishing it at > the same time. In which order should this be handled on the server side? Publish first and > then update content - or vice versa? It seems to me that it's just changing the resource and that particular flag in its representation has the side effect of making it publicly available. > The ordering may not be important in this use case, but I can easily imagine other use cases > where the ordering is important. This is why I am looking for a solution where each operation > is very explicit. Maybe don't view it as two "operations" - the operation is simply updating (POST/PUT) the state of the resource and the new state has certain consequences? --tim
IMO, this boils down to one of three options:
1) resend the entire representation (via PUT /{existing-item-uri})
back to the server w/ the "status" data changed (from "draft" to
"publish", etc.).
2) send only the changed "status" data (via PATCH /{patch-item-uri})
back to the server using the appropriate patch media-type.
3) send a unique representation of the "status" change (via POST
/status-changes/) back to the server using the appropriate media type.
I use 1) when the changes are infrequent and/or the representation is
relatively small
I use 2) when I have a handy patch media type and the original
representation is large
I use 3) when, regardless of the size of the representations or the
existence of a patch media type, I want to expose a client-server
interaction that raises the level of the status change ("publish",
"approve", "accept-bid", etc.) to that of a trackable resource that
has a usable history
The third option is especially handy if I suspect the "status change"
representation may grow/modify over time. This option even allows me
to expand the process to include additional client-server interactions
(via included hypermedia links) for status changes that may involve
multiple stages or stretch out over some period of time.
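Option 3 can be sketched as a tiny in-memory model of the server side. The /status-changes/ URI is from option 3 above; the class and field names ("item", "new-status") are illustrative assumptions, not anything from the thread:

```python
# Sketch of option 3: each status change is POSTed as its own resource under
# /status-changes/, giving a trackable, append-only history.
import itertools

class StatusChangeLog:
    """Records every status change as an addressable resource with history."""
    def __init__(self):
        self._ids = itertools.count(1)
        self._changes = []          # append-only history of all changes
        self._current = {}          # item URI -> latest status

    def post(self, representation):
        """Handle POST /status-changes/ with a status-change representation."""
        change_id = next(self._ids)
        change = dict(representation, id=change_id)
        self._changes.append(change)
        self._current[change["item"]] = change["new-status"]
        # 201 Created, with the new change's own URI in the Location header
        return 201, {"Location": f"/status-changes/{change_id}"}

    def history(self, item_uri):
        """GET the trackable history for one item."""
        return [c for c in self._changes if c["item"] == item_uri]

log = StatusChangeLog()
status, headers = log.post({"item": "/posts/42", "new-status": "published"})
print(status, headers["Location"])   # 201 /status-changes/1
```

Because each POST creates its own addressable change resource, filtering the collection per item yields exactly the "usable history" described above.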
mca
http://amundsen.com/blog/
On Mon, May 10, 2010 at 10:42, Tim Williams <williamstw@...> wrote:
> On Mon, May 10, 2010 at 10:24 AM, Jorn Wildt <jw@fjeldgruppen.dk> wrote:
>>> AtomPub has native support for that use case:
>> http://tools.ietf.org/html/rfc5023#section-13.1.1
>>
>> Oh, cool. Thanks.
>>
>> But ... then we are back to the situation where I must POST my whole resource in order to
>> perform one single operation (namely Publish). Besides using up more bandwidth (which I
>> don't really care about),
>
> You've selected to build on top of a pretty inefficient style, so it's
> to be expected I suppose.
>
>> then it opens up for the possibility of both changing the resource content AND publishing it at
>> the same time. In which order should this be handled on the server side? Publish first and
>> then update content - or vice versa?
>
> It seems to me that it's just changing the resource and that
> particular flag in its representation has the side effect of making it
> publicly available.
>
>> The ordering may not be important in this use case, but I can easily imagine other use cases
>> where the ordering is important. This is why I am looking for a solution where each operation
>> is very explicit.
>
> Maybe don't view it as two "operations" - the operation is simply
> updating (POST/PUT) the state of the resource and the new state has
> certain consequences?
>
> --tim
> 3) send a unique representation of the "status" change (via POST
> /status-changes/) back to the server using the appropriate media type.
>
> The third option is especially handy if I suspect the "status change"
> representation may grow/modify over time. This option even allows me
> to expand the process to include additional client-server interactions
> (via included hypermedia links) for status changes that may involve
> multiple stages or stretch out over some period of time.

I concur. This is what I use whenever I start asking myself which way to go.

Eb
(3) is the one I am gravitating towards. It expresses exactly what I want
and in addition to this it also serves as a source of the history of the
resource - just like you say.
Thanks, Jørn
----- Original Message -----
From: "mike amundsen" <mamund@...>
To: <rest-discuss@yahoogroups.com>
Sent: Monday, May 10, 2010 7:53 PM
Subject: Re: [rest-discuss] Re: How to publish
IMO, this boils down to one of three options:
1) resend the entire representation (via PUT /{existing-item-uri})
back to the server w/ the "status" data changed (from "draft" to
"publish", etc.).
2) send only the changed "status" data (via PATCH /{patch-item-uri}
back to the server using the appropriate patch media-type.
3) send a unique representation of the "status" change (via POST
/status-changes/) back to the server using the appropriate media type.
I use 1) when the changes are infrequent and/or the representation is
relatively small
I use 2) when I have a handy patch media type and the original
representation is large
I use 3) when, regardless of the size of the representations or the
existence of a patch media type, I want to expose a client-server
interaction that raises the level of the status change ("publish",
"approve", "accept-bid", etc.) to that of a trackable resource that
has a usable history
The third option is especially handy if I suspect the "status change"
representation may grow/modify over time. This option even allows me
to expand the process to include additional client-server interactions
(via included hypermedia links) for status changes that may involve
multiple stages or stretch out over some period of time.
mca
http://amundsen.com/blog/
On Mon, May 10, 2010 at 10:42, Tim Williams <williamstw@...> wrote:
> On Mon, May 10, 2010 at 10:24 AM, Jorn Wildt <jw@...> wrote:
>>> AtomPub has native support for that use case:
>> http://tools.ietf.org/html/rfc5023#section-13.1.1
>>
>> Oh, cool. Thanks.
>>
>> But ... then we are back to the situation where I must POST my whole
>> resource in order to
>> perform one single operation (namely Publish). Besides using up more
>> bandwidth (which I
>> don't really care about),
>
> You've selected to build on top of a pretty inefficient style, so it's
> to be expected I suppose.
>
>> then it opens up for the possibility of both changing the resource
>> content AND publishing it at
>> the same time. In which order should this be handled on the server side?
>> Publish first and
>> then update content - or vice versa?
>
> It seems to me that it's just changing the resource and that
> particular flag in its representation has the side effect of making it
> publicly available.
>
>> The ordering may not be important in this use case, but I can easily
>> imagine other use cases
>> where the ordering is important. This is why I am looking for a solution
>> where each operation
>> is very explicit.
>
> Maybe don't view it as two "operations" - the operation is simply
> updating (POST/PUT) the state of the resource and the new state has
> certain consequences?
>
> --tim
On Mon, May 10, 2010 at 10:53 AM, mike amundsen <mamund@...> wrote:
>
>
> IMO, this boils down to one of three options:
>
> 1) resend the entire representation (via PUT /{existing-item-uri})
> back to the server w/ the "status" data changed (from "draft" to
> "publish", etc.).
> 2) send only the changed "status" data (via PATCH /{patch-item-uri}
> back to the server using the appropriate patch media-type.
> 3) send a unique representation of the "status" change (via POST
> /status-changes/) back to the server using the appropriate media type.
>
> I use 1) when the changes are infrequent and/or the representation is
> relatively small
> I use 2) when I have a handy patch media type and the original
> representation is large
> I use 3) when, regardless of the size of the representations or the
> existence of a patch media type, I want to expose a client-server
> interaction that raises the level of the status change ("publish",
> "approve", "accept-bid", etc.) to that of a trackable resource that
> has a usable history
>
> The third option is especially handy if I suspect the "status change"
> representation may grow/modify over time. This option even allows me
> to expand the process to include additional client-server interactions
> (via included hypermedia links) for status changes that may involve
> multiple stages or stretch out over some period of time.
>
Option 3 also works well when the process of changing status can take
longer than you want the client to have to wait for an initial response. I
like to return a 202 Accepted status in this case, complete with a link to a
status tracking resource that the client can call to determine if/when the
operation completed, and whether there were any problems.
> mca
> http://amundsen.com/blog/
>
Craig
On 10.05.10 20:53, Craig McClanahan wrote:
> Option 3 also works well when the process of changing status can take
> longer than you want the client to have to wait for an initial
> response. I like to return a 202 Accepted status in this case, complete
> with a link to a status tracking resource that the client can call to
> determine if/when the operation completed, and whether there were any
> problems.

Which makes me wonder if this is related only to status changes. I think it's a more general pattern: whenever you expect that the processing of a POST/PUT/DELETE request will take longer than a certain amount of time, the server can reply with a "transaction status" resource instead.

-billy.
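The 202 Accepted pattern Craig describes can be sketched as follows; every name here (Jobs, /jobs/{id}, the state values) is hypothetical, chosen only to illustrate the accept-then-poll flow:

```python
# Sketch of the 202 Accepted pattern: a long-running change returns
# immediately with a link to a tracking resource the client can poll.
import uuid

class Jobs:
    def __init__(self):
        self._jobs = {}

    def accept(self, action):
        """POST handler: enqueue the work, answer 202 with a tracking link."""
        job_id = uuid.uuid4().hex
        self._jobs[job_id] = {"state": "pending", "action": action}
        return 202, {"Location": f"/jobs/{job_id}"}

    def complete(self, job_id, ok=True, detail=None):
        """Called by the worker when processing finishes (or fails)."""
        self._jobs[job_id].update(state="done" if ok else "failed",
                                  detail=detail)

    def status(self, job_id):
        """GET /jobs/{id}: the client polls this to see if/when it completed."""
        return self._jobs[job_id]

jobs = Jobs()
code, headers = jobs.accept({"publish": "/posts/42"})
job_id = headers["Location"].rsplit("/", 1)[1]
print(code, jobs.status(job_id)["state"])   # 202 pending
jobs.complete(job_id)
print(jobs.status(job_id)["state"])         # done
```

The tracking resource is just another GETtable resource, which is what makes this a general pattern rather than something specific to status changes.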
On 5/10/2010 6:43 AM, Avdi Grimm wrote:
> I'm sure this has been covered in the past, but I missed it... what
> are some general guidelines for giving a client hints about what
> sort of data to supply to a resource which responds to POST or PUT?
> I'm interested both in hints about general content-type and
> encoding, as well as more specific hints such as a RelaxNG schema for a
> resource which expects XML, or an expected JSON Schema.

I believe one should be able to assume that the content type of the representation returned from a server from GET for a URI is acceptable in a PUT request to that server for the same URI. When using JSON, additional information about acceptable property values can be determined from any JSON Schema referenced by the resource. In other words, if you GET some resource and the server responds with:

Content-Type: application/my-type+json; profile=my-schema

one could retrieve the schema from the "my-schema" relative URI and do a PUT using the application/my-type+json content type with the schema information as a guide to what property values are acceptable.

Discovery of POST actions is completely different than PUT (since PUT's behavior is implied by a GET response). A JSON Schema can describe possible POST actions with submission links, including an acceptable content type (in the "enctype" property).

--
Kris Zyp
SitePen
(503) 806-1841
http://sitepen.com
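The profile-based discovery Kris sketches can be expressed in a few lines of stdlib Python. The Content-Type value is his example; the resource URI and helper name are hypothetical, and whether "profile" is a registered parameter for the media type is left aside:

```python
# Sketch: extract the profile parameter from a GET response's Content-Type
# and resolve it against the resource's URI to locate the schema.
from email.message import Message
from urllib.parse import urljoin

def schema_uri(resource_uri, content_type):
    """Return the absolute URI of the schema named by the profile parameter."""
    m = Message()                      # stdlib MIME-header parser
    m["Content-Type"] = content_type
    profile = m.get_param("profile")   # e.g. 'my-schema' (may be relative)
    if profile is None:
        return None
    return urljoin(resource_uri, profile)

uri = schema_uri("http://example.org/things/1",
                 "application/my-type+json; profile=my-schema")
print(uri)   # http://example.org/things/my-schema
```

A client would then GET that URI, and use the schema as a guide to what property values are acceptable in a subsequent PUT.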
I put down some thoughts on the application of REST principles to programming (and how I've applied those in the Persevere framework), if anyone is interested:

http://www.sitepen.com/blog/2010/05/09/resource-oriented-programming/

Thanks,

--
Kris Zyp
SitePen
(503) 806-1841
http://sitepen.com
> Content-Type: application/my-type+json; profile=my-schema
> One could retrieve the schema from the "my-schema" relative URI and do
> a PUT using the application/my-type+json content type with the schema
> information as a guide to what property values are acceptable.

GETting the schema URI returns a schema, but in which media type+profile is this schema described? Conneg can be done as usual with this GET, but the same question might arise.

> Discovery of POST actions is completely different than PUT (since
> PUT's behavior is implied by a GET response). A JSON Schema can
> describe possible POST actions with submission links, including an
> acceptable content type (in the "enctype" property).

As Kris pointed out, JSON representations might handle it; link headers can also describe it, as can Atom links. This should solve the problem if it is not an entry point to your system but you are navigating through it. If POSTing is your system entry point, your REST library should guess a media type (+profile) and POST it; if it gets back a 415, try again with any of the media types that the server has told you it understands.

Regards

- guilherme
Hi, I wanted to have the groups opinion on the http://googlecode.blogspot.com/2010/03/making-apis-faster-introducing-partial.html Sounds very attractive but then not all that glitters is gold so wanted to have a second opinion. Didn't find too much chatter on the web over that blog entry. Cheers Piyush
Hello Mike!
> 3) send a unique representation of the "status" change (via POST
/status-changes/) back to the server using the appropriate media type.
Did I miss something or is it the suggestion I made? If so, ++ on this one
also.
> I use 1) when the changes are infrequent and/or the representation is
relatively small
> I use 2) when I have a handy patch media type and the original
representation is large
Can you help me to identify the representation size influence? (I can see we
need a handy patch media type to do 2, but if it's present, I would pick
it... so I am probably missing something)
> I use 3) when, regardless of the size of the representations or the
existence of a patch media type, I want to expose a client-server
interaction that raises the level of the status change ("publish",
"approve", "accept-bid", etc.) to that of a trackable resource that
has a usable history
Someone once told me the metaphor "A ~-to-many relationship with
attributes", in this case, time of change...
Regards
Guilherme Silveira
Caelum | Ensino e Inovação
http://www.caelum.com.br/
1) resend the entire representation (via PUT /{existing-item-uri})
> back to the server w/ the "status" data changed (from "draft" to
> "publish", etc.).
> 2) send only the changed "status" data (via PATCH /{patch-item-uri}
> back to the server using the appropriate patch media-type.
> 3) send a unique representation of the "status" change (via POST
> /status-changes/) back to the server using the appropriate media type.
>
>
> The third option is especially handy if I suspect the "status change"
> representation may grow/modify over time. This option even allows me
> to expand the process to include additional client-server interactions
> (via included hypermedia links) for status changes that may involve
> multiple stages or stretch out over some period of time.
>
> mca
> http://amundsen.com/blog/
>
> On Mon, May 10, 2010 at 10:42, Tim Williams <williamstw@...>
> wrote:
> > On Mon, May 10, 2010 at 10:24 AM, Jorn Wildt <jw@...>
> wrote:
> >>> AtomPub has native support for that use case:
> >> http://tools.ietf.org/html/rfc5023#section-13.1.1
> >>
> >> Oh, cool. Thanks.
> >>
> >> But ... then we are back to the situation where I must POST my whole
> >> resource in order to
> >> perform one single operation (namely Publish). Besides using up more
> >> bandwidth (which I
> >> don't really care about),
> >
> > You've selected to build on top of a pretty inefficient style, so it's
> > to be expected I suppose.
> >
> >> then it opens up for the possibility of both changing the resource
> >> content AND publishing it at
> >> the same time. In which order should this be handled on the server side?
>
> >> Publish first and
> >> then update content - or vice versa?
> >
> > It seems to me that it's just changing the resource and that
> > particular flag in its representation has the side effect of making it
> > publicly available.
> >
> >> The ordering may not be important in this use case, but I can easily
> >> imagine other use cases
> >> where the ordering is important. This is why I am looking for a solution
>
> >> where each operation
> >> is very explicit.
> >
> > Maybe don't view it as two "operations" - the operation is simply
> > updating (POST/PUT) the state of the resource and the new state has
> > certain consequences?
> >
> > --tim
Hi Guilherme!
<snip>
Can you help me to identify the representation size influence? (I can see we
need a handy patch media type to do 2, but if it's present, I would pick
it... so I am probably missing something)
</snip>
I usually "shoot from the hip" on this. For example, this representation:
<task-item>
<title>finish maze documentation</title>
<date-due>yesterday</date-due>
<is-completed>false</is-completed>
</task-item>
Is small enough that it seems trivial to resend the entire representation
when I change one of the element values.
If, however, the representation has many elements (10 or more?) or a single
large element (content of a long blog post, an embedded data object holding
an image, etc.) then I would start looking for a PATCH media-type instead.
Finally, if I'm using HTML FORMS and the media-type is very simple (e.g. the
task-item above), it's simple to use application/x-www-form-urlencoded as
the "PATCH" media type.
So, it's often a toss-up, IMO.
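A minimal sketch of that last point, using the task-item fields above; the helper name and the rejection behaviour for unknown fields are my assumptions:

```python
# Sketch: treat an application/x-www-form-urlencoded body as a de facto
# PATCH format for a flat representation -- each key names a field to change.
from urllib.parse import parse_qsl

def apply_form_patch(resource, body):
    """Apply a form-urlencoded PATCH body to a flat resource dict."""
    patch = dict(parse_qsl(body))
    unknown = set(patch) - set(resource)
    if unknown:
        # Reject fields the representation doesn't have (a 422-ish response).
        raise ValueError(f"unknown fields: {sorted(unknown)}")
    resource.update(patch)
    return resource

task = {"title": "finish maze documentation",
        "date-due": "yesterday",
        "is-completed": "false"}
apply_form_patch(task, "is-completed=true")
print(task["is-completed"])   # true
```

This only works because the representation is flat; nested structures are where a dedicated patch media type earns its keep.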
<snip>
Someone once told me the metaphor "A ~-to-many relationship with
attributes", in this case, time of change...
</snip>
Not sure what you mean here, I've not been paying careful attention to this
thread, maybe.
mca
http://amundsen.com/blog/
On Mon, May 10, 2010 at 17:04, Guilherme Silveira <
guilherme.silveira@...> wrote:
> Hello Mike!
>
> > 3) send a unique representation of the "status" change (via POST
> /status-changes/) back to the server using the appropriate media type.
> Did I miss something or is it the suggestion I made? If so, ++ on this one
> also.
>
> > I use 1) when the changes are infrequent and/or the representation is
> relatively small
> > I use 2) when I have a handy patch media type and the original
> representation is large
> Can you help me to identify the representation size influence? (I can see
> we need a handy patch media type to do 2, but if its present, I would pick
> it... so I am probably missing something)
>
> > I use 3) when, regardless of the size of the representations or the
> existence of a patch media type, I want to expose a client-server
> interaction that raises the level of the status change ("publish",
> "approve", "accept-bid", etc.) to that of a trackable resource that
> has a usable history
> Someone once told me the metaphor "A ~-to-many relationship with
> attributes", in this case, time of change...
>
> Regards
>
> Guilherme Silveira
> Caelum | Ensino e Inovação
> http://www.caelum.com.br/
>
>
> 1) resend the entire representation (via PUT /{existing-item-uri})
>> back to the server w/ the "status" data changed (from "draft" to
>> "publish", etc.).
>> 2) send only the changed "status" data (via PATCH /{patch-item-uri}
>> back to the server using the appropriate patch media-type.
>> 3) send a unique representation of the "status" change (via POST
>> /status-changes/) back to the server using the appropriate media type.
>>
>>
>> The third option is especially handy if I suspect the "status change"
>> representation may grow/modify over time. This option even allows me
>> to expand the process to include additional client-server interactions
>> (via included hypermedia links) for status changes that may involve
>> multiple stages or stretch out over some period of time.
>>
>> mca
>> http://amundsen.com/blog/
>>
>> On Mon, May 10, 2010 at 10:42, Tim Williams <williamstw@...>
>> wrote:
>> > On Mon, May 10, 2010 at 10:24 AM, Jorn Wildt <jw@...>
>> wrote:
>> >>> AtomPub has native support for that use case:
>> >> http://tools.ietf.org/html/rfc5023#section-13.1.1
>> >>
>> >> Oh, cool. Thanks.
>> >>
>> >> But ... then we are back to the situation where I must POST my whole
>> >> resource in order to
>> >> perform one single operation (namely Publish). Besides using up more
>> >> bandwidth (which I
>> >> don't really care about),
>> >
>> > You've selected to build on top of a pretty inefficient style, so it's
>> > to be expected I suppose.
>> >
>> >> then it opens up for the possibility of both changing the resource
>> >> content AND publishing it at
>> >> the same time. In which order should this be handled on the server
>> side?
>> >> Publish first and
>> >> then update content - or vice versa?
>> >
>> > It seems to me that it's just changing the resource and that
>> > particular flag in its representation has the side effect of making it
>> > publicly available.
>> >
>> >> The ordering may not be important in this use case, but I can easily
>> >> imagine other use cases
>> >> where the ordering is important. This is why I am looking for a
>> solution
>> >> where each operation
>> >> is very explicit.
>> >
>> > Maybe don't view it as two "operations" - the operation is simply
>> > updating (POST/PUT) the state of the resource and the new state has
>> > certain consequences?
>> >
>> > --tim
On May 10, 2010, at 10:45 PM, piyushpurang wrote: > Hi, > > I wanted to have the groups opinion on the > > http://googlecode.blogspot.com/2010/03/making-apis-faster-introducing-partial.html > > Sounds very attractive but then not all that glitters is gold so wanted to have a second opinion. From a very brief look: Sounds somewhat ok, here is what is missing: - a link relation (or other hypermedia control) that defines the implied URI template (IOW, that defines the field parameter and its value syntax) - proper media types for the partial bodies (for GET and PATCH); for PATCH it is important to specify how the body is to be interpreted as a delta (usually not an easy thing) - not 100% sure, but I think that the ordering of deltas should be addressed; e.g. whether it is significant (and how) or not HTH, Jan > > Didn't find too much chatter on the web over that blog entry. > > Cheers > Piyush > > > > > > ------------------------------------ > > Yahoo! Groups Links > > > ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
Partial updates use Atom with PATCH and seem nice. I just don't personally like too many query parameters and URL-patterning them with GET, even less with POST and PUT...

Regards

Guilherme Silveira
Caelum | Ensino e Inovação
http://www.caelum.com.br/

2010/5/10 piyushpurang <piyushpurang@yahoo.com>
> Hi,
>
> I wanted to have the group's opinion on
>
> http://googlecode.blogspot.com/2010/03/making-apis-faster-introducing-partial.html
>
> Sounds very attractive but then not all that glitters is gold so wanted to
> have a second opinion.
>
> Didn't find too much chatter on the web over that blog entry.
>
> Cheers
> Piyush
On Mon, May 10, 2010 at 4:40 PM, Kris Zyp <kris@...> wrote: > Discovery of POST actions is completely different than PUT (since > PUT's behavior is implied by a GET response). A JSON Schema can > describe possible POST actions with submission links, including an > acceptable content type (in the "enctype" property). ...and for non-JSON? Is there any generic analog of JSON Schema's submission links for POSTs? -- Avdi
On Mon, May 10, 2010 at 4:55 PM, Guilherme Silveira <guilherme.silveira@...> wrote:
> If POSTing is your system entry point, your REST library should guess a media type (+profile) and POST it; if it gets back a 415, try again with any of the media types that the server has told you it understands.

So there's no way to safely discover an acceptable list of POST formats? The library must perform a potentially state-changing operation in order to get info about supported operations?

It seems like there's an asymmetry here... HTTP provides copious ways to discover the content type of representations, content types of linked resources, and alternative representations of resources; but when it comes to POSTing data the means of discovery are practically nonexistent.

My interest, incidentally, is in making APIs explorable as a way to make them more accessible to developers. I have a project [http://github.com/avdi/rack-stereoscope] which seeks to put an explorable HTML frontend on an API based solely on hints gleaned from Link headers, content types, etc.; and I'm wondering if there are any general ways to structure services so that projects like Stereoscope can discover the shape of data expected to be POSTed and show that to a developer in a helpful way.

--
Avdi
Exposing and discovering POST and PUT options is done a couple of different ways:

1) HTML uses templated inputs via the FORM and INPUT elements to tell clients how to handle both templated queries (FORM method="get") and templated writes (FORM method="post"). [1]

2) AtomPub handles this by identifying two important links (Collection URI and Member URI) and instructing clients and servers that the Collection URI can be used to create members (via HTTP POST) and the Member URI can be used for both updating entries (via PUT) and removing entries (via HTTP DELETE). [2]

The common thread here is that the details on how to discover the rules for writing data to the server are spelled out in the media type documentation. This is fine as long as you are using a media type that has these details delineated (HTML, Atom/AtomPub, SMIL, etc.). Data-oriented media types such as XML, JSON, RDF, CSV, etc. do not have these read/write semantics defined, and that can be a real bummer<g>.

You can get around this limitation by documenting a set of link relations that your clients and servers will need to understand; by importing sub-sets of other media types (e.g. XForms for XML, etc.) into your representations and telling clients and servers to refer to related documentation for guidance; or you can design and document your own media type that has all the necessary template and link relation details for your clients and servers to implement.

mca
http://amundsen.com/blog/

[1] http://www.w3.org/TR/REC-html40/interact/forms.html#edef-FORM
[2] http://tools.ietf.org/html/rfc5023#section-5

On Mon, May 10, 2010 at 17:44, Avdi Grimm <avdi@...> wrote:
> On Mon, May 10, 2010 at 4:55 PM, Guilherme Silveira
> <guilherme.silveira@...> wrote:
>> If POSTing is your system entry point, your rest library should guess a media type (+profile) and POST it, if it gets back a 415, try it with any of the media types that the server has told you it understands.
> > So there's no way to safely discover an acceptable list of POST > formats? The library must performs a potentially state-changing > operation in order to get info about supported operations? > > It seems like there's an asymmetry here... HTTP provides copious ways > to discover content type of representations, content types of linked > resources, and alternative representations of resources; but when it > comes to POSTing data the means of discovery are practically > nonexistent. > > My interest, incidentally, is in making APIs explorable as a way to > make them more accessible to developers. I have a project > [http://github.com/avdi/rack-stereoscope] which seeks to put an > explorable HTML frontend on API based solely on hints gleaned from > from Link headers, content types, etc.; and I'm wondering if there are > any general ways to structure services so that projects like > Stereoscope can discover the shape of data expected to be POSTed and > show that to a developer in a helpful way. > > -- > Avdi > > > ------------------------------------ > > Yahoo! Groups Links > > > >
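Mike's point 1) can be made concrete: a client can discover the write interface by parsing the FORM/INPUT markup itself. This sketch uses only the Python standard library; the sample form is hypothetical:

```python
# Sketch: pull the write interface (target URI, method, media type, field
# names) out of HTML FORM/INPUT elements, i.e. hypermedia-driven discovery.
from html.parser import HTMLParser

class FormDiscovery(HTMLParser):
    """Collects the write interface advertised by FORM/INPUT elements."""
    def __init__(self):
        super().__init__()
        self.forms = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "form":
            self.forms.append({
                "action": a.get("action", ""),
                "method": a.get("method", "get").lower(),
                # the default enctype per the HTML forms spec
                "enctype": a.get("enctype",
                                 "application/x-www-form-urlencoded"),
                "fields": [],
            })
        elif tag == "input" and self.forms and a.get("name"):
            self.forms[-1]["fields"].append(a["name"])

html = """
<form action="/drafts" method="post">
  <input name="title"><input name="body">
</form>
"""
p = FormDiscovery()
p.feed(html)
print(p.forms[0]["method"], p.forms[0]["action"], p.forms[0]["fields"])
# post /drafts ['title', 'body']
```

This is exactly the kind of hint an explorable frontend like Stereoscope could surface to a developer, provided the service exposes forms (or an equivalent templated-write control) in its representations.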
Kris Zyp wrote: > > I believe one should be able to assume that the content type of the > representation returned from a server from GET for URI is acceptable > in a PUT request to that server for the same URI. > Absolutely not. The late binding of representation to resource precludes this assumption. HTML is capable of providing an interface to an Atom system. What media type to PUT or POST to the system is explicitly provided in the markup, i.e. a self-documenting interface. Assuming that you can PUT or POST HTML to my system because that's the media type I sent on GET would not work -- I derive HTML from Atom, not the other way around. A PUT of an HTML document would show an intent to replace the self-documenting interface provided by the HTML representation, with some other application state. HTML is generated by my system, it is not subject to change via PUT to negotiated resources which happen to return text/html or application/xhtml+xml on GET with a Web browser, but happen to return Atom to a feed reader. > > When using JSON, > additional information about acceptable property values can be > determined from any JSON Schema referenced by the resource. In other > words, if you GET some resource, and the server responds with: > > Content-Type: application/my-type+json; profile=my-schema > > One could retrieve the schema from the "my-schema" relative URI and do > a PUT using the application/my-type+json content type with the schema > information as a guide to what property values are acceptable. > Sure you can *do* this, it just wouldn't be REST. Leaving aside that the media type identifier definition for JSON doesn't say anything about extending it using *+json, the media type definition for JSON says nothing about HTTP methods. Where have you provided a self-documenting interface giving a target URI, method and media type -- as provided by forms languages having no corollary in JSON, yet required by REST? 
If you "just know" that you can PUT or DELETE some JSON resource, it's no more RESTful than "just knowing" that you can PUT or DELETE some JPEG. You're resorting to unbounded creativity, rather than using standard media types and link relations which *do* cover HTTP methods, for any target media type. > > Discovery of POST actions is completely different than PUT (since > PUT's behavior is implied by a GET response). A JSON Schema can > describe possible POST actions with submission links, including an > acceptable content type (in the "enctype" property). > I don't see how. Regardless of schema, there's simply no mention in the media type definition of JSON for describing URIs or methods, i.e. there's no forms language. The demo I posted consists of XHTML steady-states derived from various source representations of other media types. These steady-states (will) provide a self-documenting API to the underlying Atom-based system. The user isn't trying to discover PUT vs. POST actions. The user is trying to drive an application to another steady-state. The user agent needs to translate that user goal into HTTP interactions. If the user is trying to add a new post, the user agent is instructed to POST to the domain root. If the user is trying to add a new comment, the user agent is instructed to POST to the appropriate comment thread. If the user intent is to edit an existing entry, the user agent is instructed to PUT to the existing URI. In each case, the user agent is instructed to use application/atom+xml; type=entry. There's no RESTful way to instruct any user agent that "this system uses Atom Protocol" and this may not be inferred from the fact that the system uses Atom. All I can do is provide a self-documenting hypertext API which instructs user agents how to interact with the system. This API may or may not conform to Atom Protocol. Whether it does or not is less important to REST than its presence. 
None of this is any different for a system based on JSON rather than Atom. As a REST system, I could change my Atom backend to a JSON backend on a whim. I'm not saying it would be easy, but I am saying that the application states wouldn't change. The HTML would still present a textarea, changes to that textarea would be submitted to the same URI, using whatever media type the form says to use -- all HTML user agents automatically update to the new API. If you need to guess what media type to use then you can't possibly be using REST. A REST API will always tell you exactly what media type to use. It isn't implicit in any guessable fashion, it's explicit. If it isn't explicit, it isn't REST. HTML says what POST does, but only your hypertext can specify media type; if you lack such hypertext, you lack a critical REST constraint. -Eric
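Eric's argument is that the hypertext itself carries the target URI, method, and media type, so a user agent reads them instead of guessing. A minimal sketch of a client doing exactly that; the form markup and the `FormFinder` class are illustrative, not taken from Eric's system:

```python
# Sketch: pulling the target URI, method, and media type out of hypertext
# rather than guessing them. The form markup below is hypothetical.
from html.parser import HTMLParser

class FormFinder(HTMLParser):
    """Collects the action/method/enctype attributes of each <form>."""
    def __init__(self):
        super().__init__()
        self.forms = []

    def handle_starttag(self, tag, attrs):
        if tag == "form":
            a = dict(attrs)
            self.forms.append({
                "action": a.get("action"),
                "method": a.get("method", "get").lower(),
                "enctype": a.get("enctype",
                                 "application/x-www-form-urlencoded"),
            })

page = """
<form action="/2006/aug/09/11.atom" method="post"
      enctype="application/x-www-form-urlencoded">
  <textarea name="content"></textarea>
</form>
"""
finder = FormFinder()
finder.feed(page)
print(finder.forms[0])
```

A user agent coded this way keeps working when the server changes the target URI or enctype, because it re-reads the instructions from every steady-state.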
Hey Mike! Superb summary. Do machine-focused media types actually have a requirement for templated writes (a la HTML)? What benefit would machine clients get from that, given that they're programmed ahead of time anyway? Cheers, Mike On Mon, May 10, 2010 at 11:07 PM, mike amundsen <mamund@...> wrote: > Exposing and discovering POST and PUT options is done a couple different > ways: > > 1) HTML uses templated inputs via the FORM and INPUT elements to tell > clients how to handle both templated queries (FORM action="get") and > templated writes (FORM action="post"). [1] > > 2) AtomPub handles this by identifying two important links (Collection > URI and Member URI) and instructing clients and servers that the > Collection URI can be used to create members (via HTTP POST) and the > Member URI can be used for both updating entries (via PUT) and > removing entries (via HTTP DELETE) [2] > > The common thread here is that the details on how to discover the > rules for writing data to the server are spelled out in the media type > documentation. This is fine as long as you are using a media type > that has these details delineated (HTML, > Atom/AtomPub, SMIL, etc). Data-oriented media-types such as XML, JSON, > RDF, CSV, etc do not have these read/write semantics defined and that > can be a real bummer<g>. > > You can get around this limitation by documenting a set of link > relations that your clients and servers will need to understand; by > importing other sub-sets of media types (e.g. XForms for XML, etc.) > into your representations and telling clients and servers to refer to > related documentation for guidance; or you can design and document your > own media-type that has all the necessary template and link relation > details for your clients and servers to implement. 
> > mca > http://amundsen.com/blog/ > > [1] http://www.w3.org/TR/REC-html40/interact/forms.html#edef-FORM > [2] http://tools.ietf.org/html/rfc5023#section-5 > > > On Mon, May 10, 2010 at 17:44, Avdi Grimm <avdi@...> wrote: > > On Mon, May 10, 2010 at 4:55 PM, Guilherme Silveira > > <guilherme.silveira@...> wrote: > >> If POSTing is your system entry point, your rest library should guess a > media type (+profile) and POST it, if it gets back a 415, try it with any of > the media types that the server has told you it understands. > > > > So there's no way to safely discover an acceptable list of POST > > formats? The library must perform a potentially state-changing > > operation in order to get info about supported operations? > > > > It seems like there's an asymmetry here... HTTP provides copious ways > > to discover the content type of representations, content types of linked > > resources, and alternative representations of resources; but when it > > comes to POSTing data the means of discovery are practically > > nonexistent. > > > > My interest, incidentally, is in making APIs explorable as a way to > > make them more accessible to developers. I have a project > > [http://github.com/avdi/rack-stereoscope] which seeks to put an > > explorable HTML frontend on an API based solely on hints gleaned from > > Link headers, content types, etc.; and I'm wondering if there are > > any general ways to structure services so that projects like > > Stereoscope can discover the shape of data expected to be POSTed and > > show that to a developer in a helpful way. > > > > -- > > Avdi
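Guilherme's guess-and-retry suggestion (and Avdi's objection that the probe is state-changing) can be sketched with a stub standing in for the server. The function names, and the idea of the 415 response carrying an Accept header listing usable types, are assumptions for illustration, not a documented HTTP convention:

```python
# Sketch of the fallback Guilherme describes: POST a guessed media type and,
# on 415, retry with a type the server says it accepts. The server here is a
# stub standing in for a real HTTP round trip.
def stub_server(media_type, body):
    """Hypothetical endpoint that only accepts Atom entries."""
    if media_type != "application/atom+xml":
        # Assumption: the 415 response advertises acceptable types.
        return 415, {"Accept": "application/atom+xml"}
    return 201, {}

def post_with_fallback(server, body, guess):
    status, headers = server(guess, body)
    if status == 415 and "Accept" in headers:
        for alt in headers["Accept"].split(","):
            status, headers = server(alt.strip(), body)
            if status != 415:
                return status, alt.strip()
    return status, guess

status, used = post_with_fallback(stub_server, "<entry/>", "application/json")
print(status, used)
```

Note that this does nothing for Avdi's concern: the first POST may already have changed state on the server, which is exactly the asymmetry he describes.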
I am trying to design a "collection of items" resource. I need to support the following operations:

Create the collection
Remove the collection
Add a single item to the collection
Add multiple items to the collection
Remove a single item from the collection
Remove multiple items from the collection

This is as far as I have gone:

Create collection:
==>
POST /service
Host: www.host.com
Content-Type: application/xml
<collection name="items">
<item href="item1"/>
<item href="item2"/>
<item href="item3"/>
</collection>

<==
201 Created
Location: http://myserver.com/service/items
Content-Type: application/xml
...

Remove collection:
==>
DELETE /service/items

<==
200 OK

Removing a single item from the collection:
==>
DELETE /service/items/item1

<==
200 OK

However, I am finding supporting the other operations a bit tricky, i.e. what methods can I use to:

Add single or multiple items to the collection. (PUT doesn't seem to be right here as per the HTTP 1.1 RFC)

Remove multiple items from the collection in one transaction. (DELETE doesn't seem to be right here either)

Thanks,
Suresh
-----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 5/11/2010 4:31 AM, Eric J. Bowman wrote: > > > Kris Zyp wrote: > > > > I believe one should be able to assume that the content type of the > > representation returned from a server from GET for URI is acceptable > > in a PUT request to that server for the same URI. > > > > Absolutely not. The late binding of representation to resource > precludes this assumption. HTML is capable of providing an interface > to an Atom system. What media type to PUT or POST to the system is > explicitly provided in the markup, i.e. a self-documenting interface. > > Assuming that you can PUT or POST HTML to my system because that's the > media type I sent on GET would not work -- I derive HTML from Atom, not > the other way around. > > A PUT of an HTML document would show an intent to replace the > self-documenting interface provided by the HTML representation, with > some other application state. HTML is generated by my system, it is not > subject to change via PUT to negotiated resources which happen to return > text/html or application/xhtml+xml on GET with a Web browser, but > happen to return Atom to a feed reader. > I definitely agree that this assumption can be wrong, and a 415 could be returned with directions about what media type is acceptable. Or the client's preferred media type may override the GET's content type (in which case he probably wouldn't be asking this question). But requiring a client to simply "know" what media type the server needs (when the client could encode in multiple media types) rather than attempting to use the same media type from a GET would obviously require out of band information and badly violate REST. > > > > > When using JSON, > > additional information about acceptable property values can be > > determined from any JSON Schema referenced by the resource. 
In other > > words, if you GET some resource, and the server responds with: > > > > Content-Type: application/my-type+json; profile=my-schema > > > > One could retrieve the schema from the "my-schema" relative URI and do > > a PUT using the application/my-type+json content type with the schema > > information as a guide to what property values are acceptable. > > > > Sure you can *do* this, it just wouldn't be REST. Leaving aside that > the media type identifier definition for JSON doesn't say anything about > extending it using *+json, the media type definition for JSON says > nothing about HTTP methods. Where have you provided a self-documenting > interface giving a target URI, method and media type -- as provided by > forms languages having no corollary in JSON, yet required by REST? > > > If you "just know" that you can PUT or DELETE some JSON resource, it's > no more RESTful than "just knowing" that you can PUT or DELETE some > JPEG. You're resorting to unbounded creativity, rather than using > standard media types and link relations which *do* cover HTTP methods, > for any target media type. > RFC 2616 is sufficient for describing the semantics of PUT and DELETE. I don't need to know anything besides what RFC 2616 has clearly described. > > > > > Discovery of POST actions is completely different than PUT (since > > PUT's behavior is implied by a GET response). A JSON Schema can > > describe possible POST actions with submission links, including an > > acceptable content type (in the "enctype" property). > > > > I don't see how. Regardless of schema, there's simply no mention in > the media type definition of JSON for describing URIs or methods, i.e. > there's no forms language. The demo I posted consists of XHTML steady- > states derived from various source representations of other media > types. These steady-states (will) provide a self-documenting API to > the underlying Atom-based system. 
> JSON Schema effectively provides a forms language: http://tools.ietf.org/html/draft-zyp-json-schema-02 > > > The user isn't trying to discover PUT vs. POST actions. The user is > trying to drive an application to another steady-state. The user agent > needs to translate that user goal into HTTP interactions. If the user > is trying to add a new post, the user agent is instructed to POST to > the domain root. If the user is trying to add a new comment, the user > agent is instructed to POST to the appropriate comment thread. If the > user intent is to edit an existing entry, the user agent is instructed > to PUT to the existing URI. In each case, the user agent is instructed > to use application/atom+xml; type=entry. > > There's no RESTful way to instruct any user agent that "this system > uses Atom Protocol" and this may not be inferred by the fact that the > system uses Atom. All I can do is provide a self-documenting hypertext > API which instructs user agents how to interact with the system. This > API may or may not conform to Atom Protocol. Whether it does or not is > less important to REST than its presence. > > None of this is any different for a system based on JSON rather than > Atom. As a REST system, I could change my Atom backend to a JSON > backend on a whim. I'm not saying it would be easy, but I am saying > that the application states wouldn't change. The HTML would still > present a textarea, changes to that textarea would be submitted to the > same URI, using whatever media type the form says to use -- all HTML > user agents automatically update to the new API. > > If you need to guess what media type to use then you can't possibly be > using REST. A REST API will always tell you exactly what media type to > use. It isn't implicit in any guessable fashion, it's explicit. If it > isn't explicit, it isn't REST. 
HTML says what POST does, but only your > hypertext can specify media type, if you lack such hypertext you lack > a critical REST constraint. > Absolutely, I agree. - -- Kris Zyp SitePen (503) 806-1841 http://sitepen.com -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.9 (MingW32) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ iEYEARECAAYFAkvpUc4ACgkQ9VpNnHc4zAyE7wCcCjIsRLOPR9UmAvJj50z9whT/ 5fsAn2gCtvWZto0PD4c/WeRUAGyMUZJE =mp4f -----END PGP SIGNATURE-----
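Kris's point that JSON Schema effectively provides a forms language can be sketched like this. The schema document is hypothetical; the "links", "method", and "enctype" keywords follow the hyper-schema portion of draft-zyp-json-schema-02:

```python
# Sketch of reading a submission link out of a JSON Schema, along the lines
# Kris describes. The schema below is made up for illustration.
import json

schema_doc = json.loads("""
{
  "description": "A weblog comment",
  "properties": {"content": {"type": "string"}},
  "links": [
    {"rel": "create",
     "href": "/comments",
     "method": "POST",
     "enctype": "application/my-type+json"}
  ]
}
""")

def submission_link(schema, rel):
    """Find the first link description object with the given relation."""
    for link in schema.get("links", []):
        if link.get("rel") == rel:
            return link
    return None

link = submission_link(schema_doc, "create")
print(link["method"], link["href"], link["enctype"])
```

Whether this counts as the self-documenting hypertext Eric requires, or as out-of-band coupling, is exactly the disagreement in this thread.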
On Tue, May 11, 2010 at 8:39 AM, Suresh <sureshkk@...> wrote: > Create the collection > Remove the collection > Add a single item to the collection > Add multiple items to the collection > Remove a single item from the collection > Remove multiple items from the collection > This is as far as I have gone: It seems to me that WebDAV has already covered this, yes? -- Avdi Home: http://avdi.org Developer Blog: http://avdi.org/devblog/ York Coworking: http://www.meetup.com/york-coworking/ Twitter: http://twitter.com/avdi Work: http://wearetitans.net The Lazy Faire: http://thelazyfaire.org Journal: http://avdi.livejournal.com
If your goal is to be able to manage not just members of a list, but lists themselves, the easiest solution is to model the list as a member of a collection, too. Then your first step would be to add a new list to the list collection: *** REQUEST POST /lists/ host: www.example.org content-type: application/x-www-form-urlencoded slug: services creator=mike&description=list+of+services *** RESPONSE 201 Created Location: http://www.example.org/lists/services By modeling the list collection, you can apply operations directly to the collection including updating list member details (PUT), removing a list (DELETE) and you can use POST to model more complex tasks including moving or copying lists or list contents (if your media-type allows such things) mca http://amundsen.com/blog/ On Tue, May 11, 2010 at 08:39, Suresh <sureshkk@...> wrote: > I am trying to design a "collection of items" resource. I need to support the following operations: > > Create the collection > Remove the collection > Add a single item to the collection > Add multiple items to the collection > Remove a single item from the collection > Remove multiple items from the collection > This is as far as I have gone: > > Create collection: > ==> > POST /service > Host: www.host.com > Content-Type: application/xml > <collection name="items"> > <item href="item1"/> > <item href="item2"/> > <item href="item3"/> > </collection> > > <== > 201 Created > Location: http://myserver.com/service/items > Content-Type: application/xml > ... 
(DELETE doesn't seem to be right here either) > > Thanks, > Suresh
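mca's suggestion of modelling lists themselves as collection members maps Suresh's operations onto the uniform interface. A rough in-memory sketch of that mapping; the class and method names are illustrative stand-ins, not a real HTTP server:

```python
# Sketch: Suresh's operations once lists are modelled as members of a
# /lists/ collection, as mca suggests. In-memory stand-in for a server.
class ListCollection:
    def __init__(self):
        self.lists = {}

    def post(self, name, items=()):          # create a list -> 201 + URI
        self.lists[name] = list(items)
        return 201, f"/lists/{name}"

    def delete(self, name, item=None):       # remove a list, or one member
        if item is None:
            del self.lists[name]
        else:
            self.lists[name].remove(item)
        return 200

    def put(self, name, index, value):       # replace one member in place
        self.lists[name][index] = value
        return 200

coll = ListCollection()
status, uri = coll.post("services", ["item1", "item2"])
coll.put("services", 0, "item1-v2")
coll.delete("services", "item2")
print(status, uri, coll.lists["services"])  # 201 /lists/services ['item1-v2']
```

Multi-item removal is the one operation that still doesn't fit cleanly, which is what leads to the bulk-operations discussion below in the thread.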
Recent m2m discussions on rest-discuss have had me thinking quite
deeply about the problem. For any given XHTML application steady-
state on my REST system, regardless of the form language I am using,
an XSLT template may be developed which outputs the metadata as RDF
(GRDDL). It occurs to me that, because my domain-specific vocabulary
remains static across variant representations (generated on the user
agent using Xforms and/or XSLT), so does the XSLT pattern for GRDDL-
generating RDF from these variants.
(You'll notice that my RESTful take on Semweb doesn't assign URIs to
RDF representations. A limited number of XSLT GRDDL transformations
are assigned URIs and linked to from the application steady-states.
The idea is that RDF can describe an m2m starting point for any
function provided by a REST application, by linking to its machine-
readable hypertext interface using link relations. This approach falls
a few link relations shy of standardization, though.)
I was surprised to discover this. I have lots of example/test files
for my system, one is a static-page WIP for the Xforms interface, with
RDFa matching my demo (except without syntax errors like @instance-of).
Regardless of the specific markup, the metadata stays constant (well,
not exactly, but it will when I've refactored a bit as a result of this
research). In fact, the RDF pattern (the XSLT generating it) for my
weblog, or for some other weblog using my domain-specific vocabulary,
regardless of its URI allocation scheme, stays constant. Here's that
pattern again (I posted this several weeks ago), somewhat:
<rdf:Description about="#{//*[@instanceof='wiski:weblog-entry']/@id}">
<link rel='self' href='{./@about}.atom'/>
<link rel='edit' href=
"{document(concat(./@about,'.atom'))//*[@rel='edit']/@href}"/>
<link rel='replies' href=
"{document(concat(./@about,'.atom'))//*[@rel='replies']/@href}"/>
<link rel='alternate' href='{./@about}'/>
<link rel='etc' href='{etc.}'/>
</rdf:Description>
The post-new-entry form, if present, is always:
//*[@instanceof='wiski:weblog-entry'][1]
While a post-new-comment form, if present, is always:
//*[@instanceof='wiski:weblog-comment'][1]
So, having an understanding of the domain-specific vocabulary expressed
as RDFa metadata on my weblog, allows a spambot to be programmed to
follow the API from starting points in the RDF, i.e. the spambot needs
to know how to post a reply to a collection, so it looks in the RDF for
rel='replies', which informs the bot where to look in the steady-state
it generated, to find the interface for posting a reply to any entry or
comment it encounters.
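The lookup such a bot performs can be sketched with ElementTree's XPath subset; the markup fragment below is made up to mirror the @instanceof patterns above:

```python
# Sketch: find the first element carrying a given @instanceof in a
# steady-state, per the //*[@instanceof='...'][1] patterns above.
# The markup is a made-up fragment using the wiski vocabulary.
import xml.etree.ElementTree as ET

page = ET.fromstring("""
<div>
  <div instanceof="wiski:weblog-entry" id="post-1">entry</div>
  <form instanceof="wiski:weblog-comment" id="comment-0.edit">reply</form>
</div>
""")

def first_instance(root, vocab_type):
    """Equivalent of //*[@instanceof='...'][1] in ElementTree's subset."""
    matches = root.findall(f".//*[@instanceof='{vocab_type}']")
    return matches[0] if matches else None

form = first_instance(page, "wiski:weblog-comment")
print(form.get("id"))  # comment-0.edit
```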
So it makes sense to provide m2m capability using Semweb (kinda the
whole point) technology, based on standard link relations to make it
RESTful. Given the m2m purpose of Semweb, it makes sense to use
fragments, and link explicitly to form controls in the generated RDF.
This requires a worldview that allows link relation semantics to vary
based on context -- if the intent is to view a comment thread, rel=
'replies' points to the comment thread, whereas if the intent is to
post a comment (via RDF introspection) then rel='replies' points to a
form control.
Taking an OO perspective, link relations may identify properties or
methods, depending on the context in which they appear -- if a user
agent wants the rel='replies' method, it looks in the RDF, if it wants
the rel='replies' property it looks in the steady-state. Like so:
<rdf:Description about='#post-1'><!--object in steady state-->
<link rel='source' href='/2006/aug/09/11.atom'/><!--property-->
<link rel='edit' href='#post-1.edit'/><!--method-->
<link rel='replies' href='/2006/aug/09/11#comment-0.edit'/><!--method-->
<link rel='self' href='/2006/aug/09/11'/><!--property-->
<!--nonexistent link relation would be a property in the steady state-->
<link rel='tags' href='#tags-1.edit'/><!--method-->
</rdf:Description>
In a threaded-comment architecture, each comment has its own unique
rel='replies' URI, whereas in my flat-comment architecture, each comment
shares the same rel='replies' URI. If the m2m goal is to reply to
a comment instead of an entry, the RDF link for rel='replies' for the
//*[@instanceof='wiski:weblog-comment'] of interest is followed, not
the rel='replies' link in the steady-state (which doesn't point to the
form control).
When my architecture is extended to support threaded comments, it isn't
the m2m intent that will change -- only the target URI will change. The
XSLT pattern doesn't change, it's still reading in the same <form>
field with the same method with the same media type, but the RDF output
from GRDDL reflects the new, unique target URIs for each comment's rel=
'replies'. Until a comment is made, there is no rel='replies' in the
steady-state, only in the RDF.
Spambots coded against my GRDDL output automatically follow the new API,
because the RDF is defining a... >gasp< ...contract to follow for
introspecting the rel='replies' interface, regardless of how the markup
(or even method, say I change to PUT instead of tunneling over POST
when a firewall rule is relaxed) evolves over time. I'm guaranteeing a
metadata vocabulary which describes my interfaces, not any particular
interface description.
In other words, the Semantic Web provides a Framework for Describing
Resources (duh). In the case of my weblog, the manipulable resources
are all represented as Atom, but *how* to manipulate resource state is
contained within XHTML application steady-states. An RDF view of a
steady-state is just a framework for exposing the resources of interest
making up that steady-state, and their interfaces -- in the OO view of
it, each rdf:Description element identifies an object of interest in the
markup, while the child elements describe its properties and methods.
This is where RDF differs from an Interface Definition Language. The
properties and methods I'm referring to aren't HTTP methods as in IDLs,
rather standard link relations and specific application functions. The
XSLT GRDDL pattern above merely *points to* a given object's hypertext-
embedded property and method definitions. It's a framework, and a nice
m2m entry point... this is about the first time the terms "Semweb" and
"RDF" haven't elicited shoulder-shrugs from me as regards my own
system. I'd only been halfheartedly using RDFa without really grokking
the point of it.
So the form control for commenting on #post-1 is located at its rel=self
URI's #comment-0.edit, which will contain method, target and media type
instructions... very useful knowledge for a spambot to glean, once it
knows where to look, provided by a rel='replies' that's only present in
the RDF -- the steady-state would just give a link to the comment
thread, if it even has rel='replies', whereas the RDF links to the
comment thread's post-reply control.
My application's "edit method" points at either an HTML <fieldset> or an
Xforms <group> containing form fields for title, slug header, content,
draft, tags and submission. My Xforms interface looks somewhat like:
<xfm:group id='post-1.edit'>...<xfm:trigger><xfm:send submission='post-1.save'/>
</xfm:trigger></xfm:group>. The xfm:send is referencing the Xforms
model in the <head>, or perhaps the RDFa could link to #post-1.save...
Here's the very incomplete gist of it (I've worked out the document
structure, now I'm working on the submissions so I can code the system
to handle them):
<xfm:model>
...
<xfm:submission id='post-1.save' ref="instance('post-1.src')" method=
'put' validate='false' serialization='application/atom+xml; type=entry'>
<!-- xfm:repeat logic may be used instead of static values -->
<xfm:resource>/2006/aug/09/11.atom.draft</xfm:resource>
<!-- if app:draft (in the xfm:model) = true() then previous line, else:
<xfm:resource>/2006/aug/09/11.atom</xfm:resource> -->
<xfm:header><!-- todo: make this header optional -->
<xfm:name>slug</xfm:name>
<xfm:value><!-- $post-1-slug --></xfm:value>
</xfm:header>
<xfm:header>
<xfm:name>If-Match</xfm:name>
<xfm:value><!-- $post-1-etag --></xfm:value>
</xfm:header>
</xfm:submission>
...
</xfm:model>
See why I like Xforms? The model instructs the user agent *how* to
conditionally submit entries, comments and edits by following its nose.
Now, that's what I call a self-documenting API. The target URI can't be
known until the parameters of the request for the "edit post" state
transition are known (due to whether or not it's a draft, *.draft is not
world-readable on my system), which is why rel='edit' in the RDF points
to a self-documenting interface instead of a source document -- it's a
method, not a property, in OO-speak (and in terms of m2m intent).
The referenced form fields for editing tags have buttons for apply,
remove, reset and commit. Following the commit button (or when draft
is toggled to false) leads to the xfm:model for the application
function, which gives instructions for target URI, method (PATCH) and
media type (application/atomcat+xml).
If 'tags' were an understood link relation, pointing it to a <ul> would
indicate non-editable, whereas pointing it to an element within a form
would indicate otherwise. This way, user agents are instructed where
to look for the interface for a task, allowing that interface to evolve.
Initially, I'll implement tags as part of the POST or PUT for creating
or editing, only later will it be a standalone PATCH function.
The link in the RDF stays the same (points to the tag-editing control),
but the hypertext will indicate a different method over time. So the
contract specifies where to look for the URI, method and media type.
Not what the URI, method or media type should be.
Users who lack privilege to do a certain operation will get different
target URIs in the RDF GRDDL output. For example, if you can't change
a post's tags, then the link points to #post-1.tags not #tags-1.edit.
I suppose it's a drawback that the RDF isn't explicit about editability,
but it isn't difficult to check the self-or-parent axis for form
elements, either. Or look for an Accept response header.
Anyway, just thought I'd share the idea of using link relations to
identify m2m goals inside RDF documents, linking directly to API
controls using fragments, instead of linking to source documents.
-Eric
Thanks Mike. My goal is to manage only the members of a list, especially removing multiple items from the list. Which HTTP method is most suitable for such a use case? i.e. given a list of members, what is the best way to remove a subset of the members from the list? Best Regards, Suresh On Tue, May 11, 2010 at 8:45 PM, mike amundsen <mamund@...> wrote: > If your goal is to be able to manage not just members of a list, but > lists themselves, the easiest solution is to model the list as a > member of a collection, too. > > Then your first step would be to add a new list to the list collection: > > *** REQUEST > POST /lists/ > host: www.example.org > content-type: application/x-www-form-urlencoded > slug: services > > creator=mike&description=list+of+services > > *** RESPONSE > 201 Created > Location: http://www.example.org/lists/services > > By modeling the list collection, you can apply operations directly to > the collection including updating list member details (PUT), removing > a list (DELETE) and you can use POST to model more complex tasks > including moving or copying lists or list contents (if you media-type > allows such things) > > mca > http://amundsen.com/blog/ > > > > > On Tue, May 11, 2010 at 08:39, Suresh <sureshkk@...> wrote: > > I am trying to design a "collection of items" resource. I need to support > the following operations: > > > > Create the collection > > Remove the collection > > Add a single item to the collection > > Add multiple items to the collection > > Remove a single item from the collection > > Remove multiple items from the collection > > This is as far as I have gone: > > > > Create collection: > > ==> > > POST /service > > Host: www.host.com > > Content-Type: application/xml > > <collection name="items"> > > <item href="item1"/> > > <item href="item2"/> > > <item href="item3"/> > > </collection> > > > > <== > > 201 Created > > Location: http://myserver.com/service/items > > Content-Type: application/xml > > ... 
> > > > Remove collection: > > ==> > > DELETE /service/items > > > > <== > > 200 OK > > > > Removing a single item from the collection: > > ==> > > DELETE /service/items/item1 > > > > <== > > 200 OK > > > > However, I am finding supporting the other operations a bit tricky i.e. > what methods can I use to: > > > > Add single or multiple items to the collection. (PUT doesn't seem to be > right here as per HTTP 1.1 RFC) > > > > Remove multiple items from the collection in one transaction. (DELETE > doesn't seem to right here either) > > > > Thanks, > > Suresh > > > > > > > > ------------------------------------ > > > > Yahoo! Groups Links > > > > > > > > > -- When the facts change, I change my mind. What do you do, sir?
Sounds like you're talking about supporting some type of bulk operation(s). I sometimes fashion a media type that supports bulk operations by defining a <bulk /> element that can contain <add />, <update />, and <remove /> sections. I then usually define a specific resource for bulk operations: (/services/;bulk) and POST the representation there. I've also POSTed the representation to the collection root (/services), too. The biggest hassle w/ bulk operations is that it reduces visibility on the wire. Caches are not going to understand that you've removed (added, updated) each item since that's buried in the representation. If you use ETags, you'll have a fallback to help caches, but sometimes that's not enough. BTW, Subbu's RESTful Web Services Cookbook covers this scenario pretty well (CH 11, i think) mca http://amundsen.com/blog/ On Tue, May 11, 2010 at 14:11, Suresh Kumar <sureshkk@...> wrote: > Thanks Mike. > My goal is to manage only the members of a list, especially removing > multiple items from the list. Which HTTP method is most suitable for such a > use case? i.e. given a list of members, what is the best way to remove a > subset of the members from the list? > Best Regards, > Suresh > > On Tue, May 11, 2010 at 8:45 PM, mike amundsen <mamund@...> wrote: >> >> If your goal is to be able to manage not just members of a list, but >> lists themselves, the easiest solution is to model the list as a >> member of a collection, too. 
>> >> Then your first step would be to add a new list to the list collection: >> >> *** REQUEST >> POST /lists/ >> host: www.example.org >> content-type: application/x-www-form-urlencoded >> slug: services >> >> creator=mike&description=list+of+services >> >> *** RESPONSE >> 201 Created >> Location: http://www.example.org/lists/services >> >> By modeling the list collection, you can apply operations directly to >> the collection including updating list member details (PUT), removing >> a list (DELETE) and you can use POST to model more complex tasks >> including moving or copying lists or list contents (if you media-type >> allows such things) >> >> mca >> http://amundsen.com/blog/ >> >> >> >> >> On Tue, May 11, 2010 at 08:39, Suresh <sureshkk@...m> wrote: >> > I am trying to design a "collection of items" resource. I need to >> > support the following operations: >> > >> > Create the collection >> > Remove the collection >> > Add a single item to the collection >> > Add multiple items to the collection >> > Remove a single item from the collection >> > Remove multiple items from the collection >> > This is as far as I have gone: >> > >> > Create collection: >> > ==> >> > POST /service >> > Host: www.host.com >> > Content-Type: application/xml >> > <collection name="items"> >> > <item href="item1"/> >> > <item href="item2"/> >> > <item href="item3"/> >> > </collection> >> > >> > <== >> > 201 Created >> > Location: http://myserver.com/service/items >> > Content-Type: application/xml >> > ... >> > >> > Remove collection: >> > ==> >> > DELETE /service/items >> > >> > <== >> > 200 OK >> > >> > Removing a single item from the collection: >> > ==> >> > DELETE /service/items/item1 >> > >> > <== >> > 200 OK >> > >> > However, I am finding supporting the other operations a bit tricky i.e. >> > what methods can I use to: >> > >> > Add single or multiple items to the collection. 
(PUT doesn't seem to be >> > right here as per HTTP 1.1 RFC) >> > >> > Remove multiple items from the collection in one transaction. (DELETE >> > doesn't seem right here either) >> > >> > Thanks, >> > Suresh >> > >> > >> > >> > ------------------------------------ >> > >> > Yahoo! Groups Links >> > >> > >> > >> > > > > > -- > When the facts change, I change my mind. What do you do, sir? >
Kris Zyp wrote: > > > > > > > I believe one should be able to assume that the content type of > > > the representation returned from a server from GET for URI is > > > acceptable in a PUT request to that server for the same URI. > > > > > > > Absolutely not. The late binding of representation to resource > > precludes this assumption. HTML is capable of providing an interface > > to an Atom system. What media type to PUT or POST to the system is > > explicitly provided in the markup, i.e. a self-documenting > > interface. > > > > Assuming that you can PUT or POST HTML to my system because that's > > the media type I sent on GET would not work -- I derive HTML from > > Atom, not the other way around. > > > > A PUT of an HTML document would show an intent to replace the > > self-documenting interface provided by the HTML representation, with > > some other application state. HTML is generated by my system, it is > > not subject to change via PUT to negotiated resources which happen > > to return text/html or application/xhtml+xml on GET with a Web > > browser, but happen to return Atom to a feed reader. > > > > I definitely agree that this assumption can be wrong, and a 415 could > be returned with directions about what media type is acceptable. Or > the client's preferred media type may override the GET's content type > (in which case he probably wouldn't be asking this question). > I don't follow. The media type of a response to a GET request is a function of the client's Accept request header. To "override" conneg, one uses the URI assigned to the desired variant, instead of the negotiated URI. The client's desired media type will in no way affect a non-negotiated resource. On my system, one may directly dereference Atom representations by using .atom extensions -- each variant representation is a resource in its own right. 
Even so, REST isn't based on making assumptions about being able to PUT that .atom file back after editing it, by virtue of its media type being application/atom+xml. The only allowable change for the requesting user, may be to change the tags associated with a post. This could be done via PATCH, or via PUT to a subordinate resource, in either case using application/atomcat+xml. Only the hypertext API can tell me this. In REST, these specifics are communicated via hypertext. A user agent following its nose couldn't possibly run into a 415 error. If it does, the correct response from the broken server that led the user agent astray in the first place with incorrect hypertext, should be 500. I suspect you're expecting your dog to throw you the frisbee... ;-) > > But > requiring a client to simply "know" what media type the server needs > (when the client could encode in multiple media types) rather than > attempting to use the same media type from a GET would obviously > require out of band information and badly violate REST. > No, both are equally bad violations of REST. What media type to associate with a PUT, POST or PATCH request must be explicitly stated in the hypertext which instructs user agents *how* to make PUT, POST or PATCH requests (by "how" I mean, is the request required by the system to be conditional, and such). If you're relying on the assumption that the media type returned with a GET has anything to do with instructing the client what media type to use for PUT, POST or PATCH then your API is based on out-of-band information that is not common knowledge encompassed within a media type definition. > > RFC 2616 is sufficient for describing the semantics of PUT and DELETE. > I don't need to know anything besides what RFC 2616 has clearly > described. > Yes, you do, in REST. HTTP describes a range of possible semantics for some methods. REST APIs must describe the specific method semantics as implemented on your system, using standard media types. 
HTTP allows you to DELETE an Atom Media Entry. REST constrains that deletion to behave in accordance with the media type, i.e. the media file must also be deleted. DELETE behavior on a collection resource is undefined in HTTP and Atom, meaning the Atom media type allows different behaviors. So a REST API must self-document the DELETE behavior on collections, or even offer the user a choice of behaviors. HTTP does not a REST API make; there must be hypertext instructing the user agent *how* to DELETE. > > JSON Schema effectively provides a forms language: > http://tools.ietf.org/html/draft-zyp-json-schema-02 > I'm sure you know more about the workings of the IETF than I do, but I don't see how you can register application/schema+json without first revising the JSON media type identifier definition to allow for this extensibility. You should also reference RFC 3986, rather than 2396. In my opinion, JSON lacks the basis for a schema language which defines linking and forms. This is significantly beyond the scope of JSON-as-written. What you're after is a schema language for application/hyperjson, i.e. first you need a structured JSON language, then you have the basis for schema on top of it. Or somesuch. "This specification is protocol agnostic. The underlying protocol (such as HTTP) should sufficiently define the semantics of the client-server interface, the retrieval of resource representations linked to by JSON representations, and modification of those resources." Not really. HTML markup implies GET in several ways, in addition to defining GET as used in forms, but does not specify protocol. Standard methods are cross-protocol, with protocol determined by the URI. An HTML form can just as easily GET and PUT FTP URIs as HTTP URIs. This is part of the protocol-agnostic REST style, which relies on standard media types to constrain method semantics (or hypertext, where the media type lacks such constraints). 
Atom Protocol, for example, constrains the application/atom+xml media type's method implementation. HTTP allows PUT to create and/or replace a resource. REST constrains PUT to mean either create or replace for all resources on your system -- varying PUT semantics by media type is not allowed. In a REST system which implements Atom Protocol, PUT is constrained to replacement semantics by the application/atom+xml media type. Creation semantics are assigned to POST. The underlying protocol does _not_ sufficiently define these semantics for REST, because the REST style is protocol-agnostic. Method semantics are defined by the protocols which implement them (HTTP, FTP etc.). In REST, method implementation is constrained by media type (or API). If Atom Protocol method semantics were left to the underlying protocol, there wouldn't be interoperability because some systems would use PUT to create, while other systems would use POST, due to the unconstrained nature of standard method semantics. If a JSON schema language is to be of any use in REST development, then it must allow for the constraint of standard method semantics. So first, there must be a JSON language which provides data structures for traversal of a link (as with <a href>) vs. inclusion of a link (as with <img src/>) vs. processing instructions (as with <link rel= 'stylesheet'/>), I think. Note that the media type, as with HTML, would define all of these cases as GET. What your draft lacks, is any means to instruct user agents to fetch a resource for inclusion, vs. traversing the link and presenting the retrieved representation as the next steady-state. There's also no way to communicate constraints on method semantics, i.e. to assign PUT replacement semantics and assign POST creation semantics, or vice-versa depending on the needs of the schema/API developer. 
I believe what you're trying is possible, but it's my opinion that you have enough in there for two separate proposals, while lacking the tools I would need to implement it as a REST developer. -Eric
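Eric's point that the hypertext, not the media type received on GET, tells the client what to PUT or POST can be sketched with a toy form scanner. The markup below is hypothetical; real HTML restricts `method` and `enctype` values far more than this, which is part of the browser problem discussed in this thread:

```python
from html.parser import HTMLParser

class FormFinder(HTMLParser):
    """Collect each <form>'s action, method and enctype from markup."""
    def __init__(self):
        super().__init__()
        self.forms = []

    def handle_starttag(self, tag, attrs):
        if tag == "form":
            self.forms.append(dict(attrs))

# Hypothetical markup: the server states where, how, and with which
# media type to submit -- a self-documenting interface.
markup = ('<form action="/2006/aug/09/11.atom" method="put" '
          'enctype="application/atom+xml"></form>')
finder = FormFinder()
finder.feed(markup)
form = finder.forms[0]
# The agent reads form["action"], form["method"], form["enctype"]
# instead of assuming the GET response's Content-Type is acceptable.
```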
Two thoughts:
1. This could be seen as a PATCH (http://tools.ietf.org/html/draft-dusseault-http-patch-16) to the collection
http://myserver.com/service/items
with a patch media type describing the representation of the difference.
This has the advantage that the entity in the caches will be treated as stale.
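A minimal sketch of applying such a difference representation server-side. The patch format below is invented for illustration; RFC 5789 deliberately leaves the actual patch media type open:

```python
import xml.etree.ElementTree as ET

def apply_patch(items, patch_xml):
    """Apply a hypothetical XML patch document to a set of item hrefs.

    The format is made up for this sketch:
    <patch><add href="..."/><remove href="..."/></patch>
    """
    for op in ET.fromstring(patch_xml):
        if op.tag == "add":
            items.add(op.get("href"))
        elif op.tag == "remove":
            items.discard(op.get("href"))
    return items

items = {"item1", "item2", "item3"}
patch = '<patch><remove href="item1"/><remove href="item3"/></patch>'
apply_patch(items, patch)
# items is now {"item2"}
```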
2. Or, similar to Mike's suggestion, POST to a resource (named from set terminology):
http://myserver.com/service/items/union
<collection>
...
</collection>
This then creates the union of the list and the items in the <collection> element: it adds new items and updates/replaces items that intersect. This has the disadvantage that cached copies of the items are not made stale (however, the caching issue may be addressed at some point by suggestions in
http://www.ws-rest.org/files/WSREST2010-Preliminary-Proceedings.pdf
Using HTTP Link: Header for Gateway Cache Invalidation by Mike Kelly, Michael Hausenblas)
To remove elements, POST to
http://myserver.com/service/items/complement
<collection>
...
</collection>
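The set semantics of the two suggested resources might look roughly like this; only the /union and /complement names come from the suggestion above, the rest is an assumption:

```python
def union_post(existing, posted):
    """POST to the (hypothetical) .../union resource: add new members,
    replace members that intersect."""
    merged = dict(existing)
    merged.update(posted)
    return merged

def complement_post(existing, posted_hrefs):
    """POST to the (hypothetical) .../complement resource: remove the
    posted members, i.e. keep the relative complement."""
    return {h: v for h, v in existing.items() if h not in posted_hrefs}

items = {"item1": "a", "item2": "b"}
items = union_post(items, {"item2": "B", "item3": "c"})
# item3 added, item2 replaced
items = complement_post(items, {"item1", "item3"})
# item1 and item3 removed
```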
As Mike mentioned, see the RESTful Web Services Cookbook by Subbu Allamaraju for examples of bulk operations; it also has an example of using the PATCH method.
Robert Wilson (wilsonrm@...)
To: sureshkk@...
CC: rest-discuss@yahoogroups.com
From: mamund@...
Date: Tue, 11 May 2010 15:05:38 -0400
Subject: Re: [rest-discuss] How to design a RESTful collection resource?
Sounds like you're talking about supporting some type of bulk operation(s).
I sometimes fashion a media type that supports bulk operations by
defining a <bulk /> element that can contain <add />, <update />, and
<remove /> sections. I then usually define a specific resource
for bulk operations (/services/;bulk) and POST the representation
there. I've also POSTed the representation to the collection root
(/services), too.
Biggest hassle w/ bulk operations is that it reduces visibility on the
wire. Caches are not going to understand that you've removed (added,
updated) each item since that's buried in the representation. If you
use ETags, you'll have a fallback to help caches, but sometimes that's
not enough.
BTW, Subbu's RESTful Web Services Cookbook covers this scenario pretty
well (CH 11, i think)
mca
http://amundsen.com/blog/
On 12.05.2010 14:04, Robert Wilson wrote: > Two thoughts: > > 1. This could be seen as a PATCH > (http://tools.ietf.org/html/draft-dusseault-http-patch-16) to the > collection > http://myserver.com/service/items > with a patch media type describing the representation of the difference. > This has the advantage that the entity in the caches will be treated as > stale > ... Now RFC 5789. Best regards, Julian
Thanks Mike. Luckily I had a copy of the book and found a similar scenario in section 11.10 of CH 11. However, the suggestion of having a URI referring to a collection of items to be deleted is not good enough for me. In my case, a client is allowed to delete any number of items from the collection, i.e. there is no fixed number of items that a client is allowed to delete, so having a separate URI for every possible combination is not feasible. I think I will go with the option described here: http://www.suryasuravarapu.com/2009/10/rest-delete-operation-and-tunneling.html Though it tunnels the DELETE operation through POST, it provides some visibility by using a distinct URI for delete. Best regards, Suresh On Wed, May 12, 2010 at 12:35 AM, mike amundsen <mamund@...> wrote: > Sounds like you're talking about supporting some type of bulk operation(s). > > I sometimes fashion a media type that supports bulk operations by > defining a <bulk /> element that can contains <add />, <update />, and > <remove /> sections. I then define usually define a specific resource > for bulk operations: (/services/;bulk) and POST the representation > there. I've also POSTed the representation to the collection root > (/services), too. > > Biggest hassle w/ bulk operations is that it reduces visibility on the > wire. caches are not going to understand that you've removed (added, > updated) each item since that's buried in the representation. If you > use ETags, you'll have a fallback to help caches, but sometimes that's > not enough. > > BTW, Subbu's RESTful Web Services Cookbook covers this scenario pretty > well (CH 11, i think) > > mca > http://amundsen.com/blog/ > > > > > On Tue, May 11, 2010 at 14:11, Suresh Kumar <sureshkk@...> wrote: > > Thanks Mike. > > My goal is to manage only the members of a list, especially removing > > multiple items from the list. 
Which HTTP method is most suitable for such > a > > use case? i.e. given a list of members, what is the best way to remove a > > subset of the members from the list? > > Best Regards, > > Suresh > > > > On Tue, May 11, 2010 at 8:45 PM, mike amundsen <mamund@...> wrote: > >> > >> If your goal is to be able to manage not just members of a list, but > >> lists themselves, the easiest solution is to model the list as a > >> member of a collection, too. > >> > >> Then your first step would be to add a new list to the list collection: > >> > >> *** REQUEST > >> POST /lists/ > >> host: www.example.org > >> content-type: application/x-www-form-urlencoded > >> slug: services > >> > >> creator=mike&description=list+of+services > >> > >> *** RESPONSE > >> 201 Created > >> Location: http://www.example.org/lists/services > >> > >> By modeling the list collection, you can apply operations directly to > >> the collection including updating list member details (PUT), removing > >> a list (DELETE) and you can use POST to model more complex tasks > >> including moving or copying lists or list contents (if you media-type > >> allows such things) > >> > >> mca > >> http://amundsen.com/blog/ > >> > >> > >> > >> > >> On Tue, May 11, 2010 at 08:39, Suresh <sureshkk@...> wrote: > >> > I am trying to design a "collection of items" resource. 
I need to > >> > support the following operations: > >> > > >> > Create the collection > >> > Remove the collection > >> > Add a single item to the collection > >> > Add multiple items to the collection > >> > Remove a single item from the collection > >> > Remove multiple items from the collection > >> > This is as far as I have gone: > >> > > >> > Create collection: > >> > ==> > >> > POST /service > >> > Host: www.host.com > >> > Content-Type: application/xml > >> > <collection name="items"> > >> > <item href="item1"/> > >> > <item href="item2"/> > >> > <item href="item3"/> > >> > </collection> > >> > > >> > <== > >> > 201 Created > >> > Location: http://myserver.com/service/items > >> > Content-Type: application/xml > >> > ... > >> > > >> > Remove collection: > >> > ==> > >> > DELETE /service/items > >> > > >> > <== > >> > 200 OK > >> > > >> > Removing a single item from the collection: > >> > ==> > >> > DELETE /service/items/item1 > >> > > >> > <== > >> > 200 OK > >> > > >> > However, I am finding supporting the other operations a bit tricky > i.e. > >> > what methods can I use to: > >> > > >> > Add single or multiple items to the collection. (PUT doesn't seem to > be > >> > right here as per HTTP 1.1 RFC) > >> > > >> > Remove multiple items from the collection in one transaction. (DELETE > >> > doesn't seem to right here either) > >> > > >> > Thanks, > >> > Suresh > >> > > >> > > >> > > >> > ------------------------------------ > >> > > >> > Yahoo! Groups Links > >> > > >> > > >> > > >> > > > > > > > > > -- > > When the facts change, I change my mind. What do you do, sir? > > > -- When the facts change, I change my mind. What do you do, sir?
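The tunneled approach Suresh links to might be handled server-side roughly like this; the handler name, the delete URI, and the response shape are assumptions for illustration, not the linked article's actual code:

```python
def handle_bulk_delete(collection, hrefs_to_delete):
    """Hypothetical handler behind POST /service/items/delete:
    the POSTed body names the members to remove in one request."""
    removed = [h for h in hrefs_to_delete if h in collection]
    for h in removed:
        del collection[h]
    return removed  # e.g. echoed back in a 200 response body

items = {"item1": "a", "item2": "b", "item3": "c"}
removed = handle_bulk_delete(items, ["item1", "item3", "item9"])
# removed lists only the hrefs that were actually members
```

The distinct URI gives intermediaries at least a hint that something destructive is happening, though as noted elsewhere in the thread, caches still cannot see which individual entries went away.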
On 11.05.2010 20:11, Suresh Kumar wrote: > Thanks Mike. > > My goal is to manage only the members of a list, especially removing > multiple items from the list. Which HTTP method is most suitable for > such a use case? i.e. given a list of members, what is the best way to > remove a subset of the members from the list? > > Best Regards, > Suresh Are you sure that a set of DELETE requests (that could be pipelined) isn't good enough? Keep in mind that if you don't use DELETE and instead batch things together, then intermediaries will not be aware of what's going on (which may or may not make a difference in practice). Best regards, Julian
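Julian's pipelining suggestion amounts to putting several self-describing DELETE requests on one connection, for example (host and paths invented):

```python
def pipelined_deletes(host, paths):
    """Build the byte stream for a pipeline of individual DELETE
    requests -- each one a plain HTTP request any intermediary
    can understand."""
    return "".join(
        f"DELETE {p} HTTP/1.1\r\nHost: {host}\r\n\r\n" for p in paths
    ).encode("ascii")

stream = pipelined_deletes(
    "myserver.com",
    ["/service/items/item1", "/service/items/item2"],
)
# Two visible requests on the wire, instead of one opaque batch.
```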
Eric:
1) i like the idea of using link relations this way.
2) i'm curious about your use of RDF here. are you using an existing
vocab as your main RDF serialization? have you defined a vocabulary
explicitly? using adhoc rdf:Description elements?
3) are you using RDF as the "base" medium and transforming that based
on conneg (XHTML, XFORMS, etc.)?
Is any part of your work available via open source? code? docs? etc.
mca
http://amundsen.com/blog/
On Tue, May 11, 2010 at 13:27, Eric J. Bowman <eric@...> wrote:
> Recent m2m discussions on rest-discuss have had me thinking quite
> deeply about the problem. For any given XHTML application steady-
> state on my REST system, regardless of the form language I am using,
> an XSLT template may be developed which outputs the metadata as RDF
> (GRDDL). It occurs to me that, because my domain-specific vocabulary
> remains static across variant representations (generated on the user
> agent using Xforms and/or XSLT), so does the XSLT pattern for GRDDL-
> generating RDF from these variants.
>
> (You'll notice that my RESTful take on Semweb doesn't assign URIs to
> RDF representations. A limited number of XSLT GRDDL transformations
> are assigned URIs and linked to from the application steady-states.
> The idea is that RDF can describe an m2m starting point for any
> function provided by a REST application, by linking to its machine-
> readable hypertext interface using link relations. This approach falls
> a few link relations shy of standardization, though.)
>
> I was surprised to discover this. I have lots of example/test files
> for my system, one is a static-page WIP for the Xforms interface, with
> RDFa matching my demo (except without syntax errors like @instance-of).
> Regardless of the specific markup, the metadata stays constant (well,
> not exactly, but it will when I've refactored a bit as a result of this
> research). In fact, the RDF pattern (the XSLT generating it) for my
> weblog, or for some other weblog using my domain-specific vocabulary,
> regardless of its URI allocation scheme, stays constant. Here's that
> pattern again (I posted this several weeks ago), somewhat:
>
> <rdf:Description about="#{//*[@instanceof='wiski:weblog-entry']/@id}">
> <link rel='self' href='{./@about}.atom'/>
> <link rel='edit' href=
> "{document(concat(./@about,'.atom'))//*[@rel='edit']/@href}"/>
> <link rel='replies' href=
> "{document(concat(./@about,'.atom'))//*[@rel='replies']/@href}"/>
> <link rel='alternate' href='{./@about}'/>
> <link rel='etc' href='{etc.}'/>
> </rdf:Description>
>
> The post-new-entry form, if present, is always:
> //*[@instanceof='wiski:weblog-entry'][1]
> While a post-new-comment form, if present, is always:
> //*[@instanceof='wiski:weblog-comment'][1]
>
> So, having an understanding of the domain-specific vocabulary expressed
> as RDFa metadata on my weblog, allows a spambot to be programmed to
> follow the API from starting points in the RDF, i.e. the spambot needs
> to know how to post a reply to a collection, so it looks in the RDF for
> rel='replies', which informs the bot where to look in the steady-state
> it generated, to find the interface for posting a reply to any entry or
> comment it encounters.
>
> So it makes sense to provide m2m capability using Semweb (kinda the
> whole point) technology, based on standard link relations to make it
> RESTful. Given the m2m purpose of Semweb, it makes sense to use
> fragments, and link explicitly to form controls in the generated RDF.
>
> This requires a worldview that allows link relation semantics to vary
> based on context -- if the intent is to view a comment thread, rel=
> 'replies' points to the comment thread, whereas if the intent is to
> post a comment (via RDF introspection) then rel='replies' points to a
> form control.
>
> Taking an OO perspective, link relations may identify properties or
> methods, depending on the context in which they appear -- if a user
> agent wants the rel='replies' method, it looks in the RDF, if it wants
> the rel='replies' property it looks in the steady-state. Like so:
>
> <rdf:Description about='#post-1'><!--object in steady state-->
> <link rel='source' href='/2006/aug/09/11.atom'/><!--property-->
> <link rel='edit' href='#post-1.edit'/><!--method-->
> <link rel='replies' href='/2006/aug/09/11#comment-0.edit'/><!--method-->
> <link rel='self' href='/2006/aug/09/11'/><!--property-->
> <!--nonexistent link relation would be a property in the steady state-->
> <link rel='tags' href='#tags-1.edit'/><!--method-->
> </rdf:Description>
>
> In a threaded-comment architecture, each comment has its own unique
> rel='replies' URI, whereas in my flat-comment architecture, each comment
> shares the same rel='replies' URI. If the m2m goal is to reply to
> a comment instead of an entry, the RDF link for rel='replies' for the
> //*[@instanceof='wiski:weblog-comment'] of interest is followed, not
> the rel='replies' link in the steady-state (which doesn't point to the
> form control).
>
> When my architecture is extended to support threaded comments, it isn't
> the m2m intent that will change -- only the target URI will change. The
> XSLT pattern doesn't change, it's still reading in the same <form>
> field with the same method with the same media type, but the RDF output
> from GRDDL reflects the new, unique target URIs for each comment's rel=
> 'replies'. Until a comment is made, there is no rel='replies' in the
> steady-state, only in the RDF.
>
> Spambots coded against my GRDDL output automatically follow the new API,
> because the RDF is defining a... >gasp< ...contract to follow for
> introspecting the rel='replies' interface, regardless of how the markup
> (or even method, say I change to PUT instead of tunneling over POST
> when a firewall rule is relaxed) evolves over time. I'm guaranteeing a
> metadata vocabulary which describes my interfaces, not any particular
> interface description.
>
> In other words, the Semantic Web provides a Framework for Describing
> Resources (duh). In the case of my weblog, the manipulable resources
> are all represented as Atom, but *how* to manipulate resource state is
> contained within XHTML application steady-states. An RDF view of a
> steady-state is just a framework for exposing the resources of interest
> making up that steady-state, and their interfaces -- in the OO view of
> it, each rdf:Description element identifies an object of interest in the
> markup, while the child elements describe its properties and methods.
>
> This is where RDF differs from an Interface Definition Language. The
> properties and methods I'm referring to aren't HTTP methods as in IDLs,
> rather standard link relations and specific application functions. The
> XSLT GRDDL pattern above merely *points to* a given object's hypertext-
> embedded property and method definitions. It's a framework, and a nice
> m2m entry point... this is about the first time the terms "Semweb" and
> "RDF" haven't elicited shoulder-shrugs from me as regards my own
> system. I'd only been halfheartedly using RDFa without really grokking
> the point of it.
>
> So the form control for commenting on #post-1 is located at its rel=self
> URI's #comment-0.edit, which will contain method, target and media type
> instructions... very useful knowledge for a spambot to glean, once it
> knows where to look, provided by a rel='replies' that's only present in
> the RDF -- the steady-state would just give a link to the comment
> thread, if it even has rel='replies', whereas the RDF links to the
> comment thread's post-reply control.
>
> My application's "edit method" points at either an HTML <fieldset> or an
> Xforms <group> containing form fields for title, slug header, content,
> draft, tags and submission. My Xforms interface looks somewhat like:
> <xfm:group id='post-1.edit'>...<xfm:send submission='post-1.save'/>
> </xfm:trigger></xfm:group>. The xfm:send is referencing the Xforms
> model in the <head>, or perhaps the RDFa could link to #post-1.save...
>
> Here's the very incomplete gist of it (I've worked out the document
> structure, now I'm working on the submissions so I can code the system
> to handle them):
>
> <xfm:model>
> ...
> <xfm:submission id='post-1.save' ref="instance('post-1.src')" method=
> 'put' validate='false' serialization='application/atom+xml; type=entry'>
> <!-- xfm:repeat logic may be used instead of static values -->
> <xfm:resource>/2006/aug/09/11.atom.draft</xfm:resource>
> <!-- if app:draft (in the xfm:model) = true() then previous line, else
> <xfm:resource>/2006/aug/09/11.atom</xfm:resource> -->
> <xfm:header><!-- todo: make this header optional -->
> <xfm:name>slug</xfm:name>
> <xfm:value><!-- $post-1-slug --></xfm:value>
> </xfm:header>
> <xfm:header>
> <xfm:name>If-Match</xfm:name>
> <xfm:value><!-- $post-1-etag --></xfm:value>
> </xfm:header>
> </xfm:submission>
> ...
> </xfm:model>
>
> See why I like Xforms? The model instructs the user agent *how* to
> conditionally submit entries, comments and edits by following its nose.
> Now, that's what I call a self-documenting API. The target URI can't be
> known until the parameters of the request for the "edit post" state
> transition are known (due to whether or not it's a draft, *.draft is not
> world-readable on my system), which is why rel='edit' in the RDF points
> to a self-documenting interface instead of a source document -- it's a
> method, not a property, in OO-speak (and in terms of m2m intent).
>
> The referenced form fields for editing tags have buttons for apply,
> remove, reset and commit. Following the commit button (or when draft
> is toggled to false) leads to the xfm:model for the application
> function, which gives instructions for target URI, method (PATCH) and
> media type (application/atomcat+xml).
>
> If 'tags' were an understood link relation, pointing it to a <ul> would
> indicate non-editable, whereas pointing it to an element within a form
> would indicate otherwise. This way, user agents are instructed where
> to look for the interface for a task, allowing that interface to evolve.
> Initially, I'll implement tags as part of the POST or PUT for creating
> or editing, only later will it be a standalone PATCH function.
>
> The link in the RDF stays the same (points to the tag-editing control),
> but the hypertext will indicate a different method over time. So the
> contract specifies where to look for the URI, method and media type.
> Not what the URI, method or media type should be.
>
> Users who lack privilege to do a certain operation will get different
> target URIs in the RDF GRDDL output. For example, if you can't change
> a post's tags, then the link points to #post-1.tags not #tags-1.edit.
> I suppose it's a drawback that the RDF isn't explicit about editability,
> but it isn't difficult to check the self-or-parent axis for form
> elements, either. Or look for an Accept response header.
>
> Anyway, just thought I'd share the idea of using link relations to
> identify m2m goals inside RDF documents, linking directly to API
> controls using fragments, instead of linking to source documents.
>
> -Eric
Hi Folks, I wrote a short article on SOAP versus REST: For years there has been a war raging between those who advocate the use of SOAP for web services versus those who advocate using REST. There has been a lot of misinformation. This paper presents the facts you need to make your own decision. More ... http://www.xfront.com/SOAP-versus-REST/ Comments welcome. /Roger
I've been doing research on cache invalidation and have written up a post about its benefits, some of the problems with it in practice, and a potential solution to those problems http://restafari.blogspot.com/2010/04/link-header-based-invalidation-of.html All feedback welcome Cheers, Mike
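The gist of the linked proposal, as a toy gateway cache; the `rel="invalidates"` relation name and Link-header shape are assumed here for illustration, so see the post itself for the actual details:

```python
import re

class GatewayCache:
    """Toy cache: when the response to an unsafe request carries
    Link: </uri>; rel="invalidates", the named entries are evicted."""

    def __init__(self):
        self.entries = {}

    def store(self, uri, representation):
        self.entries[uri] = representation

    def on_unsafe_response(self, link_header):
        # e.g. '</service/items>; rel="invalidates"'
        for target in re.findall(r'<([^>]+)>\s*;\s*rel="invalidates"',
                                 link_header):
            self.entries.pop(target, None)

cache = GatewayCache()
cache.store("/service/items", "<collection>...</collection>")
cache.on_unsafe_response('</service/items>; rel="invalidates"')
# the now-stale collection entry has been evicted
```

This addresses exactly the visibility problem raised earlier in the bulk-operations thread: the origin server can tell caches which related entries a batched change made stale.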
Does anyone have any input on naming guidelines for (Java) implementations of REST web services? In Jersey (Java) WS projects, you have a Java class that can serve one or more resources. In examples these class files are often called something like e.g. "sample.server.ContactResource.java", "sample.server.UserResource.java" etc. But thinking about this, the Java classes are not really resources themselves but controllers of resources (as in the MVC pattern), so "sample.server.rest.ContactController.java" or "sample.server.rest.controller.Contact.java" or ??? might be more correct? Any input?
Hi, Good writeup. I think you should add a section on caching of GET requests when describing the differences. Here REST has a feature that SOAP does not: in your example it would lower the load on the server if several officers searched for the same thing. /Morten --- On Sat 15/5/10, Costello, Roger L. <costello@...> wrote: > From: Costello, Roger L. <costello@mitre.org> > Subject: [rest-discuss] ANN: SOAP versus REST > To: "rest-discuss@yahoogroups.com" <rest-discuss@yahoogroups.com> > Date: Saturday 15 May 2010 17.39 > Hi Folks, > > I wrote a short article on SOAP versus REST: > > For years there has been a war raging between those who > advocate the use of SOAP for web services versus those who > advocate using REST. There has been a lot of misinformation. > This paper presents the facts you need to make your own > decision. > > More ... http://www.xfront.com/SOAP-versus-REST/ > > Comments welcome. > > /Roger > > > ------------------------------------ > > Yahoo! Groups Links > > > rest-discuss-fullfeatured@yahoogroups.com > > >
On Sat, May 15, 2010 at 10:39 AM, Costello, Roger L. <costello@...> wrote: > For years there has been a war raging between those who advocate the use of SOAP > for web services versus those who advocate using REST. Hey Roger, seriously, is this war still "raging"? I thought it was over, and SOAP had lost, except in some odd corners of corporate-dom where people were still in denial.
Very interesting. I have written a local proxy cache that could really benefit from this kind of rule. -- Erlend On Sun, May 16, 2010 at 2:44 PM, Mike Kelly <mike@...> wrote: > > > I've been doing research on cache invalidation and have written up a post > about its benefits, some of the problems with it in practice, and a > potential solution to those problems > > > http://restafari.blogspot.com/2010/04/link-header-based-invalidation-of.html > > All feedback welcome > > Cheers, > Mike > > >
Hi Roger,
Excellent article. It makes a point that I, for one, had never thought of that way. I am just wondering about other QoS features that WS-* proponents cite as distinguishing: transactions and reliability.
1. Is it fair to say the same about those specs? or
2. Is it fair to say that RETRO [1] and Joe Gregorio's best practice [2] achieve what WS-Transaction and WS-ReliableMessaging do?
In essence, I want to be able to argue the case for REST in a QoS environment when WS-* specs are argued for...
Sean.
[1] http://docs.google.com/View?id=ddffwdq5_2csz22wfd&pageview=1&hgd=1
[2] http://bitworking.org/news/201/RESTify-DayTrader
________________________________
From: "Costello, Roger L." <costello@...>
To: "rest-discuss@yahoogroups.com" <rest-discuss@yahoogroups.com>
Sent: Sat, 15 May, 2010 16:39:29
Subject: [rest-discuss] ANN: SOAP versus REST
Hi Folks,
I wrote a short article on SOAP versus REST:
For years there has been a war raging between those who advocate the use of SOAP for web services versus those who advocate using REST. There has been a lot of misinformation. This paper presents the facts you need to make your own decision.
More ... http://www.xfront.com/SOAP-versus-REST/
Comments welcome.
/Roger
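On point 2, one commonly cited RESTful substitute for WS-ReliableMessaging (in the spirit of [2], though not necessarily its exact mechanism) is to make creation idempotent: the client mints the resource URI and PUTs to it, so a retry after a lost response cannot create a duplicate. The names and store below are illustrative.

```python
import uuid

orders = {}  # stand-in for the server-side store: uri -> order

def put_order(uri, order):
    """Idempotent create-or-replace; returns an HTTP-like status code."""
    created = uri not in orders
    orders[uri] = order
    return 201 if created else 200

# The client generates the identifier, so it knows the URI before sending:
order_uri = f"/orders/{uuid.uuid4()}"
assert put_order(order_uri, {"symbol": "IBM", "qty": 10}) == 201
# Suppose the 201 response was lost; the client simply repeats the same PUT:
assert put_order(order_uri, {"symbol": "IBM", "qty": 10}) == 200
assert len(orders) == 1  # still exactly one order
```

Reliability then falls out of HTTP's own semantics (PUT is idempotent) rather than from an extra messaging layer.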
Hello everyone,

I am currently working on a project where I am trying to figure out how my REST URLs should be structured.

In the project we need to share data across different users. Based on that, I have chosen to create URLs without any user information in them. But the problem is that it should still somehow be possible to look at resources which are related to specific users or groups. So the question is: if, for instance, I need to see data related to user A, should I do /project/user/A/data or /project/data?user=A?

What would you guys recommend, or do you have any relevant links?

Thanks in advance.

Regards Stefan
Stefan, On May 18, 2010, at 4:19 PM, sket wrote: > > > Hello everyone > > I am currently working on a project where I am trying to figure out how my REST URLs should be structured. > > In the project we need to share data across different users. Based on that I have chosen to create URLs without any user information in it. > > But the problem is that it still somehow should be possible to look at resources which are related to specific users or groups. So the question is if I for an instance need to see data related to user A should I do like this: > > /project/user/A/data or /project/data?user="A". > > What would you guys recommend, or do you have any relevant links? You should use HTTP authentication and use the user information you then receive to send the representation that is applicable to the corresponding user. If you have resources that only exist for that given user, I'd use your former approach /project/user/A/data because it establishes a dedicated URI space per user. Jan > > Thanks in advance. > > Regards Stefan > > > > ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
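The two URI styles Jan compares can be routed to the same underlying lookup; the hierarchical form gives each user a dedicated (and individually cacheable) URI space, while the query form reads as a filtered view of one shared collection. The routes and data below are illustrative only.

```python
import re
from urllib.parse import urlparse, parse_qs

data_by_user = {"A": ["doc1", "doc2"], "B": ["doc3"]}

def get_data(uri):
    parsed = urlparse(uri)
    # Style 1: /project/user/A/data  (dedicated per-user URI space)
    m = re.fullmatch(r"/project/user/([^/]+)/data", parsed.path)
    if m:
        return data_by_user.get(m.group(1), [])
    # Style 2: /project/data?user=A  (filtered view of a shared collection)
    if parsed.path == "/project/data":
        user = parse_qs(parsed.query).get("user", [None])[0]
        return data_by_user.get(user, [])
    return None  # no such resource

assert get_data("/project/user/A/data") == ["doc1", "doc2"]
assert get_data("/project/data?user=A") == ["doc1", "doc2"]
```

Either way, per Jan's point, the authenticated identity from HTTP authentication should govern which representations the server is actually willing to send.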
Hi guys

I've been lurking for a few weeks :-) I work on the WCF team at Microsoft. We're currently in the very early stages of planning new APIs for supporting pure REST and HTTP style development. Our goal is to create something simple, lightweight, and true to form. We are looking to provide a natural API both for the service author and for the consumer. This is not an attempt to simply retrofit onto a SOAP-based model.

It would be great to hear the thoughts you guys have on what would be the ideal developer experience for using REST. Also, if you'd like to be involved, we'd welcome the feedback.

Regards
Glenn
On May 19, 2010, at 6:42 AM, Glenn Block wrote: > > > Hi guys > > I've been trolling for a few weeks :-) I work on the WCF team at Microsoft. We're currently in the very early stages of planning for new apis for supporting pure REST and HTTP style development. Our goal is to create something simple, lightweight and true to form. We are looking provide a natural API both for the service author and for the consumer. This not an attempt to simply retrofit onto a SOAP based model. Great to hear that! As you might guess I have quite some thoughts on this :-) Are you thinking about the server- or the client side or both? On the server side, I think that the JAX-RS community has done a pretty good job as far as RESTfulness, clarity, and coding efficiency goes so you might want to look at what they did and adapt the approach. Jan > > It would be great to hear thoughts you guys have on what would be the ideal developer experience for using REST. Also if you'd like to be involved we'd welcome the feedback. > > Regards > Glenn > > > ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
Thanks Jan, both. Clients being jQuery and .NET. Glenn
I will check out JAX-RS, thanks!
Glenn:
Good to see you here.
A while back, I wrote the following in a blog post [1]:
<snip>
proper HTTP libraries expose the following as first-order programming objects:
* URIs
* Resources
* Representations
* Message Body
* Control Data
other primary concepts that deserve direct, top-level support are:
* Media-Types
* User-Agents
* Caching
* Authentication
</snip>
There is a small wiki (Implementing REST [2]) that contains an
incomplete list of frameworks that exhibit some aspects of "supporting
RESTful programming" that might give you some other perspectives. That
includes at least a couple that are built on .NET as well as several
others. One of the items listed there, Webmachine [3], has a
unique approach to the problem that is worth reviewing, IMO.
FWIW, the biggest pain point I had when working w/ .NET is the lack of
a robust HttpClient class to handle direct protocol work. Not the
WebRequest/WebResponse members, but one "up above" those; one that
comes closer to the features of cURL [4], Wget [5], or even WFetch
[6].
Feel free to ping me directly if you'd like me to expand on these items.
[1] http://www.amundsen.com/blog/archives/1018
[2] http://code.google.com/p/implementing-rest/
[3] http://code.google.com/p/implementing-rest/wiki/Webmachine
[4] http://curl.haxx.se/
[5] http://www.gnu.org/software/wget/
[6] http://www.microsoft.com/downloads/details.aspx?FamilyID=b134a806-d50e-4664-8348-da5c17129210&displaylang=en
mca
http://amundsen.com/blog/
Thanks for the ref Kris, I will check it out.
On the browser side you might want to take a look at Dojo's REST capabilities. REST concepts are deeply integrated into Dojo: it has a uniform interface called Dojo Data, and it uses a JsonRestStore for RESTful communication with servers. We have a lot of developers using it with a wide variety of servers. Kris On 5/19/2010 9:54 AM, Glenn Block wrote: > Thanks Jan, both. Clients being JQuery and .NET. -- Kris Zyp SitePen (503) 806-1841 http://sitepen.com
Great Mike, and thanks for the welcome. I plan to stay for a while now that I've accepted the role to drive this stuff ;-) I will ping you. On the HTTP client, it is one of our top priorities. We already shipped something in our REST starter kit a while ago. Would be interested in your thoughts (I will send a link to it, which includes the client).
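The kind of "one level up" client Mike describes might look roughly like this: a wrapper that owns default headers, content negotiation, and conditional requests on top of raw request/response primitives. The API shape is invented for illustration, and this sketch only composes requests rather than sending them.

```python
class HttpClient:
    """Sketch of a higher-level HTTP client (no network I/O performed)."""

    def __init__(self, base_uri, user_agent="example-client/1.0"):
        self.base_uri = base_uri.rstrip("/")
        self.default_headers = {"User-Agent": user_agent}
        self.etags = {}  # uri -> entity tag, for conditional GETs

    def build_request(self, method, path, accept="*/*", body=None):
        uri = self.base_uri + path
        headers = dict(self.default_headers, Accept=accept)
        if method == "GET" and uri in self.etags:
            # Revalidate a cached copy instead of re-fetching the whole body.
            headers["If-None-Match"] = self.etags[uri]
        return {"method": method, "uri": uri, "headers": headers, "body": body}

    def remember_etag(self, path, etag):
        self.etags[self.base_uri + path] = etag

client = HttpClient("http://example.org")
client.remember_etag("/reports/1", '"abc123"')
req = client.build_request("GET", "/reports/1", accept="application/xhtml+xml")
assert req["headers"]["If-None-Match"] == '"abc123"'
assert req["headers"]["Accept"] == "application/xhtml+xml"
```

This is roughly the layer cURL and Wget occupy above the socket: protocol mechanics handled for you, but the HTTP conversation itself still fully visible.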
By representations you mean content types?
Thanks for the link to the post, that list looks good to me :-)
Glenn:

By representation I mean the actual bits used to "represent" the resource identified by the URI.

For example, consider a resource that contains data about the number of requests and their countries of origin. That resource could have any number of representations:
- a PDF representation
- an HTML representation
- a PNG representation

Ideally these representations are indicated via the Accept header sent by the client in the request, and the resulting selection made by the server is indicated by the Content-Type control data returned with the response. (IRL we know some clients are not so good at this, but that's another story.)

The key point, from my POV, is to resist thinking of resources as "serialized" and instead think of the ways the framework can indicate to clients what representations are available, and properly work out the representation the client wishes to get in return.

This is more than just binding an XML or JSON serializer to the output stream, as some representations of the resource may contain data that others do not. For example, the PNG might have only a few data points; the PDF may include a table followed by several paragraphs of explanatory text; the HTML may actually include the PNG, a table, and a FORM for making additional filter requests on the data; etc.

The work to make sure each of these representations can be properly "composed" on the server takes some programming effort. That's why I like to say that the representations themselves should be treated as first-class, programmable elements.

I took this approach when putting together my own framework for building Web apps [1]. The published code is a bit outdated, but freely available.

[1] http://code.google.com/p/exyus/

mca
http://amundsen.com/blog/

On Wed, May 19, 2010 at 12:14, Glenn Block <glenn.block@...> wrote:
> By representations you mean content types?
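The negotiation Mike describes can be sketched as follows: the same resource is composed into different representations depending on the client's Accept header, and each representation may carry different data. The media types, matcher, and data are illustrative assumptions (a real implementation would also honor q-values).

```python
def best_match(accept_header, available):
    """Tiny Accept matcher: first listed type the server offers wins
    (q-values ignored for brevity)."""
    accepted = [part.split(";")[0].strip() for part in accept_header.split(",")]
    for media_type in accepted:
        if media_type in available:
            return media_type
        if media_type == "*/*":
            return available[0]
    return None

def represent(resource, accept_header):
    """Compose a representation of the resource; each one differs in content,
    not just in serialization."""
    reps = {
        "text/html": lambda r: "<table>...%d requests...</table>" % r["requests"],
        "application/json": lambda r: '{"requests": %d}' % r["requests"],
        "image/png": lambda r: b"\x89PNG...",  # chart with fewer data points
    }
    chosen = best_match(accept_header, list(reps))
    if chosen is None:
        return 406, None, None  # Not Acceptable
    return 200, chosen, reps[chosen](resource)

status, ctype, body = represent({"requests": 42}, "image/png, text/html")
assert (status, ctype) == (200, "image/png")
```

Note that each entry in `reps` is a composition function, not a serializer bound to one object graph, which is the distinction Mike draws above.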
Hello Glenn,

On how to use dynamic language features (which both JavaScript and C# have), you can see our work on Restfulie: the C# and Ruby implementations work with dynamically generated methods and so on, which can benefit clients.

Regards,
Guilherme Silveira
Caelum | Ensino e Inovação
http://www.caelum.com.br/

2010/5/19 mike amundsen <mamund@...>:
> By representation I mean the actual bits used to "represent" the resource identified by the URI.
> >>>>> > >>>>> Regards > >>>>> Glenn > >>>>> > >>>>> > >>>>> > >>>> > >>>> ----------------------------------- > >>>> Jan Algermissen, Consultant > >>>> NORD Software Consulting > >>>> > >>>> Mail: algermissen@... <algermissen%40acm.org> > >>>> Blog: http://www.nordsc.com/blog/ > >>>> Work: http://www.nordsc.com/ > >>>> ----------------------------------- > >>>> > >>>> > >>>> > >>>> > >>>> > >>> > >>> -- > >>> Sent from my mobile device > >>> > >>> > >>> ------------------------------------ > >>> > >>> Yahoo! Groups Links > >>> > >>> > >>> > >>> > >> > > > > -- > > Sent from my mobile device > > > > >
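The content negotiation mike describes above (the client lists acceptable media types in the Accept header; the server picks one representation of the resource and labels the response with Content-Type) can be sketched roughly like this. This is a toy matcher with made-up renderers, not any particular framework's API, and it ignores q-values:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy server-side content negotiation: pick the first media type in the
// client's Accept header that the resource can actually be composed as.
public class ContentNegotiation {
    // Representations this hypothetical resource can compose (placeholder bodies).
    static final Map<String, String> RENDERERS = new LinkedHashMap<>();
    static {
        RENDERERS.put("text/html", "<html>...</html>");
        RENDERERS.put("application/pdf", "pdf bytes...");
        RENDERERS.put("image/png", "png bytes...");
    }

    // Returns the negotiated media type, or null meaning 406 Not Acceptable.
    public static String negotiate(String acceptHeader) {
        for (String part : acceptHeader.split(",")) {
            String type = part.trim().split(";")[0]; // drop q-values in this sketch
            if (type.equals("*/*")) return RENDERERS.keySet().iterator().next();
            if (RENDERERS.containsKey(type)) return type;
        }
        return null;
    }
}
```

A real implementation would honour q-values and partial wildcards like text/*; the point is only that the selection happens per request and is echoed back to the client as control data (Content-Type).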
Awesome, I was the other day proposing a dynamic RESTful client, i.e. something along the lines of:

dynamic client = new HttpClient(uri)....

go to town after that.... :-)

On Wed, May 19, 2010 at 9:30 AM, Guilherme Silveira <guilherme.silveira@...> wrote:
> Hello Glenn,
>
> On how to use dynamic language features (that both Javascript and C# have),
> you can see our work at Restfulie (C# and ruby implementations will work
> with dynamic generated methods and so on and how this can benefit clients)
Great. Yeah Sebastian actually pointed me to this list so it's fair to say we've been chatting :-) Thanks for the offer to help out! Glenn
Have a look at JAX-RS from the Java world.

From: rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com] On Behalf Of Glenn Block
Sent: Wednesday, 19 May 2010 06:42
To: rest-discuss@yahoogroups.com
Subject: [rest-discuss] Thinking about REST and HTTP

Hi guys

I've been trolling for a few weeks :-) I work on the WCF team at Microsoft. We're currently in the very early stages of planning for new APIs for supporting pure REST and HTTP style development. Our goal is to create something simple, lightweight and true to form. We are looking to provide a natural API both for the service author and for the consumer. This is not an attempt to simply retrofit onto a SOAP-based model.

It would be great to hear thoughts you guys have on what would be the ideal developer experience for using REST. Also if you'd like to be involved we'd welcome the feedback.

Regards
Glenn
> Have a look at JAX-RS from the Java world. I'm not certain that today's JAX-RS offers much more than today's WCF in terms of REST support. If Glenn's team are going to do "REST like they meant it" to paraphrase Guilherme, I don't think that JAX-RS is the right way to go. Perhaps some of the toolkits that also happen to implement JAX-RS might be useful (e.g. Jersey), because they're starting to support hypermedia (thank you Restfulie for being disruptive there!). Jim
--- On Tue 18/5/10, Jan Algermissen <algermissen1971@...> wrote:
> You should use HTTP authentication and use the user
> information you then receive to send the representation that
> is applicable to the corresponding user.

It is interesting to note here that in general there are two different users involved:
1) The technical user that issues the GET/POST/PUT/DELETE request.
2) The logical user that the resource is associated with.

These may be the same or they may be different. For example, it could be possible for an administrator to view/modify a record that belongs to another user. Hence I would not say that one can just use HTTP authentication here. Yes, one can use HTTP authentication for security, but not necessarily for getting the applicable information.

> If you have resources that only exist for that given user,
> I'd use your former approach /project/user/A/data because it
> establishes a dedicated URI space per user.

The problem here is that which data belongs to which user may be transient (or may just have a creation relationship to the user). For example, if the user that the data belongs to leaves the company that owns the data, then the same resource might become the responsibility of another user.

/Cheers,
Morten
<snip>
> These may be the same or they may be different. F.x. it could be possible for an administrator to view/modify a record that belongs to another use. Hence I would not say that one can just use HTTP authentication here. Yes, one can use HTTP authentication for security but not necessarily for getting the applicable information.
</snip>
Yes, don't use the authenticate header to determine which *resource*
to return; use the URI. So I'd be sure to place the user value (name,
number, etc.) in the URI.
<snip>
> The problem here is that what data belongs to what user may be transient (or maybe just have create relationship only to the user). F.x. if the user that the data belongs to leaves the company that owns the data, then the same resource might be the responsibility of another user.
</snip>
Keep in mind that you are not "designing URIs" here. What you need is
to expose a resource that meets the needs of the client at that moment
and the URI is the way to identify that resource. If the resource is
transient, that's no problem. If the same *data* ("all documents
created by mike" e.g. /documents/mike) appears under two different
*resources* ("all mike's documents" /mike/documents) that's just fine,
too.
mca
http://amundsen.com/blog/
http://mamund.com/foaf.rdf#me
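Mike's advice above — the URI alone names the resource, while the authenticated user only gates access to it — can be sketched as follows. The paths, user names and return codes here are invented for illustration, not any framework's API:

```java
import java.util.Map;
import java.util.Set;

// Sketch: the URI selects the resource; the Authorization header only
// decides whether the caller may see it, never *which* resource it is.
public class UriSelectsResource {
    static final Map<String, String> RESOURCES = Map.of(
        "/users/mike/data", "mike's data",
        "/users/morten/data", "morten's data");

    // An admin (the "technical user") may read everything;
    // everyone else only their own URI space.
    static final Set<String> ADMINS = Set.of("admin");

    public static String get(String uri, String authenticatedUser) {
        String body = RESOURCES.get(uri);        // resource chosen by URI only
        if (body == null) return "404";
        boolean owner = uri.startsWith("/users/" + authenticatedUser + "/");
        if (owner || ADMINS.contains(authenticatedUser)) return body;
        return "403";                            // authenticated, but not authorized
    }
}
```

This keeps Morten's two users separate: the technical user appears only in the authorization check, the logical user only in the URI.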
--- On Wed 19/5/10, mike amundsen <mamund@...> wrote:
According to your scheme, the same resources would be available
as part of the result at many URLs, e.g.:
/documents/mike
/mike/documents
/documents/may210010
or exactly at:
/documents/mike/docid12345
/mike/documents/docid12345
/documents/may210010/docid12345
But that would mean that there would be no 1-1 mapping between logical resources and URLs, that the URLs would be transient (so what you could find today would not necessarily be found tomorrow), and that caching would not work as well as it could.
There is also a discussion "http://stackoverflow.com/questions/1622085/how-to-obtain-rest-resource-with-different-finder-methods" about this.
I like the comment by Will Hartun "Having a unique name is important for cache coherency in a REST architecture. One name, one cache policy, one "place" to get it, change it, etc"
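The "one name, one cache policy" property Morten quotes is usually kept by giving each document a single canonical URI and letting any convenience URLs redirect to it (or carry a Content-Location pointing at it). A sketch with made-up paths:

```java
import java.util.Map;

// Sketch: every alias resolves to one canonical URI, so caches and
// clients converge on a single name per document.
public class CanonicalUri {
    static final Map<String, String> ALIASES = Map.of(
        "/documents/mike/docid12345", "/documents/docid12345",
        "/mike/documents/docid12345", "/documents/docid12345");

    // Returns "301 <canonical>" for an alias, "200" for the canonical form.
    public static String resolve(String uri) {
        String canonical = ALIASES.get(uri);
        return canonical != null ? "301 " + canonical : "200";
    }
}
```

With this arrangement the transient, user-flavoured URLs can come and go while the cacheable name stays stable.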
<snip>
> But that would mean that there would be no 1-1 mapping between logical resources and URL's + the URLs would be transient so what you could find today would not necessarily be found tomorrow + caching would not work as good as possible.
</snip>
Yes, there is no one-to-one mapping between the thing and its current
location. Just like real life. (Sorry for the delay in replying, I was
out for dinner <g>)
It's pretty rare that a Web application can be implemented in a way
such that each resource is reachable via one _and only one_ URI and
those resources exist at the one URI forever.
Examples are: any app that moves items from "active" to "archive" or
from "draft" to "publish", or that uses work-in-progress style
processing where the "same data" moves from one place to another over
its meaningful lifetime. You can mitigate some of this through the use
of Content-Location headers and/or redirect responses, but not all the
time, and it's not always necessary or advisable.
And yes, these types of patterns can have negative effects on caches.
That's where control data (headers) such as ETag & Last-Modified come
in, to better manage those cases where accurate responses are more
important than low-latency responses.
mca
http://amundsen.com/blog/
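The revalidation mike mentions boils down to a conditional GET: the client replays the ETag it saw via If-None-Match, and the server answers 304 with no body when the representation is unchanged. A toy sketch (hash-based ETag, invented method names, not a full HTTP implementation):

```java
// Sketch of ETag-based revalidation: an unchanged representation costs
// a bodiless 304 instead of a full 200, even when URIs are transient.
public class EtagRevalidation {
    // Toy validator: derive an opaque ETag from the representation bytes.
    public static String etagOf(String representation) {
        return "\"" + Integer.toHexString(representation.hashCode()) + "\"";
    }

    // ifNoneMatch is null on the client's first request.
    public static String respond(String representation, String ifNoneMatch) {
        String etag = etagOf(representation);
        if (etag.equals(ifNoneMatch)) return "304";
        return "200 " + etag; // full response, new validator attached
    }
}
```

A real server would compute the validator from stored metadata rather than re-rendering the representation, but the contract with caches is the same.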
I actually agree that a persistent 1-1 mapping cannot always be implemented (and maybe even rarely so), but I do think it is worthwhile to have where it is possible. So one should not give up in advance but rather look for where in a design it would be possible. For example, here:
/documents/docid12345
This resource and URL could be persistent (as long as the document is not deleted) and have a 1-1 mapping, while the access rights for users to the resource at this URL could change over time.
/Morten
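Morten's split — a stable ID-based URI whose access rights change independently — could look roughly like this (hypothetical in-memory store, invented names):

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Sketch: the document keeps one persistent URI for its whole lifetime;
// only the mutable ACL decides who may fetch it at any given moment.
public class StableUriMutableAcl {
    static final Map<String, String> DOCS = new HashMap<>();
    static final Map<String, Set<String>> ACL = new HashMap<>();

    public static void put(String docId, String body, String owner) {
        DOCS.put("/documents/" + docId, body);
        ACL.put("/documents/" + docId, new HashSet<>(Set.of(owner)));
    }

    // When responsibility moves (e.g. the owner leaves the company),
    // the ACL changes -- the URI never does.
    public static void reassign(String docId, String from, String to) {
        Set<String> acl = ACL.get("/documents/" + docId);
        acl.remove(from);
        acl.add(to);
    }

    public static String get(String docId, String user) {
        String uri = "/documents/" + docId;
        if (!DOCS.containsKey(uri)) return "404";
        return ACL.get(uri).contains(user) ? DOCS.get(uri) : "403";
    }
}
```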
I am not sure if the URI should be assigned to an exchange or to the client. For HTTP client APIs, I prefer Jetty's to Apache's. The former has an HTTP exchange like XHR, and the client is responsible for sending the exchange.

-Dong

On Wed, May 19, 2010 at 10:33 AM, Glenn Block <glenn.block@...> wrote:
> Awesome, I was the other day proposing a dynamic restful client ie
> something along the lines of.
>
> dynamic client = new HttpClient(uri)....
>
> go to town after that.... :-)
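The exchange style Dong prefers can be caricatured as follows. The names are invented, only loosely modeled on Jetty's HttpExchange and XHR, not either one's actual API — the point is just that the URI travels with the exchange rather than being baked into the client:

```java
// Caricature of an exchange-style client: one client instance sends any
// number of exchanges, each carrying its own method and URI, and a
// callback fires when the response is complete.
public class ExchangeStyle {
    public static class HttpExchange {
        public final String method;
        public final String uri;
        public String responseStatus;

        public HttpExchange(String method, String uri) {
            this.method = method;
            this.uri = uri;
        }
        // Invoked by the client once the (simulated) response arrives.
        public void onResponseComplete(String status) { this.responseStatus = status; }
    }

    public static class Client {
        public void send(HttpExchange ex) {
            // A real client would perform I/O here; this sketch fakes a 200.
            ex.onResponseComplete("200");
        }
    }
}
```

The alternative design — URI assigned to the client, as in Glenn's `new HttpClient(uri)` sketch — binds one client instance to one resource; the exchange style keeps the client resource-agnostic.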
> I'm not certain that today's JAX-RS offers much more than today's WCF in terms of REST support. If Glenn's team are going to do "REST like they meant it" to paraphrase Guilherme, I don't think that JAX-RS is the right way to go.

As Jim points out, there is much more to REST than what we have been doing the past few years, and it seems like the frameworks have been learning from that lately. From the JAX-RS perspective, it was defined based on the way we understood REST a while ago, and lately there has been a lot of work, books and even studies on the client part that make us either worry about such topics or follow an old line of thought.

I've put online a few presentations showing what is missing in those old ideas.

Regards

> Perhaps some of the toolkits that also happen to implement JAX-RS might be useful (e.g. Jersey), because they're starting to support hypermedia (thank you Restfulie for being disruptive there!).

Guilherme Silveira
Caelum | Ensino e Inovação
http://www.caelum.com.br/
For RESTful client APIs for Java, you could take a look at <plug> http://github.com/hamnis/REST-client or http://httpcache4j.codehaus.org/ </plug>

-- Erlend
Hi Glenn, We have been working on the Restlet Framework [1] since 2005 (at the time, I think it was the first so-called REST framework). We have built a comprehensive yet small framework since then, with a thriving community behind it. Our Restlet API is both client-side and server-side, both low-level (all HTTP semantics/headers mapped to a clean Java API, see [2]) and high-level (resource handling, limited/focused use of annotations), synchronous or asynchronous (easily supporting provisional 1xx HTTP responses), supports other (pseudo-)protocols than HTTP (FTP, SMTP, POP3, FILE, etc.), and is available in five consistent editions: Java SE/EE, GAE, GWT and Android [3]. We also happen to support JAX-RS as an extension, but that's not what I would recommend as a starting point for you. Our framework has let developers support hypermedia since day one, and we have been exploring higher-level link traversal for a while with our RDF extension [4]. Otherwise, we are also working with Microsoft Interop teams to provide advanced client support for the OData protocol [5]. IMHO, it would be great to have something similar in the .NET world. I'm glad you are working on such a project. Good luck! Best regards, Jerome Louvel -- Restlet ~ Founder and Technical Lead ~ http://www.restlet.org Noelios Technologies ~ http://www.noelios.com [1] http://www.restlet.org [2] http://wiki.restlet.org/docs_2.0/13-restlet/27-restlet/330-restlet/130-restlet.html [3] http://wiki.restlet.org/docs_2.0/13-restlet/21-restlet/318-restlet/303-restlet.html [4] http://www.restlet.org/documentation/snapshot/jee/ext/org/restlet/ext/rdf/RdfClientResource.html [5] http://blog.noelios.com/2010/03/15/restlet-supports-odata-the-open-data-protocol/
Thanks Jerome! On 5/21/10, jerome.louvel <jerome.louvel@...> wrote: > > > > > > Hi Glenn, > > We have been working on the Restlet Framework [1] since 2005 (at this time, > I think it was the first so-called REST framework). We have built a > comprehensive yet small framework since them, with a thriving community > behind it. > > Our Restlet API is both client-side and server-side, both low-level (all > HTTP semantics/headers mapped to a clean Java API, see [2]) and high-level > (resource handling, limited/focused use of annotations), synchronous or > asynchronous (easily supporting provisional 1xx HTTP responses), supports > other (pseudo-)protocols than HTTP (FTP, SMTP, POP3, FILE, etc.) and is > available in five consistent editions: Java SE/EE, GAE, GWT and Android [3]. > > We also happen to support JAX-RS as an extension, but that's not what I > would recommend as a starting point for you. Our framework lets developers > support hypermedia since day one and we have explored higher-level links > traversal for a while with our RDF extension [4]. > > Otherwise, we are also working with Microsoft Interop teams to provide > advanced client support for the OData protocol [5]. > > IMHO, it would be great to have something similar in the .NET world. I'm > glad you are working on such project. Good luck! > > Best regards, > Jerome Louvel > -- > Restlet ~ Founder and Technical Lead ~ http://www.restlet.org > Noelios Technologies ~ http://www.noelios.com > > > [1] http://www.restlet.org > [2] > http://wiki.restlet.org/docs_2.0/13-restlet/27-restlet/330-restlet/130-restlet.html > [3] > http://wiki.restlet.org/docs_2.0/13-restlet/21-restlet/318-restlet/303-restlet.html > [4] > http://www.restlet.org/documentation/snapshot/jee/ext/org/restlet/ext/rdf/RdfClientResource.html > [5] > http://blog.noelios.com/2010/03/15/restlet-supports-odata-the-open-data-protocol/ > > > -- Sent from my mobile device
Jerome, OData is an interesting point. On that subject: when you guys look at building RESTful services in the wild, at what point do you find protocols like OData and GData (Atom based) insufficient? On 5/21/10, jerome.louvel <jerome.louvel@...> wrote: > > > > > > Hi Glenn, > > We have been working on the Restlet Framework [1] since 2005 (at this time, > I think it was the first so-called REST framework). We have built a > comprehensive yet small framework since them, with a thriving community > behind it. > > Our Restlet API is both client-side and server-side, both low-level (all > HTTP semantics/headers mapped to a clean Java API, see [2]) and high-level > (resource handling, limited/focused use of annotations), synchronous or > asynchronous (easily supporting provisional 1xx HTTP responses), supports > other (pseudo-)protocols than HTTP (FTP, SMTP, POP3, FILE, etc.) and is > available in five consistent editions: Java SE/EE, GAE, GWT and Android [3]. > > We also happen to support JAX-RS as an extension, but that's not what I > would recommend as a starting point for you. Our framework lets developers > support hypermedia since day one and we have explored higher-level links > traversal for a while with our RDF extension [4]. > > Otherwise, we are also working with Microsoft Interop teams to provide > advanced client support for the OData protocol [5]. > > IMHO, it would be great to have something similar in the .NET world. I'm > glad you are working on such project. Good luck! 
> > Best regards, > Jerome Louvel > -- > Restlet ~ Founder and Technical Lead ~ http://www.restlet.org > Noelios Technologies ~ http://www.noelios.com > > > [1] http://www.restlet.org > [2] > http://wiki.restlet.org/docs_2.0/13-restlet/27-restlet/330-restlet/130-restlet.html > [3] > http://wiki.restlet.org/docs_2.0/13-restlet/21-restlet/318-restlet/303-restlet.html > [4] > http://www.restlet.org/documentation/snapshot/jee/ext/org/restlet/ext/rdf/RdfClientResource.html > [5] > http://blog.noelios.com/2010/03/15/restlet-supports-odata-the-open-data-protocol/ > > > -- Sent from my mobile device
I say that because a common question I expect folks to ask when they hear about our stuff is: why do I need anything else? I have my own perceptions/thoughts, but would be interested in this group's. On 5/21/10, jerome.louvel <jerome.louvel@...> wrote: > > > > > > Hi Glenn, > > We have been working on the Restlet Framework [1] since 2005 (at this time, > I think it was the first so-called REST framework). We have built a > comprehensive yet small framework since them, with a thriving community > behind it. > > Our Restlet API is both client-side and server-side, both low-level (all > HTTP semantics/headers mapped to a clean Java API, see [2]) and high-level > (resource handling, limited/focused use of annotations), synchronous or > asynchronous (easily supporting provisional 1xx HTTP responses), supports > other (pseudo-)protocols than HTTP (FTP, SMTP, POP3, FILE, etc.) and is > available in five consistent editions: Java SE/EE, GAE, GWT and Android [3]. > > We also happen to support JAX-RS as an extension, but that's not what I > would recommend as a starting point for you. Our framework lets developers > support hypermedia since day one and we have explored higher-level links > traversal for a while with our RDF extension [4]. > > Otherwise, we are also working with Microsoft Interop teams to provide > advanced client support for the OData protocol [5]. > > IMHO, it would be great to have something similar in the .NET world. I'm > glad you are working on such project. Good luck! 
> > Best regards, > Jerome Louvel > -- > Restlet ~ Founder and Technical Lead ~ http://www.restlet.org > Noelios Technologies ~ http://www.noelios.com > > > [1] http://www.restlet.org > [2] > http://wiki.restlet.org/docs_2.0/13-restlet/27-restlet/330-restlet/130-restlet.html > [3] > http://wiki.restlet.org/docs_2.0/13-restlet/21-restlet/318-restlet/303-restlet.html > [4] > http://www.restlet.org/documentation/snapshot/jee/ext/org/restlet/ext/rdf/RdfClientResource.html > [5] > http://blog.noelios.com/2010/03/15/restlet-supports-odata-the-open-data-protocol/ > > > -- Sent from my mobile device
BTW, I want to thank everyone for welcoming me here and for bearing with my noobness :-) On 5/21/10, Glenn Block <glenn.block@...> wrote: > I say that because a common question I expect folks to ask when they > hear about our stuff, is why do I need anything else. > > I have my own perceptions / thoughts, but would be interested in this > groups. > > On 5/21/10, jerome.louvel <jerome.louvel@...> wrote: >> >> >> >> >> >> Hi Glenn, >> >> We have been working on the Restlet Framework [1] since 2005 (at this >> time, >> I think it was the first so-called REST framework). We have built a >> comprehensive yet small framework since them, with a thriving community >> behind it. >> >> Our Restlet API is both client-side and server-side, both low-level (all >> HTTP semantics/headers mapped to a clean Java API, see [2]) and >> high-level >> (resource handling, limited/focused use of annotations), synchronous or >> asynchronous (easily supporting provisional 1xx HTTP responses), supports >> other (pseudo-)protocols than HTTP (FTP, SMTP, POP3, FILE, etc.) and is >> available in five consistent editions: Java SE/EE, GAE, GWT and Android >> [3]. >> >> We also happen to support JAX-RS as an extension, but that's not what I >> would recommend as a starting point for you. Our framework lets >> developers >> support hypermedia since day one and we have explored higher-level links >> traversal for a while with our RDF extension [4]. >> >> Otherwise, we are also working with Microsoft Interop teams to provide >> advanced client support for the OData protocol [5]. >> >> IMHO, it would be great to have something similar in the .NET world. I'm >> glad you are working on such project. Good luck! 
>> >> Best regards, >> Jerome Louvel >> -- >> Restlet ~ Founder and Technical Lead ~ http://www.restlet.org >> Noelios Technologies ~ http://www.noelios.com >> >> >> [1] http://www.restlet.org >> [2] >> http://wiki.restlet.org/docs_2.0/13-restlet/27-restlet/330-restlet/130-restlet.html >> [3] >> http://wiki.restlet.org/docs_2.0/13-restlet/21-restlet/318-restlet/303-restlet.html >> [4] >> http://www.restlet.org/documentation/snapshot/jee/ext/org/restlet/ext/rdf/RdfClientResource.html >> [5] >> http://blog.noelios.com/2010/03/15/restlet-supports-odata-the-open-data-protocol/ >> >> >> > > -- > Sent from my mobile device > -- Sent from my mobile device
Glenn: For me, the OData format has the following shortcomings WRT the REST architectural style.

- OData uses Atom as an envelope for a custom payload
- OData is an Object Transfer pattern, not a state transfer pattern
- OData has limited hypermedia support
- OData relies on URI Convention

Below are some details and ideas on how these shortcomings might be addressed.

Atom as an Envelope

OData uses the Atom format as an envelope for a custom XML payload. I would prefer that a dedicated media type be developed that does not have the extra "baggage" of Atom. For example, in order for me to write a client application that uses the OData format, I need to encode understanding of two Atom RFCs (Atom Syndication [1] and Atom Publishing [2]) _and_ I need to encode into my client the details of the custom payload that appears within the <content /> element (the stuff I'm really interested in anyway). A better choice, IMO, would have been to use some version of the data format employed for the now-defunct SQL Server Data Services (SSDS/SDS). It was lean, specific, and provided the same functionality in a much smaller payload that was easier to encode into clients.

Object Transfer Pattern & Limited Hypermedia Options

Similar to Atom, OData is really an object transfer format. OData servers currently support two different _serializations_ (JSON and XML) but I am not able to send anything other than an "entry" object (or a batch of them, etc.). Essentially, I can't transfer arbitrary state, just pre-defined objects. Since I can only transfer predefined objects within the Atom envelope, I have very few hypermedia options to provide to my clients. I can't see how to send directions for custom queries, or ways to accomplish specific tasks on the server (compute this month's totals and return the results, approve the remaining open invoices for processing, etc.). HTML uses the <form /> element with varying method and format instructions for this. 
SMIL uses the <send /> element along with an XPath query to indicate which portions of the message are to be returned; etc. I see reference to this kind of thing in 2.13 Invoking Service Operations [3], but this is a very limited situation, and while I see it is possible for clients to execute these GET methods with arbitrary URIs, I see nothing in the docs that indicates how I can fashion a response from the server that _tells_ clients this operation is possible. One way to resolve this would be to expand the media type to include similar message blocks that would contain a link relation, URI, and one or more template elements that clients could use to fill in themselves or present to users for population. I show a rudimentary example of this (in simple XML) in a recent blog post [4].

Reliance on URI Convention

Much of the documentation for OData is spent outlining URI conventions that need to be encoded into the client application. I would prefer the use of the URI-Templates [5] model for simple filter cases. This would allow clients to simply encode the rules for URI templates and execute them for templated links rather than requiring programmers to commit specific URI conventions directly to code. Using templates also means servers can modify the arrangement of the URI/query without requiring re-coding of the clients. For more complex queries (basically arbitrary filters/sorts, etc.) I'd prefer that OData advertise support for one or more query media types themselves (accept: text/t-sql, text/linq, text/yql, etc. [sigh, no registered types *yet*]). This would reduce the need to define complex URI conventions and provide greater flexibility in the future when other query languages become more desirable (e.g. application/sparql-query [6], etc.). In all these cases, clients can code for the query language, not the URI convention. 
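To make the URI-Templates point above concrete, here is a minimal sketch (in Python) of a client that encodes only the template-expansion rules rather than a hard-coded URI convention. The template string, variable names, and host are hypothetical illustrations, not taken from the OData spec:

```python
import re

def expand(template, variables):
    """Expand simple {name} placeholders, per the basic URI-Templates model."""
    return re.sub(r"\{(\w+)\}",
                  lambda m: str(variables.get(m.group(1), "")),
                  template)

# The server advertises the template in a response; the client only fills
# it in, so the server is free to rearrange its URI layout later without
# breaking clients.
link = expand("http://example.org/orders/{id}/items/{item}",
              {"id": "42", "item": "7"})
print(link)  # http://example.org/orders/42/items/7
```

The design point is that the client commits to the *expansion rules*, which are stable, instead of the server's URI layout, which may change.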
[1] http://www.ietf.org/rfc/rfc4287.txt [2] http://tools.ietf.org/html/rfc5023 [3] http://www.odata.org/developers/protocols/operations#InvokingServiceOperations [4] http://amundsen.com/blog/archives/1041 [5] http://tools.ietf.org/html/draft-gregorio-uritemplate-04 [6] http://www.w3.org/TR/rdf-sparql-query/#mediaType mca http://amundsen.com/blog/ On Fri, May 21, 2010 at 20:43, Glenn Block <glenn.block@...> wrote: > BTW, I want to thank everyone for welcoming me here and for bearing > with my noobness :-) > > On 5/21/10, Glenn Block <glenn.block@...> wrote: >> I say that because a common question I expect folks to ask when they >> hear about our stuff, is why do I need anything else. >> >> I have my own perceptions / thoughts, but would be interested in this >> groups. >> >> On 5/21/10, jerome.louvel <jerome.louvel@...> wrote: >>> >>> >>> >>> >>> >>> Hi Glenn, >>> >>> We have been working on the Restlet Framework [1] since 2005 (at this >>> time, >>> I think it was the first so-called REST framework). We have built a >>> comprehensive yet small framework since them, with a thriving community >>> behind it. >>> >>> Our Restlet API is both client-side and server-side, both low-level (all >>> HTTP semantics/headers mapped to a clean Java API, see [2]) and >>> high-level >>> (resource handling, limited/focused use of annotations), synchronous or >>> asynchronous (easily supporting provisional 1xx HTTP responses), supports >>> other (pseudo-)protocols than HTTP (FTP, SMTP, POP3, FILE, etc.) and is >>> available in five consistent editions: Java SE/EE, GAE, GWT and Android >>> [3]. >>> >>> We also happen to support JAX-RS as an extension, but that's not what I >>> would recommend as a starting point for you. Our framework lets >>> developers >>> support hypermedia since day one and we have explored higher-level links >>> traversal for a while with our RDF extension [4]. 
>>> >>> Otherwise, we are also working with Microsoft Interop teams to provide >>> advanced client support for the OData protocol [5]. >>> >>> IMHO, it would be great to have something similar in the .NET world. I'm >>> glad you are working on such project. Good luck! >>> >>> Best regards, >>> Jerome Louvel >>> -- >>> Restlet ~ Founder and Technical Lead ~ http://www.restlet.org >>> Noelios Technologies ~ http://www.noelios.com >>> >>> >>> [1] http://www.restlet.org >>> [2] >>> http://wiki.restlet.org/docs_2.0/13-restlet/27-restlet/330-restlet/130-restlet.html >>> [3] >>> http://wiki.restlet.org/docs_2.0/13-restlet/21-restlet/318-restlet/303-restlet.html >>> [4] >>> http://www.restlet.org/documentation/snapshot/jee/ext/org/restlet/ext/rdf/RdfClientResource.html >>> [5] >>> http://blog.noelios.com/2010/03/15/restlet-supports-odata-the-open-data-protocol/ >>> >>> >>> >> >> -- >> Sent from my mobile device >> > > -- > Sent from my mobile device > > > ------------------------------------ > > Yahoo! Groups Links > > > >
Meh. OpenRasta has been around for years on the .net side... -----Original Message----- From: rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com] On Behalf Of Glenn Block Sent: 22 May 2010 01:37 To: jerome.louvel; rest-discuss@yahoogroups.com Subject: [rest-discuss] Re: Thinking about REST and HTTP Thanks Jerome! On 5/21/10, jerome.louvel <jerome.louvel@noelios.com> wrote: > > > > > > Hi Glenn, > > We have been working on the Restlet Framework [1] since 2005 (at this > time, I think it was the first so-called REST framework). We have > built a comprehensive yet small framework since them, with a thriving > community behind it. > > Our Restlet API is both client-side and server-side, both low-level > (all HTTP semantics/headers mapped to a clean Java API, see [2]) and > high-level (resource handling, limited/focused use of annotations), > synchronous or asynchronous (easily supporting provisional 1xx HTTP > responses), supports other (pseudo-)protocols than HTTP (FTP, SMTP, > POP3, FILE, etc.) and is available in five consistent editions: Java SE/EE, GAE, GWT and Android [3]. > > We also happen to support JAX-RS as an extension, but that's not what > I would recommend as a starting point for you. Our framework lets > developers support hypermedia since day one and we have explored > higher-level links traversal for a while with our RDF extension [4]. > > Otherwise, we are also working with Microsoft Interop teams to provide > advanced client support for the OData protocol [5]. > > IMHO, it would be great to have something similar in the .NET world. > I'm glad you are working on such project. Good luck! 
> > Best regards, > Jerome Louvel > -- > Restlet ~ Founder and Technical Lead ~ http://www.restlet.org Noelios > Technologies ~ http://www.noelios.com > > > [1] http://www.restlet.org > [2] > http://wiki.restlet.org/docs_2.0/13-restlet/27-restlet/330-restlet/130 > -restlet.html > [3] > http://wiki.restlet.org/docs_2.0/13-restlet/21-restlet/318-restlet/303 > -restlet.html > [4] > http://www.restlet.org/documentation/snapshot/jee/ext/org/restlet/ext/ > rdf/RdfClientResource.html > [5] > http://blog.noelios.com/2010/03/15/restlet-supports-odata-the-open-dat > a-protocol/ > > > -- Sent from my mobile device ------------------------------------ Yahoo! Groups Links
Where do practical discussions of software architecture take place? I mean, this is a great group for discussing a single architectural style - even if its focus tends to be on implementation of a style - but do you know of any good outlets for discussion of styles themselves? Thanks, --tim
Tim: Good question. I've not found any list that covers arch in general, but would be interested in joining. If you find anything, please post here. mca http://amundsen.com/blog/ http://mamund.com/foaf.rdf#me On Mon, May 24, 2010 at 10:47, Tim Williams <williamstw@...> wrote: > Where do practical discussions of software architecture take place? I > mean, this is a great group for discussing a single architectural > style - even if its focus tends to be on implementation of a style - > but do you know of any good outlets for discussion of styles > themselves? > > Thanks, > --tim > > > ------------------------------------ > > Yahoo! Groups Links > > > >
On May 24, 2010, at 4:47 PM, Tim Williams wrote: > Where do practical discussions of software architecture take place? Tim, why not start a new group on Yahoo? (What would you (now) like to discuss in particular?) Jan > I > mean, this is a great group for discussing a single architectural > style - even if its focus tends to be on implementation of a style - > but do you know of any good outlets for discussion of styles > themselves? > > Thanks, > --tim > > > ------------------------------------ > > Yahoo! Groups Links > > > ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
Something in the framework :-) On Sun, May 23, 2010 at 1:54 PM, Sebastien Lambla <seb@...> wrote: > Meh. OpenRasta has been around for years on the .net side... > > -----Original Message----- > From: rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com] > On Behalf Of Glenn Block > Sent: 22 May 2010 01:37 > To: jerome.louvel; rest-discuss@yahoogroups.com > Subject: [rest-discuss] Re: Thinking about REST and HTTP > > Thanks Jerome! > > On 5/21/10, jerome.louvel <jerome.louvel@...> wrote: > > > > > > > > > > > > Hi Glenn, > > > > We have been working on the Restlet Framework [1] since 2005 (at this > > time, I think it was the first so-called REST framework). We have > > built a comprehensive yet small framework since them, with a thriving > > community behind it. > > > > Our Restlet API is both client-side and server-side, both low-level > > (all HTTP semantics/headers mapped to a clean Java API, see [2]) and > > high-level (resource handling, limited/focused use of annotations), > > synchronous or asynchronous (easily supporting provisional 1xx HTTP > > responses), supports other (pseudo-)protocols than HTTP (FTP, SMTP, > > POP3, FILE, etc.) and is available in five consistent editions: Java > SE/EE, GAE, GWT and Android [3]. > > > > We also happen to support JAX-RS as an extension, but that's not what > > I would recommend as a starting point for you. Our framework lets > > developers support hypermedia since day one and we have explored > > higher-level links traversal for a while with our RDF extension [4]. > > > > Otherwise, we are also working with Microsoft Interop teams to provide > > advanced client support for the OData protocol [5]. > > > > IMHO, it would be great to have something similar in the .NET world. > > I'm glad you are working on such project. Good luck! 
> > > > Best regards, > > Jerome Louvel > > -- > > Restlet ~ Founder and Technical Lead ~ http://www.restlet.org Noelios > > Technologies ~ http://www.noelios.com > > > > > > [1] http://www.restlet.org > > [2] > > http://wiki.restlet.org/docs_2.0/13-restlet/27-restlet/330-restlet/130 > > -restlet.html > > [3] > > http://wiki.restlet.org/docs_2.0/13-restlet/21-restlet/318-restlet/303 > > -restlet.html > > [4] > > http://www.restlet.org/documentation/snapshot/jee/ext/org/restlet/ext/ > > rdf/RdfClientResource.html > > [5] > > http://blog.noelios.com/2010/03/15/restlet-supports-odata-the-open-dat > > a-protocol/ > > > > > > > > -- > Sent from my mobile device > > > ------------------------------------ > > Yahoo! Groups Links > > > >
On 5/11/2010 4:31 AM, Eric J. Bowman wrote: > Kris Zyp wrote: >> >> I believe one should be able to assume that the content type of the >> representation returned from a server from GET for URI is acceptable >> in a PUT request to that server for the same URI. >> > > Absolutely not. The late binding of representation to resource > precludes this assumption. HTML is capable of providing an interface > to an Atom system. What media type to PUT or POST to the system is > explicitly provided in the markup, i.e. a self-documenting interface. > > Assuming that you can PUT or POST HTML to my system because that's the > media type I sent on GET would not work -- I derive HTML from Atom, not > the other way around. > > A PUT of an HTML document would show an intent to replace the > self-documenting interface provided by the HTML representation, with > some other application state. HTML is generated by my system, it is not > subject to change via PUT to negotiated resources which happen to return > text/html or application/xhtml+xml on GET with a Web browser, but > happen to return Atom to a feed reader. I certainly agree that receiving a media type from a server does not guarantee that the server can receive that same media type from the client. However, in the absence of knowledge of a different explicit media type preference (from the media type definition), when it comes to negotiating an acceptable type with the server, pretending that all media types are equally likely is as silly as pretending that any language is equally likely to be understood in response to someone who speaks to you in French. > >> >> When using JSON, >> additional information about acceptable property values can be >> determined from any JSON Schema referenced by the resource. 
>> In other words, if you GET some resource, and the server responds with: >> >> Content-Type: application/my-type+json; profile=my-schema >> >> One could retrieve the schema from the "my-schema" relative URI and do >> a PUT using the application/my-type+json content type with the schema >> information as a guide to what property values are acceptable. >> > > Sure you can *do* this, it just wouldn't be REST. Leaving aside that > the media type identifier definition for JSON doesn't say anything about > extending it using *+json, the media type definition for JSON says > nothing about HTTP methods. Where have you provided a self-documenting > interface giving a target URI, method and media type -- as provided by > forms languages having no corollary in JSON, yet required by REST? > > If you "just know" that you can PUT or DELETE some JSON resource, it's > no more RESTful than "just knowing" that you can PUT or DELETE some > JPEG. You're resorting to unbounded creativity, rather than using > standard media types and link relations which *do* cover HTTP methods, > for any target media type. > RFC2616 sufficiently defines the meaning of PUT and DELETE; a media type does not need to conflate protocol concerns to be RESTful. >> >> Discovery of POST actions is completely different than PUT (since >> PUT's behavior is implied by a GET response). A JSON Schema can >> describe possible POST actions with submission links, including an >> acceptable content type (in the "enctype" property). >> > > I don't see how. Regardless of schema, there's simply no mention in > the media type definition of JSON for describing URIs or methods, i.e. > there's no forms language. The demo I posted consists of XHTML > steady-states derived from various source representations of other media > types. These steady-states (will) provide a self-documenting API to > the underlying Atom-based system. > > The user isn't trying to discover PUT vs. POST actions. 
> The user is trying to drive an application to another steady-state. The user agent > needs to translate that user goal into HTTP interactions. If the user > is trying to add a new post, the user agent is instructed to POST to > the domain root. If the user is trying to add a new comment, the user > agent is instructed to POST to the appropriate comment thread. If the > user intent is to edit an existing entry, the user agent is instructed > to PUT to the existing URI. In each case, the user agent is instructed > to use application/atom+xml; type=entry. > > There's no RESTful way to instruct any user agent that "this system > uses Atom Protocol" and this may not be inferred by the fact that the > system uses Atom. All I can do is provide a self-documenting hypertext > API which instructs user agents how to interact with the system. This > API may or may not conform to Atom Protocol. Whether it does or not is > less important to REST than its presence. > > None of this is any different for a system based on JSON rather than > Atom. As a REST system, I could change my Atom backend to a JSON > backend on a whim. I'm not saying it would be easy, but I am saying > that the application states wouldn't change. The HTML would still > present a textarea, changes to that textarea would be submitted to the > same URI, using whatever media type the form says to use -- all HTML > user agents automatically update to the new API. > > If you need to guess what media type to use then you can't possibly be > using REST. A REST API will always tell you exactly what media type to > use. It isn't implicit in any guessable fashion, it's explicit. If it > isn't explicit, it isn't REST. HTML says what POST does, but only your > hypertext can specify media type; if you lack such hypertext you lack > a critical REST constraint. 
There is certainly nothing wrong with specifying what media type a server can handle in the media type definition or hypertext (JSON Schema allows for specifying an acceptable media type for requests as well); however, the dynamic representation/content negotiation principle implies that a server may have capabilities to handle various types that may independently evolve. I know my server software can handle various media types to update resources (JSON, JS, XML, url-encoded, etc.). -- Kris Zyp SitePen (503) 806-1841 http://sitepen.com
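A minimal sketch of the point above about a server accepting several request media types: dispatch on the Content-Type header of the incoming PUT/POST. The handler shapes and type list here are hypothetical illustrations, not from any particular framework:

```python
import json
from urllib.parse import parse_qs

# Hypothetical dispatch table: the media types this server supports for
# incoming request bodies, each mapped to a parser. The set of supported
# types can evolve independently of any one client's assumptions.
PARSERS = {
    "application/json": json.loads,
    "application/x-www-form-urlencoded":
        lambda body: {k: v[0] for k, v in parse_qs(body).items()},
}

def parse_body(content_type, body):
    """Parse a request body according to its Content-Type header."""
    # Strip parameters such as "; charset=utf-8" before the lookup.
    media_type = content_type.split(";")[0].strip().lower()
    parser = PARSERS.get(media_type)
    if parser is None:
        # In HTTP terms the server would answer 415 Unsupported Media Type.
        raise ValueError("unsupported media type: " + media_type)
    return parser(body)

print(parse_body("application/json; charset=utf-8", '{"name": "invoice-1"}'))
```

The interesting property is that adding a new supported type is a one-line change to the table, with no change to clients that already negotiate a type the server understands.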
Hi guys I created the following twit-poll: "If you are using REST, how would you rank in importance the following "clients" for invoking RESTful web services in your .NET applications?" http://twtpoll.com/rohrw5 Appreciate you guys weighing in. Note, it's specifically related to .NET development. Thanks Glenn
Glenn, I've done the poll but I would have offered the following options: - curl - WebSockets class - HttpClient / HttpRequest classes - Browser (HTML only; no Javascript) - Browser (HTML + JavaScript / JQuery) - Browser (Silverlight plugin) - Browser (Flash plugin) - Desktop Silverlight / WPF - Desktop Adobe Air - Other (please specify) Regards, Alan Dean On Tue, May 25, 2010 at 07:08, Glenn Block <glenn.block@gmail.com> wrote: > > > Hi guys > > I created the following twit-poll: > > "If you are using REST, how would you rank in importance the following > "clients" for invoking RESTful web services in your .NET applications?" > > http://twtpoll.com/rohrw5 > > Appreciate you guys weighing in. Note, it's specifically related to .NET > development. > > Thanks > Glenn > > > > >
On Mon, May 24, 2010 at 11:30 AM, Jan Algermissen <algermissen1971@...> wrote: > > On May 24, 2010, at 4:47 PM, Tim Williams wrote: > >> Where do practical discussions of software architecture take place? > > Tim, > > why not start a new group on Yahoo? Yeah, it's not looking promising. I'll go ahead and start one this week if nothing turns up - we'll see if it can get any traction or not. > (What would you (now) like to discuss in particular?) Practical architecture-implementation mapping. Style/architecture derivation. Practical architecture documentation. Architectural change management. Other topics motivated by chapters 1-4 of "the dissertation":) Other topics motivated by readings (e.g. Taylor's book). These questions have been building up for a while... --tim
Thanks. My goal was to get a broad sense of which clients we should invest in for managed code. On 5/25/10, Alan Dean <alan.dean@...> wrote: > Glenn, > > I've done the poll but I would have offered the following options: > > - curl > - WebSockets class > - HttpClient / HttpRequest classes > - Browser (HTML only; no Javascript) > - Browser (HTML + JavaScript / JQuery) > - Browser (Silverlight plugin) > - Browser (Flash plugin) > - Desktop Silverlight / WPF > - Desktop Adobe Air > - Other (please specify) > > Regards, > Alan Dean > > On Tue, May 25, 2010 at 07:08, Glenn Block <glenn.block@...> wrote: > >> >> >> Hi guys >> >> I created the following twit-poll: >> >> "If you are using REST, how would you rank in importance the following >> "clients" for invoking RESTful web services in your .NET applications?" >> >> http://twtpoll.com/rohrw5 >> >> Appreciate you guys weighing in. Note, it's specifically related to .NET >> development. >> >> Thanks >> Glenn >> >> >> >> >> > -- Sent from my mobile device
Go for it, Tim. I'd be interested in discussions also, though I'm only a "beginner" in the SW-arch field. Let us know when you start the group/mailing list (google groups is another option, as you probably know). Cheers, Ivan On Tue, May 25, 2010 at 14:55, Tim Williams <williamstw@...> wrote: > > > > On Mon, May 24, 2010 at 11:30 AM, Jan Algermissen > <algermissen1971@...> wrote: > > > > On May 24, 2010, at 4:47 PM, Tim Williams wrote: > > > >> Where do practical discussions of software architecture take place? > > > > Tim, > > > > why not start a new group on Yahoo? > > Yeah, it's not looking promising. I'll go ahead and start one this > week if nothing turns up - we'll see if it can get any traction or > not. > > > (What would you (now) like to discuss in particular?) > > Practical architecture-implementation mapping. Style/architecture > derivation. Practical architecture documentation. Architectural > change management. Other topics motivated by chapters 1-4 of "the > dissertation":) Other topics motivated by readings (e.g. Taylor's > book). These questions have been building up for a while... > > --tim
yep - i'd be interested in talking about Taylor's latest book and expanding to other arch-level items. mca http://amundsen.com/blog/ http://mamund.com/foaf.rdf#me On Tue, May 25, 2010 at 08:55, Tim Williams <williamstw@gmail.com> wrote: > On Mon, May 24, 2010 at 11:30 AM, Jan Algermissen > <algermissen1971@...> wrote: >> >> On May 24, 2010, at 4:47 PM, Tim Williams wrote: >> >>> Where do practical discussions of software architecture take place? >> >> Tim, >> >> why not start a new group on Yahoo? > > Yeah, it's not looking promising. I'll go ahead and start one this > week if nothing turns up - we'll see if it can get any traction or > not. > >> (What would you (now) like to discuss in particular?) > > Practical architecture-implementation mapping. Style/architecture > derivation. Practical architecture documentation. Architectural > change management. Other topics motivated by chapters 1-4 of "the > dissertation":) Other topics motivated by readings (e.g. Taylor's > book). These questions have been building up for a while... > > --tim > > > ------------------------------------ > > Yahoo! Groups Links > > > >
--- In rest-discuss@yahoogroups.com, Tim Williams <williamstw@...> wrote: > > On Mon, May 24, 2010 at 11:30 AM, Jan Algermissen > <algermissen1971@...> wrote: > > > > On May 24, 2010, at 4:47 PM, Tim Williams wrote: > > > >> Where do practical discussions of software architecture take place? > > > > Tim, > > > > why not start a new group on Yahoo? > > Yeah, it's not looking promising. I'll go ahead and start one this > week if nothing turns up - we'll see if it can get any traction or > not. > > > (What would you (now) like to discuss in particular?) > > Practical architecture-implementation mapping. Style/architecture > derivation. Practical architecture documentation. Architectural > change management. Other topics motivated by chapters 1-4 of "the > dissertation":) Other topics motivated by readings (e.g. Taylor's > book). These questions have been building up for a while... > > --tim > Sounds interesting! Please let us know what forum(s) you find/create! Andrew
Disclaimer: This is not marketing promoting our TechEd event :-) Are any folks on this list planning to attend Tech-Ed 2010 on the 7th? The reason I ask: I am discussing putting together a workshop / informal session on where we go with our REST/HTTP efforts at TechEd. It would be great if some of you guys were there (assuming you are attending the event). If you are interested in such a session, let me know. Thanks Glenn
> Disclaimer: This not marketing promoting our TechEd event :-) Disclaimer: this is me blagging, probably unsuccessfully. > Are any folks on this list planning to attend Tech-Ed 2010 on the 7th? > Reason I am discussing putting together a workshop / informal session on > where we go with our REST/HTTP efforts at TechEd. It would be great if some > of you guys were there (assuming you are attending the event). If you are > interested in such a session let me know. Any free tickets? :-) Jim
What if we arranged it "off site" where you wouldn't need attendance. Would you come? Glenn On Wed, May 26, 2010 at 1:07 PM, Jim Webber <jim@webber.name> wrote: > > > > Disclaimer: This not marketing promoting our TechEd event :-) > > Disclaimer: this is me blagging, probably unsuccessfully. > > > > Are any folks on this list planning to attend Tech-Ed 2010 on the 7th? > > Reason I am discussing putting together a workshop / informal session on > > where we go with our REST/HTTP efforts at TechEd. It would be great if > some > > of you guys were there (assuming you are attending the event). If you are > > interested in such a session let me know. > > Any free tickets? :-) > > Jim > > >
On May 26, 2010, at 10:11 PM, Glenn Block wrote: > > > What if we arranged it "off site" where you wouldn't need attendance. Would you come? Glenn, can you make a little more clear how 'serious' the REST effort is from a Microsoft-as-a-company-for-enterprise-IT-solutions perspective? Does your effort indicate a move by Microsoft to improve the situation of enterprise integration? Or is it merely (no intent to insult you) a side-project to add a little REST support to WCF? I am asking because, if it is the former, it will be very important for you to get it right and not only add support for turning objects into XML and sending that out via HTTP. In that case, I'd be seriously interested in joining you. Also in the case of the former (and I guess this is what Jim drove at), Microsoft should probably add some serious bait (such as free admission :-) Jan > > Glenn > > On Wed, May 26, 2010 at 1:07 PM, Jim Webber <jim@...> wrote: > > > > Disclaimer: This not marketing promoting our TechEd event :-) > > Disclaimer: this is me blagging, probably unsuccessfully. > > > > Are any folks on this list planning to attend Tech-Ed 2010 on the 7th? > > Reason I am discussing putting together a workshop / informal session on > > where we go with our REST/HTTP efforts at TechEd. It would be great if some > > of you guys were there (assuming you are attending the event). If you are > > interested in such a session let me know. > > Any free tickets? :-) > > Jim > > > > > > ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
Gee - excuse all the typos. It's late here and the train ride is really, really bumpy... Jan On May 26, 2010, at 10:29 PM, Jan Algermissen wrote: > > On May 26, 2010, at 10:11 PM, Glenn Block wrote: > >> >> >> What if we arranged it "off site" where you wouldn't need attendance. Would you come? > > Glenn, > > can you make a little more clear how 'serious' the REST-effort is from a Microsoft-as-a-company-for-enterprise-IT-solutions? > > Does you effort indicate a move by Microsoft to improve the situation of enterprise integration? > > Or is it merely (no intent to insult you) a side-project to add a little REST support to WCF? > > I am asking, because if it is the former, I will be very important for you to get it right and not only add support for turning objects into XML and sending that out via HTTP. In that case, I'd be seriously interested to join you. > > Also in the case of the former (and I guess this is what Jim drove at) Microsoft should probably add some serious bait piece (such as free admission :-) > > Jan > > >> >> Glenn >> >> On Wed, May 26, 2010 at 1:07 PM, Jim Webber <jim@...> wrote: >> >> >>> Disclaimer: This not marketing promoting our TechEd event :-) >> >> Disclaimer: this is me blagging, probably unsuccessfully. >> >> >>> Are any folks on this list planning to attend Tech-Ed 2010 on the 7th? >>> Reason I am discussing putting together a workshop / informal session on >>> where we go with our REST/HTTP efforts at TechEd. It would be great if some >>> of you guys were there (assuming you are attending the event). If you are >>> interested in such a session let me know. >> >> Any free tickets? :-) >> >> Jim >> >> >> >> >> >> > > ----------------------------------- > Jan Algermissen, Consultant > NORD Software Consulting > > Mail: algermissen@... > Blog: http://www.nordsc.com/blog/ > Work: http://www.nordsc.com/ > ----------------------------------- > > > > > > > ------------------------------------ > > Yahoo! 
Groups Links > > > ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
This is a serious REST and HTTP effort. I keep saying HTTP since it's not only about pure REST. When I say HTTP I mean we will still support RPC-style POX / plain old JSON, but that is not our primary intent. It is about supporting a natural programming model for a resource-oriented service approach. This is not about retrofitting to WCF; been there, done that. :-) We may use some core pieces as appropriate, but we have in the works a new client and server story for building HTTP-style services. Thanks Glenn On Wed, May 26, 2010 at 1:29 PM, Jan Algermissen <algermissen1971@...> wrote: > > On May 26, 2010, at 10:11 PM, Glenn Block wrote: > > > > > > > What if we arranged it "off site" where you wouldn't need attendance. > Would you come? > > Glenn, > > can you make a little more clear how 'serious' the REST-effort is from a > Microsoft-as-a-company-for-enterprise-IT-solutions? > > Does you effort indicate a move by Microsoft to improve the situation of > enterprise integration? > > Or is it merely (no intent to insult you) a side-project to add a little > REST support to WCF? > > I am asking, because if it is the former, I will be very important for you > to get it right and not only add support for turning objects into XML and > sending that out via HTTP. In that case, I'd be seriously interested to join > you. > > Also in the case of the former (and I guess this is what Jim drove at) > Microsoft should probably add some serious bait piece (such as free > admission :-) > > Jan > > > > > > Glenn > > > > On Wed, May 26, 2010 at 1:07 PM, Jim Webber <jim@...> wrote: > > > > > > > Disclaimer: This not marketing promoting our TechEd event :-) > > > > Disclaimer: this is me blagging, probably unsuccessfully. > > > > > > > Are any folks on this list planning to attend Tech-Ed 2010 on the 7th? > > > Reason I am discussing putting together a workshop / informal session > on > > > where we go with our REST/HTTP efforts at TechEd. 
It would be great if > some > > > of you guys were there (assuming you are attending the event). If you > are > > > interested in such a session let me know. > > > > Any free tickets? :-) > > > > Jim > > > > > > > > > > > > > > ----------------------------------- > Jan Algermissen, Consultant > NORD Software Consulting > > Mail: algermissen@... > Blog: http://www.nordsc.com/blog/ > Work: http://www.nordsc.com/ > ----------------------------------- > > > > >
I expect, for example, we will have first-class support for resources, links/navigation, content negotiation, media types, and authentication (OAuth/OAuth2). No WSDL/WADL required. Btw, what do you guys think of WADL? On Wed, May 26, 2010 at 1:29 PM, Jan Algermissen <algermissen1971@...> wrote: > > On May 26, 2010, at 10:11 PM, Glenn Block wrote: > > > > > > > What if we arranged it "off site" where you wouldn't need attendance. > Would you come? > > Glenn, > > can you make a little more clear how 'serious' the REST-effort is from a > Microsoft-as-a-company-for-enterprise-IT-solutions? > > Does you effort indicate a move by Microsoft to improve the situation of > enterprise integration? > > Or is it merely (no intent to insult you) a side-project to add a little > REST support to WCF? > > I am asking, because if it is the former, I will be very important for you > to get it right and not only add support for turning objects into XML and > sending that out via HTTP. In that case, I'd be seriously interested to join > you. > > Also in the case of the former (and I guess this is what Jim drove at) > Microsoft should probably add some serious bait piece (such as free > admission :-) > > Jan > > > > > > Glenn > > > > On Wed, May 26, 2010 at 1:07 PM, Jim Webber <jim@...> wrote: > > > > > > > Disclaimer: This not marketing promoting our TechEd event :-) > > > > Disclaimer: this is me blagging, probably unsuccessfully. > > > > > > > Are any folks on this list planning to attend Tech-Ed 2010 on the 7th? > > > Reason I am discussing putting together a workshop / informal session > on > > > where we go with our REST/HTTP efforts at TechEd. It would be great if > some > > > of you guys were there (assuming you are attending the event). If you > are > > > interested in such a session let me know. > > > > Any free tickets? 
:-) > > > > Jim > > > > > > > > > > > > > > ----------------------------------- > Jan Algermissen, Consultant > NORD Software Consulting > > Mail: algermissen@... > Blog: http://www.nordsc.com/blog/ > Work: http://www.nordsc.com/ > ----------------------------------- > > > > >
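The content negotiation Glenn lists as a first-class feature boils down to matching an Accept header's q-values against the representations a resource supports. A minimal, illustrative sketch in plain Python (not the actual WCF/.NET API; both helper names are invented, and Accept parsing is simplified relative to the HTTP spec):

```python
# Illustrative sketch only: no Accept-header extensions and no
# specificity tie-breaking, just q-value-based selection.

def parse_accept(header):
    """Split an Accept header into (media_type, q) pairs."""
    pairs = []
    for part in header.split(","):
        fields = part.strip().split(";")
        media_type, q = fields[0].strip(), 1.0
        for param in fields[1:]:
            name, _, value = param.strip().partition("=")
            if name == "q":
                try:
                    q = float(value)
                except ValueError:
                    q = 0.0
        pairs.append((media_type, q))
    return pairs

def negotiate(accept_header, supported):
    """Return the supported media type the client prefers most, or None."""
    best, best_q = None, 0.0
    for media_type, q in parse_accept(accept_header):
        for candidate in supported:
            wildcard = candidate.split("/")[0] + "/*"  # e.g. "application/*"
            if media_type in ("*/*", wildcard, candidate) and q > best_q:
                best, best_q = candidate, q
    return best

# The client prefers JSON (implicit q=1.0) over XML (q=0.8):
print(negotiate("application/xml;q=0.8, application/json",
                ["application/json", "application/xml"]))  # application/json
```

A framework with "first-class media types" would run something like this per request and dispatch to the matching serializer, returning 406 when `negotiate` yields None.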
On May 26, 2010, at 10:45 PM, Glenn Block wrote: > > > This is a serious REST and HTTP effort. I keep saying HTTP since it's not only about pure REST. When I say HTTP I mean we will still support RPC style POX / Plain old JSON, but that is not our primary intent. Ok. Agreed with both. > It is supporting a natural programming model for a resource-oriented service approach. How far would you go and question existing programming models to go towards REST? (E.g. I have personally dropped the notion of 'service' from my own brain because it just gets in the way because it is an interface centric idea (at least as it is commonly used)) > > This is not about retrofitting to WCF, been there done that. :-) We may use some core peices as appropriate but we have in the works a new client and server story for building HTTP style services. Great. I guess a f2f meet-up is sort of mandatory. I'll let that sink in. Jan > > Thanks > Glenn > On Wed, May 26, 2010 at 1:29 PM, Jan Algermissen <algermissen1971@...> wrote: > > On May 26, 2010, at 10:11 PM, Glenn Block wrote: > > > > > > > What if we arranged it "off site" where you wouldn't need attendance. Would you come? > > Glenn, > > can you make a little more clear how 'serious' the REST-effort is from a Microsoft-as-a-company-for-enterprise-IT-solutions? > > Does you effort indicate a move by Microsoft to improve the situation of enterprise integration? > > Or is it merely (no intent to insult you) a side-project to add a little REST support to WCF? > > I am asking, because if it is the former, I will be very important for you to get it right and not only add support for turning objects into XML and sending that out via HTTP. In that case, I'd be seriously interested to join you. 
> > Also in the case of the former (and I guess this is what Jim drove at) Microsoft should probably add some serious bait piece (such as free admission :-) > > Jan > > > > > > Glenn > > > > On Wed, May 26, 2010 at 1:07 PM, Jim Webber <jim@...> wrote: > > > > > > > Disclaimer: This not marketing promoting our TechEd event :-) > > > > Disclaimer: this is me blagging, probably unsuccessfully. > > > > > > > Are any folks on this list planning to attend Tech-Ed 2010 on the 7th? > > > Reason I am discussing putting together a workshop / informal session on > > > where we go with our REST/HTTP efforts at TechEd. It would be great if some > > > of you guys were there (assuming you are attending the event). If you are > > > interested in such a session let me know. > > > > Any free tickets? :-) > > > > Jim > > > > > > > > > > > > > > ----------------------------------- > Jan Algermissen, Consultant > NORD Software Consulting > > Mail: algermissen@... > Blog: http://www.nordsc.com/blog/ > Work: http://www.nordsc.com/ > ----------------------------------- > > > > > > > > ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
CIL On Wed, May 26, 2010 at 1:50 PM, Jan Algermissen <algermissen1971@mac.com> wrote: > > > > On May 26, 2010, at 10:45 PM, Glenn Block wrote: > > > > > > > This is a serious REST and HTTP effort. I keep saying HTTP since it's not > only about pure REST. When I say HTTP I mean we will still support RPC style > POX / Plain old JSON, but that is not our primary intent. > > Ok. Agreed with both. > > > > It is supporting a natural programming model for a resource-oriented > service approach. > > How far would you go and question existing programming models to go towards > REST? > >>>All options are up on the table. One area I am starting to really warm to is offering a completely dynamic (dynamic keyword) client-side story (i.e., similar to what Restfulie has, though not exactly the same). > > (E.g. I have personally dropped the notion of 'service' from my own brain > because it just gets in the way because it is an interface centric idea (at > least as it is commonly used)) > >>>Yeah, we mean REST in its purity; 'services' is just a word people know. > > > > > > This is not about retrofitting to WCF, been there done that. :-) We may > use some core peices as appropriate but we have in the works a new client > and server story for building HTTP style services. > > Great. I guess a f2f meet-up is sort of mandatory. I'll let that sink in. > > >>>Great. As I was telling Mike, I am thinking of organizing a workshop in Redmond where we can explore this all together (assuming you can be there) :-) > Jan > > > > > > Thanks > > Glenn > > On Wed, May 26, 2010 at 1:29 PM, Jan Algermissen < > algermissen1971@... <algermissen1971%40mac.com>> wrote: > > > > On May 26, 2010, at 10:11 PM, Glenn Block wrote: > > > > > > > > > > > What if we arranged it "off site" where you wouldn't need attendance. > Would you come? > > > > Glenn, > > > > can you make a little more clear how 'serious' the REST-effort is from a > Microsoft-as-a-company-for-enterprise-IT-solutions? 
> > > > Does you effort indicate a move by Microsoft to improve the situation of > enterprise integration? > > > > Or is it merely (no intent to insult you) a side-project to add a little > REST support to WCF? > > > > I am asking, because if it is the former, I will be very important for > you to get it right and not only add support for turning objects into XML > and sending that out via HTTP. In that case, I'd be seriously interested to > join you. > > > > Also in the case of the former (and I guess this is what Jim drove at) > Microsoft should probably add some serious bait piece (such as free > admission :-) > > > > Jan > > > > > > > > > > Glenn > > > > > > On Wed, May 26, 2010 at 1:07 PM, Jim Webber <jim@webber.name<jim%40webber.name>> > wrote: > > > > > > > > > > Disclaimer: This not marketing promoting our TechEd event :-) > > > > > > Disclaimer: this is me blagging, probably unsuccessfully. > > > > > > > > > > Are any folks on this list planning to attend Tech-Ed 2010 on the > 7th? > > > > Reason I am discussing putting together a workshop / informal session > on > > > > where we go with our REST/HTTP efforts at TechEd. It would be great > if some > > > > of you guys were there (assuming you are attending the event). If you > are > > > > interested in such a session let me know. > > > > > > Any free tickets? :-) > > > > > > Jim > > > > > > > > > > > > > > > > > > > > > > ----------------------------------- > > Jan Algermissen, Consultant > > NORD Software Consulting > > > > Mail: algermissen@... <algermissen%40acm.org> > > Blog: http://www.nordsc.com/blog/ > > Work: http://www.nordsc.com/ > > ----------------------------------- > > > > > > > > > > > > > > > > > > ----------------------------------- > Jan Algermissen, Consultant > NORD Software Consulting > > Mail: algermissen@... <algermissen%40acm.org> > Blog: http://www.nordsc.com/blog/ > Work: http://www.nordsc.com/ > ----------------------------------- > > >
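The "completely dynamic client" Glenn warms to (Restfulie-style) can be sketched with dynamic attribute lookup: the transitions a server advertises in its hypermedia become callable members at runtime, instead of methods on a generated stub. This is an illustrative sketch, not Microsoft's or Restfulie's actual API; the class, link relations, and URIs are all invented:

```python
# Hypothetical sketch of a Restfulie-style dynamic client. A real
# client would populate _links from the hypermedia in an HTTP response
# and issue requests; here we just resolve link relations to URIs so
# the sketch stays self-contained.

class DynamicResource:
    def __init__(self, state, links):
        self.state = state
        self._links = links  # rel -> href, harvested from the response

    def __getattr__(self, rel):
        # Called only when normal attribute lookup fails: unknown names
        # resolve against the transitions the server advertised.
        try:
            href = self._links[rel]
        except KeyError:
            raise AttributeError(f"server offers no '{rel}' transition")
        # Stand-in for "follow the link": return the target URI.
        return lambda: href

order = DynamicResource("open", {"pay": "/orders/42/payment",
                                 "cancel": "/orders/42"})
print(order.pay())  # -> /orders/42/payment
```

The design point: the client hard-codes only link *relations*; if the server stops advertising a transition (say, `cancel` on a shipped order), the attribute simply isn't there, which is the hypermedia-as-application-state idea expressed in code.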
On May 26, 2010, at 10:48 PM, Glenn Block wrote: > > > I expect for example we will have first class support for resources, links/navigation, content negotation, media types, authentication (OAuth/OAuth2). No WSDL/WADL requred. That sounds a lot like JAX-RS-style thinking. Fine. Any plans for the client side (which is much more interesting wrt 'doing it right')? > > Which btw, what do you guys think of WADL? WADL violates REST (the hypermedia constraint) if used at design time. You can use WADL as a forms language at runtime, but I personally doubt that this is useful. Hypermedia controls can be very easily expressed in the media types themselves (just as HTML does for <a> <img> or <form>). Why would you add WADL on top of HTML, for example? Jan > > On Wed, May 26, 2010 at 1:29 PM, Jan Algermissen <algermissen1971@...> wrote: > > On May 26, 2010, at 10:11 PM, Glenn Block wrote: > > > > > > > What if we arranged it "off site" where you wouldn't need attendance. Would you come? > > Glenn, > > can you make a little more clear how 'serious' the REST-effort is from a Microsoft-as-a-company-for-enterprise-IT-solutions? > > Does you effort indicate a move by Microsoft to improve the situation of enterprise integration? > > Or is it merely (no intent to insult you) a side-project to add a little REST support to WCF? > > I am asking, because if it is the former, I will be very important for you to get it right and not only add support for turning objects into XML and sending that out via HTTP. In that case, I'd be seriously interested to join you. > > Also in the case of the former (and I guess this is what Jim drove at) Microsoft should probably add some serious bait piece (such as free admission :-) > > Jan > > > > > > Glenn > > > > On Wed, May 26, 2010 at 1:07 PM, Jim Webber <jim@...> wrote: > > > > > > > Disclaimer: This not marketing promoting our TechEd event :-) > > > > Disclaimer: this is me blagging, probably unsuccessfully. 
> > > > > > > Are any folks on this list planning to attend Tech-Ed 2010 on the 7th? > > > Reason I am discussing putting together a workshop / informal session on > > > where we go with our REST/HTTP efforts at TechEd. It would be great if some > > > of you guys were there (assuming you are attending the event). If you are > > > interested in such a session let me know. > > > > Any free tickets? :-) > > > > Jim > > > > > > > > > > > > > > ----------------------------------- > Jan Algermissen, Consultant > NORD Software Consulting > > Mail: algermissen@... > Blog: http://www.nordsc.com/blog/ > Work: http://www.nordsc.com/ > ----------------------------------- > > > > > > > > ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
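Jan's runtime alternative to WADL (hypermedia controls carried in the representation itself, the way HTML carries <a> and <form>) can be sketched as a client that hard-codes only link relations and discovers URIs from each response. The JSON shape, link relations, and URIs below are invented for illustration; a real service would define or reuse a documented media type:

```python
import json

# A made-up representation whose hypermedia controls travel in-band,
# rather than in a design-time WADL document.
ORDER_REPRESENTATION = json.dumps({
    "status": "open",
    "links": [
        {"rel": "self", "href": "http://example.org/orders/42"},
        {"rel": "payment", "href": "http://example.org/orders/42/payment"},
        {"rel": "cancel", "href": "http://example.org/orders/42"},
    ],
})

def link_for(representation, rel):
    """Discover a URI from the response body, not from out-of-band metadata."""
    doc = json.loads(representation)
    for link in doc.get("links", []):
        if link["rel"] == rel:
            return link["href"]
    return None  # the server is not advertising that transition right now

print(link_for(ORDER_REPRESENTATION, "payment"))
```

Contrast with WADL at design time: there, the URIs and operations are baked into the client before any response is seen, which is exactly the coupling the hypermedia constraint forbids.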
On Wed, May 26, 2010 at 1:54 PM, Jan Algermissen <algermissen1971@...> wrote: > > > > On May 26, 2010, at 10:48 PM, Glenn Block wrote: > > > > > > > I expect for example we will have first class support for resources, > links/navigation, content negotation, media types, authentication > (OAuth/OAuth2). No WSDL/WADL requred. > > That sounds a lot like a JAX-RS style thinking. Fine. Any plans for the > client side (which is much more interesting wrt 'doing it right') > >>>Client is a big part of our focus. We have a new HTTP Client stack we are building. We shipped something similar in our REST Starter Kit. It is designed from the ground up for HTTP and incorporates an HTTP-specific channel stack on the client (lightweight). >>>When you say it sounds a lot like JAX-RS, what is the context of that? Is that implying a limitation? > > > > > Which btw, what do you guys think of WADL? > > WADL violates REST (the hypermedia constraint) if used at design time. > > You can use WADL as a forms language at runtime, but I personally doubt > that this is useful. Hypermedia controls can be very easily expressed in the > media types themselves (just as HTML does for <a> <img> or <form>). Why > would you add WADL top HTML for example? > >>>We currently are not planning to do anything with it as it forces some reliance on tooling, keeping things in sync, etc. Just curious. > > Jan > Glenn > > > > > On Wed, May 26, 2010 at 1:29 PM, Jan Algermissen < > algermissen1971@... <algermissen1971%40mac.com>> wrote: > > > > On May 26, 2010, at 10:11 PM, Glenn Block wrote: > > > > > > > > > > > What if we arranged it "off site" where you wouldn't need attendance. > Would you come? > > > > Glenn, > > > > can you make a little more clear how 'serious' the REST-effort is from a > Microsoft-as-a-company-for-enterprise-IT-solutions? > > > > Does you effort indicate a move by Microsoft to improve the situation of > enterprise integration? 
> > > > Or is it merely (no intent to insult you) a side-project to add a little > REST support to WCF? > > > > I am asking, because if it is the former, I will be very important for > you to get it right and not only add support for turning objects into XML > and sending that out via HTTP. In that case, I'd be seriously interested to > join you. > > > > Also in the case of the former (and I guess this is what Jim drove at) > Microsoft should probably add some serious bait piece (such as free > admission :-) > > > > Jan > > > > > > > > > > Glenn > > > > > > On Wed, May 26, 2010 at 1:07 PM, Jim Webber <jim@...<jim%40webber.name>> > wrote: > > > > > > > > > > Disclaimer: This not marketing promoting our TechEd event :-) > > > > > > Disclaimer: this is me blagging, probably unsuccessfully. > > > > > > > > > > Are any folks on this list planning to attend Tech-Ed 2010 on the > 7th? > > > > Reason I am discussing putting together a workshop / informal session > on > > > > where we go with our REST/HTTP efforts at TechEd. It would be great > if some > > > > of you guys were there (assuming you are attending the event). If you > are > > > > interested in such a session let me know. > > > > > > Any free tickets? :-) > > > > > > Jim > > > > > > > > > > > > > > > > > > > > > > ----------------------------------- > > Jan Algermissen, Consultant > > NORD Software Consulting > > > > Mail: algermissen@... <algermissen%40acm.org> > > Blog: http://www.nordsc.com/blog/ > > Work: http://www.nordsc.com/ > > ----------------------------------- > > > > > > > > > > > > > > > > > > ----------------------------------- > Jan Algermissen, Consultant > NORD Software Consulting > > Mail: algermissen@... <algermissen%40acm.org> > Blog: http://www.nordsc.com/blog/ > Work: http://www.nordsc.com/ > ----------------------------------- > > >
On May 26, 2010, at 10:54 PM, Glenn Block wrote: > >>>Great. As I was telling Mike, I am thinking of organizing a workshop in Redmond where we can explore this all together (assuming you can be there) :-) Hmm - do you mean for the 7th? Isn't Tech:Ed in New Orleans? Or am I confusing matters? Jan ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
On May 26, 2010, at 11:00 PM, Glenn Block wrote: > > > On Wed, May 26, 2010 at 1:54 PM, Jan Algermissen <algermissen1971@...> wrote: > > > > On May 26, 2010, at 10:48 PM, Glenn Block wrote: > > > > > > > I expect for example we will have first class support for resources, links/navigation, content negotation, media types, authentication (OAuth/OAuth2). No WSDL/WADL requred. > > That sounds a lot like a JAX-RS style thinking. Fine. Any plans for the client side (which is much more interesting wrt 'doing it right') > > >>>Client is a big part of our focus. We have a new HTTP Client stack we are building. We shipped something similar in our REST Starter kit. It is designed from the ground up for HTTP, incroporates an HTTP specific channel stack on the client (light weight). Good. Count me in :-) > > >>>When you say sounds alot like JAX-RS what is the context of that? Is that implying a limitation? No, I think JAX-RS does a pretty good job. Personally, what I like most is that it very much emphasizes that resources are very independent things. You can look at each of them in isolation. JAX-RS makes that stand out. > > > > > Which btw, what do you guys think of WADL? > > WADL violates REST (the hypermedia constraint) if used at design time. > > You can use WADL as a forms language at runtime, but I personally doubt that this is useful. Hypermedia controls can be very easily expressed in the media types themselves (just as HTML does for <a> <img> or <form>). Why would you add WADL top HTML for example? > > >>>We currently are not planning to do anything with it as it forces some reliance on tooling, keeping things in synch etc. Just curious. Ok. I'd emphasize media type design (if that is possible for a framework at all). 
Jan > > Jan > > Glenn > > > > > > > On Wed, May 26, 2010 at 1:29 PM, Jan Algermissen <algermissen1971@...> wrote: > > > > On May 26, 2010, at 10:11 PM, Glenn Block wrote: > > > > > > > > > > > What if we arranged it "off site" where you wouldn't need attendance. Would you come? > > > > Glenn, > > > > can you make a little more clear how 'serious' the REST-effort is from a Microsoft-as-a-company-for-enterprise-IT-solutions? > > > > Does you effort indicate a move by Microsoft to improve the situation of enterprise integration? > > > > Or is it merely (no intent to insult you) a side-project to add a little REST support to WCF? > > > > I am asking, because if it is the former, I will be very important for you to get it right and not only add support for turning objects into XML and sending that out via HTTP. In that case, I'd be seriously interested to join you. > > > > Also in the case of the former (and I guess this is what Jim drove at) Microsoft should probably add some serious bait piece (such as free admission :-) > > > > Jan > > > > > > > > > > Glenn > > > > > > On Wed, May 26, 2010 at 1:07 PM, Jim Webber <jim@...> wrote: > > > > > > > > > > Disclaimer: This not marketing promoting our TechEd event :-) > > > > > > Disclaimer: this is me blagging, probably unsuccessfully. > > > > > > > > > > Are any folks on this list planning to attend Tech-Ed 2010 on the 7th? > > > > Reason I am discussing putting together a workshop / informal session on > > > > where we go with our REST/HTTP efforts at TechEd. It would be great if some > > > > of you guys were there (assuming you are attending the event). If you are > > > > interested in such a session let me know. > > > > > > Any free tickets? :-) > > > > > > Jim > > > > > > > > > > > > > > > > > > > > > > ----------------------------------- > > Jan Algermissen, Consultant > > NORD Software Consulting > > > > Mail: algermissen@... 
> > Blog: http://www.nordsc.com/blog/ > > Work: http://www.nordsc.com/ > > ----------------------------------- > > > > > > > > > > > > > > > > > > ----------------------------------- > Jan Algermissen, Consultant > NORD Software Consulting > > Mail: algermissen@... > Blog: http://www.nordsc.com/blog/ > Work: http://www.nordsc.com/ > ----------------------------------- > > > > ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
Glenn, TechEd is fine for US-based folks; any chance of doing something over at TVP in the UK for the Europeans? (I note that quite a few on the list, including myself, are on the right-hand side of the pond) Regards, Alan Dean On Wed, May 26, 2010 at 21:11, Glenn Block <glenn.block@...> wrote: > > > What if we arranged it "off site" where you wouldn't need attendance. Would > you come? > > Glenn > > On Wed, May 26, 2010 at 1:07 PM, Jim Webber <jim@...> wrote: > >> >> >> > Disclaimer: This not marketing promoting our TechEd event :-) >> >> Disclaimer: this is me blagging, probably unsuccessfully. >> >> >> > Are any folks on this list planning to attend Tech-Ed 2010 on the 7th? >> > Reason I am discussing putting together a workshop / informal session on >> > where we go with our REST/HTTP efforts at TechEd. It would be great if >> some >> > of you guys were there (assuming you are attending the event). If you >> are >> > interested in such a session let me know. >> >> Any free tickets? :-) >> >> Jim >> >> > >
echo On Wed, May 26, 2010 at 21:54, Jan Algermissen <algermissen1971@...>wrote: > > On May 26, 2010, at 10:48 PM, Glenn Block wrote: > > > > > Which btw, what do you guys think of WADL? > > WADL violates REST (the hypermedia constraint) if used at design time. > > You can use WADL as a forms language at runtime, but I personally doubt > that this is useful. Hypermedia controls can be very easily expressed in the > media types themselves (just as HTML does for <a> <img> or <form>). Why > would you add WADL top HTML for example? > > Jan >
Not the 7th. Something else that would be more of a 1 to 2 day workshop/discovery 100% focused on this problem space. On Wed, May 26, 2010 at 2:00 PM, Jan Algermissen <algermissen1971@...>wrote: > > > > On May 26, 2010, at 10:54 PM, Glenn Block wrote: > > > >>>Great. As I was telling Mike, I am thinking of organizing a workshop > in Redmond where we can explore this all together (assuming you can be > there) :-) > > Hmm - do you mean for the 7th? Isn't Tech:Ed in Nwe Orleans? Or am I > confusing matters. > > Jan > > > ----------------------------------- > Jan Algermissen, Consultant > NORD Software Consulting > > Mail: algermissen@... <algermissen%40acm.org> > Blog: http://www.nordsc.com/blog/ > Work: http://www.nordsc.com/ > ----------------------------------- > > >
I am coming to the UK in a few weeks actually, so we might be able to pull something together. The bigger Redmond workshop I am brainstorming on would include folks from all over, though likely they would need to travel. On Wed, May 26, 2010 at 2:06 PM, Alan Dean <alan.dean@...> wrote: > > > Glenn, > > TechEd is fine for US-based folks; any chance of doing something over at > TVP in the UK for the europeans? (I note that quite a few on the list, > including myself, are on right-hand side of the pond) > > Regards, > Alan Dean > > > On Wed, May 26, 2010 at 21:11, Glenn Block <glenn.block@...> wrote: > >> >> >> What if we arranged it "off site" where you wouldn't need attendance. >> Would you come? >> >> Glenn >> >> On Wed, May 26, 2010 at 1:07 PM, Jim Webber <jim@...> wrote: >> >>> >>> >>> > Disclaimer: This not marketing promoting our TechEd event :-) >>> >>> Disclaimer: this is me blagging, probably unsuccessfully. >>> >>> >>> > Are any folks on this list planning to attend Tech-Ed 2010 on the 7th? >>> > Reason I am discussing putting together a workshop / informal session >>> on >>> > where we go with our REST/HTTP efforts at TechEd. It would be great if >>> some >>> > of you guys were there (assuming you are attending the event). If you >>> are >>> > interested in such a session let me know. >>> >>> Any free tickets? :-) >>> >>> Jim >>> >>> >> > >
Hello everyone,
I would like some feedback on my attempt to minimize the coupling
found in typical business process models written from a web-services
point of view, by using the key REST cycle of "retrieving a
resource", "analyzing it", "following a link".
http://guilhermesilveira.wordpress.com/2010/05/27/minimize-coupling-with-rest-processes/
You define conditions describing how you analyze your resource. Note
that the string can be *anything* you want; what matters is that you
are checking for the presence of a linked resource:
------------------------------------------------------------------------
When "there is a payment" do |resource|
  resource.values.first.links.payment  # check if there is a payment relation
end
------------------------------------------------------------------------
When executing a step, you follow another link to a resource and
POST, PUT, PATCH, and so on. Again, navigating the links means
following resources, but the string you alias them with can be as
clear as you wish.
------------------------------------------------------------------------
Then "create the basket" do |resource|
  basket = {:basket => {:items => [{:id => @desired['id']}]}}
  @basket_resource = resource.items.links.basket.post! basket
  # HTTP defined that POSTing to that rel meant creating such a resource
end
------------------------------------------------------------------------
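The registration behind such When/Then blocks can be sketched in plain Ruby. This is a hypothetical mini-implementation, not the actual library: condition and step blocks are stored by name, and a small runner fires each step whose named condition holds for the resource.

```ruby
# Hedged sketch of a When/Then DSL registry (all names hypothetical;
# this is not the real library's implementation).
module ProcessDsl
  CONDITIONS = {}
  STEPS = {}

  # Register a named condition block.
  def When(name, &block)
    CONDITIONS[name] = block
  end

  # Register a named step (action) block.
  def Then(name, &block)
    STEPS[name] = block
  end

  # For each (condition, step) pair, fire the step if the condition holds.
  def run(resource, plan)
    plan.each do |condition_name, step_name|
      STEPS[step_name].call(resource) if CONDITIONS[condition_name].call(resource)
    end
  end
end

include ProcessDsl

When("there is a payment") { |r| r[:links].include?(:payment) }
Then("pay") { |r| r[:paid] = true }

resource = { links: [:payment], paid: false }
run(resource, [["there is a payment", "pay"]])
```

The plain-English script then becomes just a list of condition/step name pairs, decoupled from how the resource is actually inspected.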
And finally, with four different steps defined, there is an infinite
number of possibilities for how to execute my process. This is the
actual running code:
------------------------------------------------------------------------
When there is a required item
And there is a basket
But didnt create a basket
Then create the basket
When there is a required item
And there is a basket
Then add to the basket
When it is a basket
And there are still products to buy
Then start again
When there is a payment
Then pay
------------------------------------------------------------------------
I've noticed that with such a DSL there are many different types of
evolution that can occur on the server while keeping my clients
working just fine.
Any opinions?
Regards
Guilherme Silveira
Caelum | Ensino e Inovação
http://www.caelum.com.br/
On Thu, May 27, 2010 at 12:10 PM, Guilherme Silveira <guilherme.silveira@...> wrote:
> You define conditions as you would analyze your resource. Note that
> the string can be *anything* that you want, but you checking the
> presence of a linked resource:

Not to get all buzzwordy here, but that sounds like a rule-based system.

You have media type behaviors keyed into the system, somehow, so, for example, you know what "buy a product" means in terms of what data gets sent to the "buy now" link, and, of course, that the "#buyNow" link actually MEANS "buy now".

Once you have those base behaviors specified, then you can write rules to react to conditions in pursuit of a goal.

Most folks represent those rules statically, as ordered instructions in a program, versus using some kind of rule system that manages to order the rules based on the state of the application. This "rule soup" is (ideally) self-organizing, but you typically have to do some kind of chicanery through state variables to keep rules from firing too soon, or to order the rules when more than one rule has the potential to fire. That's just a truth when dealing with any rule system.

I think the key here, the real takeaway, is the underlying infrastructure of dealing with the resources -- of coding the media type and the actions as generic procedures. Whether you use a rule system to infer your workflow or explicitly lay it out using code, the underlying relationships coded to the media types are the same.

Making that underlying infrastructure, the translating of the media types and supported operations into code, easier to create would be a real boon. Such as doing a GET from #itemList, a GET that automatically redirects if it has to. That alone would add a lot of robustness to client code.

For most code, IMHO, having to use a rule engine for logic is simply overkill. Sure it can be more flexible and resistant to change, but one issue is that the rules were written given basic assumptions, and the premise is that those assumptions won't change, even if other premises are made on the hosting server. But, historically, many a system has overloaded an older concept, using the same identifiers, with a new concept. And whatever assumptions your rule system made about that concept are now mistaken, because it was never told about the new assumptions associated with the same identifier as published by the server system.

A simple example is that the server could have changed the data type for "buy now" but not changed either the actual schema version (i.e. it's still in http://example.com/buynow.xsd, say, just a different buynow.xsd than the rules were developed for) or the operation (which is still a relation tagged "buyNow"). At a high level this looks the same to your rule system, but in truth it could have changed beneath the feet of the rule system, and the once "more robust" rule system is in truth no better than a hardcoded system.

Computers have yet to do well when their creators lie to them :). And we creators don't lie out of malice, simply laziness.

A rule system is a good idea; I'm just pointing out that there can be a lot of issues with one.
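The "rule soup" described here can be sketched as a minimal Ruby rule engine (all names hypothetical): each rule pairs a condition on the current state with an action, and the engine repeatedly fires the first rule that matches, which is exactly where the ordering chicanery creeps in.

```ruby
# Minimal sketch of a rule-based client (names hypothetical).
Rule = Struct.new(:name, :condition, :action)

class RuleEngine
  def initialize(rules)
    @rules = rules
  end

  # Fire the first matching rule until none match or the budget runs out.
  def run(state, max_steps: 10)
    max_steps.times do
      rule = @rules.find { |r| r.condition.call(state) }
      return state unless rule
      rule.action.call(state)
    end
    state
  end
end

# Toy "buy a product" workflow driven by link relations in the state.
state = { links: [:item_list], basket: nil, paid: false }

rules = [
  Rule.new("create basket",
           ->(s) { s[:links].include?(:item_list) && s[:basket].nil? },
           ->(s) { s[:basket] = []; s[:links] << :buy_now }),
  Rule.new("pay",
           ->(s) { s[:links].include?(:buy_now) && !s[:paid] },
           ->(s) { s[:paid] = true })
]

final = RuleEngine.new(rules).run(state)
```

Note how "create basket" must be guarded by `s[:basket].nil?` so it does not fire again after paying: that guard is the state-variable chicanery mentioned above.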
Guilherme,
see at end.
On May 27, 2010, at 9:10 PM, Guilherme Silveira wrote:
> Hello everyone,
>
> I would like to get some feedback in my attempt to minimize the
> coupling we find in typical business process models written in a web
> services point of view, by using that key aspect of "retrieving a
> resource", "analyzing", "following a link".
>
> http://guilhermesilveira.wordpress.com/2010/05/27/minimize-coupling-with-rest-processes/
>
> You define conditions as you would analyze your resource. Note that
> the string can be *anything* that you want, but you checking the
> presence of a linked resource:
>
> ------------------------------------------------------------------------
> When "there is a payment" do |resource|
> resource.values.first.links.payment # check if there is a
> payment relation
> end
> ------------------------------------------------------------------------
>
> When executing a step, you follow another link to a resource, and
> either post, put, patch and so on. Again, navigating the links is
> following resources, but the string that alias it to yourself should
> be as clear as you wish.
>
> ------------------------------------------------------------------------
> Then "create the basket" do |resource|
> basket = {:basket => {:items => [{:id => @desired['id']}]} }
> @basket_resource = resource.items.links.basket.post! basket #
> http defined POSTing to that rel meant creating such resource
> end
> ------------------------------------------------------------------------
>
> And finally, creating 4 different steps, there is an infinite number
> of possibilites on how to execute my process. This is the actual
> running code:
>
> ------------------------------------------------------------------------
> When there is a required item
> And there is a basket
> But didnt create a basket
> Then create the basket
>
> When there is a required item
> And there is a basket
> Then add to the basket
>
> When it is a basket
> And there are still products to buy
> Then start again
>
> When there is a payment
> Then pay
> ------------------------------------------------------------------------
>
> I've noticed that with such a DSL there are many different types of
> evolutions that might occur on the server that will keep my clients
> working just fine.
>
> Any opinions?
In the referenced blog you write:
"While integrating systems, implementing access or processes is typically achieved through man ordered list of steps, where one expects specific results from the server."
I think it is important to clearly distinguish between the user (human or machine) and the user agent (the REST component that acts on behalf of the user):
The *user agent* does not expect specific results beyond that the server does not lie about the links it sends (e.g. <img src=""/> in HTML is expected to reference an image)
The user (human or machine) has expectations that might break (e.g. when Amazon removes the 1-click button) but that is inevitable coupling (Roy called it "intent" a while ago[1]).
IOW, in REST a (bug-free) changing server can never break a user agent's assumptions. Only user expectations can fail to be met.
Jan
[1] http://www.imc.org/atom-protocol/mail-archive/msg11489.html
(Will take another look at your example, but no time right now)
>
> Regards
>
> Guilherme Silveira
> Caelum | Ensino e Inovao
> http://www.caelum.com.br/
>
>
> ------------------------------------
>
> Yahoo! Groups Links
>
>
>
-----------------------------------
Jan Algermissen, Consultant
NORD Software Consulting
Mail: algermissen@...
Blog: http://www.nordsc.com/blog/
Work: http://www.nordsc.com/
-----------------------------------
Hello Will,

You are right: it's a rule system that takes server results into account in order to decide the next step.

> Simple example is that the server could have changed the data type for
> "buy now" but not changed either the actual schema version (i.e. it's
> still in http://example.com/buynow.xsd, say, just a different
> buynow.xsd than the rules were developed for) or the operation (with
> is still a relation tagged "buyNow"). At a high level this looks the
> same to your rule system, but in truth it could have changed "beneath
> the feet" of the rule system,

It all makes sense... if the server changes (b) the operation's meaning while keeping the same relation name, it seems like it didn't worry about basic backward-compatibility issues. And in (a) all the issues Mike has commented on about the write web appear, i.e. should the media type tell us which data type to use in specific situations, should the representation do that, or could a mix of the two tell us which media type to use?

> and the once "more robust" rule system
> is in truth no better than a hardcoded system.
> Computers have yet to do well when the creators lie to them :). And we
> creators don't lie out of malice, simply laziness.
> A rule system is a good idea, I'm just pointing out that there can be
> a lot of issues with one.

I agree with that, if the server does not even think about being backward compatible - out of laziness, by mistake, and so on.

Any other solutions on how to lessen that pain when those processes evolve?

Regards

Guilherme Silveira
Caelum | Ensino e Inovação
http://www.caelum.com.br/

2010/5/27 Will Hartung <willh@...>:
> On Thu, May 27, 2010 at 12:10 PM, Guilherme Silveira
> <guilherme.silveira@...> wrote:
>> You define conditions as you would analyze your resource. Note that
>> the string can be *anything* that you want, but you checking the
>> presence of a linked resource:
>
> Not to get all buzzwordy here, but that sounds like a rule based system.
> > You have media type behaviors keyed in to the system, somehow, so, for > example, you know what "buy a product" means in terms of what data > gets sent to the "buy now" link, and, of course, that the "#buyNnow" > link actually MEANS "buy now". > > Once you have those base behaviors specified, then you can write write > rules to react to conditions in pursuit of a goal. > > Most folks represent those rules statically, as ordered instructions > in a program. Vs using some kind of rule system that manages to order > the rules based on the state of the application. This "rule soup" is > (ideally) self organizing, but you typically have to do some kind of > chicanery through state variables to keep rules from firing to soon, > or to order the rules when more than one rule has the potential to > fire. That's just a truth when dealing with any rule system. > > I think the key here, the real take away, is the underlying > infrastructure of dealing with the resources -- of coding the media > type and the actions as generic procedures. Whether you use a rule > system to infer your workflow or explicitly lay it out using code, the > underlying relationships coded to the media types are the same. > > Making that underlying infrastructure, the translating of the media > types and supported operations in to code easier to create would be a > real boon. Such as doing a GET from #itemList, a GET that > automatically redirects if it has too. That alone would add a lot of > robustness to client code. > > For most code, IMHO, having to use a rule engine for logic is simply > overkill. Sure it can be more flexible, and resistant to change, but > one issue with that is that the rules were written given basic > assumptions. And the premise is that those assumptions won't change, > even if other premises are made on the hosting server. But, > historically, many a system has overloaded an older concept using the > same identifiers with a new concept. 
And whatever assumption your rule > system made upon that concept are now mistaken because it was never > told about the new assumptions associated with the same identifier as > published by the server system. > >
> The user (human or machine) has expectations that might break (e.g. when Amazon removes the 1-click button) but that is inevitable coupling (Roy called it "intent" a while ago[1]).
> IOW, in REST a (bug free) changing server can never break a user agent's assumptions. Only user expectations can happen to be not met.
> Jan
From that point of view, some expectations will be broken, and systems
won't be compatible with their clients. The problem I see in some
clients is that *every time* there is a change, all clients will
break. Is there anything else we can do to stop breaking every time,
or at least so often?
Cheers
Guilherme Silveira
Caelum | Ensino e Inovação
http://www.caelum.com.br/
2010/5/27 Jan Algermissen <algermissen1971@...>:
> Guilherme,
>
> see at end.
>
> On May 27, 2010, at 9:10 PM, Guilherme Silveira wrote:
>
>> Hello everyone,
>>
>> I would like to get some feedback in my attempt to minimize the
>> coupling we find in typical business process models written in a web
>> services point of view, by using that key aspect of "retrieving a
>> resource", "analyzing", "following a link".
>>
>> http://guilhermesilveira.wordpress.com/2010/05/27/minimize-coupling-with-rest-processes/
>>
>> You define conditions as you would analyze your resource. Note that
>> the string can be *anything* that you want, but you checking the
>> presence of a linked resource:
>>
>> ------------------------------------------------------------------------
>> When "there is a payment" do |resource|
>> resource.values.first.links.payment # check if there is a
>> payment relation
>> end
>> ------------------------------------------------------------------------
>>
>> When executing a step, you follow another link to a resource, and
>> either post, put, patch and so on. Again, navigating the links is
>> following resources, but the string that alias it to yourself should
>> be as clear as you wish.
>>
>> ------------------------------------------------------------------------
>> Then "create the basket" do |resource|
>> basket = {:basket => {:items => [{:id => @desired['id']}]} }
>> @basket_resource = resource.items.links.basket.post! basket #
>> http defined POSTing to that rel meant creating such resource
>> end
>> ------------------------------------------------------------------------
>>
>> And finally, creating 4 different steps, there is an infinite number
>> of possibilites on how to execute my process. This is the actual
>> running code:
>>
>> ------------------------------------------------------------------------
>> When there is a required item
>> And there is a basket
>> But didnt create a basket
>> Then create the basket
>>
>> When there is a required item
>> And there is a basket
>> Then add to the basket
>>
>> When it is a basket
>> And there are still products to buy
>> Then start again
>>
>> When there is a payment
>> Then pay
>> ------------------------------------------------------------------------
>>
>> I've noticed that with such a DSL there are many different types of
>> evolutions that might occur on the server that will keep my clients
>> working just fine.
>>
>> Any opinions?
>
> In the referenced blog you write:
>
> "While integrating systems, implementing access or processes is typically achieved through man ordered list of steps, where one expects specific results from the server."
>
> I think it is important to clearly distinguish between user (human or machine) or user agent (the REST component that acts on behalf of the user):
>
> The *user agent* does not expect specific results beyond that the server does not lie about the links it sends (e.g. <img src=""/> in HTML is expected to reference an image)
>
>
>
> [1] http://www.imc.org/atom-protocol/mail-archive/msg11489.html
>
> (Will take another look at your example but not time right now)
>
>
>
>>
>> Regards
>>
>> Guilherme Silveira
>> Caelum | Ensino e Inovao
>> http://www.caelum.com.br/
>>
>>
>> ------------------------------------
>>
>> Yahoo! Groups Links
>>
>>
>>
>
> -----------------------------------
> Jan Algermissen, Consultant
> NORD Software Consulting
>
> Mail: algermissen@...
> Blog: http://www.nordsc.com/blog/
> Work: http://www.nordsc.com/
> -----------------------------------
>
>
>
>
>
> Making that underlying infrastructure, the translating of the media
> types and supported operations in to code easier to create would be a
> real boon. Such as doing a GET from #itemList, a GET that
> automatically redirects if it has too. That alone would add a lot of
> robustness to client code.
I think that's the code I was trying to achieve earlier, with less
syntax noise and easier access over HTTP, e.g.:
"at('entry_point').get.links.relation.get..."
I will try to keep following both paths.
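A chain like at('entry_point').get.links.relation.get... could be backed by something like the following sketch, assuming a JSON representation with a links array. Every class and method here is hypothetical, not the real library API; links are looked up by rel name, so URIs stay opaque to the caller.

```ruby
# Hedged sketch of a fluent, rel-driven resource navigator.
require 'json'

Link = Struct.new(:href)

class LinkSet
  def initialize(links)
    # Group link hashes like {"rel" => "payment", "href" => "/payments/1"}.
    @by_rel = links.group_by { |l| l["rel"] }
  end

  # links.payment, links.basket, ... resolve by rel name.
  def method_missing(rel, *args)
    entries = @by_rel[rel.to_s]
    raise NoMethodError, "no link with rel=#{rel}" unless entries
    Link.new(entries.first["href"])
  end

  def respond_to_missing?(rel, include_private = false)
    @by_rel.key?(rel.to_s) || super
  end
end

class Resource
  def initialize(body)
    @doc = JSON.parse(body)
  end

  def links
    LinkSet.new(@doc.fetch("links", []))
  end
end

body = '{"links":[{"rel":"payment","href":"/payments/1"}]}'
payment = Resource.new(body).links.payment
```

A real client would add `.get`/`.post!` on `Link` to issue the HTTP request; the point is that client code names only the relation, never the URI.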
Guilherme Silveira
Caelum | Ensino e Inovação
http://www.caelum.com.br/
2010/5/27 Will Hartung <willh@...>:
> On Thu, May 27, 2010 at 12:10 PM, Guilherme Silveira
> <guilherme.silveira@...> wrote:
>> You define conditions as you would analyze your resource. Note that
>> the string can be *anything* that you want, but you checking the
>> presence of a linked resource:
>
> Not to get all buzzwordy here, but that sounds like a rule based system.
>
> You have media type behaviors keyed in to the system, somehow, so, for
> example, you know what "buy a product" means in terms of what data
> gets sent to the "buy now" link, and, of course, that the "#buyNnow"
> link actually MEANS "buy now".
>
> Once you have those base behaviors specified, then you can write write
> rules to react to conditions in pursuit of a goal.
>
> Most folks represent those rules statically, as ordered instructions
> in a program. Vs using some kind of rule system that manages to order
> the rules based on the state of the application. This "rule soup" is
> (ideally) self organizing, but you typically have to do some kind of
> chicanery through state variables to keep rules from firing to soon,
> or to order the rules when more than one rule has the potential to
> fire. That's just a truth when dealing with any rule system.
>
> I think the key here, the real take away, is the underlying
> infrastructure of dealing with the resources -- of coding the media
> type and the actions as generic procedures. Whether you use a rule
> system to infer your workflow or explicitly lay it out using code, the
> underlying relationships coded to the media types are the same.
>
> Making that underlying infrastructure, the translating of the media
> types and supported operations in to code easier to create would be a
> real boon. Such as doing a GET from #itemList, a GET that
> automatically redirects if it has too. That alone would add a lot of
> robustness to client code.
>
> For most code, IMHO, having to use a rule engine for logic is simply
> overkill. Sure it can be more flexible, and resistant to change, but
> one issue with that is that the rules were written given basic
> assumptions. And the premise is that those assumptions won't change,
> even if other premises are made on the hosting server. But,
> historically, many a system has overloaded an older concept using the
> same identifiers with a new concept. And whatever assumption your rule
> system made upon that concept are now mistaken because it was never
> told about the new assumptions associated with the same identifier as
> published by the server system.
>
> Simple example is that the server could have changed the data type for
> "buy now" but not changed either the actual schema version (i.e. it's
> still in http://example.com/buynow.xsd, say, just a different
> buynow.xsd than the rules were developed for) or the operation (with
> is still a relation tagged "buyNow"). At a high level this looks the
> same to your rule system, but in truth it could have changed "beneath
> the feet" of the rule system, and the once "more robust" rule system
> is in truth no better than a hardcoded system.
>
> Computers have yet to do well when the creators lie to them :). And we
> creators don't lie out of malice, simply laziness.
>
> A rule system is a good idea, I'm just pointing out that there can be
> a lot of issues with one.
>
On May 27, 2010, at 11:21 PM, Guilherme Silveira wrote:
>> The user (human or machine) has expectations that might break (e.g. when Amazon removes the 1-click button) but that is inevitable coupling (Roy called it "intent" a while ago[1]).
>> IOW, in REST a (bug free) changing server can never break a user agent's assumptions. Only user expectations can happen to be not met.
>> Jan
> In that point of view, some expectations will be broken, and systems
> wont be compatible with their clients. The problem I see in some
> clients is that *every time* there is a change, all clients will
> break. Is there anything else we can do to stop breaking every time/so
> often?
There is no way to remove the kind of coupling Roy referred to as 'intent'. If you want to buy some book with 1-click at Amazon, and Amazon chooses to (or is forced to) remove the 1-click functionality, you ain't gonna buy that book through 1-click.
What we can do (and this is where REST applies) is ensure that the networked application itself (constituted by the client- and server-side components and the current data flow) will not break, no matter what changes are made to the server.
What is important with regard to machine clients is to choose wisely how far the user agent component extends 'into' the code that makes up the client-side software. Sometimes the user will really be a piece of code, but the more I keep looking, the more I think that the user is often in fact a human being, even in situations where the client-side code seems like a pure machine.
For example, you might have a GUI application that calls a remote API, processes the results, and displays them in the GUI. Such a system will have coupling in the code that calls the remote API. In a RESTful approach, you would think of the whole GUI as the user agent and let the hypermedia drive the GUI components. The only coupling between user agent and server would be around media types.
All steady states (including error steady states) would simply be exposed through the GUI letting the human being resolve the problem (because if there is a problem it will require a human to solve it anyhow).
Jan
>
> Cheers
>
> Guilherme Silveira
> Caelum | Ensino e Inovao
> http://www.caelum.com.br/
>
>
>
> 2010/5/27 Jan Algermissen <algermissen1971@mac.com>:
>> Guilherme,
>>
>> see at end.
>>
>> On May 27, 2010, at 9:10 PM, Guilherme Silveira wrote:
>>
>>> Hello everyone,
>>>
>>> I would like to get some feedback in my attempt to minimize the
>>> coupling we find in typical business process models written in a web
>>> services point of view, by using that key aspect of "retrieving a
>>> resource", "analyzing", "following a link".
>>>
>>> http://guilhermesilveira.wordpress.com/2010/05/27/minimize-coupling-with-rest-processes/
>>>
>>> You define conditions as you would analyze your resource. Note that
>>> the string can be *anything* that you want, but you checking the
>>> presence of a linked resource:
>>>
>>> ------------------------------------------------------------------------
>>> When "there is a payment" do |resource|
>>> resource.values.first.links.payment # check if there is a
>>> payment relation
>>> end
>>> ------------------------------------------------------------------------
>>>
>>> When executing a step, you follow another link to a resource, and
>>> either post, put, patch and so on. Again, navigating the links is
>>> following resources, but the string that alias it to yourself should
>>> be as clear as you wish.
>>>
>>> ------------------------------------------------------------------------
>>> Then "create the basket" do |resource|
>>> basket = {:basket => {:items => [{:id => @desired['id']}]} }
>>> @basket_resource = resource.items.links.basket.post! basket #
>>> http defined POSTing to that rel meant creating such resource
>>> end
>>> ------------------------------------------------------------------------
>>>
>>> And finally, creating 4 different steps, there is an infinite number
>>> of possibilites on how to execute my process. This is the actual
>>> running code:
>>>
>>> ------------------------------------------------------------------------
>>> When there is a required item
>>> And there is a basket
>>> But didnt create a basket
>>> Then create the basket
>>>
>>> When there is a required item
>>> And there is a basket
>>> Then add to the basket
>>>
>>> When it is a basket
>>> And there are still products to buy
>>> Then start again
>>>
>>> When there is a payment
>>> Then pay
>>> ------------------------------------------------------------------------
>>>
>>> I've noticed that with such a DSL there are many different types of
>>> evolutions that might occur on the server that will keep my clients
>>> working just fine.
>>>
>>> Any opinions?
>>
>> In the referenced blog you write:
>>
>> "While integrating systems, implementing access or processes is typically achieved through man ordered list of steps, where one expects specific results from the server."
>>
>> I think it is important to clearly distinguish between user (human or machine) or user agent (the REST component that acts on behalf of the user):
>>
>> The *user agent* does not expect specific results beyond that the server does not lie about the links it sends (e.g. <img src=""/> in HTML is expected to reference an image)
>>
>>
>>
>> [1] http://www.imc.org/atom-protocol/mail-archive/msg11489.html
>>
>> (Will take another look at your example but not time right now)
>>
>>
>>
>>>
>>> Regards
>>>
>>> Guilherme Silveira
>>> Caelum | Ensino e Inovao
>>> http://www.caelum.com.br/
>>>
>>>
>>> ------------------------------------
>>>
>>> Yahoo! Groups Links
>>>
>>>
>>>
>>
>> -----------------------------------
>> Jan Algermissen, Consultant
>> NORD Software Consulting
>>
>> Mail: algermissen@...
>> Blog: http://www.nordsc.com/blog/
>> Work: http://www.nordsc.com/
>> -----------------------------------
>>
>>
>>
>>
>>
-----------------------------------
Jan Algermissen, Consultant
NORD Software Consulting
Mail: algermissen@...
Blog: http://www.nordsc.com/blog/
Work: http://www.nordsc.com/
-----------------------------------
On Thu, May 27, 2010 at 2:23 PM, Guilherme Silveira
<guilherme.silveira@...> wrote:
>> Making that underlying infrastructure, the translating of the media
>> types and supported operations in to code easier to create would be a
>> real boon. Such as doing a GET from #itemList, a GET that
>> automatically redirects if it has too. That alone would add a lot of
>> robustness to client code.
>
> I think thats the code I was trying to achieve earlier, less syntax
> noise, easier to access through i.e http:
> "at('entry_point').get.links.relation.get..."
>
> I will try to keep following both paths.
Truthfully, this is where the robustness of a REST system comes from.
A REST system's power, IMHO, is that its underlying structure need not
be static. Simply ensuring that clients are looking up URIs out of
payloads is a great leap in robustness, because well-behaved servers
can send out links to other physical systems. The stateless nature
allows for easier back-end transparency.
Consider these 4 basic features of REST over HTTP, all of which add to
its robustness and flexibility.
1) Servers servicing requests behind load balancers. Since much of
HTTP can be connectionless, load balancers can more freely direct
traffic to different systems based on load or failure or whatever.
This routing is (or can be) invisible to the client. Since REST is
stateless, server transitions can ideally be done more easily.
2) Proxies. HTTP proxies add robustness by offloading the origin
servers for frequent, appropriate requests. The proxy can even serve
up content if the host server is down. Caching has lots of issues and
is difficult to do right, but the capability is there, again, at the
HTTP protocol level. The REST client wouldn't even be aware that a
proxy had intervened.
3) HTTP redirect. At the HTTP protocol level, this is a client routing
instruction. Most every other popular protocol lacks this kind of
feature: the ability to tell someone at Window A to go to Window B.
This is at the HTTP protocol level, not the REST level. A
well-implemented client will not even know the re-routing has
happened. A smart client will take the redirect and, assuming it's a
permanent redirect, perhaps apply the same redirect automatically in
the future so as not to incur the cost of the redirect later. The
smart client can maintain an internal table of these, perhaps.
4) HATEOAS. Following links. A client can connect to System A, whose
payload directs them to System B. At the base level, the client is
aware of this change, since the underlying address changes (and of
course could run into access issues -- perhaps the new address is to
a blocked server, for example). But at the REST Client level, this is
effectively invisible. Since the URIs are opaque, "one is as good as
the other" as long as its role in the workflow is well defined. The
"#buyNow" can be a completely different URI from request to request,
but, ostensibly, any of them could be used by a REST client. This
ability lets the servers direct the clients "transparently", at the
REST protocol level, for whatever reason necessary (load, failure, new
version of service, etc.).
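Points 3 and 4 can be sketched in a few lines of code. This is a minimal, illustrative sketch -- `SmartClient`, the link hash, and the stubbed transport lambda are all hypothetical names, not any real library's API -- of a client that looks up URIs by link relation from the payload and keeps the "internal table" of permanent redirects mentioned above:

```ruby
# Hypothetical smart client: follows links by relation (point 4) and
# remembers permanent redirects so later requests skip the round trip
# (point 3). Transport is a stub; a real client would use Net::HTTP.
class SmartClient
  def initialize(transport)
    @transport = transport  # lambda: uri -> { status:, location:, links:, body: }
    @redirects = {}         # remembered 301s: old URI -> new URI
  end

  # Look up the target URI by link relation from the representation's
  # payload; the client never hardcodes the address.
  def follow(representation, rel)
    get(representation[:links].fetch(rel))
  end

  # GET that transparently handles a 301 and records the new location.
  def get(uri)
    uri = @redirects[uri] || uri
    response = @transport.call(uri)
    if response[:status] == 301
      @redirects[uri] = response[:location]
      response = @transport.call(response[:location])
    end
    response
  end
end

# Stub transport simulating a moved resource.
calls = []
transport = lambda do |uri|
  calls << uri
  case uri
  when "/tasks"     then { status: 200, links: { "item" => "/tasks/1" }, body: "list" }
  when "/tasks/1"   then { status: 301, location: "/tasks/one" }
  when "/tasks/one" then { status: 200, links: {}, body: "task one" }
  end
end

client = SmartClient.new(transport)
list = client.get("/tasks")
task = client.follow(list, "item")  # transparently follows the 301
client.follow(list, "item")         # second time goes straight to the new URI
```

The second `follow` never touches `/tasks/1` again; the redirect table rewrote the URI before the request was made.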
The first two work with "dumb clients". The second two require a more
robust client. Making these robust clients easier to write is key, I
think, to moving REST farther down field and to easier, BETTER, adoption.
I don't think there is much beyond this that can be done to keep
stupid applications from being stupid. Instead, more work can be done
to promote better practices, promote solid examples of those
practices, and perhaps tooling or libraries to enable better
practices.
For example, no matter what is done at the client end, how it's coded,
etc., there's not much that can be done if the host server goes dead.
And as the host server's service definition drifts from the
specification implemented by the client, the closer to death it
becomes. It takes conscious effort and work to make a robust, backward
compatible service layer that ages and evolves well.
It's easy to imagine how a service can add a new media type for a new
version of a service; how existing types can have new references added
to them promoting the new types, while also keeping references to the
original types for existing clients.
How the clients need only implement the new media type, and change to
start looking for the new relationships and following those, thus
incrementally changing. Then, after 6 months, "suddenly" the service
removes the old references, and perhaps support for the old media
type. That, IMHO, is a well behaved service. But that adds an extra
work burden to the provider.
I don't see good ways of making that easier, save simply trying to
empower developers to perhaps make those kinds of well behaved servers
easier to create for their users. Well Behaved Clients and Well
Behaved Servers are the key to the real robustness. REST gives us the
ability to do both, and HTTP happens to manifest some of those
characteristics in handy ways.
For that matter, if you're coming to Australia, make sure to drop by for a coffee.

Cheers,

On 27/05/2010, at 7:06 AM, Alan Dean wrote:
> Glenn,
>
> TechEd is fine for US-based folks; any chance of doing something over at TVP in the UK for the Europeans? (I note that quite a few on the list, including myself, are on the right-hand side of the pond)
>
> Regards,
> Alan Dean
>
> On Wed, May 26, 2010 at 21:11, Glenn Block <glenn.block@gmail.com> wrote:
>> What if we arranged it "off site" where you wouldn't need attendance. Would you come?
>>
>> Glenn
>>
>> On Wed, May 26, 2010 at 1:07 PM, Jim Webber <jim@...> wrote:
>>> > Disclaimer: This is not marketing promoting our TechEd event :-)
>>> Disclaimer: this is me blagging, probably unsuccessfully.
>>>
>>> > Are any folks on this list planning to attend Tech-Ed 2010 on the 7th? Reason: I am discussing putting together a workshop / informal session on where we go with our REST/HTTP efforts at TechEd. It would be great if some of you guys were there (assuming you are attending the event). If you are interested in such a session let me know.
>>>
>>> Any free tickets? :-)
>>>
>>> Jim

--
Mark Nottingham http://www.mnot.net/
Hello Will,
That's great... all four constraints you showed me are found within
the code I posted, so I totally agree with the definition; it might
only be a matter of my lack of mastery of the English language:
> 1) Servers servicing requests behind load balancers... Since REST is
> stateless, server transitions ideally can be done more easily.
The server is stateless, including - *but not limited to* - having no
sessions, cookies or anything alike that connects a client to a
specific state in memory.
> 2) Proxies... Caching has lots of issues, and
> is difficult to do right. But the capability is there, again, at the
> HTTP protocol level. The REST Client would be ignorant of the fact
> that a Proxy intervened.
The client actually behaves as a non-dumb REST client; i.e., if you
have a resource in your hands and ask to GET it again through another
link relation, it will not issue the GET request if the resource
representation is still fresh. It also works fine with ETag and
Last-Modified headers.
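The freshness behaviour described here can be sketched roughly as follows. This is not Restfulie's actual implementation -- `CachingClient`, the stubbed transport, and the response hashes are illustrative stand-ins for a real HTTP layer -- but it shows the two cases: a fresh representation served without any request, and a stale one revalidated with `If-None-Match`:

```ruby
# Hypothetical caching client: serves fresh entries from the cache and
# revalidates stale ones with a conditional GET. Transport is stubbed.
class CachingClient
  Entry = Struct.new(:body, :etag, :expires_at)

  def initialize(transport)
    @transport = transport
    @cache = {}
  end

  def get(uri, now: Time.now)
    entry = @cache[uri]
    return entry.body if entry && now < entry.expires_at  # fresh: no request made

    headers = entry ? { "If-None-Match" => entry.etag } : {}
    response = @transport.call(uri, headers)
    if response[:status] == 304                           # revalidated, still good
      entry.expires_at = now + response[:max_age]
    else
      entry = @cache[uri] = Entry.new(response[:body], response[:etag],
                                      now + response[:max_age])
    end
    entry.body
  end
end

requests = []
transport = lambda do |uri, headers|
  requests << headers
  if headers["If-None-Match"] == "v1"
    { status: 304, max_age: 60 }                          # unchanged on the server
  else
    { status: 200, body: "tasks", etag: "v1", max_age: 60 }
  end
end

client = CachingClient.new(transport)
t0 = Time.now
client.get("/tasks", now: t0)        # first fetch goes to the network
client.get("/tasks", now: t0 + 10)   # fresh: answered from the cache, no request
client.get("/tasks", now: t0 + 120)  # stale: conditional GET, answered with 304
```

Only two requests ever reach the transport; the middle call is satisfied entirely from the cache.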
> 3) HTTP Redirect. At the HTTP Protocol, this is a client routing
> instruction. Most every other popular protocol lacks this kind of
> feature, the ability to tell someone at Window A to go to Window B.
> This is at the HTTP Protocol, not the REST protocol. A well
> implemented client will not even know the re-routing has happened...
Same thing here. Where the server tells us to redirect and the HTTP spec
says we can follow without asking the client (there are some
restrictions, right), it follows without the client knowing about it.
> 4) HATEOAS. Following links. A client can connect to System A, whose
> payload directs them to System B.
It follows links. Every decision it makes is according to the resource
representation in its hand, and every decision means following a link
(possibly sending some payload) to system B.
In the example I mentioned, if you start supporting a new media type
on the server side, A keeps running. If part of the A application is
changed to support the new media type (and not the old one), it still
runs. If system A points to a buying system B, it will also follow
this link without being aware of that.
Those four constraints are supported in the internal layers... the DSL
makes it easier to read that you are actually following a link, i.e.:
"Then prepare the payment" is actually defined earlier in the code as
following links, taking care of media type information and data
formats:
resource.links.payment.post payload
"Then access an item" could be implemented as:
resource.items.item[5].links.self.get
In this case, automatic redirection, cache and everything else takes place.
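The aliasing could look something like this sketch. None of these names come from Restfulie -- `Resource`, `Link`, `step`, and `run` are hypothetical stand-ins -- but it shows how a human-readable statement can be bound to raw link-following code:

```ruby
# Hypothetical stand-in for a hypermedia link: knows its href and how
# to issue requests against it (here just returning a description).
Link = Struct.new(:href) do
  def post(payload); "POST #{href} #{payload}"; end
  def get;           "GET #{href}";             end
end

# Hypothetical resource: resource.links.payment resolves the Link
# registered under the "payment" relation.
class Resource
  def initialize(links); @links = links; end
  def links; self; end
  def method_missing(rel, *args); @links.fetch(rel.to_s) { super }; end
  def respond_to_missing?(rel, _include_private = false); @links.key?(rel.to_s); end
end

# The DSL layer: register readable step names over raw link code.
STEPS = {}
def step(name, &body); STEPS[name] = body; end
def run(name, resource); STEPS[name].call(resource); end

step("prepare the payment") { |r| r.links.payment.post("amount=10") }
step("access the item")     { |r| r.links.item.get }

resource = Resource.new(
  "payment" => Link.new("/orders/7/payment"),
  "item"    => Link.new("/items/5")
)
run("prepare the payment", resource)  # follows the payment link via POST
```

The readable statement and the `resource.links...` expression are the same code; the DSL only names it.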
Summing up, all four constraints are in the "resource...." code,
which is aliased into human-readable statements through a DSL. But you
still use HATEOAS to follow links and navigate through state,
support caching and so on.
What do you think?
> I don't see good ways of making that easier, save simply trying to
> empower developers to perhaps make those kinds of well behaved servers
> easier to create for their users.
So from the tool point of view, I believe I can call that REST.
Regards
Guilherme Silveira
Caelum | Ensino e Inovação
http://www.caelum.com.br/
I think what Glenn is trying to say is that MS is rebuilding OpenRasta. :)

________________________________
From: rest-discuss@yahoogroups.com [rest-discuss@yahoogroups.com] on behalf of Glenn Block [glenn.block@...]
Sent: 26 May 2010 21:45
To: Jan Algermissen
Cc: Jim Webber; Rest Discussion Group
Subject: Re: [rest-discuss] Chatting about MS future direction on REST and HTTP

This is a serious REST and HTTP effort. I keep saying HTTP since it's not only about pure REST. When I say HTTP I mean we will still support RPC-style POX / plain old JSON, but that is not our primary intent. It is supporting a natural programming model for a resource-oriented service approach.

This is not about retrofitting to WCF, been there done that. :-) We may use some core pieces as appropriate, but we have in the works a new client and server story for building HTTP-style services.

Thanks
Glenn

On Wed, May 26, 2010 at 1:29 PM, Jan Algermissen <algermissen1971@...> wrote:

On May 26, 2010, at 10:11 PM, Glenn Block wrote:
> What if we arranged it "off site" where you wouldn't need attendance. Would you come?

Glenn, can you make a little more clear how 'serious' the REST effort is from Microsoft as a company for enterprise IT solutions? Does your effort indicate a move by Microsoft to improve the situation of enterprise integration? Or is it merely (no intent to insult you) a side project to add a little REST support to WCF?

I am asking because, if it is the former, it will be very important for you to get it right and not only add support for turning objects into XML and sending that out via HTTP. In that case, I'd be seriously interested to join you.
Also in the case of the former (and I guess this is what Jim drove at), Microsoft should probably add some serious bait piece (such as free admission :-)

Jan

-----------------------------------
Jan Algermissen, Consultant
NORD Software Consulting
Mail: algermissen@...
Blog: http://www.nordsc.com/blog/
Work: http://www.nordsc.com/
-----------------------------------
+1

________________________________
From: rest-discuss@yahoogroups.com [rest-discuss@yahoogroups.com] on behalf of Alan Dean [alan.dean@...]
Sent: 26 May 2010 22:08
To: Rest Discussion Group
Subject: Re: [rest-discuss] Chatting about MS future direction on REST and HTTP

echo

On Wed, May 26, 2010 at 21:54, Jan Algermissen <algermissen1971@...> wrote:

On May 26, 2010, at 10:48 PM, Glenn Block wrote:
> Which btw, what do you guys think of WADL?

WADL violates REST (the hypermedia constraint) if used at design time. You can use WADL as a forms language at runtime, but I personally doubt that this is useful. Hypermedia controls can be very easily expressed in the media types themselves (just as HTML does for <a>, <img> or <form>). Why would you add WADL on top of HTML, for example?

Jan
Hi all,

I'd appreciate a critique of my RESTful API.

It's a project management application, and the client requests to see tasks, or types of tasks. So I have a URL /Tasks, another /CompletedTasks and another /PendingTasks.

Unfortunately, different parts of the application require different task attributes - one part may just require the names & ids of completed tasks, another may also require the date a task was completed. I'm keen to minimise the amount of data transferred, so don't want to pass all task attributes for all queries.

I've had a look at the Google APIs, and Google has a 'fields' parameter in the URL. I've had a go at implementing this, but it makes my lovely tidy RESTful URLs look ugly and difficult to read/review.

Thus, I'm considering alternative options. I could put the fields attribute in the HTTP headers instead of the URL. Alternatively, I could analyse my application to determine the bare minimum set of fields that meets the different UI requirements. Another option would be to introduce a 'short' boolean parameter that when true returns an agreed short set of fields, and when false returns the full set.

Has anybody had to solve a problem like this? Or offer any wisdom?

cheers,
Ian
"ian.mayo" wrote:
> I'm keen to minimise the amount of data transferred, so don't want to
> pass all task attributes for all queries.

REST is an architectural style favoring the large-grain transfer of data; as per Fielding, 5.1.5: "The trade-off, though, is that a uniform interface degrades efficiency, since information is transferred in a standardized form rather than one which is specific to an application's needs."

Don't be too keen to optimize your way around REST... the result is an undefined architectural style, as opposed to a clearly-defined style guided by constraints.

> I've had a look at the Google APIs, and Google has a 'fields'
> parameter in the URL. I've had a go at implementing this, but it
> makes my lovely tidy RESTful URLs look ugly and difficult to
> read/review.

REST has nothing to do with your URI pattern. If your URIs become difficult to maintain, yeah, that's a problem, but not one that has to do with REST. What I'm saying is, if your assumption is leading you in a direction you don't want to go (request parameters), perhaps you should question your assumption.

Designing responses which are cacheable creates more real-world bandwidth savings than designing responses which are more granular and application-specific. This is counter-intuitive, but a key aspect of the REST style.

-Eric
Maybe you could use different media-type definitions for the different field sets. That would have the disadvantage of using non-standard media types and a profusion of different custom media types for very similar entities.

Or maybe use a general media type like application/xml with the parameter part to specify the set. (Parameters MAY follow the type/subtype in the form of attribute/value pairs.)

_________________________________________________
Melhores cumprimentos / Beir beannacht / Best regards
António Manuel dos Santos Mota
http://card.ly/amsmota
_________________________________________________
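The media-type-parameter idea above might be sketched like this on the server side. The `FIELD_SETS` table, the header format, and both function names are illustrative assumptions, not an existing API:

```ruby
# Hypothetical server-side projection: the client names a field set via
# a media-type parameter (e.g. "application/xml; fields=summary") and
# the server returns only those attributes.
FIELD_SETS = {
  "summary" => %w[id name],
  "full"    => %w[id name completed_on assignee],
}

# Pull attribute/value parameters off the media type; unknown or absent
# field sets fall back to the full representation.
def fields_for(accept_header)
  params = accept_header.split(";").drop(1).map { |p| p.strip.split("=", 2) }.to_h
  FIELD_SETS.fetch(params["fields"], FIELD_SETS["full"])
end

# Keep only the attributes named by the requested field set.
def project(task, accept_header)
  wanted = fields_for(accept_header)
  task.select { |key, _| wanted.include?(key) }
end

task = { "id" => 1, "name" => "Ship it", "completed_on" => "2010-05-28",
         "assignee" => "ian" }
project(task, "application/xml; fields=summary")  # trims to id and name
```

Because the field set is part of the negotiated media type rather than the URI, each URI still identifies one resource and caches can key on the `Vary`'d header.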
mike amundsen wrote:
> 1) i like the idea of using link relations this way.

Thanks; but that's all it is right now, an idea.

> 3) are you using RDF as the "base" medium and transforming that based
> on conneg (XHTML, XFORMS, etc.)?

No, the base medium is Atom. XSLT transforms the Atom into XHTML marked up with RDFa. From that XHTML, the RDFa may be extracted into RDF using XSLT; that technology is called GRDDL. For my wiki/weblog/forum system, RDF doesn't make sense as a storage medium, particularly as compared to Atom.

> 2) i'm curious about your use of RDF here. are you using an existing
> vocab as your main RDF serialization? have you defined a vocabulary
> explicitly? using adhoc rdf:Description elements?

I have a vocabulary consisting of terms like wiki, weblog, forum, article, comment and so forth. Instead of formalizing a vocabulary first, I've taken a path of allowing it to reveal itself organically as the system is developed. As of now, 'wiki', 'weblog' and 'forum' are expressed as XSLT mode names -- I'd never have gotten there if I'd started by using OWL to formalize this ontology, and locked myself into making these terms XML elements. These terms have everything to do with the display and behavior of the resulting website, but nothing to do with the underlying data, where everything is just an Atom Entry. So the challenge for a domain-specific RDF vocabulary is to map these terms from the application (the XHTML executing in the browser) back to the data structures.

> Is any part of your work available via open source? code? docs? etc.

I set up the wiski.org domain to house various private-process, public-domain work I need hosted. Once my use of my own RDF vocabulary for Content Management Systems based on Atom stabilizes, I will publish the results on wiski.org. Then I will put it forth as an open standard through some other body, ceding control. The same goes for the /date service.
Currently, it just converts UNIX timestamps to Gregorian calendar dates. Originally, it was intended to support historical dates, which turned out to be much harder than it sounded. Eventually, /date will support historical dates properly, allowing service consumers to obtain non-Gregorian results, or multiple results -- what date it was in Germany depended for a long time on what religion you were, so neither Gregorian nor Julian is the "correct" calendar for these dates.

When the basic /date service framework is nailed down beyond any points of contention between me and my partner, /date will be open-sourced, as global contribution will be required for what amounts to cultural (more so than language) translation, and the addition of holidays. If you look hard enough, the code for /date and the node.js-based httpd it runs on are available, and patches are accepted, but like the rest of Bison Systems' work it is not open-source at this time.

> Is any part of your work available via open source? code? docs? etc.

I'll post the link to my demo site again one of these days; for now you can search for it or contact me off-list. Whenever I publish the link, I get lots of bot hits, whereas since the last restart most of the interest has been from actual people:

Server : bison_nanoweb/2.2.9.1 (FreeBSD 7.2; PHP/5.2.9)
Started : Sun, 04 Apr 2010 01:44:58 GMT
Uptime : 54d, 14h, 22m, 20s
Memory : 1867 KB
Total hits/connections : 9988/7869 (avg 0.13/m 0.00/s)
Total sent size : 9,890 KB (avg 0.13 KB/m 0.02 Kbit/s)

HTTP responses statistics
200 OK : 4230
206 Partial Content : 11
304 Not Modified : 2969
400 Bad Request : 4
404 Not Found : 2728
416 Requested Range Not Satisfiable : 1
500 Internal Server Error : 12
501 Method Not Implemented : 32

Those 500's weren't there the other day, I'll have to look into that. Only GET and HEAD are implemented on the demo, so the 501's aren't unexpected; neither are the 404's, due to all the dead links.
The proof of REST in general, and my approach in particular, is in the pudding: 1KB/hit average with a 70% cache-hit ratio. HTTP pipelining looks to be yielding a 20% reduction in connection overhead. Systems not using REST are highly unlikely to even approach such numbers.

All my work on the demo is under the CC 3.0 Attribution license, which works out to "if you use my vocabulary you must use my namespace" in practice. Same goes for any media types on wiski.org -- if you use them, don't change the media-type identifiers. To my mind, an XML namespace satisfies attribution for most potential re-use of my work.

-Eric
ian.mayo wrote:
> So I have a URL /Tasks, another with /CompletedTasks and another with
> /PendingTasks.
>
> Unfortunately, different parts of the application require different
> task attributes - one part may just require the names & ids of
> completed tasks, another may also require the date a task was
> completed.
>
> I'm keen to minimise the amount of data transferred, so don't want to
> pass all task attributes for all queries.

As Eric so eloquently put it, REST (and therefore HTTP) is optimized for large-grain messages. Don't fight that--embrace caching instead (or don't use REST if you really, really need small messages).

> I've had a look at the Google APIs, and Google has a 'fields' parameter
> in the URL. I've had a go at implementing this, but it makes my lovely
> tidy RESTful URLs look ugly and difficult to read/review.
>
> Thus, I'm considering alternative options. I could put the fields
> attribute in the http header attributes instead of the URL.

Allowing the client to configure which fields are returned sounds like a recipe for cache invalidation headaches galore.

> Alternatively, I could analyse my application to determine the bare
> minimum set of fields that meet the different UI requirements.

That analysis can only help--in my experience, 9 times out of 10 you'll find pretty clear boundaries between the attribute sets, and you can go on your merry way. That's one reason why I put explicit support for "attribute subsets" (which I call "fragments") in the Shoji Catalog Protocol [1]. Having that subsetting baked into the protocol constrains what cache entries you need to invalidate when the parent entity changes. You might consider a similar constraint for your own API/media types. The other one time out of 10, overlapping attribute sets are a "resource smell" that will lead you to refactor your resources into a better design.
> Another option would be to introduce a 'short' boolean parameter -
> that when true returns an agreed short set of fields, and when false
> returns the full set.

That might be a reasonable solution, although you might be better off in that case just making a separate URL for the "base set" and another for the "extra data", omitting those fields which are in the "base set".

Robert Brewer
fumanchu@...

[1] http://www.aminus.org/rbre/shoji/shoji-draft-01.txt
On Fri, May 28, 2010 at 10:28 AM, Robert Brewer <fumanchu@...> wrote:
> As Eric so eloquently put it, REST (and therefore HTTP) is optimized for
> large-grain messages. Don't fight that--embrace caching instead (or
> don't use REST if you really really need small messages).

If we've learned anything, much of the time, the cost of the request itself dwarfs the cost of the actual payload delivered. Not that data size doesn't matter, but for many applications, the data transfer time is far less than all of the other components of the remote transaction.

If message traffic is an issue, consider perhaps adding some capability for bulk messages and transfer for those cases where the message overhead, based on transaction volume, is impacting your overall throughput.

Regards,

Will Hartung
I guess my overall point is that I think there's this mistaken idea that
REST gives us something "for free", such as being able to compensate
for "stupid servers". By pointing out those factors of HTTP REST, I
was suggesting that REST allows us to build more robust architectures,
partially because of the 4 points, but those attributes can't save us
from bad clients or bad servers.
I think that your goal of using a rule language to somehow enhance
reliability by making the operation of the workflow more
"declarative" is a noble one, but, in the end, while on the one hand
it may seem more robust, I think the robustness comes from
the quality, number, and interactions of the rules more than from simply
being in a rule engine at all. The one way a rule engine is
"better" than "just code" is for those cases that are
parallel, and independent of the other rules in the system, where
order of execution really isn't important (and there are many good
examples of where this could happen); there, the declarative nature of
just "adding a rule to the soup" is a nice, elegant mechanism.
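That "add a rule to the soup" idea can be sketched as a tiny engine of independent condition/action pairs, where order of execution genuinely does not matter. `RuleEngine` and the example rules below are hypothetical, not any particular rule system:

```ruby
# Hypothetical rule engine: each rule is an independent condition/action
# pair, so new rules can be dropped in without touching existing ones,
# and the result does not depend on the order they were added.
Rule = Struct.new(:condition, :action)

class RuleEngine
  def initialize
    @rules = []
  end

  # "Add a rule to the soup": registration is the whole integration step.
  def add(condition, action)
    @rules << Rule.new(condition, action)
  end

  # Fire every rule whose condition holds against the response.
  def apply(response)
    @rules.each { |r| r.action.call(response) if r.condition.call(response) }
    response
  end
end

engine = RuleEngine.new
engine.add(->(r) { r[:status] == 301 }, ->(r) { r[:notes] << "cache new location" })
engine.add(->(r) { r[:etag] },          ->(r) { r[:notes] << "store validator" })

response = engine.apply({ status: 301, etag: "v1", notes: [] })
```

Both rules fire independently here; neither reads the other's output, which is exactly the case where the declarative style pays off.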
But simple rules lead to simple ("stupid, rigid") clients, and the
value of the rule system itself is questionable. Complex rules lead to
complex clients, and complex interactions that can end up, with
tightly coupled rules, being little more than just raw code simply
organized differently.
However, with solid infrastructure, such as the stuff you're
mentioning, that's the real value. Then, whether using a rule system
or "just code", reliable, powerful clients can more readily be made.
Regards,
Will Hartung
Really late to this conversation, but still...

On May 19, 2010, at 10:22 PM, Jim Webber wrote:
> > Have a look at JAX-RS from the Java world.
>
> I'm not certain that today's JAX-RS offers much more than today's WCF in terms of REST support.

Surely you can't be serious?

> If Glenn's team are going to do "REST like they meant it" to paraphrase Guilherme, I don't think that JAX-RS is the right way to go.

JAX-RS is server-side only, and the more interesting stuff happens on the client. But I wonder what exactly JAX-RS does wrong in your opinion (from a REST POV)?

> Perhaps some of the toolkits that also happen to implement JAX-RS might be useful (e.g. Jersey), because they're starting to support hypermedia (thank you Restfulie for being disruptive there!).

I am not at all sure the (experimental) server-side part of Jersey is something that should be used as a role model. I do think Restfulie's client support is very interesting, though.

Stefan
--
Stefan Tilkov, http://www.innoq.com/blog/st/
> JAX-RS is server-side only, and the more interesting stuff happens on
> the client. But I wonder what exactly JAX-RS does wrong in your opinion
> (from a REST POV)?

I have to chime in, just for clarification: JAX-RS *currently* is server-side only, but chances are good that Jersey's current API will be proposed as a mandatory part of the specification in a future release, as lots of users asked for that, there is a pending RFE, and Paul Sandoz (spec member and RI author) is not unwilling to take that step...

Regards
Markus, JAX-RS / Jersey contributor
Head Crashing Informatics / http://www.headcrashing.eu
It might be worth looking at the approach used by LinkedIn.

http://www.slideshare.net/linkedin/building-consistent-restful-apis-in-a-highperformance-environment

The presentation above goes into their mechanism for allowing the client
to optimise the request, and the reasons for it. Specifically, it allowed
a client to get less data but also be less 'chatty' (reduce total
requests).

> Allowing the client to configure which fields are
> returned sounds like a recipe for cache invalidation
> headaches galore.

I don't see why. You could end up with fewer HTTP cache hits due to more
URL variety, but that is it as far as I can see. Anyone care to expand
on this more?

> the cost of the request itself dwarfs the cost
> of the actual payload delivered.

Actually, this is also a reason for the optimisation LinkedIn uses, and
for me this is the number one reason when your resource is an ORM/DB.
Specifically, you can optimise the ORM/DB access and at the same time
reduce the total number of requests. For some applications this could
be significant. LinkedIn quote significant savings in bytes shipped. As
I see it, if your traffic is big enough then the smaller payload can
pay off.

For those interested (Java, JAX-RS, Ebean ORM) there is (JSON only)
support for this with Ebean ORM's JAX-RS integration. It's early days
(first release), currently JSON only and only relevant for ORM/DB
resources, but it might be interesting for folks looking for this type
of optimisation.

http://www.avaje.org/ebean/jaxrs.html

Cheers, Rob.

--- In rest-discuss@yahoogroups.com, Will Hartung <willh@...> wrote:
>
> On Fri, May 28, 2010 at 10:28 AM, Robert Brewer <fumanchu@...> wrote:
>
> > As Eric so eloquently put it, REST (and therefore HTTP) is optimized for
> > large-grain messages. Don't fight that--embrace caching instead (or
> > don't use REST if you really really need small messages).
>
> If we've learned anything, much of the time, the cost of the request
> itself dwarfs the cost of the actual payload delivered.
> Not that data
> size doesn't matter, but for many applications, the data transfer time
> is far less than all of the other components of the remote
> transaction.
>
> If message traffic is an issue, consider perhaps adding some
> capability for bulk messages and transfer for those cases where the
> message overhead based on transaction volume is impacting your overall
> throughput.
>
> Regards,
>
> Will Hartung
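[Editorial sketch] The field-selection optimisation Rob describes above can be illustrated roughly as follows. This is a hypothetical Python sketch, not LinkedIn's or Ebean's actual mechanism: the `fields` parameter name and the `prune_representation` helper are illustrative only.

```python
# Sketch of the field-selection idea: the client names the fields it
# wants via a "fields" query parameter, and the server prunes the
# representation before serialising it. Hypothetical names throughout.
from urllib.parse import parse_qs

def prune_representation(resource: dict, query_string: str) -> dict:
    """Return only the requested fields; the full resource if none given."""
    requested = parse_qs(query_string).get("fields")
    if not requested:
        return dict(resource)
    wanted = set(requested[0].split(","))
    return {k: v for k, v in resource.items() if k in wanted}

person = {"id": 42, "name": "Ada", "email": "ada@example.org",
          "connections": [7, 9, 23]}

# A client needing only id and name avoids shipping the heavier fields:
print(prune_representation(person, "fields=id,name"))
# → {'id': 42, 'name': 'Ada'}
```

Note that each distinct "fields" combination yields a distinct URL, which is exactly the extra cache-key variety discussed in the thread.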
(Unreferenced citations in my posts here, as always, come from Roy's
thesis.)
(Long, lecture-y posts, as always, come about when I feel that
fundamental violations of Web architecture are being discussed.)
Kris Zyp wrote:
>
> On 5/11/2010 4:31 AM, Eric J. Bowman wrote:
> > Kris Zyp wrote:
> >>
> >> I believe one should be able to assume that the content type of the
> >> representation returned from a server from GET for URI is
> >> acceptable in a PUT request to that server for the same URI.
> >>
> >
> > Absolutely not. The late binding of representation to resource
> > precludes this assumption. HTML is capable of providing an
> > interface to an Atom system. What media type to PUT or POST to the
> > system is explicitly provided in the markup, i.e. a
> > self-documenting interface.
> >
> > Assuming that you can PUT or POST HTML to my system because that's
> > the media type I sent on GET would not work -- I derive HTML from
> > Atom, not the other way around.
> >
> > A PUT of an HTML document would show an intent to replace the
> > self-documenting interface provided by the HTML representation, with
> > some other application state. HTML is generated by my system, it
> > is not subject to change via PUT to negotiated resources which
> > happen to return text/html or application/xhtml+xml on GET with a
> > Web browser, but happen to return Atom to a feed reader.
>
> I certainly agree that receiving a media type from a server does not
> guarantee that a server can receive that same media type from the
> client. However, in the absence of knowledge of a different explicit
> media type preference (from the media type definition) when it comes
> to negotiating an acceptable type with the server, pretending that all
> media types are equally likely is as silly as pretending that every
> language is equally likely to be understood in response to someone
> who speaks to you in French.
>
I don't pretend that my website output will be understood by someone
whose browser indicates that they only know French. But, I can respond
406 with the default English variant as entity-body, because it's the
best representation I have. I "send whatever is most likely to be
optimal first and then provide a list of alternatives for the client to
retrieve if the first response is unsatisfactory."
The French user's browser may not know that the user also understands
the German variant of my (hypothetical) website, but the French user
may recover from the unintelligible English variant by selecting the
German variant, via link rel='alternate' elements or headers with lang
attributes presented in browser chrome, or via links displayed within
page content (the latter being WAI-approved).
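[Editorial sketch] The respond-first-recover-later behaviour Eric describes can be sketched as a minimal server-side helper. This is a hypothetical illustration of the idea (the variant table and `respond` function are invented for this example), not any framework's API:

```python
# "Send whatever is most likely to be optimal first and then provide a
# list of alternatives" -- hypothetical language-negotiation sketch.
AVAILABLE = {"en": "<html lang='en'>...</html>",
             "de": "<html lang='de'>...</html>"}
DEFAULT = "en"

def respond(accept_language: str):
    """Return (status, language, entity_body, alternates)."""
    preferred = [t.split(";")[0].strip() for t in accept_language.split(",")]
    for lang in preferred:
        if lang in AVAILABLE:
            return 200, lang, AVAILABLE[lang], sorted(AVAILABLE)
    # No acceptable variant: respond 406, but still ship the default
    # variant as the entity-body, plus the alternates the user can
    # recover with (e.g. rel='alternate' links).
    return 406, DEFAULT, AVAILABLE[DEFAULT], sorted(AVAILABLE)

status, lang, body, alts = respond("fr")
# A French-only browser gets 406 plus the English variant and the
# de/en alternates -- the error response is the recovery mechanism.
```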
Don't work around REST's respond-first-recover-later approach by trying
to first determine the optimal response, by making guesses about what
the system isn't telling you -- just respond to what the system *is*
telling you. In REST, error recovery only occurs _after_ the error,
as the error response itself may be the mechanism through which such
recovery occurs.
(If content is moved to another site, the proper response is a 301
redirect -- unless that site's policy forbids deep linking, in which
case the link may be displayed as text in the body of a 410 response,
with instructions to cut-and-paste. Regardless of whether conneg is
involved, and if so, regardless as to whether it's language-based or
media-type-based, errors must be allowed to occur instead of trying to
head them off by guessing anything.)
If there is some absence of knowledge of media type preference, it is
an error with the coding of the system, or as Roy put it so eloquently,
a case of playing frisbee with your dog backwards. A resource may
indicate via an Accept (or Accept-Patch) header, what media types it
understands. A user agent following its nose may be instructed to send
a POST as multipart/form-data containing text/plain, but determine from
a HEAD request to the target that application/atom+xml is also Accept-ed
(while also confirming that POST is Allow-ed), and send that instead.
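[Editorial sketch] The nose-following step just described, where a user agent consults the target's Accept and Allow metadata before choosing what to send, might look like this. The header values and the `choose_request_type` helper are hypothetical; no real server is involved:

```python
# Sketch of a user agent inspecting HEAD-response metadata before
# sending, as described above. Header values are invented for the example.
def choose_request_type(head_headers: dict, preferred: str, fallback: str):
    """Use the preferred media type if the target Accepts it and POST is
    Allowed; otherwise fall back to what the markup instructed."""
    allowed = {m.strip() for m in head_headers.get("Allow", "").split(",")}
    accepted = {m.strip() for m in head_headers.get("Accept", "").split(",")}
    if "POST" in allowed and preferred in accepted:
        return preferred
    return fallback

head = {"Allow": "GET, HEAD, POST",
        "Accept": "application/atom+xml, multipart/form-data"}
print(choose_request_type(head, "application/atom+xml",
                          "multipart/form-data"))
# → application/atom+xml
```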
Done properly, REST removes any need to make any guesses about metadata,
in any situation. The user agent is always instructed as to what it
can do, so any code attempting to infer what to do in the absence of
some piece of knowledge is solving the problem backwards, i.e.
expecting the dog to throw the frisbee.
If user agents aren't being instructed properly, fix the system such
that they are, instead of "fixing" the user agents to infer "properly"
according to some sniffing algorithm. Such sniffing algorithms may be
necessary in real-world browser development, but are anathema to REST.
>
> >
> >>
> >> When using JSON,
> >> additional information about acceptable property values can be
> >> determined from any JSON Schema referenced by the resource. In
> >> other words, if you GET some resource, and the server responds
> >> with:
> >>
> >> Content-Type: application/my-type+json; profile=my-schema
> >>
> >> One could retrieve the schema from the "my-schema" relative URI
> >> and do a PUT using the application/my-type+json content type with
> >> the schema information as a guide to what property values are
> >> acceptable.
> >>
> >
> > Sure you can *do* this, it just wouldn't be REST. Leaving aside
> > that the media type identifier definition for JSON doesn't say
> > anything about extending it using *+json, the media type definition
> > for JSON says nothing about HTTP methods. Where have you provided
> > a self-documenting interface giving a target URI, method and media
> > type -- as provided by forms languages having no corollary in JSON,
> > yet required by REST?
> >
> > If you "just know" that you can PUT or DELETE some JSON resource,
> > it's no more RESTful than "just knowing" that you can PUT or DELETE
> > some JPEG. You're resorting to unbounded creativity, rather than
> > using standard media types and link relations which *do* cover HTTP
> > methods, for any target media type.
> >
>
> RFC2616 sufficiently defines the meaning of PUT and DELETE, a media
> type does not need to conflate protocol concerns to be RESTful.
>
As I've said many times, media types don't redefine or override method
definitions (saying this PUT is actually a PATCH in the presence of
such-and-such media type, is kinjiru). However, a media type which
constrains the scope of *possible* method semantics to a *specific*
behavior is not conflating anything.
"
The data format of a representation is known as a media type. A
representation can be included in a message and processed by the
recipient according to the control data of the message and the nature
of the media type. Some media types are intended for automated
processing, some are intended to be rendered for viewing by a user, and
a few are capable of both. Composite media types can be used to enclose
multiple representations in a single message.
"
In fact, such media types are required for REST systems to process
requests, since REST systems rely on the combination of control data
and "the nature of the media type". Stating "see RFC2616" indicates a
worldview where the nature of the media type is irrelevant to request
processing. This is (one reason) why we're so fond of saying HTTP !=
REST here.
REST is protocol-agnostic. By introducing a stream transducer to
automate name-value-pair handling for SMTP messages to a standard
listmail, I can implement an HTML-based REST application using forms'
@method='post' @action='mailto:group@listmail'. The next application
steady-state is displayed when the next response to the thread hits
the INBOX. The only over-the-wire protocol used in such a scenario is
SMTP.
This is why I consider it an error that HTML defines protocol-specific
method semantics instead of generic method semantics. But, far from
conflating protocol concerns, HTML manages to constrain the use of HTTP
to specific media types for specific methods. You can send any media
type with a POST, it just has to be declared within multipart/form-data.
http://www.w3.org/TR/html401/interact/forms.html#h-17.13.1
Notice how removing the string "HTTP" from that section changes it to
be inclusive of other protocols like FTP or SMTP which, in practice,
already work with HTML forms anyway? Saying "see RFC2616" tends to
imply that the media type is not to be transferred over other protocols
(like XMPP). Are you sure you want a JSON schema language which
restricts JSON to HTTP-only implementations? This may be fine for Atom
Protocol, but it's an odd choice for a schema language.
HTTP, in REST, is an application protocol based on media type, not a
media-type-agnostic transport protocol like FTP. A system which
processes requests based strictly on the control data (as opposed to
request processing based on the combination of control data + media
type), may as well be using FTP. Most REST claimants are really HTTP-
RPC, because they're using HTTP as FTP with caching -- still just a
transport protocol.
While RESTful interaction is possible over FTP, SMTP or even XMPP, only
HTTP exists (so far) as a true RESTful application protocol. That's why
it's entirely appropriate that Atom Protocol chose HTTP method semantics
(both constraining and defining their behavior, i.e. PUT only replaces
but doesn't create, and DELETE on a media entry also deletes the media
file, neither of which changes the semantics of either method) rather
than generic semantics; HTTP's application-protocol capabilities (like
conditional requests) just aren't present in other protocols.
Using HTTP as a transport protocol results in HTTP-RPC implementations,
like the sparse-bit array solution Roy hypothesizes, here:
"
I should also note that the above is not yet fully RESTful, at least
how I use the term. All I have done is described the service
interfaces, which is no more than any RPC. In order to make it RESTful,
I would need to add hypertext to introduce and define the service,
describe how to perform the mapping using forms and/or link templates,
and provide code to combine the visualizations in useful ways. I could
even go further and define these relationships as a standard, much like
Atom has standardized a normal set of HTTP relationships with expected
semantics, but I have bigger fish to fry right now.
"
http://roy.gbiv.com/untangled/2008/paper-tigers-and-hidden-dragons
Saying "Here's my data format, use HTTP" is not the same thing as a
hypertext API which either re-uses or creates media types which
delineate generic operations. Roy is clearly saying that having some
collection of GIF images (be they sparse-bit arrays or pictures of your
dog playing frisbee) you can interact with using GET, PUT and DELETE
and the http:// URI scheme (i.e. "see RFC2616") is RPC, _not_ REST.
A Uniform REST Interface has a generic "retrieval operation" which maps
to "GET" in HTTP, "RECV" in FTP etc. This operation may also be
referred to as "dereferencing a resource." So a REST API's methods are
a function of whatever protocols are specified by its resources' URI
schemes. In REST, an API can remain static as protocols evolve -- the
waka protocol and the HTTP protocol would accomplish exactly the same
thing, using different syntax, with waka presumably offering better
caching and pipelining that works, but serving the same representations
(except that URIs will vary by scheme); they can even run in parallel.
In a RESTful Atom Protocol system, the media type specifies HTTP, not
FTP, therefore the generic retrieval operation maps to HTTP GET. The
decision to restrict Atom Protocol operations to HTTP was deliberate
and reasoned. Whereas HTML 4.01's form definition is an example of a
REST mismatch -- I would correct it as follows:
"
retrieve: Using the 'retrieval' method, the form data set is appended
to the URI specified by the action attribute (with a question-mark
("?") as separator) and this new URI is sent to the processing agent.
submit: Using the 'submission' method, the form data set is appended
to the URI, or sent in the body of the request, and sent to the
processing agent.
remove: Using the 'removal' method, the URI specified by the action
attribute is removed by the processing agent.
"
This wording is more deferential to the nature of the URI and media
type chosen. The text/html media type (HTML 5 is WIP so I don't
include it yet) doesn't restrict itself to the HTTP protocol anywhere
else, so forms shouldn't either. I would also change the wording
such that application/x-www-form-urlencoded could be used with any
method/operation. I would alter the wording on idempotency to defer to
the protocol method used on submission operations.
My way, the HTML coder can use 'retrieve' plus 'application/x-www-form-
urlencoded' to instruct user agents to append specifically-formatted
name-value-pair ASCII text to a target URI of any protocol scheme. The
over-the-wire method used is determined by the user agent (i.e. GET for
HTTP, RECV for FTP) depending on the combination of protocol and media
type.
HTML coders could instruct user agents to PUT by using 'submit' and a
media type that isn't application/x-www-form-urlencoded or multipart/
form-data, both of which would signal the user agent to POST, assuming
HTTP URIs (Atom content may be POSTed within multipart/form-data to
maintain some semblance of Atom Protocol). PATCH is a possible result,
given a delta-only media type (someone really should define one for
name-value pairs).
Other URI schemes would yield different results, for example there's no
POST in FTP, but there's also no reason one couldn't RECV from an FTP
URI using application/x-www-form-urlencoded (a media type identifier
not meant to go over the wire) to instruct the user agent how to format
the request (same w/ DELETE). An FTP request to PUT either media type
would be possible, too.
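[Editorial sketch] The generic-operation idea in this proposal, where the over-the-wire method is determined by the combination of operation, URI scheme, and media type, could be tabulated like this. The mapping is a hypothetical illustration of the proposal (following the thread's naming, e.g. "RECV" for FTP retrieval), not part of any HTML specification:

```python
# Hypothetical mapping from the proposed generic form operations to
# concrete wire methods, keyed by URI scheme.
WIRE_METHOD = {
    ("retrieve", "http"): "GET",
    ("retrieve", "ftp"):  "RECV",   # the thread's name for FTP retrieval
    ("remove",   "http"): "DELETE",
    ("remove",   "ftp"):  "DELE",
}

FORM_ENCODINGS = {"application/x-www-form-urlencoded", "multipart/form-data"}

def wire_method(operation: str, scheme: str, enctype: str) -> str:
    if operation == "submit" and scheme == "http":
        # Per the proposal: the two form encodings signal POST,
        # any other media type signals PUT.
        return "POST" if enctype in FORM_ENCODINGS else "PUT"
    return WIRE_METHOD[(operation, scheme)]

print(wire_method("retrieve", "http", "application/x-www-form-urlencoded"))
# → GET
print(wire_method("submit", "http", "application/atom+xml"))
# → PUT
```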
Such a re-wording of HTML 4.01 would not only remove the REST mismatch,
but also describe how most browsers work in practice with GET and POST
form methods ('get' and 'post' values for @method would be deprecated,
but not removed, by my proposal) using schemes other than http:, as well
as those oddball user agents which allow 'put' as an HTML 4.01 form
@method.
This rather long example (do I write any other kind?) illustrates proper
RESTful media type design, by showing how some minor changes to HTML
4.01 would result in the text/html media type being capable of
providing a hypertext REST API for an Atom Protocol-ish system without
resorting to scripting, invalid markup or major (HTML 5, Xforms)
rethinking of forms.
The key takeaway here, is I've just designed an extension to HTML 4.01
and the text/html media type identifier. All it does is define three
generic operations for use in @method and specify their behavior in
combination with standard media types. I'll probably flesh it out as a
standalone document, come to think of it. This extension to text/html
may be supported natively within browsers, or implemented using XHR
code-on-demand to extend the browser's knowledge of text/html to
encompass the extension.
I do not need to reference generic or protocol-specific methods,
explaining that one combination yields PUT and another yields POST when
the protocol is HTTP, or that retrieval operations follow HTTP GET --
this common-knowledge coupling is contained within the definitions of
the protocols identified by the URI scheme. For the same reason, you
can't say "see RFC2616" to define form action methods, because this
doesn't instruct the client what to do if the URI scheme is mailto:.
The media type of whatever hypertext is driving a REST API doesn't
redefine or override method definitions (although media types may
define new methods). Nor, as Roy has said, can it "bind a service to
do anything -- it only serves as a guide for interpretation of the
current state." So REST requires a forms language capable of
instructing the client how to change state according to the underlying
API (the hypertext constraint), such that client-side assumptions,
guessing and sniffing don't factor in.
http://www.imc.org/atom-protocol/mail-archive/msg11487.html
In order for JSON to be such a forms language, it can't bind a service
to behave as a WebDAV fileserver by saying "see HTTP" (granted, you've
said "see RFC2616" but that's even more restrictive by saying I can't
use MGET, etc.), it must instead serve as a guide for the user agent to
interpret responses.
When a browser encounters a form with method GET and media type
application/x-www-form-urlencoded, if all the browser developer had to
go on was "see RFC2616" then a GET would be made to the target URI,
with an urlencoded entity-body and a Content-Type header. Which, of
course, is nonsensical -- the media type simply instructs the client
how to format the URL for the GET request (which certainly isn't
apparent from reading RFC2616 to determine how to handle this action).
The text/html media type instructs the client how to convert form
fields into name-value pairs. The application/x-www-form-urlencoded media
type instructs the client how to encode the name-value pairs into a URI
query segment. If and only if the protocol is HTTP, does RFC2616 come
into play, defining how to send the prepared query URI to the server as
a properly-formatted GET request and interpret the response code.
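[Editorial sketch] The two-stage encoding just described, form fields to name-value pairs, then pairs to a query URI, can be shown concretely. The `query_uri` helper and example URI are illustrative:

```python
# What a form GET with application/x-www-form-urlencoded instructs the
# client to do: encode the name-value pairs into the query component of
# the action URI. Only after that does the protocol (HTTP here) define
# how the prepared URI goes over the wire.
from urllib.parse import urlencode

def query_uri(action: str, fields: dict) -> str:
    """Append the urlencoded form data set to the action URI."""
    return action + "?" + urlencode(fields)

print(query_uri("http://example.org/search", {"q": "rest", "lang": "en"}))
# → http://example.org/search?q=rest&lang=en
```

The resulting GET request carries no entity-body and no Content-Type header; the media type only governed how the URI was built.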
So I really can't emphasize strongly enough that just plugging HTTP
methods into a form and relying on the client's inherent knowledge of
HTTP to guess what to do, falls short of what's needed for hypertext
REST API development. That your particular project is JSON doesn't
matter. Any markup language can potentially be made into a hypertext
media type.
But the only way such a media type is useful in REST is if it
delineates the behavior of generic operations, and provides enough
structure to distinguish the variant purposes of different URIs and
different media types. Some URIs instruct clients how to interpret
representations (XML namespaces) and aren't meant for dereferencing,
just as some media type identifiers are meant to be transmitted as
headers, while others are meant as hypertext instructions to the user
agent (format these name-value pairs as a URI query string).
>
> >> Discovery of POST actions is completely different than PUT (since
> >> PUT's behavior is implied by a GET response). A JSON Schema can
> >> describe possible POST actions with submission links, including an
> >> acceptable content type (in the "enctype" property).
> >>
> >
> > I don't see how. Regardless of schema, there's simply no mention in
> > the media type definition of JSON for describing URIs or methods,
> > i.e. there's no forms language. The demo I posted consists of
> > XHTML steady-states derived from various source representations
> > of other media types. These steady-states (will) provide a
> > self-documenting API to the underlying Atom-based system.
> >
> > The user isn't trying to discover PUT vs. POST actions. The user is
> > trying to drive an application to another steady-state. The user
> > agent needs to translate that user goal into HTTP interactions. If
> > the user is trying to add a new post, the user agent is instructed
> > to POST to the domain root. If the user is trying to add a new
> > comment, the user agent is instructed to POST to the appropriate
> > comment thread. If the user intent is to edit an existing entry,
> > the user agent is instructed to PUT to the existing URI. In each
> > case, the user agent is instructed to use application/atom+xml;
> > type=entry.
> >
> > There's no RESTful way to instruct any user agent that "this system
> > uses Atom Protocol" and this may not be inferred by the fact that
> > the system uses Atom. All I can do is provide a self-documenting
> > hypertext API which instructs user agents how to interact with the
> > system. This API may or may not conform to Atom Protocol. Whether
> > it does or not is less important to REST than its presence.
> >
> > None of this is any different for a system based on JSON rather than
> > Atom. As a REST system, I could change my Atom backend to a JSON
> > backend on a whim. I'm not saying it would be easy, but I am saying
> > that the application states wouldn't change. The HTML would still
> > present a textarea, changes to that textarea would be submitted to
> > the same URI, using whatever media type the form says to use -- all
> > HTML user agents automatically update to the new API.
> >
> > If you need to guess what media type to use then you can't possibly
> > be using REST. A REST API will always tell you exactly what media
> > type to use. It isn't implicit in any guessable fashion, it's
> > explicit. If it isn't explicit, it isn't REST. HTML says what
> > POST does, but only your hypertext can specify media type, if you
> > lack such hypertext you lack a critical REST constraint.
>
> There is certainly nothing wrong with specifying what media type a
> server can handle in the media type definition or hypertext (JSON
> Schema allows for specifying an acceptable media type for requests as
> well)
>
What media types are acceptable is hard-coded into the user agent for
good reason. This discussion has been had here many times before, that
overriding that with something like @type inside <a> in an effort to
get a different variant goes against REST. It's playing frisbee with
your dog all backwards. In the case of a negotiated resource, if there
is need to instruct the client to retrieve a specific variant (override
conneg), then assign that variant a URI and send that to the client.
Again, REST isn't about performing prefetch optimization, it's about
"sending whatever is most likely to be optimal first and then provide
a list of alternatives for the client to retrieve if the first response
is unsatisfactory." Web architecture is based on the notion that an
@type on a link is a hint and only a hint, this goes for JSON too --
you can allow for this hint in a schema, but a schema can't make that
hint override user agents' hard-coded Accept headers.
>
> however the dynamic representation/content negotiation
> principle implies that a server may have capabilities to handle
> various types that may independently evolve. I know my server software
> can handle various media types to update resources (JSON, JS, XML,
> url-encoded, etc.).
>
Of course. For any given request, I respond with the interface that's
most likely to be optimal, first. An Atom Protocol client will get raw
Atom and be able to interoperate with the system somewhat on that level,
but the user can always choose the rel='alternate' HTML variant and get
features (like PATCH-based social tagging) Atom Protocol clients can't
be instructed to use.
Or, the client is a browser supporting XForms, so it gets a full-blown
REST app that implements Atom Protocol and any additional features
(like PATCH-based social tagging). Otherwise, the browser gets an HTML
4.01 almost-REST API that doesn't quite implement Atom Protocol (no
PUT) or any additional features. User agents may introspect hypertext
in the form of HTTP Accept, Allow and Alternates headers, link elements
and/or link headers, etc. to determine alternative courses of action to
present to the user.
Or, the user is presented with the information needed to decide to
switch to a user agent with Xforms capability, to enable full
interaction with the underlying API. Or, I implement my HTML 4.01
forms extension using XHR code-on-demand, providing a full-blown REST
API that doesn't exactly follow Atom Protocol but yields the same
results and has all the additional features -- in which case I don't
care that nobody uses Xforms-enabled browsers or Atom Protocol clients.
If a user agent gets the wrong variant, it won't be a fatal error where
the user agent can't interact with the site. A non-xforms browser
can't possibly be triggered to use Xforms (this isn't a conneg issue),
a browser can't possibly get the raw Atom unless linked to it
explicitly, and the worst that can happen is a non-js browser will only
be able to use GET and POST (not-quite-REST as PUT is tunneled over
POST in such cases, not-quite-Atom Protocol because Atom is wrapped in
multipart/form-data). Standard graceful degradation, this.
So I don't understand what problem you're trying to solve by trying to
figure that all out _before_ receiving an initial representation. It's
a Sisyphean task -- by the time some third party figures out how to do
that for my system, I've changed the interfaces and their client breaks,
whereas if they'd have followed my hypertext their client would have
just self-updated. The same would go for any REST system, there's
simply no need to train your dog to throw the frisbee, or define media
types to support it.
Using @type on links in HTML is only meant as a hint, because some
resources are negotiated. Where resources aren't negotiated, there's
no excuse for this hint to be wrong (I call those unflagged 500 errors).
Its presence allows HTML code to be considered a self-documenting API.
Without that hint, the resource must be dereferenced to determine its
nature. That's self-descriptive messaging, but without @type inside
HTML (or such provision in some other markup language) there's no way
to self-document the API in application steady-states.
Under no circumstances is hypertext allowed to change the browser's
Accept header. This is a case where the real world is actually within
REST's constraints (layered system, in this case), I can't think of a
situation where a server is allowed to dictate to a client what media
type that client should Accept. I know the Javascript community would
like to do away with this, but I'm afraid it's a fundamental aspect of
Web architecture to which the "if it ain't broke, don't fix it" rule
must apply.
-Eric
On May 30, 2010, at 9:15 PM, Markus KARG wrote:

>> JAX-RS is server-side only, and the more interesting stuff happens on
>> the client. But I wonder what exactly JAX-RS does wrong in your opinion
>> (from a REST POV)?
>
> I have to chime in, just for clarification: JAX-RS *currently* is
> server-side only, but chances are good that Jersey's current API will be
> proposed as a mandatory part of the specification in a future release,

I hope that other contributors will also want to input their ideas and
client APIs to any future JAX-RS effort, so that we select the best and
proven ideas.

> as lots of users asked for that, there is a pending RFE and Paul Sandoz
> (spec member and RI author) is not unwilling to do that step...

Very willing :-) I think this is a must have for any JAX-RS 2.0 effort.

Paul.
Hi guys

I think I mentioned this, but I'll be in the UK the week of July 13 to
18. I've spoken to Sebastian about a little "RESTful" get together :-)
Anyone down (assuming you are in that neck of the woods)?

Thanks
Glenn
Glenn,

I can certainly do the 13th, certainly can't do the 15th, and the other
days are uncertain (it depends on how my job hunting goes).

Regards,
Alan Dean

On Mon, May 31, 2010 at 08:47, Glenn Block <glenn.block@...> wrote:
> Hi guys
>
> I think I mentioned this, but I'll be in the UK the week for July 13 to 18.
> I've spoken to Sebastian about a little "RESTful" get together :-) Anyone
> down (assuming you are in that neck of the woods).
>
> Thanks
> Glenn
Glenn,

On May 31, 2010, at 9:47 AM, Glenn Block wrote:
> Hi guys
>
> I think I mentioned this, but I'll be in the UK the week for July 13 to 18.

Meet you there any of those days. Perfect match to my schedule.

"UK" meaning "London" or some rural place?

Possible to nail that down quickly to catch early booking rates?

Jan

> I've spoken to Sebastian about a little "RESTful" get together :-) Anyone
> down (assuming you are in that neck of the woods).
>
> Thanks
> Glenn

-----------------------------------
Jan Algermissen, Consultant
NORD Software Consulting

Mail: algermissen@...
Blog: http://www.nordsc.com/blog/
Work: http://www.nordsc.com/
-----------------------------------
Correction: I misread July for June. I can do either the Saturday or
Sunday (17th / 18th) and can probably arrange time during the week if
the weekend doesn't work for others.

Regards,
Alan Dean

On Mon, May 31, 2010 at 08:51, Alan Dean <alan.dean@...> wrote:
> Glenn,
>
> I can certainly do the 13th, certainly can't do the 15th and the other
> days are uncertain (it depends on how my job hunting goes).
>
> Regards,
> Alan Dean
>
> On Mon, May 31, 2010 at 08:47, Glenn Block <glenn.block@...> wrote:
>> Hi guys
>>
>> I think I mentioned this, but I'll be in the UK the week for July 13
>> to 18. I've spoken to Sebastian about a little "RESTful" get together
>> :-) Anyone down (assuming you are in that neck of the woods).
>>
>> Thanks
>> Glenn
Jan,

Microsoft have two main offices in the UK: one in central London near
Victoria Station and the other in Reading at Thames Valley Park. Both
are easily reachable from Heathrow.

Regards,
Alan Dean

On Mon, May 31, 2010 at 08:58, Jan Algermissen <algermissen1971@...> wrote:
> Glenn,
>
> On May 31, 2010, at 9:47 AM, Glenn Block wrote:
>> Hi guys
>>
>> I think I mentioned this, but I'll be in the UK the week for July 13
>> to 18.
>
> Meet you there any of those days. Perfect match to my schedule.
>
> "UK" meaning "London" or some rural place?
>
> Possible to nail that down quickly to catch early booking rates?
>
> Jan
Fantastic, I guess we'll meet sooner than I had hoped :-) I will get crisp on what days I will have free this week. Glenn
Yes, I won't be at MS offices though :-) (though I may stop in Reading at some point) I am doing an event plus a set of user group talks.
I've twittered this http://twitter.com/adean/status/15093922711 Regards, Alan Dean
I'd rather suggest going somewhere, grab a lunch and spend some time in the afternoon chatting. There's lovely places to go in central London. I'm free most of that week, so will take whatever time is needed. Seb
> I hope that other contributors will also want to input their ideas and client APIs to any future JAX-RS effort, so that we select the best and proven ideas. I would love to be able to help on that one. How can we help improve it? Guilherme Silveira Caelum | Ensino e Inovação http://www.caelum.com.br/
I'm up for a London RESTafarian meeting any time during those days, sounds like a great idea!
On May 31, 2010, at 3:27 PM, Guilherme Silveira wrote: > > > > I hope that other contributors will also want to input their ideas > and > client APIs to any future JAX-RS effort, so that we select the best > and proven ideas. > > I would love to be able to help on that one. Great! > How can we help improving it? > When a JAX-RS 2.0 effort kick starts we can start the discussions, design and prototype work [*]. In the interim, although it is not quite the same thing, another way is to consider contributing to a JAX-RS implementation. Paul. [*] Unfortunately at the moment I do not have a clear time-frame as to when a JAX-RS 2.0 effort will kick start; we are waiting for the JCP to settle down after the Oracle acquisition. Once I have more info to share I can ping this list.
I was asked a very similar question - how can external services based on SOAP call REST-based services? - and searching the list I found this post, but with no answers. Note that the question is to assess whether the services *to be implemented* should use a REST approach or a WS-* approach, knowing that the clients of those to-be-implemented services will probably be disparate technologies, including WS-*. Does anyone have any pointers? On 5 May 2009 11:22, Sean Kennedy <seandkennedy@...> wrote: > > > Hi, > Any ideas on how to get a WS client to point to a completely different app while at the same time giving access to the XML section with minimal impact to the client? I am trying to map SOAP messages to RESTful URIs on the client prior to any message being issued. > > Thanks, > Sean. > > PS I am trying to come up with a way of calling an application (on the client) which will be able to access the XML section of a SOAP message and then map that to a RESTful URI, with minimal impact on the client. I was hoping that changing the WSDL URI might work (i.e. no change to client code) but I don't think that will work as I would then be tied to the operations/parameters in the WSDL (which does not suit).
Lunch sounds great to me, it will probably work better with my schedule :-) -- Sent from my mobile device
Hi,
I'm currently working on a project that has a reverse HTTP(S) proxy as a central component. This proxy will shield multiple backend servers from direct access by clients. Currently it is planned for the reverse proxy to translate requests for
https://proxy.example.com/{backend-server}/{path} (1)
to
http://{backend-server}.localdomain/{path} (2)
As you can see the path of the URL is altered by the proxy.
I wonder how hypermedia can be used in a RESTful way in such a setting. URIs in resource representations for a client must be in form (1). The client must not know anything about the backend servers, so a resource representation cannot be in form (2). But a backend server should only know about form (2).
Example:
Resource representation as seen on a backend server:
{"title": "some resource", "uri": "http://foo.localdomain/path/for/resource"} (3)
Resource representation as needed by a client:
{"title": "some resource", "uri": "http://proxy.example.com/foo/path/for/resource"} (4)
Where in the whole setup is representation (3) translated into form (4)
and back again?
What do you think?
Lutz
Translate in and translate out. As you should have a known media type with known semantics, that shouldn't be too difficult to do; it's simple URL rewriting with media-type translation.
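[A minimal sketch of the "translate in and translate out" idea: a pair of rewrites applied to representation bodies at the proxy, using the URI forms (1) and (2) from the question. Host names follow the thread's example; a real proxy would key the rewrite on the media type, as Seb notes.]

```python
import re

# Hypothetical sketch: rewrite backend-internal URIs in a representation
# into the public proxy form on the way out, and back again on the way in.
# "proxy.example.com" and ".localdomain" are the names from the example.
PROXY_HOST = "proxy.example.com"

def outbound(body: str) -> str:
    """Backend form (2) -> proxy form (1):
    http://{backend}.localdomain/{path} -> https://proxy.example.com/{backend}/{path}"""
    return re.sub(
        r'http://([^./]+)\.localdomain(/[^"\s]*)',
        rf'https://{PROXY_HOST}/\1\2',
        body,
    )

def inbound(body: str) -> str:
    """Proxy form (1) -> backend form (2), applied to client-submitted bodies."""
    return re.sub(
        rf'https://{re.escape(PROXY_HOST)}/([^/"\s]+)(/[^"\s]*)',
        r'http://\1.localdomain\2',
        body,
    )
```

With a known JSON media type like the one in the example, the proxy would run `outbound` over backend responses and `inbound` over client requests, so neither side ever sees the other's URI form.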
-----Original Message-----
From: rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com] On Behalf Of Lutz
Sent: 31 May 2010 10:30
To: rest-discuss@yahoogroups.com
Subject: [rest-discuss] Hypermedia over a reverse proxy
Jim Webber wrote: > > > > Have a look at JAX-RS from the Java world. > > I'm not certain that today's JAX-RS offers much more than today's WCF in > terms of REST support. If Glenn's team are going to do "REST like they > meant it" to paraphrase Guilherme, I don't think that JAX-RS is the > right way to go. But that's just an opinion. Or is there some technical criticism as well? Bill
Aside from the specification / documentation for a media type, is there some common accepted practice for indicating which media type is expected to be passed when calling a link? Thanks Glenn
GB: HTML, SMIL, SVG, & Atom all use the "type" attribute as an advisory value and depend largely on the media-type documentation of the various elements to handle expected media-types. XInclude (not a media-type) supports "accept" and "accept-language" as binding on the agent. mca http://amundsen.com/blog/ http://mamund.com/foaf.rdf#me
Note that it's very important that any hypermedia link only provide such an attribute as a "hint". If it was to become mandatory, you'd be changing the identifying function from URI to URI + media type, which breaks most scenarios for which the identifying function exists to start with. We've had that conversation many times on this list, yet some people still insist on this anti-pattern, mostly because they want content negotiation to work in cases it wasn't designed to support. Seb
On Jun 1, 2010, at 9:15 AM, Sebastien Lambla wrote:
>
>
> Note that it's very important that any hypermedia link only provide such an attribute as a hint.
Yes.
In addition, I'd recommend not making use of type attributes, because they lead to unintended coupling between representations (or the code that produces these representations) and the set of media types in use. Any time you remove or add support for a certain media type, you need to check and update all the code that produces the corresponding 'type' attributes. That's likely to be a maintenance nightmare.
HTTP (content negotiation) is designed to save us from such dependencies.
Here is the 'pattern':
Suppose the link <link href="/docs/example-doc" />, you'd then do this:
GET /docs/example-doc
Accept: text/html
200 Ok
Content-Type: text/html
Content-Location: /docs/example-doc.html
Vary: accept
<html>
...
</html>
The Content-Location header tells the client all it needs to know regarding the negotiation that took place. If the server wanted to inform the client about any variants, it could add an Alternates[1] header:
200 Ok
Content-Type: text/html
Content-Location: /docs/example-doc.html
Vary: accept
Alternates: {"example-doc.pdf" 1.0 {type application/pdf}},{"example-doc.txt" 1.0 {type text/plain}}
Jan
[1] http://www.faqs.org/rfcs/rfc2295.html
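[The server side of the pattern above can be sketched as follows; a minimal, hypothetical illustration in which the variant table and paths are invented to match the example, and q-value handling is deliberately omitted.]

```python
# Hypothetical sketch: pick a variant for the request's Accept header and
# emit Content-Type, Content-Location, and Vary so the client can see which
# negotiation took place, as in the exchange above.
VARIANTS = {
    "text/html": "/docs/example-doc.html",
    "application/pdf": "/docs/example-doc.pdf",
    "text/plain": "/docs/example-doc.txt",
}

def negotiate(accept: str) -> dict:
    """Very simplified Accept handling: the first listed type we can serve
    wins (a real implementation would honour q-values per RFC 2616)."""
    for part in accept.split(","):
        media_type = part.split(";")[0].strip()
        if media_type in VARIANTS:
            return {
                "status": 200,
                "Content-Type": media_type,
                "Content-Location": VARIANTS[media_type],
                "Vary": "Accept",
            }
    return {"status": 406}  # no acceptable variant
```

The point of Content-Location here is exactly as described: the client learns which variant it got without the link ever having carried a type attribute.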
-----------------------------------
Jan Algermissen, Consultant
NORD Software Consulting
Mail: algermissen@...
Blog: http://www.nordsc.com/blog/
Work: http://www.nordsc.com/
-----------------------------------
Glenn, On Jun 1, 2010, at 6:43 AM, Glenn Block wrote: > > > Aside from the specification / documentation for a media type is there some common accepted practice for indicating which media type is expected to passed when calling a link? What do you mean by 'calling a link'? If you mean 'submitting data via a PUT or POST request', then the usual means by which the server informs the client about what media types it is capable of processing is some information contained in the corresponding form. Examples are HTML's form enctype attribute on the <form> element, AtomPub's <accept> element inside the <collection> element[1], and OpenSearch's <parameters:enctype> attribute on the <Url> element[2]. Jan [1] http://tools.ietf.org/html/rfc5023#section-8.3.4 [2] http://www.opensearch.org/Specifications/OpenSearch/Extensions/Parameter#The_.22Url.22_element ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
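[The AtomPub case mentioned above can be sketched client-side; a minimal illustration of reading a collection's <accept> elements (RFC 5023) to learn what the server will take on POST. The collection document and its href are invented for the example.]

```python
import xml.etree.ElementTree as ET

# AtomPub's app: namespace, per RFC 5023.
APP_NS = "{http://www.w3.org/2007/app}"

# Hypothetical collection document; href and types invented for illustration.
COLLECTION = """<collection xmlns="http://www.w3.org/2007/app"
    href="http://example.org/blog/entries">
  <accept>application/atom+xml;type=entry</accept>
  <accept>image/png</accept>
</collection>"""

def accepted_types(collection_xml: str) -> list[str]:
    """Return the media types a collection advertises as acceptable for POST."""
    root = ET.fromstring(collection_xml)
    return [el.text.strip() for el in root.findall(f"{APP_NS}accept")]
```

A client would consult this list before choosing how to serialize a new entry, rather than relying on out-of-band documentation.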
Yes that's what I mean. Thanks for the clarification as well as for the explanation around content neg and alternates. Glenn
On 28 May 2010 16:32, Eric J. Bowman <eric@...> wrote: > > > REST is an architectural style favoring the large-grain transfer of > data; as per Fielding, 5.1.5: > > "The trade-off, though, is that a uniform interface degrades > efficiency, since information is transferred in a standardized form > rather than one which is specific to an application's needs." > > Despite that quotation, there is no stipulation that the REST style should not be applicable outside that particular scenario. After all, isn't that what the "serendipity property" of REST is all about: applying the same principles to scenarios that were not envisaged in the first place? Nevertheless, what interests me in this particular example (serving different sets of fields to different users) is its use not to shrink the data volume on the wire but for security reasons, as a way to implement some kind of "roles", where users of type A can get fields A, B, and C while users of type B get just A and B (another scenario would be custom reporting). The approach used by LinkedIn seems to introduce too much coupling between client and server, but what are the alternatives? Is the media-type alternative viable?
Seb, Actually, the identifying function of HTTP is URI + any control data. Any control data involved in the identification can be indicated in the response with the Vary mechanism - that is the entire point of the Vary mechanism. If the type attribute in links wasn't designed that way, what exactly is the point of it, if it is not intended to affect client behavior? There is an argument that if the type attribute wasn't designed to support that case then a mistake was made and it was poorly defined. Cheers, Mike
On 1 June 2010 08:15, Sebastien Lambla <seb@...> wrote: > > > Note that it's very important that any hypermedia link only provide such an > attribute as a hint. If it was to become mandatory, you'd be changing the > identifying function from URI to URI + Media type, which breaks most > scenarios for which the identifying function exists to start with. > I understand how that is important in the case of a GET, but are you saying that is also the case for a PUT or POST? Because the server has to impose some format of what is acceptable as input...
Just read through the RFC on the "transparent content negotiation scheme". Makes a lot of sense, that is the piece I was missing :-)
Hello Bill, >> I'm not certain that today's JAX-RS offers much more than today's WCF in >> terms of REST support. If Glenn's team are going to do "REST like they >> meant it" to paraphrase Guilherme, I don't think that JAX-RS is the >> right way to go. > > But that's just an opinion. Or is there some technical criticism as well? It's an opinion - I don't have any carefully gathered empirical evidence to back it up. However both WCF and JAX-RS avoid hypermedia which is pretty important for RESTful solutions. In other bits of that email, I pointed out that some JAX-RS compliant frameworks (e.g. Jersey) are now experimenting with hypermedia which makes them much more useful if the abstractions come out right. In terms of technical critique, I think JAX-RS comes out ahead of WCF because it's marginally easier to TDD with it, and so much more of the framework is above the waterline rather than buried down deep. However at this point both frameworks are simply nicer programmatic interfaces atop a Web server, and both short-change client-side developers (with JAX-RS again being better than WCF). Since neither has hypermedia support from the start, retrofitting it at this point may result in horrid abstractions. That's why I'm broadly supportive of Glenn's outreach and very supportive of the people behind Restfulie (as well as being encouraged by the steps the Jersey team are taking). Jim
Jan, 'coupling representations' is a straw-man argument given that the type attribute is optional. What we're talking about here is a case in which you would actually *want* to provide a client with a link to a specific representation; otherwise you should just omit the attribute altogether. If you assume that you can't use the type attribute for this purpose, you have to resign yourself to exposing the representation at its own URI and explicitly linking to it - that is actually far *less* evolvable (not to mention a less visible interaction). So, in order to get a web browser to access the pdf representation instead of the html representation of my /document resource, I have to link explicitly to a 'pdf document resource' like so: <a href="/document.pdf"> I have to do it that way because all web browsers will stick to their default Accept header and incorrectly indicate a preference for html when following this link: <a type="application/pdf" href="/document"> Ignore that the latter doesn't actually work - both of those links are "coupled" to a representation, so nothing is actually gained and flexibility is lost by not using the type attribute for that purpose. Cheers, Mike
And this is yet again the same conversation I was indeed refering to. Point is, identification on the web is done with the URI. The link provided by Mike (the anti pattern i was talking about earlier) is impossible to copy on a piece of paper and give to someone else. You've added the representation type to the identification function, making it incompatible with URIs, and creating something that is just not the web. That point has been going around in circles with no positive outcome for months, and Mike's view on the subject still breaks the fact that hte web uses URIs as identifiers, not "that a element with that type attribute and a URI in href". As such I'll just avoid entering the debate once more, the point has been made many times before, on here, on the TAG at the W3C and by Roy himself a couple of times. Seb ________________________________ From: Mike Kelly [mike@mykanjo.co.uk] Sent: 01 June 2010 10:35 To: Jan Algermissen Cc: Sebastien Lambla; mike amundsen; Glenn Block; rest-discuss@yahoogroups.com Subject: Re: [rest-discuss] Determining which Media type for post/put Jan, 'coupling representations' is a straw-man argument given that the type attribute is optional. What we're talking about here is a case in which you would actually *want* to provide a client with a link to a specific representation, otherwise you should just omit the attribute altogether.. If you assume that you can't use the type attribute for this purpose you have to reside yourself to exposing the representation at its own URI, and explicitly link to it - that is actually far *less* evolveable (not to mention a less visible interaction). 
Statements like "identification on the web is done with the URI" don't help much - what do URIs identify? I think the clue might be in the name! :)

Suggesting that HTML has effectively governed the web into stagnation on this issue is a bit ironic considering the supposed evolutionary benefits of hypermedia. I don't share that point of view and I haven't seen it explained well or in any detail; providing that explanation might be helpful.

Regardless - it wouldn't be incompatible with URIs. If it was already "the web" there wouldn't be much to talk about. As an application designer, if I want to trade off helping people who share links on pieces of paper or through interpretive dance, that should be my choice. There's also a massive assumption there that no mechanism(s) could be conceived to deal with these issues. I'm pretty sure they can, actually.

You force this round in circles by repeatedly ignoring that the web uses URIs + relevant HTTP control data to "identify" representations. Hence: http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.44 "the Vary field value advises the user agent about the criteria that were used to select the representation".

Cheers,
Mike

On Tue, Jun 1, 2010 at 11:06 AM, Sebastien Lambla <seb@...> wrote:
> Point is, identification on the web is done with the URI. The link provided by Mike (the anti-pattern I was talking about earlier) is impossible to copy onto a piece of paper and give to someone else.
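[Editor's note: the Vary-based negotiation Mike describes can be sketched as follows. This is a minimal illustration, not code from the thread; the available variants and the simplified q-value handling are assumptions.]

```python
# Server-driven content negotiation over a single URI: the representation
# is selected from the Accept header, and the response carries
# "Vary: Accept" to advertise that the header influenced the selection.

def parse_accept(accept_header):
    """Return the media ranges in an Accept header, highest q-value first."""
    ranges = []
    for part in accept_header.split(","):
        fields = part.strip().split(";")
        media_type = fields[0].strip()
        q = 1.0
        for param in fields[1:]:
            name, _, value = param.strip().partition("=")
            if name == "q":
                q = float(value)
        ranges.append((q, media_type))
    ranges.sort(key=lambda r: -r[0])
    return [media_type for _, media_type in ranges]

AVAILABLE = ["text/html", "application/pdf"]  # variants of /document

def negotiate(accept_header):
    """Pick the best available variant for a GET on /document."""
    for wanted in parse_accept(accept_header):
        if wanted in AVAILABLE:
            return wanted, {"Content-Type": wanted, "Vary": "Accept"}
        if wanted in ("*/*", "text/*", "application/*"):
            first = AVAILABLE[0]
            return first, {"Content-Type": first, "Vary": "Accept"}
    return None, {"Vary": "Accept"}  # caller would respond 406 Not Acceptable
```

A browser following a plain link sends its default Accept header and gets HTML; a client that explicitly prefers application/pdf gets the PDF variant from the very same URI.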
http://www.w3.org/TR/webarch/

________________________________
From: Mike Kelly [mike@...]
Sent: 01 June 2010 12:41
To: Sebastien Lambla
Cc: Jan Algermissen; mike amundsen; Glenn Block; rest-discuss@yahoogroups.com
Subject: Re: [rest-discuss] Determining which Media type for post/put
Maybe http://www.ltg.ed.ac.uk/~ht/eSI_URIs.html will be clearer.

________________________________
From: rest-discuss@yahoogroups.com [rest-discuss@yahoogroups.com] on behalf of Sebastien Lambla [seb@...]
Sent: 01 June 2010 12:42
To: Mike Kelly
Cc: Jan Algermissen; mike amundsen; Glenn Block; rest-discuss@yahoogroups.com
Subject: RE: [rest-discuss] Determining which Media type for post/put
Hi,
For a UI grid to access data, it requires the data to be presented in a specific JSON format, which is different from the default JSON format that we would normally serve up. I'd be interested to hear if there is any consensus on how to 'best' enable content negotiation where the negotiation includes different formats of the same media type.
Some possible options include:
Using a type/format parameter on the standard media type, e.g. application/json;type=uigrid
Using a specific media type, e.g. application/json+uigrid or something from the vnd space
Using a query string parameter, e.g. http://mydomain/widgets?format=uigrid
Please bear in mind that this is for use within an enterprise and not on the internet.
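[Editor's note: the first option above (a parameter on the standard media type) can be sketched server-side like this. A minimal illustration, not from the original post; the "uigrid" flavour name and the "default" fallback are assumptions.]

```python
# Extract the layout flavour from a negotiated media-type value such as
# "application/json;type=uigrid", falling back to the ordinary JSON
# layout when no type parameter is present.

def json_flavour(accept_value):
    """Return the requested JSON layout, or None for non-JSON types."""
    media_type, _, params = accept_value.partition(";")
    if media_type.strip() != "application/json":
        return None
    for param in params.split(";"):
        name, _, value = param.strip().partition("=")
        if name == "type":
            return value or "default"
    return "default"
```

One caveat with this approach is that intermediaries and client libraries vary in how faithfully they preserve and compare media-type parameters, which is part of why the other two options come up.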
Thanks in advance,
Phil
Philip N. Ruelle | Technical Architect & Development Manager
Altis Partners (Jersey) Limited
2 Hill Street | St. Helier | Jersey, JE2 4UA
t. +44 (0)1534 787 746 | f. +44 (0)1534 832465 | w. www.altispartners.com
Altis Partners Jersey Limited - registered with the Jersey Financial Services Commission for the conduct of investment business
This email and its contents is issued by Altis Partners Jersey Limited ('APJL') and is for private circulation only. APJL is registered with the Jersey Financial Services Commission for the conduct of investment business. The information contained in this email is strictly confidential. The information and opinions contained in this email are for background purposes only, and do not purport to be full or complete. Nor does this email constitute investment advice. APJL is not hereby arranging or agreeing to arrange any transaction in any investment or other undertaking requiring regulatory authorisation. This email does not constitute or form part of any offer to issue or sell, or any solicitation of an offer to subscribe or purchase, any investment nor shall it or the fact of its distribution form the basis of, or be relied on in connection with, any contract therefore. No representation, warranty, or undertaking, express or limited, is given as to the accuracy or completeness of the information or opinions contained in this email by any of APJL, its partners or employees and no liability is accepted by such persons for the accuracy or completeness of such information or opinions. As such, no reliance may be placed on the information and opinions in this email.
This email may contain confidential and/or privileged information. If you are not the intended recipient or have received this email in error please notify the sender immediately and delete immediately. Any unauthorised copying, disclosure or distribution of this email is strictly forbidden.
ALTIS-JFSC-NFA

Hi Phil,
On Jun 1, 2010, at 1:44 PM, Philip N. Ruelle wrote:
>
>
> Hi,
>
> For a UI grid to access data it requires the data to be presented in a specific JSON format which is different to the default JSON format that we would normally serve up. I'd be interested to hear if there is any consensus on how to 'best' enable content negotiation where the negotiation includes different formats of the same media type?
Regarding your requirements:
- are the two versions really so different that you cannot serve one kind and just use a subset of the data in the special use case?
- is this really a conneg issue or could you link to a different resource in the previous representations received by the special user agent (e.g. determined from the User-Agent header)?
>
> Some possible options include:
> Using a type/format parameter on the standard media type, e.g. application/json;type=uigrid
> Using a specific media type, e.g. application/json+uigrid or something from the vnd space
> Using a query string parameter, e.g. http://mydomain/widgets?format=uigrid
>
> Please bear in mind that this is for use within an enterprise and not on the internet.
I'd always make the variants different resources and link to a negotiated 'parent' resource. Requests to the parent then perform conneg and serve the representation of the best variant, along with a Content-Location header to tell the user agent the 'real' URI. (See my last posting here.)
Only use a different media type if the processing semantics are different.
Jan
P.S. Make sure you do not use plain application/json but mint your own type, especially since you are behind closed doors.
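[Editor's note: Jan's "negotiated parent resource" suggestion can be sketched as below. The variant URIs and the vnd media-type name are illustrative assumptions, not from the thread.]

```python
# Each variant gets its own URI; the 'parent' /widgets negotiates and
# reports which variant was served via Content-Location, so the user
# agent learns the variant's 'real', directly addressable URI.

VARIANTS = {  # media type -> dedicated variant URI (illustrative)
    "application/json": "/widgets.json",
    "application/vnd.example.uigrid+json": "/widgets.uigrid",
}

def get_parent(accept_header):
    """Handle GET /widgets; return (status, response headers)."""
    for part in accept_header.split(","):
        media_type = part.split(";")[0].strip()
        if media_type in VARIANTS:
            return 200, {"Content-Type": media_type,
                         "Content-Location": VARIANTS[media_type],
                         "Vary": "Accept"}
    return 406, {"Vary": "Accept"}
```

A client that wants a specific variant can then bookmark or link the variant URI directly, bypassing negotiation entirely.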
>
> Thanks in advance,
> Phil
>
>
> Philip N. Ruelle | Technical Architect & Development Manager
> Altis Partners (Jersey) Limited
> 2 Hill Street | St. Helier | Jersey, JE2 4UA
> t. +44 (0)1534 787 746 | f. +44 (0)1534 832465 | w. www.altispartners.com
> Altis Partners (Jersey) Limited - registered with the Jersey Financial Services Commission for the conduct of Investment Business and Fund Services Business
>
> This email and its contents is issued by Altis Partners Jersey Limited ('APJL') and is for private circulation only. APJL is registered with the Jersey Financial Services Commission of the States of Jersey for the conduct of investment business. The information contained in this email is strictly confidential. The information and opinions contained in this email are for background purposes only, and do not purport to be full or complete. Nor does this email constitute investment advice. APJL is not hereby arranging or agreeing to arrange any transaction in any investment or other undertaking requiring regulatory authorisation. This email does not constitute or form part of any offer to issue or sell, or any solicitation of an offer to subscribe or purchase, any investment nor shall it or the fact of its distribution form the basis of, or be relied on in connection with, any contract therefore. No representation, warranty, or undertaking, express or limited, is given as to the accuracy or completeness of the information or opinions contained in this email by any of APJL, its partners or employees and no liability is accepted by such persons for the accuracy or completeness of such information or opinions. As such, no reliance may be placed on the information and opinions in this email.
>
> This email may contain confidential and/or privileged information. If you are not the intended recipient or have received this email in error please notify the sender immediately and delete immediately. Any unauthorised copying, disclosure or distribution of this email is strictly forbidden
> Altis-Jersey Financial Services Commission-NFA
>
>
>
>
-----------------------------------
Jan Algermissen, Consultant
NORD Software Consulting
Mail: algermissen@...
Blog: http://www.nordsc.com/blog/
Work: http://www.nordsc.com/
-----------------------------------
Philip N. Ruelle wrote:
> Using a specific media type, e.g. application/json+uigrid or something from the vnd space

I'm sure I read a TAG finding or webarch paper or something declaring media type names of the form "a/b+c" harmful, but I can't for the life of me find that doc again. Does it ring a bell for anyone?

Robert Brewer
fumanchu@...
That was about application/x-vendor, or x- prefixes in general; doubtful there would be findings about app/vnd. being considered harmful for the non-standardized space.

-----Original Message-----
From: rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com] On Behalf Of Robert Brewer
Sent: 01 June 2010 15:55
To: rest-discuss@yahoogroups.com
Subject: RE: [rest-discuss] Identifying a particular JSON schema/layout
This?
http://www.mnot.net/blog/2009/02/18/x-

On 6/1/2010 9:03 AM, Sebastien Lambla wrote:
> That was about application/x-vendor or x- prefixes in general,
> doubtful there would be findings about app/vnd. being considered
> harmful for the non-standardized space.

--
Kris Zyp
SitePen
(503) 806-1841
http://sitepen.com
No; it wasn't about "vnd" or "x-"; it was specifically about the "+". But it's not a big deal if I only dreamed it up ;)

Robert Brewer
fumanchu@...

From: Kris Zyp [mailto:kris@...]
Sent: Tuesday, June 01, 2010 8:07 AM
To: Sebastien Lambla
Cc: Robert Brewer; rest-discuss@yahoogroups.com
Subject: Re: [rest-discuss] Identifying a particular JSON schema/layout

> This?
> http://www.mnot.net/blog/2009/02/18/x-
Robert:

I have some recollection that the JSON gang (Crockford?) didn't want to see +json spread as was the case for +xml, but I can't find any reference to that, either.

mca
http://amundsen.com/blog/
http://mamund.com/foaf.rdf#me

On Tue, Jun 1, 2010 at 11:10, Robert Brewer <fumanchu@...> wrote:
> No; it wasn't about "vnd" or "x-"; it was specifically about the "+". But it's not a big deal if I only dreamed it up ;)
On Tue, Jun 1, 2010 at 7:44 AM, Philip N. Ruelle <philip@...> wrote:
> For a UI grid to access data it requires the data to be presented in a specific JSON format which is different to the default JSON format that we would normally serve up. I'd be interested to hear if there is any consensus on how to best enable content negotiation where the negotiation includes different formats of the same media type?

I'm afraid that JSON use has fallen into the same trap as XML use did, though perhaps even more so. Using "application/json" for all your JSON data is about as useful as using "application/xml" for all your XML; that is to say, not very. If you have different flavours of JSON with different required fields, different meaning given to certain data structures, etc., then *those* are your media types.

> Using a specific media type, e.g. application/json+uigrid or something from the vnd space

uigrid+json, but yah, that's more like it. Or, as folks are pointing out, you can leave out "+json", since it's non-standard and probably won't buy you much, just like "+xml" didn't buy us very much.

> Please bear in mind that this is for use within an enterprise and not on the internet.

So you can get away with application/vnd.altispartners.uigrid

Congrats on the new gig!

Mark.
Doesn't the spec say that, given a media type application/mytype+xml, if a client could not recognize the mytype part then it should treat the representation as if it were only application/xml? If so, the +xml has meaning...

On 1 Jun 2010 16:56, "Mark Baker" <distobj@...> wrote:
> I'm afraid that JSON use has fallen into the same trap as XML use did, though perhaps even more so. Using "application/json" for all your JSON data is about as useful as using "application/xml" for all your XML; that is to say, not very.
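[Editor's note: the fallback behaviour the question describes (later standardized as "structured syntax suffixes" in RFC 6839) can be sketched as a dispatcher that falls back from an unknown subtype to the generic +xml / +json processor. The handler names here are illustrative assumptions.]

```python
# Dispatch on the full media type first; if it is unknown but carries a
# +suffix, fall back to the generic handler for that suffix.

HANDLERS = {
    "application/atom+xml": "atom-handler",        # specific processing
    "application/xml": "generic-xml-handler",      # suffix fallbacks
    "application/json": "generic-json-handler",
}

def dispatch(media_type):
    """Return a handler name for the media type, or None if unhandled."""
    if media_type in HANDLERS:
        return HANDLERS[media_type]
    top_level, _, subtype = media_type.partition("/")
    _, plus, suffix = subtype.rpartition("+")
    if plus:  # e.g. application/mytype+xml -> application/xml
        return HANDLERS.get(top_level + "/" + suffix)
    return None
```
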
Mike Kelly wrote:
> Actually, the identifying function of HTTP is URI + any control data.

Absolutely NOT. URIs identify _resources_; the control data is used to select _representations_, and the two are _not_ the same thing.

> If the type attribute in links wasn't designed that way.. What exactly is the point of it, if it is not intended to affect client behavior? There is an argument that if the type attribute wasn't designed to support that case then a mistake was made and it was poorly defined.

The point of it is to allow us to self-document our APIs. It is a violation of both the layered-system and identification-of-resources constraints to use @type in any other way. The server is not to dictate to the client what media types are acceptable to the client. If you need to directly reference a specific variant, assign it a URI and send *that* to the client. THAT is the solution. It works. There is no "problem" left to be solved by borking @type.

-Eric
---------- Forwarded message ---------- From: "António Mota" <amsmota@gmail.com> Date: 1 Jun 2010 09:46 Subject: Re: [rest-discuss] Determining which Media type for post/put To: "Glenn Block" <glenn.block@...> 2010/6/1 Glenn Block <glenn.block@...>: > If you look at the RFC (as I just did) the server can return the acceptable > list through content... Are you referring to chapter 12.2, Negotiation on transactions other than GET and HEAD? Because the way I see it, that refers to the response representation that is eventually returned from a POST, not to the content of the POST itself (like the enctype in an HTML form). From what I understand, Content-Negotiation is pertinent only to GET and HEAD, not to PUT and POST.
Adding the list. This is the problem with email: I started off asking about what to post, then at some point we transitioned to discussions about content negotiation / what to return, which didn't answer the first question :-) It sounds like media type docs will tell you what you need to post, and there is also the possibility of annotations within the media type schema itself. Is that correct? On 6/1/10, António Mota <amsmota@...> wrote: > Hi Glenn, just a quick note, was your response meant to be for me only or for > the list? If for the list you forgot to cc it. > > Best regards. > > On 1 Jun 2010 16:40, "Glenn Block" <glenn.block@...> wrote: > > That's a good point :-) The RFC is talking about the response, not the > content in a POST/PUT. Transparent ContentNeg doesn't address what to send, > it addresses finding available representations. > > Sounds like enctype in the HTML form, or an annotation within a media > type like the AtomPub "Accept", or something custom if it is a custom media > type would work. > > Regards > Glenn > > 2010/6/1 António Mota <amsmota@...> > >> 2010/6/1 Glenn Block <glenn.block@...>: >> >> >> > > If you look at the RFC (as I just did) the server can return the >> acceptable >> > > list through co... >> > -- Sent from my mobile device
Hello Bill, >> I'm not certain that today's JAX-RS offers much more than today's WCF in >> terms of REST support. If Glenn's team are going to do "REST like they >> meant it" to paraphrase Guilherme, I don't think that JAX-RS is the >> right way to go. > > But that's just an opinion. Or is there some technical criticism as well? It seems like the client part of a REST system was not so clear at that time, and there were not many attempts to create generic consumers. Every service provided its "own specific REST API" for its "specific REST services", e.g. Twitter, Facebook, and hundreds of others. The first JAX-RS spec did not take hypermedia into account, so if you think about REST without hypermedia, it will not be a problem. But it seems like REST depends on using hypermedia, right? If you believe so and want your consumers to use hypermedia with a Java framework, you have to rely on Restfulie, Jersey and Restlet, which are trying to do so. As Paul mentioned, it's a matter of time for it to enter the JAX-RS specs. Regards Guilherme Silveira Caelum | Ensino e Inovação http://www.caelum.com.br/ 2010/6/1 Jim Webber <jim@...> > > > Hello Bill, > > > >> I'm not certain that today's JAX-RS offers much more than today's WCF in > > >> terms of REST support. If Glenn's team are going to do "REST like they > >> meant it" to paraphrase Guilherme, I don't think that JAX-RS is the > >> right way to go. > > > > But that's just an opinion. Or is there some technical criticism as well? > > It's an opinion - I don't have any carefully gathered empirical evidence to > back it up. However both WCF and JAX-RS avoid hypermedia, which is pretty > important for RESTful solutions. In other bits of that email, I pointed out > that some JAX-RS compliant frameworks (e.g. Jersey) are now experimenting > with hypermedia, which makes them much more useful if the abstractions come > out right. 
> > In terms of technical critique, I think JAX-RS comes out ahead of WCF > because it's marginally easier to TDD with it, and so much more of the > framework is above the waterline rather than buried down deep. However at > this point both frameworks are simply nicer programmatic interfaces atop a > Web server, and both short-change client-side developers (with JAX-RS again > being better than WCF). > > Since neither has hypermedia support from the start, retrofitting it at > this point may result in horrid abstractions. That's why I'm broadly > supportive of Glenn's outreach and very supportive of the people behind > Restfulie (as well as being encouraged by the steps the Jersey team are > taking). > > Jim > > >
António Mota wrote: > On 28 May 2010 16:32, Eric J. Bowman wrote: > > > > > > > REST is an architectural style favoring the large-grain transfer of > > data; as per Fielding, 5.1.5: > > > > "The trade-off, though, is that a uniform interface degrades > > efficiency, since information is transferred in a standardized form > > rather than one which is specific to an application's needs." > > > > > Despite that quotation, there is no imposition that the REST style > should not be applicable outside of that particular scenario. > After all, isn't that what the "serendipity property" of REST is all > about, applying the same principles to scenarios that were not > devised in the first place? > Following REST allows serendipitous re-use, sure. That guarantee can't be made for architectures which deviate from REST. It's simply a fact that the Uniform Interface degrades efficiency; overcoming this limitation results in a non-Uniform Interface, i.e. something that fundamentally isn't REST, so the serendipitous re-use benefit of the Uniform Interface can't be said to apply. > > Nevertheless, what interests me in this particular example (serving > different sets of fields to different users) is its use not to > shrink the data volume on the wire but for security reasons, as a way > to implement some kind of "roles" where users of type A can get > fields A, B and C and users of type B just A and B (another scenario > would be custom reporting). The approach used by LinkedIn seems to > implement too much coupling between client and server, but what are > the alternatives? Is the media-type alternative viable? > If you look at my online demo, you'll see that every steady-state is generated using an XSLT stylesheet. My real solution is to make that XSLT stylesheet URI negotiate based on user authentication. So each user gets their own role-based (or personalized) XSLT, without changing the initial representation or its URI. 
More importantly, without making a whole bunch of individualized sub-resources with username or role information in the URIs. This technique may be used for any resource included by the initial representation, so you can effectively restrict content, or offer different interface capabilities, based on user role. -Eric
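A rough sketch of the technique described above, with all names hypothetical: every user receives the identical representation, carrying the identical xml-stylesheet processing instruction, and it is the stylesheet *resource* behind that single URI whose content is negotiated on the authenticated role.

```python
# Sketch (illustrative names, not a real implementation): one stylesheet
# URI for everyone; the XSLT body served at that URI varies by role.

STYLESHEET_PI = '<?xml-stylesheet type="text/xsl" href="/xslt/view.xsl"?>'

# Role -> XSLT body served from the single /xslt/view.xsl URI.
ROLE_XSLT = {
    "admin": "<!-- XSLT exposing fields A, B and C -->",
    "member": "<!-- XSLT exposing fields A and B -->",
}

def representation(body_xml):
    """The initial representation: identical for every user and URI."""
    return STYLESHEET_PI + "\n" + body_xml

def serve_stylesheet(authenticated_role):
    """What a GET of /xslt/view.xsl returns, chosen by authentication."""
    return ROLE_XSLT.get(authenticated_role, "<!-- public XSLT -->")
```

The point of the design is visible in the code: no per-user or per-role information ever appears in a URI, yet each role sees only its own fields.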
On Jun 1, 2010, at 7:29 PM, Glenn Block wrote: > Adding the list > > This the problem with email. I started off asking about what to post, > then at some point we transitioned to discussions about contentneg / > what to return which didn't answer the first question :-) > > It sounds like for sure media type docs will tell you what you need to > post, and there is also the possibility for annotations within the > media type schema itself. > > Is that correct? No. If you place that information inside the media type specs, you couple the spec to the choice of formats. Such information should be provided at runtime. For example via the mechanism I sent in my first reply (e.g. HTML's enctype attribute). The media type specs might define that resources that are pointed to by a certain link relation expect a certain kind of information (e.g. Orders) but the association of how such an 'order' is represented should not be part of the media type specs. There might be suggestions or examples, but the set of media types that actually make sense are 'determined' by the ones in common use in the given environment (e.g. Web-wide, org-wide). IOW, the client-side developer will pick the types the user agent supports to send from that set of well-known types. Jan > > On 6/1/10, Antnio Mota <amsmota@...> wrote: >> Hi Glen, just a quick note, was your response meant to be for me only or for >> the list? If for the list you forgot to cc it. >> >> Best regards. >> >> On 1 Jun 2010 16:40, "Glenn Block" <glenn.block@...> wrote: >> >> That's a good point :-) The RFC is taling about the response, not the >> content in a POST/PUT. Transparent ContentNeg doesn't address what to send, >> it addresses finding available representations. >> >> Sounds like enctype in the http form sounds or an annotation within a media >> type like the AtomPub "Accept" or something custom if it is a custom media >> type would work. 
>> >> Regards >> Glenn >> >> 2010/6/1 Antnio Mota <amsmota@...> >> >>> 2010/6/1 Glenn Block <glenn.block@...>: >>> >>> >>>>> If you look at the RFC (as I just did) the server can return the >>> acceptable >>>>> list through co... >>> >> > > -- > Sent from my mobile device > > > ------------------------------------ > > Yahoo! Groups Links > > > ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
Jan: <snip> >> It sounds like for sure media type docs will tell you what you need to >> post, and there is also the possibility for annotations within the >> media type schema itself. >> >> Is that correct? > > No. If you place that information inside the media type specs, you couple the spec to the choice of formats. > > Such information should be provided at runtime. For example via the mechanism I sent in my first reply (e.g. HTML's enctype attribute). </snip> This is a matter of design choices, not hard & fast rules. HTML offers agents a limited number of content-types for POST [1]. Atom does not allow the agent to select a content-type at all and even has clear expectations of the format of the XML payload that is acceptable for POST and PUT [2]. In both cases, these rules appear in the documentation for that media-type. [1] http://www.w3.org/TR/html5/forms.html#attr-fs-formenctype [2] http://bitworking.org/projects/atom/rfc5023.html#collection_resource mca http://amundsen.com/blog/ http://mamund.com/foaf.rdf#me On Tue, Jun 1, 2010 at 14:22, Jan Algermissen <algermissen1971@...> wrote: > > On Jun 1, 2010, at 7:29 PM, Glenn Block wrote: > >> Adding the list >> >> This the problem with email. I started off asking about what to post, >> then at some point we transitioned to discussions about contentneg / >> what to return which didn't answer the first question :-) >> >> It sounds like for sure media type docs will tell you what you need to >> post, and there is also the possibility for annotations within the >> media type schema itself. >> >> Is that correct? > > No. If you place that information inside the media type specs, you couple the spec to the choice of formats. > > Such information should be provided at runtime. For example via the mechanism I sent in my first reply (e.g. HTML's enctype attribute). > > > The media type specs might define that resources that are pointed to by a certain link relation expect a certain kind of information (e.g. 
Orders) but the association of how such an 'order' is represented should not be part of the media type specs. There might be suggestions or examples, but the set of media types that actually make sense are 'determined' by the ones in common use in the given environment (e.g. Web-wide, org-wide). IOW, the client-side developer will pick the types the user agent supports to send from that set of well-known types. > > Jan > >> >> On 6/1/10, Antnio Mota <amsmota@...> wrote: >>> Hi Glen, just a quick note, was your response meant to be for me only or for >>> the list? If for the list you forgot to cc it. >>> >>> Best regards. >>> >>> On 1 Jun 2010 16:40, "Glenn Block" <glenn.block@gmail.com> wrote: >>> >>> That's a good point :-) The RFC is taling about the response, not the >>> content in a POST/PUT. Transparent ContentNeg doesn't address what to send, >>> it addresses finding available representations. >>> >>> Sounds like enctype in the http form sounds or an annotation within a media >>> type like the AtomPub "Accept" or something custom if it is a custom media >>> type would work. >>> >>> Regards >>> Glenn >>> >>> 2010/6/1 Antnio Mota <amsmota@...> >>> >>>> 2010/6/1 Glenn Block <glenn.block@...>: >>>> >>>> >>>>>> If you look at the RFC (as I just did) the server can return the >>>> acceptable >>>>>> list through co... >>>> >>> >> >> -- >> Sent from my mobile device >> >> >> ------------------------------------ >> >> Yahoo! Groups Links >> >> >> > > ----------------------------------- > Jan Algermissen, Consultant > NORD Software Consulting > > Mail: algermissen@... > Blog: http://www.nordsc.com/blog/ > Work: http://www.nordsc.com/ > ----------------------------------- > > > > > > > ------------------------------------ > > Yahoo! Groups Links > > > >
Hi Eric, Comments inline On Tue, Jun 1, 2010 at 5:51 PM, Eric J. Bowman <eric@...> wrote: > Mike Kelly wrote: > > > > Actually, the identifying function of HTTP is URI + any control data. > > > > Absolutely NOT. URIs identify _resources_ the control data is used to > select _representations_ and the two are _not_ the same thing. > You sound like you're agreeing with me; the way Seb uses the term 'identifying function' implied we were talking about representations, not resources, which is what I was addressing: I don't know if you've ever had to develop a non-trivial hypermedia-driven application that needs to service (amongst other clients) browsers via HTML - but this conflation of resource and representation is *exactly* the problem that I am taking issue with.. you can't make a browser negotiate any other type of representation over HTML, which means you end up having to pretend representations are resources and ignoring negotiation altogether in order to make the representations accessible to browsers. > > > > > If the type attribute in links wasn't designed that way.. What > > exactly is the point of it, if it is not intended to affect client > > behavior? There is an argument that if the type attribute wasn't > > designed to support that case then a mistake was made and it was > > poorly defined. > > > > The point of it is to allow us to self-document our APIs. What does that even mean? What is the objective of doing that? What are you documenting if, as you're suggesting, it doesn't make any mechanical difference? > It is a > violation of both the layered-system and identification of resources > constraints to use @type in any other way. Afaik this has nothing to do with either of those constraints. > The server is not to dictate > to the client what media types are acceptable to the client. 
> Sure, sure. Unfortunately, the reality is that users of browsers care about certain representations of resources depending on the context, and the solution used in the browser+HTML world right now is to link *directly* to a media-type-specific URI. So in practice it is actually *no different at all*, and is in fact a much worse solution, since the link itself is less descriptive to the client (the client has no idea the link is intended to be media-type specific; URIs are opaque), and the interaction is less visible to intermediaries (since no negotiation is taking place). > > If you need to directly reference a specific variant, assign it a URI > and send *that* to the client. THAT is the solution. It works. There > is no "problem" left to be solved by borking @type. > > .. I take it you haven't tried designing a RESTful system that handles browser clients, then. Cheers, Mike
OK, thank you again. On 6/1/10, Jan Algermissen <algermissen1971@...> wrote: > > On Jun 1, 2010, at 7:29 PM, Glenn Block wrote: > >> Adding the list >> >> This the problem with email. I started off asking about what to post, >> then at some point we transitioned to discussions about contentneg / >> what to return which didn't answer the first question :-) >> >> It sounds like for sure media type docs will tell you what you need to >> post, and there is also the possibility for annotations within the >> media type schema itself. >> >> Is that correct? > > No. If you place that information inside the media type specs, you couple > the spec to the choice of formats. > > Such information should be provided at runtime. For example via the > mechanism I sent in my first reply (e.g. HTML's enctype attribute). > > > The media type specs might define that resources that are pointed to by a > certain link relation expect a certain kind of information (e.g. Orders) but > the association of how such an 'order' is represented should not be part of > the media type specs. There might be suggestions or examples, but the set of > media types that actually make sense are 'determined' by the ones in common > use in the given environment (e.g. Web-wide, org-wide). IOW, the client-side > developer will pick the types the user agent supports to send from that set > of well-known types. > > Jan > >> >> On 6/1/10, Antnio Mota <amsmota@...> wrote: >>> Hi Glen, just a quick note, was your response meant to be for me only or >>> for >>> the list? If for the list you forgot to cc it. >>> >>> Best regards. >>> >>> On 1 Jun 2010 16:40, "Glenn Block" <glenn.block@...> wrote: >>> >>> That's a good point :-) The RFC is taling about the response, not the >>> content in a POST/PUT. Transparent ContentNeg doesn't address what to >>> send, >>> it addresses finding available representations. 
>>> >>> Sounds like enctype in the http form sounds or an annotation within a >>> media >>> type like the AtomPub "Accept" or something custom if it is a custom >>> media >>> type would work. >>> >>> Regards >>> Glenn >>> >>> 2010/6/1 Antnio Mota <amsmota@gmail.com> >>> >>>> 2010/6/1 Glenn Block <glenn.block@...>: >>>> >>>> >>>>>> If you look at the RFC (as I just did) the server can return the >>>> acceptable >>>>>> list through co... >>>> >>> >> >> -- >> Sent from my mobile device >> >> >> ------------------------------------ >> >> Yahoo! Groups Links >> >> >> > > ----------------------------------- > Jan Algermissen, Consultant > NORD Software Consulting > > Mail: algermissen@... > Blog: http://www.nordsc.com/blog/ > Work: http://www.nordsc.com/ > ----------------------------------- > > > > > -- Sent from my mobile device
2010/6/1 António Mota <amsmota@...>: > Doesn't the spec say that given a media-type application/mytype+xml, if a > client could not recognize the mytype part then it should treat the > representation as if it were only application/xml? Not exactly, it just says it uses XML syntax, but close enough... > If so the +xml has meaning... Sure, it just hasn't proven particularly valuable in practice. Opera does XML namespace dispatching on unknown */*+xml types, but that's a huge security problem (or at least will be at some point in the future), and it's not even standardized. Mark.
On Jun 1, 2010, at 8:35 PM, mike amundsen wrote: > Jan: > > <snip> >>> It sounds like for sure media type docs will tell you what you need to >>> post, and there is also the possibility for annotations within the >>> media type schema itself. >>> >>> Is that correct? >> >> No. If you place that information inside the media type specs, you couple the spec to the choice of formats. >> >> Such information should be provided at runtime. For example via the mechanism I sent in my first reply (e.g. HTML's enctype attribute). > </snip> > > This is a matter of design choices, not hard & fast rules. Really? I'd opt for orthogonal specs every time. I see no disadvantage, only positive effects. > HTML offers > agents a limited number of content-types for POST [1]. Just checked - yes, HTML5 does but HTML4 wisely does not. HTML4 mandates that browsers support certain types but leaves an option for other ones: <http://www.w3.org/TR/REC-html40/interact/forms.html#form-content-type> The choice made by HTML5 is just bad design (does anyone have an idea why the set of types is *not* open in the case of HTML5?) > Atom does not > allow the agent to select a content-type at all What do you mean? Isn't the <accept> element doing just that? > and even has clear > expectations of the format of the XML payload that is acceptable for > POST and PUT [2]. Which is (IMHO) an unnecessary overspecification. It should allow the unspecified case to enable evolution there. > In both cases, these rules appear in the > documentation for that media-type. ... and in both cases I'd consider them bad design *given the design goals underlying Web architecture*. To be clear: Suggestions and hints are IMHO ok, but not limiting the possible set to a fixed number of types. 
Jan > > [1] http://www.w3.org/TR/html5/forms.html#attr-fs-formenctype > [2] http://bitworking.org/projects/atom/rfc5023.html#collection_resource > > mca > http://amundsen.com/blog/ > http://mamund.com/foaf.rdf#me > > > > > On Tue, Jun 1, 2010 at 14:22, Jan Algermissen <algermissen1971@...> wrote: >> >> On Jun 1, 2010, at 7:29 PM, Glenn Block wrote: >> >>> Adding the list >>> >>> This the problem with email. I started off asking about what to post, >>> then at some point we transitioned to discussions about contentneg / >>> what to return which didn't answer the first question :-) >>> >>> It sounds like for sure media type docs will tell you what you need to >>> post, and there is also the possibility for annotations within the >>> media type schema itself. >>> >>> Is that correct? >> >> No. If you place that information inside the media type specs, you couple the spec to the choice of formats. >> >> Such information should be provided at runtime. For example via the mechanism I sent in my first reply (e.g. HTML's enctype attribute). >> >> >> The media type specs might define that resources that are pointed to by a certain link relation expect a certain kind of information (e.g. Orders) but the association of how such an 'order' is represented should not be part of the media type specs. There might be suggestions or examples, but the set of media types that actually make sense are 'determined' by the ones in common use in the given environment (e.g. Web-wide, org-wide). IOW, the client-side developer will pick the types the user agent supports to send from that set of well-known types. >> >> Jan >> >>> >>> On 6/1/10, Antnio Mota <amsmota@...> wrote: >>>> Hi Glen, just a quick note, was your response meant to be for me only or for >>>> the list? If for the list you forgot to cc it. >>>> >>>> Best regards. 
>>>> >>>> On 1 Jun 2010 16:40, "Glenn Block" <glenn.block@...> wrote: >>>> >>>> That's a good point :-) The RFC is taling about the response, not the >>>> content in a POST/PUT. Transparent ContentNeg doesn't address what to send, >>>> it addresses finding available representations. >>>> >>>> Sounds like enctype in the http form sounds or an annotation within a media >>>> type like the AtomPub "Accept" or something custom if it is a custom media >>>> type would work. >>>> >>>> Regards >>>> Glenn >>>> >>>> 2010/6/1 Antnio Mota <amsmota@...> >>>> >>>>> 2010/6/1 Glenn Block <glenn.block@...>: >>>>> >>>>> >>>>>>> If you look at the RFC (as I just did) the server can return the >>>>> acceptable >>>>>>> list through co... >>>>> >>>> >>> >>> -- >>> Sent from my mobile device >>> >>> >>> ------------------------------------ >>> >>> Yahoo! Groups Links >>> >>> >>> >> >> ----------------------------------- >> Jan Algermissen, Consultant >> NORD Software Consulting >> >> Mail: algermissen@... >> Blog: http://www.nordsc.com/blog/ >> Work: http://www.nordsc.com/ >> ----------------------------------- >> >> >> >> >> >> >> ------------------------------------ >> >> Yahoo! Groups Links >> >> >> >> ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@acm.org Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
You sound like you're agreeing with me, the way Seb uses the term 'identifying function' implied we were talking about representations, not resources, which is what I was addressing: No. Please (re?)read my message, the various quotes, and the many extensive conversations on this list. I'm certain I talk about identifying *resources* with URIs. I've been very consistent in the links and explanations I gave you many times as to why your wish to *identify* representations by adding content negotiation to the identification function is redundant, incompatible with the current architecture of the web, and due mostly to your apparent desire to use resource identification as a logical grouping of multiple resources, rather than as what it is right now. You argue that browsers are so limited they can't understand the specificity of the protocol you're describing, and that anyone disagreeing with your point of view has either never worked with browsers or is not telling you what you want to hear. As there are archives of this mailing list, I won't reiterate any of those points. Seb
my final comments inline... mca http://amundsen.com/blog/ http://mamund.com/foaf.rdf#me On Tue, Jun 1, 2010 at 15:56, Jan Algermissen <algermissen1971@...> wrote: > > On Jun 1, 2010, at 8:35 PM, mike amundsen wrote: > >> Jan: >> >> <snip> >>>> It sounds like for sure media type docs will tell you what you need to >>>> post, and there is also the possibility for annotations within the >>>> media type schema itself. >>>> >>>> Is that correct? >>> >>> No. If you place that information inside the media type specs, you couple the spec to the choice of formats. >>> >>> Such information should be provided at runtime. For example via the mechanism I sent in my first reply (e.g. HTML's enctype attribute). >> </snip> >> >> This is a matter of design choices, not hard & fast rules. > > Really? I'd opt for orthogonal specs every time. I see no disadvantage, only positive effects. You're free to opt for whatever you wish. > >> HTML offers >> agents a limited number of content-types for POST [1]. > > Just checked - yes, HTML5 does but HTML4 wisely does not. HTML4 mandates that browsers support certain types but leaves an option for other ones: > <http://www.w3.org/TR/REC-html40/interact/forms.html#form-content-type> Yes, HTML5 mentions text/plain explicitly where HTML4 makes only a passing mention in the very section you cite. Neither prohibits other types (indeed, code-on-demand clients use other types all the time), but I know of no serializations to other common media types (application/json, application/xml) implemented by widely-used browsers. > > The choice made by HTML5 is just bad design (does anyone have an idea why the set of types is *not* open in the case of HTML5?) > >> Atom does not >> allow the agent to select a content-type at all > > What do you mean? Isn't the <accept> element doing just that? Accept is not used for POST/PUT > >> and even has clear >> expectations of the format of the XML payload that is acceptable for >> POST and PUT [2]. 
> > Which is (IMHO) an unnecessary overspecification. It should allow the unspecified case to enable evolution there. Yes, "IMHO" is exactly my point. > >> In both cases, these rules appear in the >> documentation for that media-type. > > ... and in both cases I'd consider them bad design *given the design goals underlying Web architecture*. That's a consideration held by many; hence my reference to design choices. > > To be clear: Suggestions and hints are IMHO ok, but not limiting the possible set to a fixed number of types. > > Jan > > >> >> [1] http://www.w3.org/TR/html5/forms.html#attr-fs-formenctype >> [2] http://bitworking.org/projects/atom/rfc5023.html#collection_resource >> >> mca >> http://amundsen.com/blog/ >> http://mamund.com/foaf.rdf#me >> >> >> >> >> On Tue, Jun 1, 2010 at 14:22, Jan Algermissen <algermissen1971@...> wrote: >>> >>> On Jun 1, 2010, at 7:29 PM, Glenn Block wrote: >>> >>>> Adding the list >>>> >>>> This the problem with email. I started off asking about what to post, >>>> then at some point we transitioned to discussions about contentneg / >>>> what to return which didn't answer the first question :-) >>>> >>>> It sounds like for sure media type docs will tell you what you need to >>>> post, and there is also the possibility for annotations within the >>>> media type schema itself. >>>> >>>> Is that correct? >>> >>> No. If you place that information inside the media type specs, you couple the spec to the choice of formats. >>> >>> Such information should be provided at runtime. For example via the mechanism I sent in my first reply (e.g. HTML's enctype attribute). >>> >>> >>> The media type specs might define that resources that are pointed to by a certain link relation expect a certain kind of information (e.g. Orders) but the association of how such an 'order' is represented should not be part of the media type specs. 
There might be suggestions or examples, but the set of media types that actually make sense are 'determined' by the ones in common use in the given environment (e.g. Web-wide, org-wide). IOW, the client-side developer will pick the types the user agent supports to send from that set of well-known types. >>> >>> Jan >>> >>>> >>>> On 6/1/10, Antnio Mota <amsmota@...> wrote: >>>>> Hi Glen, just a quick note, was your response meant to be for me only or for >>>>> the list? If for the list you forgot to cc it. >>>>> >>>>> Best regards. >>>>> >>>>> On 1 Jun 2010 16:40, "Glenn Block" <glenn.block@...> wrote: >>>>> >>>>> That's a good point :-) The RFC is taling about the response, not the >>>>> content in a POST/PUT. Transparent ContentNeg doesn't address what to send, >>>>> it addresses finding available representations. >>>>> >>>>> Sounds like enctype in the http form sounds or an annotation within a media >>>>> type like the AtomPub "Accept" or something custom if it is a custom media >>>>> type would work. >>>>> >>>>> Regards >>>>> Glenn >>>>> >>>>> 2010/6/1 Antnio Mota <amsmota@...> >>>>> >>>>>> 2010/6/1 Glenn Block <glenn.block@...>: >>>>>> >>>>>> >>>>>>>> If you look at the RFC (as I just did) the server can return the >>>>>> acceptable >>>>>>>> list through co... >>>>>> >>>>> >>>> >>>> -- >>>> Sent from my mobile device >>>> >>>> >>>> ------------------------------------ >>>> >>>> Yahoo! Groups Links >>>> >>>> >>>> >>> >>> ----------------------------------- >>> Jan Algermissen, Consultant >>> NORD Software Consulting >>> >>> Mail: algermissen@... >>> Blog: http://www.nordsc.com/blog/ >>> Work: http://www.nordsc.com/ >>> ----------------------------------- >>> >>> >>> >>> >>> >>> >>> ------------------------------------ >>> >>> Yahoo! Groups Links >>> >>> >>> >>> > > ----------------------------------- > Jan Algermissen, Consultant > NORD Software Consulting > > Mail: algermissen@... 
> Blog: http://www.nordsc.com/blog/ > Work: http://www.nordsc.com/ > ----------------------------------- > > > > >
On Jun 1, 2010, at 10:11 PM, mike amundsen wrote: > my final comments inline... > > mca > http://amundsen.com/blog/ > http://mamund.com/foaf.rdf#me > > > > > On Tue, Jun 1, 2010 at 15:56, Jan Algermissen <algermissen1971@...> wrote: >> >> On Jun 1, 2010, at 8:35 PM, mike amundsen wrote: >> >> >>> Atom does not >>> allow the agent to select a content-type at all >> >> What do you mean? Isn't the <accept> element doing just that? > Accept is not used for POST/PUT > Hmm - but <accept> tells me what I can POST to a collection. Somehow I am missing what you mean...? Jan
Jan: <snip> > Hmm - but <accept> tells me what I can POST to a collection. Somehow I am missing what you mean...? </snip> Possibly I've mis-communicated. My recent replies on this thread addressed the use of content-types with POST & PUT, not GET. The "Accept" header is a way for the agent to indicate preferred responses [1]. The "Content-Type" header is used to indicate what is sent to the recipient [2]. While the HTTP spec has clear details that allow clients to negotiate the details of _response_ representations [3], there are no such instructions on how to negotiate the representation details of request bodies. [1] http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.1 [2] http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.17 [3] http://www.w3.org/Protocols/rfc2616/rfc2616-sec12.html mca http://amundsen.com/blog/ http://mamund.com/foaf.rdf#me On Tue, Jun 1, 2010 at 16:28, Jan Algermissen <algermissen1971@...> wrote: > > On Jun 1, 2010, at 10:11 PM, mike amundsen wrote: > >> my final comments inline... >> >> mca >> http://amundsen.com/blog/ >> http://mamund.com/foaf.rdf#me >> >> >> >> >> On Tue, Jun 1, 2010 at 15:56, Jan Algermissen <algermissen1971@...> wrote: >>> >>> On Jun 1, 2010, at 8:35 PM, mike amundsen wrote: >>> >>> >>>> Atom does not >>>> allow the agent to select a content-type at all >>> >>> What do you mean? Isn't the <accept> element doing just that? >> Accept is not used for POST/PUT >> > > Hmm - but <accept> tells me what I can POST to a collection. Somehow I am missing what you mean...? > > > Jan > >
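The asymmetry described above can be made concrete with a hand-built request (the URI and media types are arbitrary examples): Content-Type labels the request body the client is sending; Accept only negotiates what the client wants back.

```python
# Illustrative only: assembling a raw HTTP/1.1 POST by hand to show which
# header describes which direction of the exchange.

def build_post(uri, body, body_type, preferred_response):
    lines = [
        "POST %s HTTP/1.1" % uri,
        "Host: example.org",
        "Content-Type: %s" % body_type,     # describes the request body
        "Accept: %s" % preferred_response,  # negotiates the response only
        "Content-Length: %d" % len(body),
        "",
        body,
    ]
    return "\r\n".join(lines)
```

Nothing in the message lets the client discover, via HTTP alone, which body_type the server will accept; that knowledge has to come from somewhere else (documentation, or in-band hypermedia such as AtomPub's <accept>).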
On Jun 1, 2010, at 10:50 PM, mike amundsen wrote: > Jan: > > <snip> >> Hmm - but <accept> tells me what I can POST to a collection. Somehow I am missing what you mean...? > </snip> > > Possibly I've mis-communicated. > > My recent replies on this thread were addressed to the use > content-types with POST & PUT, not GET. The "Accept" header Hmm - but I am talking about the <accept> element...... <confused/> Jan > is a way > for the agent to indicate preferred responses [1]. The "Content-Type" > header is used to indicate what is sent to the recipient [2]. > > While HTTP spec has clear details that allow clients to negotiate the > details of _response_ representations [3], there are no such > instructions on how to negotiate the representation details of request > bodies. > > > [1] http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.1 > [2] http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.17 > [3] http://www.w3.org/Protocols/rfc2616/rfc2616-sec12.html > > mca > http://amundsen.com/blog/ > http://mamund.com/foaf.rdf#me > > > > > On Tue, Jun 1, 2010 at 16:28, Jan Algermissen <algermissen1971@...> wrote: >> >> On Jun 1, 2010, at 10:11 PM, mike amundsen wrote: >> >>> my final comments inline... >>> >>> mca >>> http://amundsen.com/blog/ >>> http://mamund.com/foaf.rdf#me >>> >>> >>> >>> >>> On Tue, Jun 1, 2010 at 15:56, Jan Algermissen <algermissen1971@...> wrote: >>>> >>>> On Jun 1, 2010, at 8:35 PM, mike amundsen wrote: >>>> >>>> >>>>> Atom does not >>>>> allow the agent to select a content-type at all >>>> >>>> What do you mean? Isn't the <accept> element doing just that? >>> Accept is not used for POST/PUT >>> >> >> Hmm - but <accept> tells me what I can POST to a collection. Somehow I am missing what you mean...? >> >> >> Jan >> >> ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
Aha! you mean the APP accept element [1]? [1] http://tools.ietf.org/html/rfc5023#section-8.3.4 mca http://amundsen.com/blog/ http://mamund.com/foaf.rdf#me On Tue, Jun 1, 2010 at 16:54, Jan Algermissen <algermissen1971@mac.com> wrote: > > On Jun 1, 2010, at 10:50 PM, mike amundsen wrote: > >> Jan: >> >> <snip> >>> Hmm - but <accept> tells me what I can POST to a collection. Somehow I am missing what you mean...? >> </snip> >> >> Possibly I've mis-communicated. >> >> My recent replies on this thread were addressed to the use >> content-types with POST & PUT, not GET. The "Accept" header > > Hmm - but I am talking about the <accept> element...... > > > <confused/> > > Jan > > >> is a way >> for the agent to indicate preferred responses [1]. The "Content-Type" >> header is used to indicate what is sent to the recipient [2]. >> >> While HTTP spec has clear details that allow clients to negotiate the >> details of _response_ representations [3], there are no such >> instructions on how to negotiate the representation details of request >> bodies. >> >> >> [1] http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.1 >> [2] http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.17 >> [3] http://www.w3.org/Protocols/rfc2616/rfc2616-sec12.html >> >> mca >> http://amundsen.com/blog/ >> http://mamund.com/foaf.rdf#me >> >> >> >> >> On Tue, Jun 1, 2010 at 16:28, Jan Algermissen <algermissen1971@mac.com> wrote: >>> >>> On Jun 1, 2010, at 10:11 PM, mike amundsen wrote: >>> >>>> my final comments inline... >>>> >>>> mca >>>> http://amundsen.com/blog/ >>>> http://mamund.com/foaf.rdf#me >>>> >>>> >>>> >>>> >>>> On Tue, Jun 1, 2010 at 15:56, Jan Algermissen <algermissen1971@...> wrote: >>>>> >>>>> On Jun 1, 2010, at 8:35 PM, mike amundsen wrote: >>>>> >>>>> >>>>>> Atom does not >>>>>> allow the agent to select a content-type at all >>>>> >>>>> What do you mean? Isn't the <accept> element doing just that? 
>>>> Accept is not used for POST/PUT >>>> >>> >>> Hmm - but <accept> tells me what I can POST to a collection. Somehow I am missing what you mean...? >>> >>> >>> Jan >>> >>> > > ----------------------------------- > Jan Algermissen, Consultant > NORD Software Consulting > > Mail: algermissen@... > Blog: http://www.nordsc.com/blog/ > Work: http://www.nordsc.com/ > ----------------------------------- > > > > >
On Jun 1, 2010, at 11:01 PM, mike amundsen wrote: > Aha! > > you mean the APP accept element [1]? Yes. Jan > > [1] http://tools.ietf.org/html/rfc5023#section-8.3.4 > > mca > http://amundsen.com/blog/ > http://mamund.com/foaf.rdf#me > > > > > On Tue, Jun 1, 2010 at 16:54, Jan Algermissen <algermissen1971@...> wrote: >> >> On Jun 1, 2010, at 10:50 PM, mike amundsen wrote: >> >>> Jan: >>> >>> <snip> >>>> Hmm - but <accept> tells me what I can POST to a collection. Somehow I am missing what you mean...? >>> </snip> >>> >>> Possibly I've mis-communicated. >>> >>> My recent replies on this thread were addressed to the use >>> content-types with POST & PUT, not GET. The "Accept" header >> >> Hmm - but I am talking about the <accept> element...... >> >> >> <confused/> >> >> Jan >> >> >>> is a way >>> for the agent to indicate preferred responses [1]. The "Content-Type" >>> header is used to indicate what is sent to the recipient [2]. >>> >>> While HTTP spec has clear details that allow clients to negotiate the >>> details of _response_ representations [3], there are no such >>> instructions on how to negotiate the representation details of request >>> bodies. >>> >>> >>> [1] http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.1 >>> [2] http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.17 >>> [3] http://www.w3.org/Protocols/rfc2616/rfc2616-sec12.html >>> >>> mca >>> http://amundsen.com/blog/ >>> http://mamund.com/foaf.rdf#me >>> >>> >>> >>> >>> On Tue, Jun 1, 2010 at 16:28, Jan Algermissen <algermissen1971@...> wrote: >>>> >>>> On Jun 1, 2010, at 10:11 PM, mike amundsen wrote: >>>> >>>>> my final comments inline... 
>>>>> >>>>> mca >>>>> http://amundsen.com/blog/ >>>>> http://mamund.com/foaf.rdf#me >>>>> >>>>> >>>>> >>>>> >>>>> On Tue, Jun 1, 2010 at 15:56, Jan Algermissen <algermissen1971@...> wrote: >>>>>> >>>>>> On Jun 1, 2010, at 8:35 PM, mike amundsen wrote: >>>>>> >>>>>> >>>>>>> Atom does not >>>>>>> allow the agent to select a content-type at all >>>>>> >>>>>> What do you mean? Isn't the <accept> element doing just that? >>>>> Accept is not used for POST/PUT >>>>> >>>> >>>> Hmm - but <accept> tells me what I can POST to a collection. Somehow I am missing what you mean...? >>>> >>>> >>>> Jan >>>> >>>> >> >> ----------------------------------- >> Jan Algermissen, Consultant >> NORD Software Consulting >> >> Mail: algermissen@... >> Blog: http://www.nordsc.com/blog/ >> Work: http://www.nordsc.com/ >> ----------------------------------- >> >> >> >> >> ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
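[Editor's note: the `<accept>` element Jan and Mike converge on here is defined in RFC 5023, section 8.3.4, as part of an AtomPub service document: it advertises which media types a collection is willing to receive in a POST. A sketch, using a made-up service document:]

```python
# Parsing the APP <accept> elements from a hypothetical AtomPub service
# document (RFC 5023, sec. 8.3.4). <accept> is in-band metadata about
# acceptable *request* bodies -- exactly the gap Mike noted HTTP itself
# does not cover.
import xml.etree.ElementTree as ET

APP = "{http://www.w3.org/2007/app}"

service_doc = """<?xml version="1.0"?>
<service xmlns="http://www.w3.org/2007/app"
         xmlns:atom="http://www.w3.org/2005/Atom">
  <workspace>
    <atom:title>Main</atom:title>
    <collection href="http://example.org/blog">
      <atom:title>Entries</atom:title>
      <accept>application/atom+xml;type=entry</accept>
    </collection>
    <collection href="http://example.org/pics">
      <atom:title>Pictures</atom:title>
      <accept>image/png</accept>
      <accept>image/jpeg</accept>
    </collection>
  </workspace>
</service>"""

root = ET.fromstring(service_doc)
# Map each collection URI to the media types it will accept on POST.
accepts = {
    coll.get("href"): [a.text for a in coll.findall(APP + "accept")]
    for coll in root.iter(APP + "collection")
}
```

So Jan's reading holds: an agent can discover, per collection, what it may POST, even though HTTP proper offers no negotiation for request bodies.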
Mike Kelly wrote: > > Eric J. Bowman wrote: > > > Mike Kelly wrote: > > > > > > Actually, the identifying function of HTTP is URI + any control > > > data. > > > > > > > Absolutely NOT. URIs identify _resources_ the control data is used > > to select _representations_ and the two are _not_ the same thing. > > > > > You sound like you're agreeing with me, the way Seb uses the term > 'identifying function' implied we were talking about representations > not resources, which is what I was addressing: > I assure you, I am not agreeing with you. > > I don't know if you've ever had to develop a non-trivial > hypermedia-driven application that needs to service (amongst other > clients) browsers via HTML > Yes, I've been designing hypermedia-driven HTML applications for browsers since 1993, using conneg since 1998. You? > > > > > If you need to directly reference a specific variant, assign it a > > URI and send *that* to the client. THAT is the solution. It > > works. There is no "problem" left to be solved by borking @type. > > > > > .. I take it you haven't tried designing a RESTful system that handles > browser clients then. > I recently posted an online demo of my under-development REST system, and published the URIs on this very list, so perhaps you need to recall that old adage about making assumptions. It uses content negotiation to handle not only different browsers, but clients that aren't browsers, as well. The conneg is disabled on the static demo, but I assure you it works just fine on the live system -- without borking @type. > > - but this conflation of resource and representation is *exactly* the > problem that I am taking issue with.. you can't make a browser > negotiate any other type of representation over HTML, which means you > end up having to pretend representations are resources and ignoring > negotiation altogether in order to make the representations > accessible to browsers. > But it's a *non* problem. 
Any negotiated resource on my system may respond with Atom or HTML as appropriate to the client. Browsers get XHTML + XSLT, the browser-resident XSLT transcludes the Atom source files. Obviously, I don't want the browser to reference the negotiated resource, because then it will retrieve the HTML instead of the Atom. So, I assign the Atom variant a URI, making it a resource in its own right. The XSLT document() calls may now retrieve Atom regardless of the Accept header sent by the user agent. This is not "pretending" that the Atom representation is a resource. Since there are situations where I want to access Atom and only Atom, why on Earth would I want to dereference a negotiated resource? That Atom and only Atom resource is a *different resource* whose entity happens to overlap with that of some other resource -- which happens all the time in REST. > > > It is a > > violation of both the layered-system and identification of resources > > constraints to use @type in any other way. > > Afaik this is nothing to do with either of those constraints > Since the Atom-and-only-Atom resource is not the same resource (by the definition of resource from Roy's thesis) as the negotiated resource, trying to access that resource without assigning it a URI of its own violates the identification of resources constraint. In order for conneg to work, a user agent must send the Accept header deemed appropriate by its developer(s). The server must respond to that header to negotiate by media type. If the server sends some representation to the client which changes the developers' intended Accept header, it's a blatant violation of the separation of concerns that the layered system constraint is all about. In Web architecture, conneg is reactive, not proactive. The architecture simply does not support having the server tell the client what media types it accepts. 
This is a feature, not a bug, since (as you've been told dozens of times) the problem you are having is soooo easily solved by assigning the desired variant its own URI, making it a resource in its own right, and sending that conneg-free URI to the user agent. That is how to override conneg, not @type. > > > > > > > > > If the type attribute in links wasn't designed that way.. What > > > exactly is the point of it, if it is not intended to affect client > > > behavior? There is an argument that if the type attribute wasn't > > > designed to support that case then a mistake was made and it was > > > poorly defined. > > > > > > > The point of it is to allow us to self-document our APIs. > > What does that even mean? What is the objective of doing that? What > are you documenting if, as you're suggesting, it doesn't make any > mechanical difference? > How can I tell that the stylesheet an HTML page links to is text/css utf-8, without @type and @charset? By making a HEAD request to the CSS target, of course, and that is the only authoritative source of the media type and charset of the resource. Those attributes are simply not necessary to make the Web work. But the purpose of a hypertext API is to have a self-documenting API, one which tells me that I can _expect_ a HEAD request to tell me the resource is utf-8 text/css. Without @type and @charset, all I can document in the hypertext is the URL, which isn't very informative, let alone self-documenting. So @type and @charset, far from being useless, allow us to annotate our hyperlinks with metadata (which, in a properly designed system, will happen to be exactly accurate). If you assign these attributes some other role, then some other attributes will need to be created to fill the annotation role @type and @charset were intended to fill. So instead of saying that everyone else is wrong and these attributes are somehow broken, shouldn't you be proposing the addition of some new attributes with your desired semantics? 
I'd hate to see old work start to break because browsers suddenly decide to take @type and @charset literally, which is what would happen. > > > The server is not to dictate > > to the client what media types are acceptable to the client. > > > > Sure sure, unfortunately the reality is that users of browsers care > about certain representations of resources depending on the context > That's sorta right. The user could frankly care less what representation the server sends, provided that the resulting steady- state works. In the context of browser-resident XSLT transformation, I obviously want the browser to retrieve Atom-and-only-Atom, so I link to the Atom-and-only-Atom resource in the XSLT instead of the negotiated resource which also happens to, given the proper Accept header, return the same exact Atom variant. But that isn't a problem. See REST's discussion of "author's preferred version", there's no constraint which says multiple resources can't return the same representation when dereferenced. If you have a different context (like needing the Atom variants for XSLT), then you arguably have a different resource, so give it its own URI instead of wasting everyone's time trying to find fault with that perfectly- functional, time-tested, Web-proven, RESTful solution. > > and the solution used in the browser+html world right now is to link > *directly* to a media-type specific URI, so in practice it is > actually *no different at all*, and is in fact a much worse solution > since the link itself is less descriptive to the client (the client > has no idea the link is intended to be media type specific, URIs are > opaque), and the interaction is less visible to intermediaries (since > no negotiation is taking place). > Uhhh, how is an interaction without conneg *less* visible than one with conneg? That's just wrong. 
The solution of assigning some variant its own URI so it may be referred to outside of the conneg context, is called the "identification of resources" constraint. It is not a "worse solution" and I don't begin to see how it could be. The client may be informed unambiguously using @type and @charset what to expect, and could frankly care less that the representation also happens to be part of the set of representations of some other resource. Hitting a conneg URI on my system with a browser will return an HTML response. If that representation isn't appropriate, the user agent may present the user with alternatives, taken from either an Alternates header or a bunch of <link/> tags in the <head> listing each variant, its URI, and its @type/@charset -- nothing could be clearer when they all have rel='alternate'. As always, my advice when using conneg is to assign each variant its own URI, and return that URI in Content-Location such that caching will work, with the exception of negotiating for compression. This is not a problem, it is BEST PRACTICE. -Eric
I'm with Eric on this. As an example, each @type has its own lanneg+conneg 'leaf' URI on my CV website http://about.alan-dean.com/ which is controlled from the server using Accept and Accept-Language from a root URI. For lanneg, it falls back to 'en' rather than giving an error whilst conneg will throw an Unsupported Type error as appropriate. I'm not using the browser-side XML+XSLT trick though; my (X)HTML is server-rendered. The one PITA is that IE *still* (sigh) doesn't recognise application/xhtml+xml on the wire and so thinks that http://www.alan-dean.com/about.en.xhtml is a download. This means that XHTML URIs aren't actually universal page links, unfortunately. You can also see the same lanneg and conneg in action on http://example.moveme.com/zoopla/letters alongside the standard PRG pattern (for non-UK readers, you can use "AA1 1AA" as a postcode and "01234 567 890" as a telephone number). It's only a test site, so feel free to use an @... email address. Please don't report any errors to me as I left moveme.com last Friday. Regards, Alan Dean On Wed, Jun 2, 2010 at 00:56, Eric J. Bowman <eric@...> wrote: > > > Mike Kelly wrote: > > > > > Eric J. Bowman wrote: > > > > > Mike Kelly wrote: > > > > > > > > Actually, the identifying function of HTTP is URI + any control > > > > data. > > > > > > > > > > Absolutely NOT. URIs identify _resources_ the control data is used > > > to select _representations_ and the two are _not_ the same thing. > > > > > > > > > You sound like you're agreeing with me, the way Seb uses the term > > 'identifying function' implied we were talking about representations > > not resources, which is what I was addressing: > > > > I assure you, I am not agreeing with you. 
> > > > > > I don't know if you've ever had to develop a non-trivial > > hypermedia-driven application that needs to service (amongst other > > clients) browsers via HTML > > > > Yes, I've been designing hypermedia-driven HTML applications for > browsers since 1993, using conneg since 1998. You? > > > > > > > > > > If you need to directly reference a specific variant, assign it a > > > URI and sent *that* to the client. THAT is the solution. It > > > works. There is no "problem" left to be solved by borking @type. > > > > > > > > .. I take it you haven't tried designing a RESTful system that handles > > browser clients then. > > > > I recently posted an online demo of my under-development REST system, > and published the URIs on this very list, so perhaps you need to recall > that old adage about making assumptions. It uses content negotiation > to handle not only different browsers, but clients that aren't browsers, > as well. The conneg is disabled on the static demo, but I assure you > it works just fine on the live system -- without borking @type. > > > > > > - but this conflation of resource and representation is *exactly* the > > problem that I am taking issue with.. you can't make a browser > > negotiate any other type of representation over HTML, which means you > > end up having to pretend representations are resources and ignoring > > negotiation altogether in order to make the representations > > accessible to browsers. > > > > But it's a *non* problem. Any negotiated resource on my system may > respond with Atom or HTML as appropriate to the client. Browsers get > XHTML + XSLT, the browser-resident XSLT transcludes the Atom source > files. Obviously, I don't want the browser to reference the negotiated > resource, because then it will retrieve the HTML instead of the Atom. > > So, I assign the Atom variant a URI, making it a resource in its own > right. 
The XSLT document() calls may now retrieve Atom regardless of > the Accept header sent by the user agent. This is not "pretending" > that the Atom representation is a resource. > > Since there are situations where I want to access Atom and only Atom, > why on Earth would I want to dereference a negotiated resource? That > Atom and only Atom resource is a *different resource* whose entity > happens to overlap with that of some other resource -- which happens all > the time in REST. > > > > > > > It is a > > > violation of both the layered-system and identification of resources > > > constraints to use @type in any other way. > > > > Afaik this is nothing to do with either of those constraints > > > > Since the Atom-and-only-Atom resource is not the same resource (by the > definition of resource from Roy's thesis) as the negotiated resource, > trying to access that resource without assigning it a URI of its own > violates the identification of resources constraint. > > In order for conneg to work, a user agent must send the Accept header > deemed appropriate by its developer(s). The server must respond to > that header to negotiate by media type. If the server sends some > representation to the client which changes the developers' intended > Accept header, it's a blatant violation of the separation of concerns > that the layered system constraint is all about. > > In Web architecture, conneg is reactive, not proactive. The > architecture simply does not support having the server tell the client > what media types it accepts. This is a feature, not a bug, since (as > you've been told dozens of times) the problem you are having is soooo > easily solved by assigning the desired variant its own URI, making it a > resource in its own right, and sending that conneg-free URI to the user > agent. That is how to override conneg, not @type. > > > > > > > > > > > > > > > If the type attribute in links wasn't designed that way.. 
What > > > > exactly is the point of it, if it is not intended to affect client > > > > behavior? There is an argument that if the type attribute wasn't > > > > designed to support that case then a mistake was made and it was > > > > poorly defined. > > > > > > > > > > The point of it is to allow us to self-document our APIs. > > > > What does that even mean? What is the objective of doing that? What > > are you documenting if, as you're suggesting, it doesn't make any > > mechanical difference? > > > > How can I tell that the stylesheet an HTML page links to is text/css > utf-8, without @type and @charset? By making a HEAD request to the CSS > target, of course, and that is the only authoritative source of the > media type and charset of the resource. Those attributes are simply > not necessary to make the Web work. > > But the purpose of a hypertext API is to have a self-documenting API, > one which tells me that I can _expect_ a HEAD request to tell me the > resource is utf-8 text/css. Without @type and @charset, all I can > document in the hypertext is the URL, which isn't very informative, let > alone self-documenting. > > So @type and @charset, far from being useless, allow us to annotate our > hyperlinks with metadata (which, in a properly designed system, will > happen to be exactly accurate). If you assign these attributes some > other role, then some other attributes will need to be created to fill > the annotation role @type and @charset were intended to fill. > > So instead of saying that everyone else is wrong and these attributes > are somehow broken, shouldn't you be proposing the addition of some new > attributes with your desired semantics? I'd hate to see old work start > to break because browsers suddenly decide to take @type and @charset > literally, which is what would happen. > > > > > > > The server is not to dictate > > > to the client what media types are acceptable to the client. 
> > > > > > > Sure sure, unfortunately the reality is that users of browsers care > > about certain representations of resources depending on the context > > > > That's sorta right. The user could frankly care less what > representation the server sends, provided that the resulting steady- > state works. In the context of browser-resident XSLT transformation, I > obviously want the browser to retrieve Atom-and-only-Atom, so I link to > the Atom-and-only-Atom resource in the XSLT instead of the negotiated > resource which also happens to, given the proper Accept header, return > the same exact Atom variant. > > But that isn't a problem. See REST's discussion of "author's preferred > version", there's no constraint which says multiple resources can't > return the same representation when dereferenced. If you have a > different context (like needing the Atom variants for XSLT), then you > arguably have a different resource, so give it its own URI instead of > wasting everyone's time trying to find fault with that perfectly- > functional, time-tested, Web-proven, RESTful solution. > > > > > > and the solution used in the browser+html world right now is to link > > *directly* to a media-type specific URI, so in practice it is > > actually *no different at all*, and is in fact a much worse solution > > since the link itself is less descriptive to the client (the client > > has no idea the link is intended to be media type specific, URIs are > > opaque), and the interaction is less visible to intermediaries (since > > no negotiation is taking place). > > > > Uhhh, how is an interaction without conneg *less* visible than one with > conneg? That's just wrong. The solution of assigning some variant its > own URI so it may be referred to outside of the conneg context, is > called the "identification of resources" constraint. It is not a > "worse solution" and I don't begin to see how it could be. 
The client > may be informed unambiguously using @type and @charset what to expect, > and could frankly care less that the representation also happens to be > part of the set of representations of some other resource. > > Hitting a conneg URI on my system with a browser will return an HTML > response. If that representation isn't appropriate, the user agent may > present the user with alternatives, taken from either an Alternates > header or a bunch of <link/> tags in the <head> listing each variant, > its URI, and its @type/@charset -- nothing could be clearer when they > all have rel='alternate'. > > As always, my advice when using conneg is to assign each variant its > own URI, and return that URI in Content-Location such that caching will > work, with the exception of negotiating for compression. This is not a > problem, it is BEST PRACTICE. > > -Eric > > >
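[Editor's note: Alan's setup at the top of his reply pairs a forgiving language negotiation (fall back to 'en' rather than erroring) with a strict media-type negotiation (error on an unsupported type). A sketch of that asymmetry; the available sets and helper names are illustrative, not Alan's actual code.]

```python
# Lanneg falls back to a default; conneg refuses with 406 Not Acceptable.

AVAILABLE_LANGS = {"en", "de"}
AVAILABLE_TYPES = {"application/xhtml+xml", "application/pdf"}

def pick_language(accept_language):
    """Return the first available language, else default to 'en'."""
    for item in accept_language.split(","):
        lang = item.split(";")[0].strip().lower()
        if lang in AVAILABLE_LANGS:
            return lang
    return "en"  # lanneg is forgiving: default rather than error

def pick_media_type(accept):
    """Return the first available media type, else signal 406."""
    for item in accept.split(","):
        media_type = item.split(";")[0].strip()
        if media_type in AVAILABLE_TYPES:
            return media_type
    raise ValueError("406 Not Acceptable")  # conneg is strict

lang = pick_language("fr, es;q=0.8")    # no match, so falls back to 'en'
media = pick_media_type("application/pdf")
```

The different failure modes are a deliberate design choice: a page in the wrong language is still usable, whereas a representation in an unsupported format is not.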
Glenn, On May 31, 2010, at 7:44 PM, Glenn Block wrote: > Lunch sounds great to me, it will probably work better with my schedule :-) Will this only be lunch or are we planning for lunch+afternoon+drinkies? Glenn, will *you* set the exact date (so we can arrange travel?). Jan > > On 5/31/10, Sebastien Lambla <seb@...> wrote: >> I'd rather suggest going somewhere, grab a lunch and spend some time in the >> afternoon chatting. There's lovely places to go in central London. >> >> I'm free most of that week, so will take whatever time is needed. >> >> Seb >> >> From: rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com] On >> Behalf Of Glenn Block >> Sent: 31 May 2010 09:05 >> To: Alan Dean >> Cc: Jan Algermissen; REST Discuss >> Subject: Re: [rest-discuss] Coming to the UK >> >> >> >> >> Yes, I won't be at MS offices though :-) (though I may stop in Reading at >> some point) >> >> I am doing an event plus a set of user group talks. >> On Mon, May 31, 2010 at 1:03 AM, Alan Dean >> <alan.dean@...<mailto:alan.dean@...>> wrote: >> Jan, >> >> Microsoft have two main offices in the UK: one in central London near >> Victoria Station and the other in Reading at Thames Valley Park. Both are >> easily reachable from Heathrow. >> >> Regards, >> Alan Dean >> On Mon, May 31, 2010 at 08:58, Jan Algermissen >> <algermissen1971@...<mailto:algermissen1971@...>> wrote: >> >> >> Glenn, >> >> >> On May 31, 2010, at 9:47 AM, Glenn Block wrote: >> >>> >>> >>> Hi guys >>> >>> I think I mentioned this, but I'll be in the UK the week for July 13 to >>> 18. >> Meet you there any of those days. Perfect match to my schedule. >> >> "UK" meaning "London" or some rural place? >> >> Possible to nail that down quickly to catch early booking rates? >> >> Jan >> >> >>> I've spoken to Sebastian about a little "RESTful" get together :-) Anyone >>> down (assuming you are in that neck of the woods). 
>>> >>> Thanks >>> Glenn >>> >>> >>> >> ----------------------------------- >> Jan Algermissen, Consultant >> NORD Software Consulting >> >> Mail: algermissen@...<mailto:algermissen%40acm.org> >> Blog: http://www.nordsc.com/blog/ >> Work: http://www.nordsc.com/ >> ----------------------------------- >> >> >> >> >> >> >> > > -- > Sent from my mobile device ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
I can guarantee you lunch + afternoon + drinkies in central London if that helps. :) -----Original Message----- From: Jan Algermissen [mailto:algermissen1971@...] Sent: 02 June 2010 08:24 To: Glenn Block Cc: Sebastien Lambla; Alan Dean; REST Discuss Subject: Re: Coming to the UK Glenn, On May 31, 2010, at 7:44 PM, Glenn Block wrote: > Lunch sounds great to me, it will probably work better with my > schedule :-) Will this only be lunch or are we planning for lunch+afternoon+drinkies? Glenn, will *you* set the exact date (so we can arrange travel?). Jan > > On 5/31/10, Sebastien Lambla <seb@...> wrote: >> I'd rather suggest going somewhere, grab a lunch and spend some time >> in the afternoon chatting. There's lovely places to go in central London. >> >> I'm free most of that week, so will take whatever time is needed. >> >> Seb >> >> From: rest-discuss@yahoogroups.com >> [mailto:rest-discuss@yahoogroups.com] On Behalf Of Glenn Block >> Sent: 31 May 2010 09:05 >> To: Alan Dean >> Cc: Jan Algermissen; REST Discuss >> Subject: Re: [rest-discuss] Coming to the UK >> >> >> >> >> Yes, I won't be at MS offices though :-) (though I may stop in >> Reading at some point) >> >> I am doing an event plus a set of user group talks. >> On Mon, May 31, 2010 at 1:03 AM, Alan Dean >> <alan.dean@...<mailto:alan.dean@...>> wrote: >> Jan, >> >> Microsoft have two main offices in the UK: one in central London near >> Victoria Station and the other in Reading at Thames Valley Park. Both >> are easily reachable from Heathrow. >> >> Regards, >> Alan Dean >> On Mon, May 31, 2010 at 08:58, Jan Algermissen >> <algermissen1971@...<mailto:algermissen1971@...>> wrote: >> >> >> Glenn, >> >> >> On May 31, 2010, at 9:47 AM, Glenn Block wrote: >> >>> >>> >>> Hi guys >>> >>> I think I mentioned this, but I'll be in the UK the week for July 13 >>> to 18. >> Meet you there any of those days. Perfect match to my schedule. >> >> "UK" meaning "London" or some rural place? 
>> >> Possible to nail that down quickly to catch early booking rates? >> >> Jan >> >> >>> I've spoken to Sebastian about a little "RESTful" get together :-) >>> Anyone down (assuming you are in that neck of the woods). >>> >>> Thanks >>> Glenn >>> >>> >>> >> ----------------------------------- >> Jan Algermissen, Consultant >> NORD Software Consulting >> >> Mail: algermissen@...<mailto:algermissen%40acm.org> >> Blog: http://www.nordsc.com/blog/ >> Work: http://www.nordsc.com/ >> ----------------------------------- >> >> >> >> >> >> >> > > -- > Sent from my mobile device ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
I should know tomorrow. On Wednesday, June 2, 2010, Sebastien Lambla <seb@...> wrote: > I can guarantee you lunch + afternoon + drinkies in central London if that helps. :) > > -----Original Message----- > From: Jan Algermissen [mailto:algermissen1971@...] > Sent: 02 June 2010 08:24 > To: Glenn Block > Cc: Sebastien Lambla; Alan Dean; REST Discuss > Subject: Re: Coming to the UK > > Glenn, > On May 31, 2010, at 7:44 PM, Glenn Block wrote: > >> Lunch sounds great to me, it will probably work better with my >> schedule :-) > > Will this only be lunch or are we planning for lunch+afternoon+drinkies? > > Glenn, will *you* set the exact date (so we can arrange travel?). > > Jan > > >> >> On 5/31/10, Sebastien Lambla <seb@...> wrote: >>> I'd rather suggest going somewhere, grab a lunch and spend some time >>> in the afternoon chatting. There's lovely places to go in central London. >>> >>> I'm free most of that week, so will take whatever time is needed. >>> >>> Seb >>> >>> From: rest-discuss@yahoogroups.com >>> [mailto:rest-discuss@yahoogroups.com] On Behalf Of Glenn Block >>> Sent: 31 May 2010 09:05 >>> To: Alan Dean >>> Cc: Jan Algermissen; REST Discuss >>> Subject: Re: [rest-discuss] Coming to the UK >>> >>> >>> >>> >>> Yes, I won't be at MS offices though :-) (though I may stop in >>> Reading at some point) >>> >>> I am doing an event plus a set of user group talks. >>> On Mon, May 31, 2010 at 1:03 AM, Alan Dean >>> <alan.dean@...<mailto:alan.dean@...>> wrote: >>> Jan, >>> >>> Microsoft have two main offices in the UK: one in central London near >>> Victoria Station and the other in Reading at Thames Valley Park. Both >>> are easily reachable from Heathrow. 
>>> >>> Regards, >>> Alan Dean >>> On Mon, May 31, 2010 at 08:58, Jan Algermissen >>> <algermissen1971@...<mailto:algermissen1971@...>> wrote: >>> >>> >>> Glenn, >>> >>> >>> On May 31, 2010, at 9:47 AM, Glenn Block wrote: >>> >>>> >>>> >>>> Hi guys >>>> >>>> I think I mentioned this, but I'll be in the UK the week for July 13 >>>> to 18. >>> Meet you there any of those days. Perfect match to my schedule. >>> >>> "UK" meaning "London" or some rural place? >>> >>> Possible to nail that down quickly to catch early booking rates? >>> >>> Jan >>> >>> >>>> I've spoken to Sebastian about a little "RESTful" get together :-) >>>> Anyone down (assuming you are in that neck of the woods). >>>> >>>> Thanks >>>> Glenn >>>> >>>> >>>> >>> ----------------------------------- >>> Jan Algermissen, Consultant >>> NORD Software Consulting >>> >>> Mail: algermissen@...<mailto:algermissen%40acm.org> >>> Blog: http://www.nordsc.com/blog/ >>> Work: http://www.nordsc.com/ >>> -----------------------------------
Hi Glenn, Fwiw some other things beyond the central role of hypermedia that personally I'd like to see: 1.) Creating new resources should be cheap and hassle free. Don't make me create a new domain model entity on which I have to call to_xml or similar just to create a new resource (HTML representations thankfully don't require me to create a new domain model with a to_html method every time I simply need a new web page). 2.) Resources should be decoupled from the domain model: see above. Generally my resources are at least a superset of my domain model entities. My apps frequently seem to end up using a Two Step View pattern, where the first step creates the resource and the second step creates the representation of that resource. Yes it should be trivial for me to create a resource based on a domain model entity, but it should be equally trivial to create a resource that is not. 3.) Please consider integrating your new RESTful parts of WCF into ASP.NET MVC - don't make us use ASP.NET MVC for HTML-based REST applications and WCF for other media types :) 4.) Following on from the above, in MVC terms Resource and Representation should be View concerns not Model concerns. You already have the basis of this in ViewData<T> and ViewPage<T>. I would like to see you do something like extend ViewData<T> into a Resource<T> finite state machine that includes a list of States and an IList<State> PermittedTransitions for each state. You could then extend ViewPage<T> as Representation<PublicDomainMediaType> that included a IList<HyperText> GenerateHypermedia(IList<State> permittedTransitions) or similar, and was responsible for rendering Resources as Representations. How could this be hooked into WCF? 
Well Resource<T> would actually be performing a very similar role to a WCF DataContract, but between Model/Controller and View rather than Service and Client (I guess an alternative approach would be to extend a WCF DataContract into a FiniteStateMachineContract and use that as the resource?). This would additionally mean that the presentation tier of a web app could be managed and version controlled as essentially a separate application, which to me would be a very good thing indeed. 5.) Finally could you speak to the IIS team about making their product more HTTP 1.1 compliant :) (e.g. things like full HTTP verb support for default documents) HTH cheers Julian From: rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com] On Behalf Of Guilherme Silveira Sent: 01 June 2010 18:55 To: Jim Webber Cc: Bill de hOra; Rest Discussion Group Subject: Re: [rest-discuss] Thinking about REST and HTTP Hello Bill, >> I'm not certain that today's JAX-RS offers much more than today's WCF in >> terms of REST support. If Glenn's team are going to do "REST like they >> meant it" to paraphrase Guilherme, I don't think that JAX-RS is the >> right way to go. > > But that's just an opinion. Or is there some technical criticism as well? It seems like the client side of REST was not so clear at that time, and there were not so many attempts to create generic consumers. Every service provided their "own specific REST APIs" for their "specific REST services", i.e. twitter, facebook, and hundreds of others. The first JAX-RS spec did not take hypermedia into account, so if you think about REST without hypermedia, it will not be a problem. But it seems like REST depends on using hypermedia, right? If you believe so and want your consumers to use hypermedia, using a Java framework, you have to rely on Restfulie, Jersey and Restlet, which are trying to do so. As Paul mentioned, it's a matter of time for it to enter the JAX-RS specs. 
Regards Guilherme Silveira Caelum | Ensino e Inovação http://www.caelum.com.br/ 2010/6/1 Jim Webber <jim@...> Hello Bill, >> I'm not certain that today's JAX-RS offers much more than today's WCF in >> terms of REST support. If Glenn's team are going to do "REST like they >> meant it" to paraphrase Guilherme, I don't think that JAX-RS is the >> right way to go. > > But that's just an opinion. Or is there some technical criticism as well? It's an opinion - I don't have any carefully gathered empirical evidence to back it up. However both WCF and JAX-RS avoid hypermedia, which is pretty important for RESTful solutions. In other bits of that email, I pointed out that some JAX-RS compliant frameworks (e.g. Jersey) are now experimenting with hypermedia which makes them much more useful if the abstractions come out right. In terms of technical critique, I think JAX-RS comes out ahead of WCF because it's marginally easier to TDD with it, and so much more of the framework is above the waterline rather than buried down deep. However at this point both frameworks are simply nicer programmatic interfaces atop a Web server, and both short-change client-side developers (with JAX-RS again being better than WCF). Since neither has hypermedia support from the start, retrofitting it at this point may result in horrid abstractions. That's why I'm broadly supportive of Glenn's outreach and very supportive of the people behind Restfulie (as well as being encouraged by the steps the Jersey team are taking). Jim This e-mail (and any attachments) is confidential and may contain personal views which are not the views of the BBC unless specifically stated. If you have received it in error, please delete it from your system. Do not use, copy or disclose the information in any way nor act in reliance on it and notify the sender immediately. Please note that the BBC monitors e-mails sent or received. 
Further communication will signify your consent to this. This e-mail has been sent by one of the following wholly-owned subsidiaries of the BBC: BBC Worldwide Limited, Registration Number: 1420028 England, Registered Address: BBC Media Centre, 201 Wood Lane, London, W12 7TQ BBC World News Limited, Registration Number: 04514407 England, Registered Address: BBC Media Centre, 201 Wood Lane, London, W12 7TQ BBC World Distribution Limited, Registration Number: 04514408, Registered Address: BBC Media Centre, 201 Wood Lane, London, W12 7TQ
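Julian's Resource<T> idea above, a resource modelled as a finite state machine whose permitted transitions drive hypermedia generation, can be sketched generically. This is a minimal illustration of the pattern, not WCF API; all names (Resource, render_hypermedia, the order states) are invented for the example:

```python
# Sketch of a resource as a finite state machine: each state lists the
# transitions (link rels) a representation is permitted to expose.
class Resource:
    def __init__(self, state, transitions):
        # transitions: state -> list of permitted transition names (rels)
        self.state = state
        self.transitions = transitions

    def permitted_transitions(self):
        return self.transitions.get(self.state, [])

def render_hypermedia(resource, uri_for):
    # The representation layer turns permitted transitions into links;
    # the domain model never needs to know about URIs.
    return [{"rel": rel, "href": uri_for(rel)}
            for rel in resource.permitted_transitions()]

order = Resource("pending", {
    "pending": ["approve", "hold"],
    "approved": ["ship"],
})
links = render_hypermedia(order, lambda rel: f"/orders/42/{rel}")
```

The point of the split is the same as in the proposal: the state machine is a Model/Controller concern, while turning permitted transitions into concrete links is a View concern.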
As Julian is making those points, I'll personally say that I don't see adding this functionality to either the existing WCF or the existing MVC as a worthwhile effort. Where the suckage lives at the moment is in the infrastructure that supports application frameworks such as MVC or WCF. It's a big mess, the asp.net codebase is outdated, etc etc. The work that has been done in OpenRasta has been mostly stripping out all the dependencies on all the legacy .net code that exists in System.Web, and that has taken away time that could've been used providing more application-level features. As for the programming model, I won't comment on that part for very obvious conflict of interest reasons. :) ________________________________ From: rest-discuss@yahoogroups.com [rest-discuss@yahoogroups.com] on behalf of Julian Everett [julian.everett@...] Sent: 02 June 2010 12:08 To: Guilherme Silveira; Jim Webber; Glenn Block Cc: Bill de hOra; Rest Discussion Group Subject: RE: [rest-discuss] Thinking about REST and HTTP
On Wed, Jun 2, 2010 at 12:56 AM, Eric J. Bowman <eric@...> wrote: > That > Atom and only Atom resource is a *different resource* whose entity > happens to overlap with that of some other resource -- which happens all > the time in REST. > Agreed, that does happen all the time, but at a significant cost to visibility since it results in invisible resource dependencies. Reduced visibility directly impacts your ability to leverage the layered constraint which, amongst other things, weakens your ability to compensate for the inefficiencies that emerge from the REST style. That is a problem. Particularly at scale. Bizarrely, the research I've been doing on cache invalidation[1] is all based around ways to mitigate the sorts of problems that kind of reduced visibility causes. The solution, at least the one I've proposed, requires extending the system's uniform interface in order to compensate. The point being: you don't get it for free - and in my opinion it actually solves a problem that could be avoided altogether by re-evaluating the mechanisms that encourage the practices which cause these issues to arise in the first place. Cheers, Mike [1] http://restafari.blogspot.com/2010/04/link-header-based-invalidation-of.html
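The general shape of Link-header-based cache invalidation can be sketched as follows. This is an illustration of the idea, not necessarily the exact mechanism from the linked post; the rel="invalidates" relation and the in-memory cache are assumptions made up for the example:

```python
# Sketch: a cache that evicts entries named by a (hypothetical)
# rel="invalidates" link on an unsafe response, so that a POST which
# changes /orders can also invalidate the cached /orders collection.
import re

cache = {"/orders": "<cached collection>", "/orders/42": "<cached order>"}

def parse_invalidates(link_header):
    # Pull out target URIs from Link values flagged rel="invalidates".
    return re.findall(r'<([^>]+)>\s*;\s*rel="invalidates"', link_header)

def on_response(method, link_header):
    # Only unsafe methods can invalidate cached representations.
    if method in ("POST", "PUT", "DELETE"):
        for uri in parse_invalidates(link_header):
            cache.pop(uri, None)

on_response("POST", '</orders>; rel="invalidates"')
```

This is exactly the "extending the uniform interface" trade-off Mike describes: the mechanism only works because intermediaries are taught a new link relation.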
In yesterday's thread the topic of transparent content neg was brought up. ( http://tools.ietf.org/html/rfc5023#section-8.3.4) How critical / useful is relying on transparent content neg? Can anyone give me some concrete use cases? Regards Glenn
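For context, the transparent content negotiation mechanism itself is defined in RFC 2295: the server lists its variants in an Alternates header and the client (or a proxy) chooses one. A minimal sketch of such a response (the filenames and quality values are illustrative):

```
HTTP/1.1 300 Multiple Choices
TCN: list
Alternates: {"report.html" 0.9 {type text/html}},
            {"report.pdf" 0.7 {type application/pdf}}
Vary: negotiate, accept
```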
I get HATEOAS for human interaction. I am trying to understand pure m2m benefits. Think of a scenario of an Order management server host. I can imagine that when I retrieve an Order resource I can perform several "actions" on that order like Approve, Hold, etc. I would see these as links. In this case if I had a machine talking to such a host, it can post to those links with the advantage being that the agent / calling code doesn't have coupling to where those links point to or how the URIs are formed. However the client code which interacts with the agent has some idea that Approve and Hold exist, so it is coupled in that sense. I started to think about m2m scenarios (other than web crawlers) where I could truly leverage the full decoupling that HATEOAS offers, i.e. an adaptable system that only knows to look for types of links but doesn't expect any specific instances of those links (like Approve) to be present. I thought back to my past life prior to MS when I used to work in the financial services industry. In those days we had to do a ton of back end processing off data we received from multiple financial sources over FTP. I built a generic FTP scanner that looked for files, ripped them open, parsed them and then started executing all types of rules (I blame Jan partially for forcing me down this mental road). There was basically zero human interaction other than monitoring the status of the processing. That system was coupled: it required data to be sent in a specific schema and had a predefined set of rules that were implicit as part of the schema. *Disclaimer: Thought experiment follows* Now let's say I decided to design such a system today in a RESTful manner around resources and HATEOAS. In this case I am imagining I have a Jobs\Pending resource. I do a GET on that resource and get back a list of jobs that are pending to be processed. Now each Job has rules that have to execute. 
So in this world, I am thinking the rules are links, with each link pointing to a resource that handles processing that rule. Or maybe the link returns me some code that gets executed on the fly. Anyway the advantage I saw in such a system is that the institutions that work with me can create any arbitrary set of rules they like. My system which processes jobs doesn't know anything about the rules or care. All it knows is that it has some URI that it can post to (or possibly GET) in order to execute some arbitrary logic. I now have an autonomous system that can easily adapt to new requirements. I am truly reaping the benefits of HATEOAS in such a system. Thoughts? Glenn
Correction to typos in the last para: Anyway the advantage I saw in such a system is that the institutions that work with me can create any arbitrary set of rules they like. My system which processes jobs doesn't know anything about the types of rules or care. All it knows is that it has some URI that it can post to (or possibly GET) in order to execute some arbitrary logic. I now have an autonomous system that can easily adapt to new requirements. I am truly reaping the benefits of HATEOAS in such a system.
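The job-processing loop in the thought experiment above can be sketched as follows. The document shape, the "rule" rel, and the get/post helpers are all assumptions made for illustration:

```python
# Sketch: a job processor that never constructs rule URIs; it simply
# follows whatever "rule" links each pending job carries, so partners
# can add arbitrary rules without the processor changing.
def process_pending_jobs(get, post):
    # get/post stand in for an HTTP client; get returns parsed documents.
    jobs = get("/jobs/pending")["jobs"]
    results = []
    for job in jobs:
        for link in job["links"]:
            if link["rel"] == "rule":
                # Execute the rule by POSTing the job data to the link target.
                results.append(post(link["href"], job["data"]))
    return results

# Fake in-memory transport for demonstration.
docs = {"/jobs/pending": {"jobs": [
    {"data": {"id": 1},
     "links": [{"rel": "rule", "href": "http://partner.example/check"}]},
]}}
executed = []
results = process_pending_jobs(
    docs.__getitem__,
    lambda href, data: executed.append((href, data["id"])) or "ok")
```

Note the processor knows the link relation "rule" and the media type's shape, but nothing about which rules exist or where they live.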
At that level, your client is "pretty stupid". It's simply iterating
through links in some pre-determined manner. May as well smash the "Next
Line" button in your debugger.
That's not, IMHO, what HATEOAS is about, even, heck, especially, in m2m
scenarios.
HATEOAS allows for several things.
It allows for URIs to be truly opaque. Simply, beyond a few well defined,
"cool URI" entry points, you never need to create, or generate a URI. They
are all given to you by the application. So, you can easily see, for
example, an application at sales.example.com directing you to
receiving.example.com, without your system even being aware of it -- because
your system never needs to look inside of the URIs.
Why is it going to receiving.example.com? Who cares. That's not your
concern. Your concern is that the URI responds properly as the
defined media type said it would respond (i.e. when you go to the #shipping
rel, with a xml/vnd.shipping document, the Right Stuff happens). Meanwhile
the backend has all sorts of flexibility regarding allocation of resources,
application distribution, etc.
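The opaque-URI point can be made concrete: a client that only selects links by rel never notices when the server moves a step to another host. The rel names and document shape below are illustrative assumptions:

```python
# Sketch: the client picks links by rel and never inspects or builds URIs,
# so the server is free to point the "shipping" rel at a different host
# tomorrow without breaking anything.
def follow(document, rel):
    for link in document["links"]:
        if link["rel"] == rel:
            return link["href"]
    raise LookupError(f"no link with rel {rel!r}")

order = {"links": [
    {"rel": "self", "href": "https://sales.example.com/orders/42"},
    {"rel": "shipping", "href": "https://receiving.example.com/s/9f3a"},
]}
target = follow(order, "shipping")
```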
Second, HATEOAS allows for extensible interfaces. For example, say that the
company now offers Express Shipping. If its media type is extensible enough
(and it's not difficult to do this), they can simply add a properly flagged
rel to expressshipping.example.com in the data.
Now, your application will completely ignore Express Shipping. It has NO
IDEA what Express Shipping is, what to do with it, when to call it. It's
just a dumb computer application.
But that's ok, because someone else read the latest documentation, saw the
update, decided they liked express shipping, and added support to their
client. YOUR "outdated" client continues to function as it should, since the
change made doesn't break backward compatibility. Meanwhile, updated clients
get to leverage the new functionality.
This evolutionary API approach can be very robust, allowing systems to
upgrade and migrate gracefully.
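The extensibility argument above is essentially "must-ignore" processing: a client coded against the original rels keeps working when the server adds new ones. A minimal sketch (rel names are illustrative):

```python
# Sketch: an "outdated" client that only knows the "shipping" rel simply
# skips link rels it doesn't understand, so the server adding
# "express-shipping" doesn't break it.
KNOWN_RELS = {"shipping"}

def usable_links(document):
    # Must-ignore: silently drop anything this client wasn't built for.
    return [link for link in document["links"] if link["rel"] in KNOWN_RELS]

doc = {"links": [
    {"rel": "shipping", "href": "https://example.com/ship"},
    {"rel": "express-shipping", "href": "https://expressshipping.example.com/ship"},
]}
links = usable_links(doc)
```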
If and when some functionality gets REMOVED from the system, then your system
will start faulting because the #oldfeature rel is no longer in the payload.
Your code said "send data.xml to #oldfeature rel", and it failed. "Method
not found", whatever. There is no URI to access. At this point you get an
exploding error log and your phone rings asking why it suddenly stopped
working.
Turns out, not a whole lot you can do about that. Perhaps you weren't on the
mailing list discussing the changes. Perhaps the service provider was a
complete jerk and just yanked the API out from under you. But there you
are.
However, this is no different than any other procedure. You tell your
assistant "go to this site, buy pencils and click on the express shipping
button". Then you get a call "I can't find the express shipping button. What
do I do now?" Indeed, what does she do.
What she does, or you do, is you read the screen and see perhaps what you
can do instead, find some other button to push, some option to check. You
relearn the API on the fly.
The M2M machine can't do that, but you, the developer, can read the payload
and try to reinterpret it, follow (ideally embedded) links to API docs, or
whatever and fix your client.
The only difference between the M2M client and the human client is that the
human tends to be a bit more flexible and adaptable to changes, but you can
see situations where even humans aren't that flexible. M2M programs need to
be recoded to effect change. Humans need to make decisions or delegate and
get referrals to make decisions ("There's Turbo Shipping, should I click
that instead?").
HATEOAS allows more separation and more abstractions. It helps remove
assumptions. You don't code your system to hit the order.example.com URI and
then immediately hit expressshipping.example.com, ASSUMING that it's
the logical, appropriate, next step.
No, the application TELLS you what the appropriate next steps are, and leads
your client program through the process appropriately.
IMHO it keeps both clients and servers more flexible and more robust assuming
properly written clients and servers.
Regards,
Will Hartung
This was an over simplification. As I mentioned there is quite a bit of
processing going on, this was just one example of rules becoming more
scalable. I also am not proposing any inspection of URIs. Instead I am
simply saying that the hypermedia is driving the processing.
Regards
Glenn
On Wed, Jun 2, 2010 at 6:41 PM, Will Hartung <willh@...> wrote:
Just read through your detailed explanation so thank you for that.
I think the system I described is doing many of the things you
described. Rules are arbitrary resources that live somewhere on the
net. They expect a specific media type to be posted to them. Customers
submit jobs by posting to the Jobs resource (or whatever I called it).
Those customers have their own rules which are then part of the post.
The rules being links back to their systems or other systems.
Now the job execution engine queries for pending jobs. As the jobs are
executed it looks at the hypermedia and calls out to the rules. The
rules are arbitrary; the system doesn't even have to know about rules,
it knows about links.
What is missing in your mind? And, bearing with the simplified version
of the real picture that I painted, is it in the realm of things one
might do in a RESTful manner?
Thanks
Glenn
On 6/2/10, Will Hartung <willh@...> wrote:
> At that level, your client is "pretty stupid". It's simply iterating
> through links in some pre-determined manner. May as well smash the "Next
> Line" button in your debugger.
>
> That's not, IMHO, what HATEOS is about, even, heck, especially, in m2m
> scenarios.
>
> HATEOS allows for several things.
>
> It allows URIs to be truly opaque. Simply, beyond a few well-defined,
> "cool URI" entry points, you never need to create or generate a URI. They
> are all given to you by the application. So, you can easily see, for
> example, an application at sales.example.com directing you to
> receiving.example.com, without your system even being aware of it -- because
> your system never needs to look inside of the URIs.
>
> Why is it going to receiving.example.com? Who cares. That's not your
> concern. What is your concern is that the URI responds properly as the
> defined media type said it would respond (i.e. when you go to the #shipping
> rel, with an xml/vnd.shipping document, the Right Stuff happens). Meanwhile
> the backend has all sorts of flexibility regarding allocation of resources,
> application distribution, etc.
>
> Second, HATEOS allows for extensible interfaces. For example, say that the
> company now offers Express Shipping. If its media type is extensible enough
> (and it's not difficult to do this), they can simply add a properly flagged
> rel to expressshipping.example.com in the data.
>
> Now, your application will completely ignore Express Shipping. It has NO
> IDEA what Express Shipping is, what to do with it, when to call it. It's
> just a dumb computer application.
>
> But that's ok, because someone else read the latest documentation, saw the
> update, decided they liked express shipping, and added support to their
> client. YOUR "outdated" client continues to function as it should, since the
> change made doesn't break backward compatibility. Meanwhile, updated clients
> get to leverage the new functionality.
>
> This evolutionary API approach can be very robust, allowing systems to
> upgrade and migrate gracefully.
>
> If and when some functionality gets REMOVED from the system, then your system
> will start faulting because the #oldfeature rel is no longer in the payload.
> Your code said "send data.xml to #oldfeature rel", and it failed. "method
> not found", whatever. There is no URI to access. At this point you get an
> exploding error log and your phone rings with someone asking why it suddenly
> stopped working.
>
> Turns out, not a whole lot you can do about that. Perhaps you weren't on the
> mailing list discussing the changes. Perhaps the service provider was a
> complete jerk and just yanked the API out from under you. But there you
> are.
>
> However, this is no different than any other procedure. You tell your
> assistant "go to this site, buy pencils and click on the express shipping
> button". Then you get a call: "I can't find the express shipping button. What
> do I do now?" Indeed, what does she do?
>
> What she does, or you do, is you read the screen and see perhaps what you
> can do instead, find some other button to push, some option to check. You
> relearn the API on the fly.
>
> The M2M machine can't do that, but you, the developer, can read the payload
> and try to reinterpret it, follow (ideally embedded) links to API docs, or
> whatever and fix your client.
>
> The only difference between the M2M client and the human client is that the
> human tends to be a bit more flexible and adaptable to changes, but you can
> see situations where even humans aren't that flexible. M2M programs need to
> be recoded to effect change. Humans need to make decisions or delegate and
> get referrals to make decisions ("There's Turbo Shipping, should I click
> that instead?").
>
> HATEOS allows more separation and more abstractions. It helps remove
> assumptions. You don't code your system to hit the order.example.com URI and
> then immediately follow it with expressshipping.example.com, ASSUMING that it's
> the logical, appropriate, next step.
>
> No, the application TELLS you what the appropriate next steps are, and leads
> your client program through the process appropriately.
>
> IMHO it keeps both clients and servers more flexible and more robust,
> assuming properly written clients and servers.
>
> Regards,
>
> Will Hartung
>
--
Sent from my mobile device
On Jun 3, 2010, at 2:26 AM, Glenn Block wrote:

[snip]

> Disclaimer: Thought experiment follows
>
> Now let's say I decided to design such a system today in a RESTful manner
> around resources and HATEOS. In this case I am imagining I have a
> Jobs\Pending resource. I do a GET on that resource and get back a list of
> jobs that are pending to be processed. Now each Job has rules that have
> to execute. So in this world, I am thinking the rules are links, with
> each link pointing to a resource that handles processing that rule. Or
> maybe the link returns me some code that gets executed on the fly.

That is the mobile code style (part of REST), where the user agent
functionality is extended by transferring code as part of the response. The
precondition is of course that the user agent understands the language and
can execute the code in a meaningful way.

> Anyway the advantage I saw in such a system is that the institutions that
> work with me can create any arbitrary set of rules they like. My system
> which processes jobs doesn't know anything about the rules or care. All
> it knows is that it has some URI that it can POST to (or possibly GET) in
> order to execute some arbitrary logic. I now have an autonomous system
> that can easily adapt to new requirements. I am truly reaping the
> benefits of HATEOS in such a system.
>
> Thoughts?

Beware that you are ok as long as you only control the user agent. Once you
start controlling client side aspects beyond the agent, you introduce a
coupling between client side apps and the server side code.

The hypermedia constraint applies to what the user agent does, only. Not
what the user does. JavaScript is only controlling my browser - not, for
example, whether I open or close my eyes or take a zip from my coffee.

Jan

> Glenn

-----------------------------------
Jan Algermissen, Consultant
NORD Software Consulting
Mail: algermissen@...
Blog: http://www.nordsc.com/blog/
Work: http://www.nordsc.com/
-----------------------------------
Glenn,

On Jun 3, 2010, at 2:26 AM, Glenn Block wrote:

> I get HATEOS for human interaction.

Since you are early in using that silly acronym there might be a chance for
a fix :-) The thingy is called the "hypermedia constraint" - if you can,
please erase the acronym from your brain :o)

> I am trying to understand pure m2m benefits. If I think of a scenario of
> an Order management server host. I can imagine that when I retrieve an
> Order resource I can perform several "actions" on that order like
> Approve, Hold, etc.

If your user agent expects those next state transitions through the
application to be present then you are already violating the hypermedia
constraint; the user agent implementation is violating it because it
assumes something at design time that it should only discover at runtime.
The transitions *might* be there, but they need not be. The user agent must
simply react to what it is given by the server.

If the user agent is designed to expose an 'approve' hypermedia control to
the user (machine or human) for activation (e.g. an 'approve' button in a
GUI case) then the correct implementation of the user agent is to expose
the control if the corresponding link is found and not expose it (e.g.
deactivate the button) when the link is not found. This is why the user
agent can never break - it just assumes nothing, just makes application
state accessible.

Now, the user's intent is an entirely different thing to consider.
Obviously, I cannot buy a book at Amazon if that book is out of stock - my
expectation that I can breaks. So what? All I need is a strategy to work
from at that point. Machine users simply need to do the same.

The common misconception is that an object oriented interface definition of
a Product class can magically ensure that a product will be orderable
because the IDL says there is an order() method. That is an illusion,
because the IDL does not make go away the fact that you are dealing with a
networked environment and have no control whatsoever over what the server
is doing.

REST helps with this because it makes explicit the distinction between the
user and the component through which the user interacts with the networked
application. And REST provides a way to eliminate all coupling between that
component and the server besides the agreement on the uniform interface and
the media types used in the interaction.

....

The aspect of all this that currently absorbs all my brain energy is the
question of what portion of the client side process should be designed as a
user agent. I think that in many cases, the user agent is the whole process
(as in the case of a crawler). Finding the right amount of client side
'pieces' to form the user agent is at the heart of client side design
activity.

> I would see these as links. In this case if I had a machine talking to
> such a host, it can post to those links with the advantage being that the
> agent / calling code doesn't have coupling to where those links point to
> or how the URIs are formed. However the client code which interacts with
> the agent has some idea that Approve and Hold exist, so it is coupled in
> that sense.
>
> I started to think about m2m scenarios (other than web crawlers) where I
> could truly leverage the full decoupling that HATEOS offers, i.e. an
> adaptable system that only knows to look for types of links but doesn't
> expect any specific instances of those links (like Approve) to be
> present.
>
> I thought back to my past life prior to MS when I used to work in the
> financial services industry. In those days we had to do a ton of back end
> processing off data we received from multiple financial sources over FTP.
> I built a generic FTP scanner that looked for files, ripped them open,
> parsed them and then started executing all types of rules (I blame Jan
> partially for forcing me down this mental road). There was basically zero
> human interaction other than monitoring the status of the processing.
> That system was coupled: it required data to be sent in a specific schema
> and had a predefined set of rules that were implicit as part of the
> schema.

But the coupling in this case solely comes from the fact that a common
format was assumed. If your files had had associated metadata that told the
scanner what format they contained (IOW: how to process the content) and if
the scanner understood all possible formats, you would not have had that
coupling. The association of schema and processing rules you mention above
is what the Web calls a media type.

In fact, your above system sounds pretty RESTful already :-) The FTP
directory serves sort of as an index file providing all the links to the
to-be-scanned documents (file system inodes as a form of hypermedia, if you
want).

Jan

> Disclaimer: Thought experiment follows
>
> Now let's say I decided to design such a system today in a RESTful manner
> around resources and HATEOS. In this case I am imagining I have a
> Jobs\Pending resource. I do a GET on that resource and get back a list of
> jobs that are pending to be processed. Now each Job has rules that have
> to execute. So in this world, I am thinking the rules are links, with
> each link pointing to a resource that handles processing that rule. Or
> maybe the link returns me some code that gets executed on the fly.
>
> Anyway the advantage I saw in such a system is that the institutions that
> work with me can create any arbitrary set of rules they like. My system
> which processes jobs doesn't know anything about the rules or care. All
> it knows is that it has some URI that it can POST to (or possibly GET) in
> order to execute some arbitrary logic. I now have an autonomous system
> that can easily adapt to new requirements. I am truly reaping the
> benefits of HATEOS in such a system.
>
> Thoughts?
> Glenn

-----------------------------------
Jan Algermissen, Consultant
NORD Software Consulting
Mail: algermissen@...
Blog: http://www.nordsc.com/blog/
Work: http://www.nordsc.com/
-----------------------------------
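[Editor's note: Jan's 'approve button' point above can be sketched in a few lines: a user agent knows how to render some controls, but exposes one only when the corresponding link actually appears in the response. The rel names and widget labels are made up for illustration.]

```python
# Sketch of a user agent that assumes nothing: controls it knows how to
# render are exposed only when the matching link rel is present in the
# representation. Rel names here are hypothetical.
KNOWN_CONTROLS = {"approve": "Approve button", "hold": "Hold button"}

def exposed_controls(links_in_response: set[str]) -> dict[str, str]:
    """Return only the controls whose rel was found in the representation."""
    return {
        rel: widget
        for rel, widget in KNOWN_CONTROLS.items()
        if rel in links_in_response
    }

# An order that can currently only be approved: the 'hold' control is
# simply absent, and the agent cannot "break" -- it just renders less.
print(exposed_controls({"approve", "self"}))  # {'approve': 'Approve button'}
```

This is Jan's point in miniature: the agent discovers the available transitions at runtime rather than assuming them at design time.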
Glenn,
Personally, I find it easier to sketch out what the HTTP interactions might
look like. In your example of financials data processing, I think that I
would separate out data from rules and have them interact something like the
following (the sketch is indicative only, for example it ignores any
security concerns):
-->
PUT /datasets/{datasetname}.xml
Host: example.net
Content-Type: application/vnd.example-dataset+xml
<?xml version="1.0"?>
<dataset>{payload}</dataset>
<--
201 Created
-->
PUT /rules/{rulename}.xml
Host: example.net
Content-Type: application/vnd.example-rule+xml
<?xml version="1.0"?>
<rule>{payload}</rule>
<--
201 Created
-->
POST /jobs
Host: example.com
Content-Type: application/vnd.example-job+xml
<?xml version="1.0"?>
<job xmlns:xlink="http://www.w3.org/1999/xlink">
  <data xlink:href="http://example.net/datasets/{datasetname}.xml" />
  <rules>
    <rule xlink:href="http://example.net/rules/{rulename}.xml" />
  </rules>
</job>
<--
303 See Other
Location: http://example.com/datasets/{datasetname}.xml
-->
GET /datasets/{datasetname}.xml
Host: example.com
Accept: application/vnd.example-dataset+xml
<--
204 No Content
[time passes, the job is done]
-->
GET /datasets/{datasetname}.xml
Host: example.com
Accept: application/vnd.example-dataset+xml
<--
200 OK
Content-Type: application/vnd.example-dataset+xml
<?xml version="1.0"?>
<dataset>{payload}</dataset>
Please note that the original data and rules are stored on example.net and
then processed by example.com; this is to emphasise that the job processor
has the responsibility of retrieving the data and rules. In my sketch, the
processor stores result datasets locally and the UA would have to move the
data to an alternate location after completion but it could easily be
modelled otherwise. The important point is that any server which understands
the application/vnd.example-* media types can participate in job processing;
the UA does not need to have prior knowledge of the endpoint specifics.
Looking at the sketch, it is easy to see how a UA could chain together
processing from a series of servers and a sequence of datasets to accomplish
some goal. The sketch therefore supports serendipity of reuse.
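[Editor's note: the submit-then-poll part of the sketch above can be expressed as a small client loop that treats 204 as "job still running" and 200 as completion. The fetch function is stubbed here; a real UA would issue HTTP GETs against the Location URI from the 303 response.]

```python
# Sketch of a UA consuming the job-processing exchange: poll the result
# dataset URI until the processor has produced content.
from typing import Callable, Iterator, Optional, Tuple

def poll_result(fetch: Callable[[], Tuple[int, Optional[str]]],
                max_polls: int = 10) -> Optional[str]:
    """GET the result URI until the processor has produced a dataset."""
    for _ in range(max_polls):
        status, body = fetch()
        if status == 200:          # job finished, body is the dataset
            return body
        if status != 204:          # anything else is unexpected in this sketch
            raise RuntimeError(f"unexpected status {status}")
    return None                    # gave up; caller can retry later

# Simulate the exchange in the sketch: 204s while time passes, then 200.
responses: Iterator[Tuple[int, Optional[str]]] = iter(
    [(204, None), (204, None), (200, "<dataset>result</dataset>")]
)
print(poll_result(lambda: next(responses)))
```

Because the loop is driven purely by status codes and the URI handed back by the server, the same client works against any server that implements the sketch, which is the "serendipity of reuse" point above.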
Regards,
Alan Dean
On Thu, Jun 3, 2010 at 01:29, Glenn Block <glenn.block@gmail.com> wrote:
>
>
> Correction to typos in the last para:
>
> Anyway the advantage I saw in such a system is that the institutions that
> work with me can create any arbitrary set of rules they like. My system
> which processes jobs doesn't know anything about the types of rules, or care.
> All it knows is that it has some URI that it can POST to (or possibly GET)
> in order to execute some arbitrary logic. I now have an autonomous system
> that can easily adapt to new requirements. I am truly reaping the benefits
> of HATEOS in such a system.
>
> On Wed, Jun 2, 2010 at 5:26 PM, Glenn Block <glenn.block@...> wrote:
>
>> I get HATEOS for human interaction. I am trying to understand pure m2m
>> benefits. If I think of a scenario of an Order management server host. I can
>> imagine that when I retrieve an Order resource I can perform several
>> "actions" on that order like Approve, Hold, etc. I would see these as links.
>> In this case if I had a machine talking to such a host, it can post to those
>> links with the advantage being that the agent / calling code doesn't
>> have coupling to where those links point to or how the URIs are formed.
>> However the client code which interacts with the agent has some idea that
>> Approve and Hold exist, so it is coupled in that sense.
>>
>> I started to think about m2m scenarios (other than web crawlers) where I
>> could truly leverage the full decoupling that HATEOS offers, i.e. an
>> adaptable system that only knows to look for types of links but doesn't
>> expect any
>> specific instances of those links (like Approve) to be present.
>>
>> I thought back to my past life prior to MS when I used to work in the
>> financial services industry. In those days we had to do a ton of back end
>> processing off data we received from multiple financial sources over FTP. I
>> built a generic FTP scanner that looked for files, ripped them open, parsed
>> them and then started executing all types of rules (I Blame Jan partially
>> for forcing me down this mental road). There was basically zero human
>> interaction other than monitoring the status of the processing. That system
>> was coupled: it required data to be sent in a specific schema and had a
>> predefined set of rules that were implicit as part of the schema.
>>
>> *Disclaimer: Thought experiment follows*
>>
>> Now let's say I decided to design such a system today in a RESTFul manner
>> around resources and HATEOS. In this case I am imagining I have a
>> Jobs\Pending resource. I do a get on that resource and get back a list of
>> jobs that are pending to be processed. Now each Job has rules that have to
>> execute. So in this world, I am thinking the rules are links, with each link
>> pointing to a resource that handles processing that rule. Or maybe the link
>> returns me some code that gets executed on the fly.
>>
>> Anyway the advantage I saw in such a system is my the institutions that
>> work with me can create any arbitrary set of rules they like. My which
>> processes jobs doesn't know anything about the rules or care. All it knows
>> it that is has some URI that it can post to (or possibly Get) in order to
>> execute some arbitrary logic. I now have an autonomous system that can
>> easily adapt to new requirements. I am truly reaping the benefits of HATEOS
>> in such a system.
>>
>> Thoughts?
>> Glenn
>>
>>
>>
>>
>
>
>
Savas Parastatidis, Jim Webber, Guilherme Silveira and Ian Robinson wrote a
paper for the REST track at WWW2010 titled "The Role of Hypermedia in
Distributed Application Development". There are slides available from the
talk Ian gave[1], and their full paper is contained in the preliminary
proceedings of ws-rest 2010 [2].

That work is, for me, the most coherent guide available right now on the
why's and how's of hypermedia. It's definitely a great place to start.

Cheers,
Mike

[1] http://ws-rest.org/files/02-The%20Role%20of%20Hypermedia%20in%20Distributed%20Application%20Development.pdf
[2] http://ws-rest.org/files/WSREST2010-Preliminary-Proceedings.pdf

On Thu, Jun 3, 2010 at 1:26 AM, Glenn Block <glenn.block@...> wrote:
>
> I get HATEOS for human interaction. I am trying to understand pure m2m
> benefits. If I think of a scenario of an Order management server host. I
> can imagine that when I retrieve an Order resource I can perform several
> "actions" on that order like Approve, Hold, etc. I would see these as
> links. In this case if I had a machine talking to such a host, it can post
> to those links with the advantage being that the agent / calling code
> doesn't have coupling to where those links point to or how the URIs are
> formed. However the client code which interacts with the agent has some
> idea that Approve and Hold exist, so it is coupled in that sense.
>
> I started to think about m2m scenarios (other than web crawlers) where I
> could truly leverage the full decoupling that HATEOS offers, i.e. an
> adaptable system that only knows to look for types of links but doesn't
> expect any specific instances of those links (like Approve) to be present.
>
> I thought back to my past life prior to MS when I used to work in the
> financial services industry. In those days we had to do a ton of back end
> processing off data we received from multiple financial sources over FTP.
> I built a generic FTP scanner that looked for files, ripped them open,
> parsed them and then started executing all types of rules (I blame Jan
> partially for forcing me down this mental road). There was basically zero
> human interaction other than monitoring the status of the processing. That
> system was coupled: it required data to be sent in a specific schema and
> had a predefined set of rules that were implicit as part of the schema.
>
> *Disclaimer: Thought experiment follows*
>
> Now let's say I decided to design such a system today in a RESTful manner
> around resources and HATEOS. In this case I am imagining I have a
> Jobs\Pending resource. I do a GET on that resource and get back a list of
> jobs that are pending to be processed. Now each Job has rules that have to
> execute. So in this world, I am thinking the rules are links, with each
> link pointing to a resource that handles processing that rule. Or maybe
> the link returns me some code that gets executed on the fly.
>
> Anyway the advantage I saw in such a system is that the institutions that
> work with me can create any arbitrary set of rules they like. My system
> which processes jobs doesn't know anything about the rules or care. All it
> knows is that it has some URI that it can POST to (or possibly GET) in
> order to execute some arbitrary logic. I now have an autonomous system
> that can easily adapt to new requirements. I am truly reaping the benefits
> of HATEOS in such a system.
>
> Thoughts?
> Glenn
There's only one valid answer to that question: it depends :)

Transparent conneg, be it locale, charset or media type, is useful as long
as the difference between the representations resulting from the conneg is
insignificant, aka whatever alternatives are conneg'd should be processed
in the same way. If you try and cram anything else into conneg, you'll end
up finding all the hairy issues and misunderstandings that most people fall
into when they discover content negotiation.

________________________________
From: rest-discuss@yahoogroups.com [rest-discuss@yahoogroups.com] on behalf
of Glenn Block [glenn.block@...]
Sent: 02 June 2010 21:05
To: rest-discuss@yahoogroups.com
Subject: [rest-discuss] Transparent content neg - Useful?

In yesterday's thread the topic of transparent content neg was brought up.
(http://tools.ietf.org/html/rfc5023#section-8.3.4)

How critical / useful is relying on transparent content neg? Can any one
give me some concrete use cases?

Regards
Glenn
On Wed, 2 Jun 2010 15:11:26 +0100, Mike Kelly wrote:
>
> Eric J. Bowman wrote:
> >
> > That Atom and only Atom resource is a *different resource* whose entity
> > happens to overlap with that of some other resource -- which happens
> > all the time in REST.
>
> Agreed, that does happen all the time, but at a significant cost to
> visibility since it results in invisible resource dependencies.

What cost? What dependencies? There is no "invisible dependency" between a
negotiated resource and any resources that also happen to be variants. That
makes no sense whatsoever.

> Reduced visibility directly impacts on your ability to leverage the
> layered constraint which, amongst other things, weakens your ability to
> compensate for the inefficiencies that emerge from the REST style. That
> is a problem. Particularly at scale.

By leveraging the layered system constraint, do you mean caching? The
caching on my Atom files served from non-negotiated URIs is more efficient
than that of the exact same files served as variants of a negotiated URI.
In fact, if I don't assign the variants their own URIs and send that along
in Content-Location, caching breaks down entirely on the negotiated
resources.

REST development involves tradeoffs. If you aren't willing to make those
tradeoffs (by "compensating for inefficiencies"), then you aren't following
the REST style, and the consequences will be felt most particularly at
scale.

Self-descriptive messaging means that all knowledge needed to understand a
request is contained within the request, and that it does not depend on any
other resource state (as with cookies when used for storing state).

You can't seriously be claiming that assigning URIs to variants of a
negotiated resource somehow has a negative impact on visibility? Reality
itself proves otherwise, since caching the variants of a negotiated
resource only works (except for compression) when you assign URIs to those
variants. Only the visibility provided by the Content-Location URI allows
for scaling to occur on systems which implement conneg.

As with cookies, what you propose also violates the self-descriptive
messaging constraint. This is simple enough to check -- does the response
representation vary depending on the context of the request?

If @type worked the way you wanted it to, dereferencing a resource would
return a media type based on the native Accept header if the URI is typed
or pasted in, or a link is followed from a page that doesn't set @type. Yet
if the same URI is dereferenced from some other page that does set @type,
the response representation has just changed based on the prior application
state, i.e. shared context.

This is a major problem, which totally violates the layered-system and
identification-of-resources constraints also, and which may be easily and
simply avoided by the best-practice solution of assigning URIs to variants.
Problem _solved_. Period.

If your compensation for these perceived inefficiencies violates three
critical REST constraints, on what do you base your scalability claim?
Even if you were getting the concept of visibility right, it is a benefit
of REST, not a constraint that best-practice conneg somehow violates.

> Bizarrely, the research I've been doing on cache invalidation[1] is all
> based around ways to mitigate the sorts of problems that kind of reduced
> visibility causes.

Which might explain its incomprehensibility to a REST architect.
Specifically:

"This does, however, require any client to make several subsequent requests
for each item resource. This behaviour is generally considered overly
'chatty' and inefficient and therefore in the real world clear
identification of resources and their state is traded away for network
efficiency gains."

No, this is not considered "overly chatty and inefficient" and I've yet to
see proof that working around this with batch messaging or anything else
results in better scaling than a REST architecture. A sequence of requests
for resources which results in cache hits anywhere from the client cache to
the server cache is more efficient than a cache miss. Trying to avoid the
"inefficient overhead" of making a whole slew of requests in a limited
number of over-the-wire transactions is simply not the REST style, so
claiming that doing so gives the same benefits as REST is a non sequitur.

When a browser dereferences a URI on my demo, it will make a whole slew of
requests in order to render a steady state. Due to the browser-resident
XSLT architecture and the caching it allows, the server is averaging 1KB/hit
with a 70% cache-hit ratio. A lot of this traffic is due to the
'must-revalidate' directive I left in the static, conneg-stripped demo
where it does nothing, to simulate the conneg implementation that depends
on it. I can serve 1KB responses all day.

The fact that my server only spends 30% of its time actually serving files
is what makes REST so awesome. Taking exception with the number of URIs
I've assigned to achieve that, or the number of requests which make up a
steady state, is nit-picking in the face of real-world results which
disprove the notion that doing either incurs any significant cost relative
to the actual benefits.

IOW, when you point to that and claim, in the face of all evidence to the
contrary (70% cache-hit ratios don't result from systems with visibility
problems), that it somehow lacks visibility, you're claiming that the REST
style is severely flawed. Reality itself is the counter-argument -- this
isn't a philosophical debate, it's technology, and the proof is in the
server logs of any REST system.

> The solution, at least the one I've proposed, requires extending the
> system's uniform interface in order to compensate.

So in order to compensate for a non-problem, your solution is to violate
three fundamental REST constraints? Such a result is not, by definition, a
uniform interface.

> The point being; you don't get it for free

Yes, you do. Your point is that my assignment of URIs to variants has had
some detrimental impact on my system's visibility by introducing some
phantom coupling between my resources, thus my assignment of URIs to
variants has incurred some cost. My counterpoint is, again, reality. I
can't for the life of me figure out what cost minting those URIs has
incurred on my system, when the cost of removing them would cut my
cache-hit ratio by at least half.

> - and in my opinion it actually solves a problem that could be avoided
> altogether by re-evaluating the mechanisms that encourage the practices
> which cause these issues to arise in the first place.

It's a non-problem. Any philosophical debate is irrelevant in the face of
statistical analysis of any REST system's logs. Problems exist with conneg,
to be sure, but this isn't it. And this is hardly the solution. In the
future, new protocols will evolve to replace HTTP 1.1, which will certainly
address these shortcomings, preferably in ways consistent with the
constraints of REST.

-Eric
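[Editor's note: the practice Eric argues for above can be sketched concretely: every variant gets its own URI, and a negotiated response names the chosen variant via Content-Location while advertising the negotiation with Vary. The URI layout and media-type mapping below are hypothetical.]

```python
# Sketch of server-driven negotiation that assigns URIs to variants.
VARIANTS = {
    "application/atom+xml": "/sales-order/123.atom",
    "text/html": "/sales-order/123.html",
    "application/pdf": "/sales-order/123.pdf",
}

def negotiate(accept: list[str]) -> dict[str, str]:
    """Pick the first acceptable variant and expose its own URI."""
    for media_type in accept:
        if media_type in VARIANTS:
            return {
                "Content-Type": media_type,
                # The variant's own URI: dereferencing it later needs no conneg.
                "Content-Location": VARIANTS[media_type],
                # Still advertise that the generic URI is negotiated.
                "Vary": "Accept",
            }
    return {"Status": "406 Not Acceptable"}

print(negotiate(["application/pdf", "text/html"]))
```

Caches and clients can then address `/sales-order/123.pdf` directly, which is the visibility benefit Eric claims for minting variant URIs.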
Jan Algermissen wrote:
>
> or take a zip from my coffee.

Hmmm... Zip Coffee! Have a Zip today! I think you're onto something, Jan.

-Eric
Jan Algermissen wrote:
>
> I think that in many cases, the user agent is the whole process (as
> in the case of a crawler).

If I set about to write some crawler, the logic expressing the goals of the
whole process would be completely separate from the logic of the user
agent, which in my case would be libcurl. IOW, I see the user agent as
distinct from any process utilizing it.

-Eric
Hi Eric!

On Thu, Jun 3, 2010 at 12:15 PM, Eric J. Bowman <eric@...> wrote:

> What cost? What dependencies? There is no "invisible dependency" between
> a negotiated resource and any resources that also happen to be variants.
> That makes no sense whatsoever.

Given that URIs are opaque, the following resources:

/sales-order/123
/sales-order/123.html
/sales-order/123.pdf

could share a dependency that is not visible to an intermediary if you
avoid conneg. What does PUT /sales-order/123 do to the html/pdf resources
from a cache's point of view -- does it invalidate them? It probably
should; that's a pretty useful behavior to be able to rely on.

> By leveraging the layered system constraint, do you mean caching? The
> caching on my Atom files served from non-negotiated URIs is more
> efficient than that of the exact same files served as variants of a
> negotiated URI. In fact, if I don't assign the variants their own URIs
> and send that along in Content-Location, caching breaks down entirely on
> the negotiated resources.

That doesn't make sense - isn't this what the Vary mechanism is for?

> You can't seriously be claiming that assigning URIs to variants of a
> negotiated resource somehow has a negative impact on visibility?

Seriously.

> Reality itself proves otherwise, since caching the variants of a
> negotiated resource only works (except for compression) when you assign
> URIs to those variants. Only the visibility provided by the
> Content-Location URI allows for scaling to occur on systems which
> implement conneg.

Really?

http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.44

"The Vary field value indicates the set of request-header fields that fully
determines, while the response is fresh, whether a cache is permitted to
use the response to reply to a subsequent request without revalidation"

"An HTTP/1.1 server SHOULD include a Vary header field with any cacheable
response that is subject to server-driven negotiation. Doing so allows a
cache to properly interpret future requests on that resource and informs
the user agent about the presence of negotiation on that resource. A server
MAY include a Vary header field with a non-cacheable response that is
subject to server-driven negotiation, since this might provide the user
agent with useful information about the dimensions over which the response
varies at the time of the response."

> As with cookies, what you propose also violates the self-descriptive
> messaging constraint. This is simple enough to check -- does the response
> representation vary depending on the context of the request?
>
> If @type worked the way you wanted it to, dereferencing a resource would
> return a media type based on the native Accept header if the URI is typed
> or pasted in, or a link is followed from a page that doesn't set @type.
> Yet if the same URI is dereferenced from some other page that does set
> @type, the response representation has just changed based on the prior
> application state, i.e. shared context.

Yes, this is exactly the behavior I am advocating. This problem would be
avoided if, say, the browser address bar was amended by the
Content-Location header of the negotiated response. Besides, I thought URIs
were opaque and hypertext was the engine of application state?

> This is a major problem, which totally violates the layered-system and
> identification of resources constraints also, which may be easily and
> simply avoided by the best-practice solution of assigning URIs to
> variants. Problem _solved_. Period.

It doesn't violate any constraints; the only problem it does pose is
potential UX issues - which can be solved.

Your post is too long and this conversation is getting tiresome. I think
I'm pretty much spent here unless we can condense this stuff a bit more. I
do have one final question though: assuming what you're saying is right,
what is the benefit in drawing any high-level distinction between
representations and resources if all we are actually talking about is
resources and link relations? It's confusing. (httpRange-14 confusing)

Cheers,
Mike
Mike Kelly wrote: > > Eric J. Bowman wrote: > > > Mike Kelly wrote: > > > > > > Eric J. Bowman wrote: > > > > > > > That Atom and only Atom resource is a *different resource* whose > > > > entity happens to overlap with that of some other resource -- > > > > which happens all the time in REST. > > > > > > > > > > Agreed, that does happen all the time, but at a significant cost > > > to visibility since it results in invisible resources > > > dependencies. > > > > > > > What cost? What dependencies? There is no "invisible dependency" > > between a negotiated resource and any resources that also happen to > > be variants. That makes no sense whatsoever. > > > > Given that URI's are opaque, the following resources: > > /sales-order/123 > /sales-order/123.html > /sales-order/123.pdf > > Could share a dependency that is not visible to an intermediary if > you avoid conneg. > Why would the intermediary need to know about resource relationships? Origin servers manage those. Caches only deal with the caching parameters of representations. You'll notice my demo doesn't implement conneg, but sends Vary and Content-Location as if it did. This doesn't matter to a cache, which just maps all Accept headers to one single representation, effectively. Activating conneg won't impact cache performance, because of the layered-system constraint. When the origin server changes its behavior, caches change their behavior to match, without needing any knowledge (and definitely without being able to make any assertions about anything, there's no guarantee on the Web that you'll get fresh data) of the system the origin server implements to manage its resources. This is dumb caching, and it's proven to scale; why replace it with smart caching, the coupling of which can't possibly scale? REST is all about leveraging scalable, dumb caching. These dependencies you speak of aren't invisible, they're phantoms. 
> > What does PUT /sales-order/123 do to the html/pdf resources from a > cache's point of view, does it invalidate them? It probably should, > that's a pretty useful behavior to be able to rely on. > First, a cache invalidates only that resource involved in the PUT transaction -- it probably shouldn't behave any other way since that would defeat the entire purpose of REST and be some other, totally unproven, theoretical architecture bearing no resemblance to the real-world Web of today. Second, this is one of many uses of the 'must-revalidate' directive. I serve HTML representations of Atom resources (if you will) to browsers, but the mechanism whereby content is posted is based on Atom Protocol. Obviously, updating the Atom resources updates the negotiated resources, but the way REST architecture works is that I include 'must-revalidate' on negotiated variants such that they cache-validate properly. Granted, this results in overhead, but then again the overall bandwidth saved dwarfs the bandwidth consumed by 'must-revalidate' traffic (sub-1K/hit 304 traffic I can serve all day) and renders the fact that caches can't expire negotiated resources properly without it, moot. Optimizing PUT to begin with isn't really worthwhile though, as REST emphasizes optimizing the hell out of GET because that's over 99% of a real-world system's traffic. I care more about having those variants cache properly than I do about having intermediaries synchronously expire them when one changes -- the coupling required for such a system would go against fundamental REST architecture -- for the sake of what, you have failed to explain. > > > > > > > > > Reduced visibility directly impacts on your ability to leverage > > > the layered constraint which, amongst other things, weakens your > > > ability to compensate for the inefficiencies that emerge from the > > > REST style. That is a problem. Particularly at scale. 
> > > > > > > By leveraging the layered system constraint, do you mean caching? > > The caching on my Atom files served from non-negotiated URIs is more > > efficient than that of the exact same files served as variants of a > > negotiated URI. In fact, if I don't assign the variants their own > > URIs and send that along in Content-Location, caching breaks down > > entirely on the negotiated resources. > > > > That doesn't make sense - isn't this what the Vary mechanism is for? > I spent years studying how caching works before I began shooting my mouth off about it on mailing lists. The Vary header informs client connectors of two things. First, that the resource implements conneg (most often, for compression). Second, which request headers the origin server considered to generate the varied response. Most caches, and certainly the overwhelming majority of deployed shared caches on the public Web, simply won't cache responses whose Vary header consists of anything more than 'Accept-Encoding' in the absence of a Content-Location URI. Which makes perfect sense, as caches "key" their database of stored representations by... URI, of course. This makes no difference when the conneg only varies by compression, an intermediary can store the representation compressed or not, then zip or unzip it on-the-fly as needed. But, when varying by media type, the intermediary needs to store multiple representations associated with the negotiated URI. Assuming these variants will send Content-Location keeps cache development simple, as mapping is URI-based. Some caches, certainly not widely-enough deployed to have an appreciable impact, will in fact use an internal identifier and store variants even without Content-Location headers. Most, however, rely on the URI sent in the Content-Location header. Let's break this down a bit -- cache receives response for request /a with Content-Location /a.html . 
Cache receives subsequent request for /a.html and serves it without making a request to the origin server (unless there's a 'must-revalidate'), since the requested representation is already cached. GET optimization. So in fact, caches have enough information to expire /a on PUT to /a.html but it is not in Web architecture nor REST architecture for the cache to make any assumption about the connection between the two. The intermediary passing on the PUT request may not be in-circuit for the response. If the response was 4xx then the cache which expired /a has just violated the assumption rule, and it's pretty presumptuous (albeit specified) for it to have even expired /a.html . This is called the 'stateless messaging' constraint. Caches just can't know what effect the PUT request to one variant's URI, or even to the negotiated URI, will have on any other variant because they may not be party to the response to the PUT request. Solutions to this "problem" include using FTP or other stateful protocol, or advocating that the telco infrastructure revert from packet-switching to circuit-switching. > > > You can't seriously be claiming that assigning URIs to variants of a > > negotiated resource somehow has a negative impact on visibility? > > > > Seriously. > Then you'll have your work cut out for you, as you'll need to rebut Chapter 5 of Roy's thesis paragraph-for-paragraph using the language and examples established in Chapters 1-4. Start with the identification of resources constraint, it's closest to the top, and explain why it's a bad thing. Or, respond to what I said the other day -- if your context is such that conneg is required, that's one resource. If your context is such that a specific media type is required, that's another resource entirely. 
The fact that one variant of the negotiated resource happens to be the same as the representation of some other resource, is part of the style (author's preferred version), which mandates that each resource have its own URI. So the key mistake you're making, is failing to identify two discrete resources with two discrete URIs, i.e. you're failing to apply the identification of resources constraint. This is fundamental to REST, arguing against it is tilting at a windmill. > > > Reality itself proves otherwise, since caching the variants of a > > negotiated resource only works (except for compression) when you > > assign URIs to those variants. Only the visibility provided by the > > Content-Location URI allows for scaling to occur on systems which > > implement conneg. > > > > Really? > > http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.44 > I explained the two functions of the Vary header, above. Neither has to do with caching. Content-Location in the absence of Vary means that different intermediaries will cache different variants, depending upon the media type the server sent in response to the first request for the resource passing through each intermediary. Vary screams 'conneg' to all interested parties, which in turn, if they even care, may decide for themselves whether they're interested in the parameters of the Vary. Vary in the absence of Content-Location (with the exception of compression) works out, in reality on the Web, as an uncacheable response (which is why this best-practice is labeled as SHOULD). There is also considerable interoperability variation with real-world caches when Vary-header parameters get too complicated. Most can handle one or two, but many fail with my system, which sometimes sends 'Vary: Accept, User-Agent, Accept-Encoding' which some caches will reject just by virtue of having three fields, or by virtue of one of the fields being User-Agent, or by combining User-Agent with Accept. 
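The keying behaviour described over the last few paragraphs can be sketched roughly as follows (illustrative Python only; no real cache is implemented exactly this way, and all class and method names here are made up): responses are indexed primarily by URI, a Vary header adds a secondary key built from the selecting request headers, and a Content-Location header lets the cache index the stored variant under its own URI as well.

```python
# Illustrative sketch of URI-keyed caching with Vary secondary keys and
# Content-Location double-keying. Not any real cache's code.

class SketchCache:
    def __init__(self):
        self.entries = {}  # uri -> list of (selecting_headers, body)

    def store(self, request_uri, request_headers, response_headers, body):
        vary = [f.strip().lower()
                for f in response_headers.get("vary", "").split(",") if f.strip()]
        # Secondary key: the request-header values named in Vary.
        selecting = {f: request_headers.get(f, "") for f in vary}
        self.entries.setdefault(request_uri, []).append((selecting, body))
        # Also key the representation under its Content-Location URI,
        # so later requests for the variant's own URI are hits.
        variant_uri = response_headers.get("content-location")
        if variant_uri and variant_uri != request_uri:
            self.entries.setdefault(variant_uri, []).append(({}, body))

    def lookup(self, uri, request_headers):
        for selecting, body in self.entries.get(uri, []):
            # Reusable only if the selecting request headers match.
            if all(request_headers.get(f, "") == v for f, v in selecting.items()):
                return body
        return None  # miss: the request goes on to the origin server
```

For example, storing a response for /a carrying "Vary: Accept" and "Content-Location: /a.html" makes a subsequent request for /a.html a hit, while a request for /a with a different Accept value is a miss.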
That's why the advice on this list where conneg is concerned is KISS. Real-world caching is less effective the more complex the conneg scheme. Unfortunately, the leading use of conneg, after compression, is probably dealing with IE by parsing User-Agent strings, which have such infinite variation that caching is defeated. My system is designed to maximize cacheability of the variants making up negotiated resources, in a way which offsets caching penalties from the conneg for the initial representation by including persistent data from multiple, more-cache-persistent sources. These source files are also the representations transferred using PUT and POST, so rendering a steady-state in HTML also primes the local cache with the Atom file included by other resources (like the editing page), and gives the user agent the Etag it will need when making a conditional PUT request. That's GET optimization, and as the analysis of my server logs shows, it's quite effective at reducing traffic over the wire. My most-complex Vary headers are for resources whose HTML content is just a stub, which links to a persistent XSLT file, which includes more-cache-persistent resources to build the steady-state. My architectural approach moots the deficiencies of HTTP 1.1 conneg by *applying* REST, not working around it. HTTP's limitations aren't REST's fault nor are they limitations of REST, but REST can make HTTP's limitations irrelevant. > > > > > As with cookies, what you propose also violates the self-descriptive > > messaging constraint. This is simple enough to check -- does the > > response representation vary depending on the context of the > > request? > > > > If @type worked the way you wanted it to, dereferencing a resource > > would return a media type based on the native Accept header if the > > URI is typed or pasted in, or a link is followed from a page that > > doesn't set @type. 
Yet if the same URI is dereferenced from some > > other page that does set @type, the response representation has > > just changed based on the prior application state, i.e. shared > > context. > > > > Yes, this is exactly the behavior I am advocating. This problem would > be avoided if, say, the browser address bar was amended by the > Content-Location header of the negotiated response. > Wow. There's active debate on why browsers break with the spec and fail to change the address bar for Location headers. But one point that everybody but you agrees on, is that the entire purpose of conneg is that the address bar *doesn't* change to reflect Content-Location. So the solution to your non-problem is that we scrap Web architecture and start over, because everybody but you fails to recognize this as broken behavior? How should the address bar change to inform the user that an uncompressed variant was selected, as opposed to compressed? Seriously. I point out a behavior that breaks a REST constraint, you then say that's exactly what you're advocating, but claim to be following REST? Because what you're advocating results in exactly the behavior that is typically accomplished using cookies, and nobody who claims such a solution to be RESTful can be taken seriously -- unless they've taken the trouble to rebut Chapter 5, chapter and verse, first. > > Besides, I thought URIs were opaque and hypertext was the engine of > application state? > Yes, URIs are opaque, this is why it goes against both Web architecture and REST for the aforementioned PUT to expire anything beyond the resource involved in the request. Yes, hypertext is the engine of application state, in fact the Content-Location header is hypertext which informs client connectors (browsers, caches etc.) of the URI for the returned representation. 
> > > > > This is a major problem, which totally violates the layered-system > > and identification of resources constraints also, which may be > > easily and simply avoided by the best-practice solution of > > assigning URIs to variants. Problem _solved_. Period. > > > > It doesn't violate any constraints, the only problem it does pose is > potential UX issues - which can be solved. > -1 The constraint violations have been thoroughly detailed for you. Your refusal to engage, and insistence on dismissing everything I've said out-of-hand to restate your positions, is Exhibit A for cognitive-REST-dissonance. Servers send representations which inform user agents how to render application steady-states. They don't send representations which modify how user agents work at the protocol level. If you can't describe, in REST terms, what the problem is with assigning URIs to variants, and you can't explain, in REST terms, what is wrong with my (and others') analysis of the constraints you're blatantly violating, then you just aren't going to convince me that there's a problem here to begin with, let alone one worth solving. This fundamental separation of client-server concerns is called the layered system constraint, and it's how the real-world Web happens to have always worked. When a server is breaking through the separation of layers to dictate to the client how it should behave at the protocol level, the layered system constraint is broken. This is a feature of REST and Web architecture, not a bug. Cookies are bad enough. (For the hundredth time...) If the server needs to instruct the client to retrieve a specific media type, then the server needs to send the client the URI for that _separate_ resource, instead of the URI for the negotiated resource. NOT the same resource. Again, this is the identification of resources constraint. 
If you're serious about REST, then you need to accept the fundamentals of the style, especially those which are reflected in how things actually work on the Web, instead of railing against them. > > Your post is too long and this conversation is getting tiresome, I > think I'm pretty much spent here unless we can condense this stuff a > bit more. > Yes, I modified my last response to remove the part where I point out your obvious cognitive dissonance, quote what Seb said days ago, and give it a +1. But, I feel it's important to clear up your misinformed point of view for anyone reading the thread in the properly open frame of mind required to learn, even if it has no effect on you. We've reached the point where it's pretty obvious you're more committed to defending your assumptions than learning REST, and continuing to take you seriously would only serve to encourage you. Tiresome, indeed. If you want to engage me in actual debate to change my opinion on any topic in the world, your grasp of fundamentals must first be sound. Otherwise, long lectures aimed at others struggling to actually learn (to prevent their being misinformed) will be the result... > > I do have one final question though; Assuming what you're saying is > right, what is the benefit in drawing any high level distinction > between representations and resources if all we are actually talking > about is resources and link relations? It's confusing. (Range-14 > confusing) > This whole thread has been about that very benefit. Without the distinction, it wouldn't be possible to have late binding based on the capabilities of the client. Think about compression. Are the zipped and unzipped variants of a resource, different resources? No? Then surely they need to share an identifier, right? The resource vs. representation distinction follows naturally from this problem of having one text "noun" and one binary "noun" which both "adverb" the same "noun". 
Saying this distinction is philosophical rather than real is to dispute the fact that compression really does work on the Web, and it's nothing more than content negotiation. (For more information on exactly why this distinction must be made, and why it has been made in terms of resource vs. representation, don't take my word for it, see Roy's thesis. While it isn't aging well in terms of examples given using WAIS and such, it's about the most informative piece of work that's been written about a specific application of networked software architecture, and definitely ahead of its time. When you understand Roy's thesis, you will understand why Range-14 couldn't be resolved in any other way, and it ceases to be confusing. Keep at it, all.) What's the link relationship between zipped and unzipped variants? None. I assure you, whatever semantics you wish to use to describe it, there does exist a distinction between resource and representation, and compression wouldn't work without it. It would be pretty stupid to have no mechanism in HTTP for compressing text, or to require clients to implement decompression in order to read text, or to consider the two to be different things, or allow HTML to force a user agent to request a compressed file it can't decipher by altering request headers. REST shows that there are valid reasons why things are the way they are on the Web, with solid grounding in the fundamentals of computer science. Despite its flaws, which require a successor protocol instead of the radical changes to the existing architecture you propose, the HTTP 1.1-based Web has proven to be wildly successful. To learn REST is to learn why the Web works as well as it does, so you can hit a sweet spot where it works incredibly well in terms of scaling (i.e. 1KB/hit with a 70% cache-hit ratio). 
Instead of learning REST, you're taking well-established best practices that can be demonstrated to work incredibly well, finding flaws which nobody else can see, and proposing solutions which, by implication, fault a whole buttload of work by a whole buttload of people agreeing on certain fundamentals which you alone insist are incorrect. Good luck with that, Eric
On Thu, Jun 3, 2010 at 4:12 PM, Eric J. Bowman <eric@...> wrote: > > > > > Given that URI's are opaque, the following resources: > > > > /sales-order/123 > > /sales-order/123.html > > /sales-order/123.pdf > > > > Could share a dependency that is not visible to an intermediary if > > you avoid conneg. > > > > Why would the intermediary need to know about resource relationships? > Origin servers manage those. Caches only deal with the caching > parameters of representations. You'll notice my demo doesn't implement > conneg, but sends Vary and Content-Location as if it did. This doesn't > matter to a cache, which just maps all Accept headers to one single > representation, effectively. Activating conneg won't impact cache > performance, because of the layered-system constraint. > If 123.html and 123.pdf were representations of the same data wouldn't you want them to both reflect changes? What if order #123 changed? My understanding of caching was that you can let the Expires or Cache-Control headers take care of business, but you also have/want the ability to flush all representations of a resource if the resource changes. -- David blog: http://www.traceback.org twitter: http://twitter.com/dstanek
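David's question about flushing all representations when a resource changes is what HTTP's invalidation rule addresses: on an unsafe method, a cache invalidates the request URI and, as draft HTTPbis makes explicit, any URIs named in the response's Location and Content-Location headers. A minimal sketch (illustrative names only, not any cache's actual code):

```python
# Sketch of HTTP cache invalidation after updates, per RFC 2616 /
# draft-ietf-httpbis-p6-cache: unsafe methods invalidate the request URI
# plus any Location / Content-Location URIs in the response.

UNSAFE_METHODS = {"PUT", "POST", "DELETE"}

def invalidated_uris(method, request_uri, response_headers):
    """Return the set of cache keys a cache would invalidate."""
    if method not in UNSAFE_METHODS:
        return set()
    uris = {request_uri}
    for header in ("location", "content-location"):
        if header in response_headers:
            uris.add(response_headers[header])
    return uris
```

Note the limit of the mechanism: a PUT to /sales-order/123 whose response carries Content-Location: /sales-order/123.html invalidates both URIs, but only in a cache that actually sees that response, and a cache learns nothing about other variants it was never told about.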
On Thu, Jun 3, 2010 at 9:12 PM, Eric J. Bowman <eric@...>wrote: > > When the origin server changes its behavior, caches change their > behavior to match, without needing any knowledge (and definitely > without being able to make any assertions about anything, there's no > guarantee on the Web that you'll get fresh data) of the system the > origin server implements to manage its resources. This is dumb caching, > and it's proven to scale; why replace it with smart caching, the > coupling of which can't possibly scale? REST is all about leveraging > scalable, dumb caching. > Gateway intermediaries/caches within the same organisational domain as the origin server do not have to be 'dumb'. I take it you are not a fan of, say, cache channels then? > > These dependencies you speak of aren't invisible, they're phantoms. > Ok Eric, if you say so. > > > > > What does PUT /sales-order/123 do to the html/pdf resources from a > > cache's point of view, does it invalidate them? It probably should, > > that's a pretty useful behavior to be able to rely on. > > > > First, a cache invalidates only that resource involved in the PUT > transaction -- it probably shouldn't behave any other way since that > would defeat the entire purpose of REST and be some other, totally > unproven, theoretical architecture bearing no resemblance to the > real-world Web of today. > Are you sure about that? I don't think REST has anything to say here, provided the mechanism is part of the system's uniform interface it doesn't violate any REST constraints. Don't take it from me, HTTP does this anyway: http://tools.ietf.org/html/draft-ietf-httpbis-p6-cache-06#section-2.5 The following HTTP methods MUST cause a cache to invalidate the Request-URI *as well as the Location and Content-Location headers* (if present): o PUT o DELETE o POST > > Second, this is one of many uses of the 'must-revalidate' directive. 
I > serve HTML representations of Atom resources (if you will) to browsers, > but the mechanism whereby content is posted is based on Atom Protocol. > Obviously, updating the Atom resources updates the negotiated resources, > but the way REST architecture works is that I include 'must-revalidate' > on negotiated variants such that they cache-validate properly. > > Granted, this results in overhead, but then again the overall bandwidth > saved dwarfs the bandwidth consumed by 'must-revalidate' traffic (sub-1K/hit 304 traffic I can serve all day) and renders the fact that caches > can't expire negotiated resources properly without it, moot. > If we're talking about gateway caches, whose primary purpose is to *reduce demands on origin servers* - hitting the origin server to validate every request doesn't actually achieve much. There's different types and levels of caching, all aimed at solving different problems. > > Optimizing PUT to begin with isn't really worthwhile though, as REST > emphasizes optimizing the hell out of GET because that's over 99% of a > real-world system's traffic. I care more about having those variants > cache properly than I do about having intermediaries synchronously > expire them when one changes -- the coupling required for such a system > would go against fundamental REST architecture -- for the sake of what, > you have failed to explain. > My research on gateway cache invalidation provides analysis of caching mechanisms by way of explanation, I've already created a thread on this list about it you can contribute to if you disagree or you can comment on the blog post. Given the high degree of threat you feel my perspective poses to the community, I think you have a duty to sort this out in order to prevent me poisoning any more minds with improper thoughts. Perhaps you could hold confession for anyone that has already moved to the dark side? 
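The revalidation traffic under discussion — 'must-revalidate' plus a conditional GET — can be sketched like this (illustrative Python; the Origin class is a stand-in for an origin server, not a real one): the cache keeps the stored body but checks its ETag with the origin on every use, so an updated resource is never served stale, at the cost of a small 304 exchange per hit.

```python
# Sketch of the must-revalidate flow: conditional GET with If-None-Match.
# Origin is a hypothetical stand-in; no real server or cache is shown.

class Origin:
    def __init__(self, body, etag):
        self.body, self.etag = body, etag

    def get(self, if_none_match=None):
        if if_none_match == self.etag:
            return 304, None, self.etag      # cheap, sub-1K response
        return 200, self.body, self.etag

class RevalidatingCache:
    def __init__(self, origin):
        self.origin = origin
        self.entry = None                    # (etag, body)

    def get(self):
        etag = self.entry[0] if self.entry else None
        status, body, new_etag = self.origin.get(if_none_match=etag)
        if status == 304:
            return self.entry[1]             # reuse the stored body
        self.entry = (new_etag, body)        # fresh representation
        return body
```

The bandwidth saving comes from the 304s replacing full responses, which is the trade-off being argued over: validation requests still reach the origin, but bodies rarely travel.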
> > Some caches, certainly not widely-enough deployed to have an appreciable > impact, will in fact use an internal identifier and store variants even > without Content-Location headers. So what you're saying is that the HTTP does actually support it, but it's generally not implemented properly? That's an interesting observation, but it's circular justification to use that as an argument for preventing the behaviour from occurring. Maybe the fact it's "not allowed" is the very reason no caches implement it properly? If the accept header was not intended to be context specific and only intended as a 'static' description of UA preferences, why do browsers amend the accept header for script and style element hyperlinks? If @type was changed to amend the accept header - what do you envisage breaking? How would this prevent your approach from working? Cheers, Mike
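To make the @type proposal concrete, here is a sketch of a hypothetical client that amends its Accept header with a followed link's @type instead of sending its static preferences. This is purely illustrative of the behaviour being proposed; no current browser is claimed to do this for anchor elements, and all names below are made up.

```python
# Hypothetical client logic for "@type amends the Accept header".
# STATIC_ACCEPT stands in for a browser's built-in preference list.

STATIC_ACCEPT = "text/html,application/xhtml+xml;q=0.9,*/*;q=0.8"

def accept_header_for(link_attrs):
    # link_attrs: dict of attributes from a parsed anchor element.
    # A typed link overrides the static preferences; an untyped one
    # falls back to them.
    return link_attrs.get("type", STATIC_ACCEPT)
```

Under this sketch, following a link marked type=application/atom+xml would send "Accept: application/atom+xml", which is exactly the context-dependent behaviour the rest of the thread argues about.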
Mike Kelly wrote: > > Eric J. Bowman wrote: > > > > > When the origin server changes its behavior, caches change their > > behavior to match, without needing any knowledge (and definitely > > without being able to make any assertions about anything, there's no > > guarantee on the Web that you'll get fresh data) of the system the > > origin server implements to manage its resources. This is dumb > > caching, and it's proven to scale; why replace it with smart > > caching, the coupling of which can't possibly scale? REST is all > > about leveraging scalable, dumb caching. > > > > Gateway intermediaries/caches within the same organisational domain > as the origin server do not have to be 'dumb'. > So what? We're talking about REST, right? Which is all about anarchic scalability across organizational boundaries, right? A smart gateway cache isn't the only architectural solution. One which needs no extensions to work the same way, is a cache connector on the server component itself, instead of relying on an external (squid) cache. > > I take it you are not a fan of, say, cache channels then? > To quote mnot, "It’s just one more tool in the box." This gives squid and other caches more kick when used as gateway caches. Don't assume that makes it more efficient than a cache connector on the server component. The cache benefits of cache channels don't apply beyond the gateway cache, do they? It isn't widely-deployed, proven technology like HTTP 1.1 caching as-is, is it? So it has no effect on the anarchic scalability of a REST system beyond the organizational boundary, does it? If my webhost were to offer it, I would implement it, for more effective caching at that one point (which impacts my bandwidth charges). But I would expect it to be several more years at least, before I can expect PUT to expire the way you'd like it to, due to cache channels, and it still won't change the fact that this is an optimization that only affects 1% of traffic. 
> > > > > These dependencies you speak of aren't invisible, they're phantoms. > > > > Ok Eric, if you say so. > This has nothing to do with my say-so. You've failed to convince *anyone* here of your viewpoint, because it's just wrong. Cognitive dissonance is your inability to accept that, no matter how many times you're told, even by Roy, as Seb also pointed out. Don't troll. > > > > > > > > > What does PUT /sales-order/123 do to the html/pdf resources from a > > > cache's point of view, does it invalidate them? It probably > > > should, that's a pretty useful behavior to be able to rely on. > > > > > > > First, a cache invalidates only that resource involved in the PUT > > transaction -- it probably shouldn't behave any other way since that > > would defeat the entire purpose of REST and be some other, totally > > unproven, theoretical architecture bearing no resemblance to the > > real-world Web of today. > > > > Are you sure about that? I don't think REST has anything to say here, > provided the mechanism is part of the system's uniform interface it > doesn't violate any REST constraints. > > Don't take it from me, HTTP does this anyway: > The normative reference for HTTP is not draft HTTPbis. Anyway, both say exactly what I said. When encountering PUT traffic, a cache invalidates the request URI -- how is this not what I said? PUT requests don't have Location or Content-Location headers pointing to other resources to invalidate, do they? Draft HTTPbis adds that extra bit, which you're right, says that a cache that handled a previous GET can expire a PUT based on that knowledge, but proving that I can save 2-3% overhead on 1% of my traffic at some point in the future when caches are upgraded to HTTPbis, is really stretching to make a point. What does REST have to say about this? "Don't violate HTTP." If you think violating HTTP to implement a REST constraint is OK, you're wrong. > > > > > Second, this is one of many uses of the 'must-revalidate' > > directive. 
I serve HTML representations of Atom resources (if you > > will) to browsers, but the mechanism whereby content is posted is > > based on Atom Protocol. Obviously, updating the Atom resources > > updates the negotiated resources, but the way REST architecture > > works is that I include 'must-revalidate' on negotiated variants > > such that they cache-validate properly. > > > > Granted, this results in overhead, but then again the overall > > bandwidth saved dwarfs the bandwidth consumed by 'must-revalidate' > > traffic (sub- 1K/hit 304 traffic I can serve all day) and renders > > the fact that caches can't expire negotiated resources properly > > without it, moot. > > > > If we're talking about gateway caches, who's primary purpose is to > *reduce demands on origin servers* - hitting the origin server to > validate every request doesn't actually achieve much. There's > different types and levels of caching, all aimed at solving different > problems. > It achieves exactly what my demo shows that it achieves. You're bitching that my 1KB/hit isn't 0.9KB/hit, and my cache-hit ratio of 70% isn't 71%. The bandwidth savings comes from the server not serving files, not from the server being relieved of those dastardly 304 responses. The 'must-revalidate' directive exists, and solves the exact caching problem you seem to be having. That doesn't make it the only solution, but it does work with all existing caches instead of only the latest version of squid, at a negligible cost compared to its advantages (like being able to cache variants in the first place, without it resulting in stale responses everywhere). > > > > > Optimizing PUT to begin with isn't really worthwile though, as REST > > emphasizes optimizing the hell out of GET because that's over 99% > > of a real-world system's traffic. 
I care more about having those > > variants cache properly than I do about having intermediaries > > synchronously expire them when one changes -- the coupling required > > for such a system would go against fundamental REST architecture -- > > for the sake of what, you have failed to explain. > > > > My research on gateway cache invalidation provides analysis of caching > mechanisms by way of explanation, I've already created a thread on > this list about it you can contribute to if you disagree or you can > comment on the blog post. > Then stop making that blog post the topic of this thread, as you have. > > Given the high degree of threat you feel my > perspective poses to the community, I think you have a duty to sort > this out in order to prevent me poisoning any more minds with > improper thoughts. > No, debunking your mythology is not incumbent upon me. It is incumbent upon you to convince the REST community of your points, by showing that you grasp the fundamentals, first. You keep rejecting the fundamentals, even when it's Roy who puts them to you, over the years. > > Perhaps you could hold confession for anyone that > has already moved to the dark side? > Perhaps those of you who keep insisting, year after year, that REST is flawed and Roy is wrong, could get together and form your own group? I'm still waiting for the paragraph-by-paragraph rebuttal of Chapter 5 from y'all. Until then, engaging with y'all consists of correcting the misinformation you post on rest-discuss for others to read, or putting y'all on 'ignore'. I do both, depending on mood. > > > > > Some caches, certainly not widely-enough deployed to have an > > appreciable impact, will in fact use an internal identifier and > > store variants even without Content-Location headers. > > So what you're saying is that the HTTP does actually support it, but > it's generally not implemented properly? > No, HTTP says nothing about how a cache should index its content. 
I merely point out that the logical, and in fact most-widely adopted, indexing strategy for caches is to use the URIs provided by the origin server. Some origin servers refuse to send Content-Location with Vary, which has led some high-end cache vendors to use non-URI cache-index strategies to compensate. This is why the best-practice rule-of-thumb has been, and likely always will be until HTTP 1.1 is replaced, to send Content-Location when you're sending Vary (for reasons other than compression). > > If the accept header was not intended to be context specific and only > intended as a 'static' description of UA preferences, why do > browsers amend the accept header for script and style element > hyperlinks? > Reference, please. What browser does this? Not that browser behavior, i.e. what you can make work, has ever been a measure of RESTfulness... see media-type sniffing. > > If @type was changed to amend the accept header - what do > you envisage breaking? How would this prevent your approach from > working? > There exists a separation of concerns between client and server in Web architecture and in REST. Another example is URI fragments. Your question could just as easily be, "Why can't we change the Web such that clients send URI fragments to servers?" Because all existing content was written based on the layered-system constraint, when it comes to things like URI fragments and @type. When you change the semantics of existing markup, the existing markup (and there's an awful lot of it on the Web) can't be expected _not_ to break because of that paradigm shift, can it? Aside from what would break, changing @type introduces a coupling between client and server which goes against the layered-system constraint. Solving problems by violating REST constraints is not how I operate. Building coupled systems, or being forced to build coupled systems if this change is forced through, is what I seek to avoid. -Eric
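The validation and variant-indexing practices described in the post above can be sketched together: a negotiated response that carries must-revalidate, Vary, and a Content-Location naming the variant's own URI, and answers a matching validator with a cheap 304. This is an illustrative sketch only; the store, URIs, and function names are hypothetical, not taken from the demo discussed in the thread.

```python
import hashlib

# Hypothetical one-variant store; the resource and variant URIs are
# illustrative stand-ins.
VARIANTS = {("/page", "text/html"): ("/page.html", b"<html>...</html>")}

def handle_get(uri, accept, if_none_match=None):
    """Serve a negotiated resource the way the post recommends: send
    Vary plus Content-Location (the variant's own URI per RFC 2616's
    SHOULD), and must-revalidate so caches revalidate rather than serve
    the entry stale; a matching ETag gets the sub-1K 304 response."""
    variant_uri, body = VARIANTS[(uri, accept)]
    etag = '"%s"' % hashlib.sha1(body).hexdigest()
    headers = {
        "Cache-Control": "max-age=3600, must-revalidate",
        "Vary": "Accept",                  # response depends on request Accept
        "Content-Location": variant_uri,   # lets caches index the variant
        "ETag": etag,
    }
    if if_none_match == etag:
        return 304, headers, b""           # validated: no body re-sent
    return 200, headers, body

status, headers, body = handle_get("/page", "text/html")
revalidated, _, empty = handle_get("/page", "text/html",
                                   if_none_match=headers["ETag"])
```

The revalidation traffic is the 304 path: headers only, no body, which is the overhead the post argues is negligible next to the bandwidth saved on cache hits.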
>> If @type was changed to amend the accept header - what do >> you envisage breaking? How would this prevent your approach from >> working? >> > > There exists a separation of concerns between client and server in Web > architecture and in REST. Another example is URI fragments. Your > question could just as easily be, "Why can't we change the Web such > that clients send URI fragments to servers?" Because all existing > content was written based on the layered-system constraint, when it > comes to things like URI fragments and @type. When you change the > semantics of existing markup, the existing markup (and there's an awful > lot of it on the Web) can't be expected _not_ to break because of that > paradigm shift, can it? > > Aside from what would break, changing @type introduces a coupling > between client and server which goes against the layered-system > constraint. Solving problems by violating REST constraints is not how > I operate. Building coupled systems, or being forced to build coupled > systems if this change is forced through, is what I seek to avoid. I've tried to follow this thread as best I can, but I don't see the coupling. I thought the @type was simply a hint from the server to the client about what representation is available at the other end. The server could have: <a rel=next type=text/html href="...">Search Me</a> <a rel=next type=application/atom+xml href="...">Search Me</a> In this case, @type is simply used as link selection criteria in concert with @rel. The URI may or may not be exactly the same. I don't see the coupling. In the end, there are a finite number of representations available for a resource and a finite number of representations understood by the client, and this just seems like a helpful way for the server to help the client in negotiating the most useful one. But then again... I may have missed some essential info in this long thread. 
On a complete side note, FWIW, your overly condescending tone degrades your otherwise high quality signal. --tim
Tim Williams wrote: > > >> If @type was changed to amend the accept header - what do > >> you envisage breaking? How would this prevent your approach from > >> working? > >> > > > > There exists a separation of concerns between client and server in > > Web architecture and in REST. Another example is URI fragments. > > Your question could just as easily be, "Why can't we change the > > Web such that clients send URI fragments to servers?" Because all > > existing content was written based on the layered-system > > constraint, when it comes to things like URI fragments and @type. > > When you change the semantics of existing markup, the existing > > markup (and there's an awful lot of it on the Web) can't be > > expected _not_ to break because of that paradigm shift, can it? > > > > Aside from what would break, changing @type introduces a coupling > > between client and server which goes against the layered-system > > constraint. Solving problems by violating REST constraints is not > > how I operate. Building coupled systems, or being forced to build > > coupled systems if this change is forced through, is what I seek to > > avoid. > > I've tried to follow this thread as best I can, but I don't see the > coupling. I thought the @type was simply a hint from the server to > the client about what representation is available at the other end. > You're exactly right. I'm saying that changing the semantics of @type from annotation to instruction, introduces coupling. > > The server could have: > <a rel=next type=text/html href="...">Search Me</a> > <a rel=next type=application/atom+xml href="...">Search Me</a> > Well, it would be more RESTful for a representation to use as rel= 'next' the same media type as itself. 
Then, make a list of rel='alternate' links for each negotiated representation, with separate URIs, and give their @types, so you're responding with the most optimal representation first, then allowing the user agent to recover, instead of making the user agent pre-negotiate rel='next'. > > In this case, @type is simply used as link selection criteria in > concert with @rel. The URI may or may not be exactly the same. > Exactly. Although in practice, where the URI is the same for a list of alternates, user agents won't see it that way. For example, browsers have no problem displaying a feed icon in the presence of a link rel='alternate' with type='application/atom+xml', provided that the href isn't the same as the request URI for the current application state. So assigning URIs to variants not only clears things up for caches, but for browsers as well. > > On a complete side note, FWIW, your overly condescending tone degrades > your otherwise high quality signal. > Yes, which is why I mostly avoid these discussions. Point taken. -Eric
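The alternate-link pattern suggested above can be sketched as a small generator: serve the negotiated representation first, then emit rel='alternate' links with their own URIs and @types so the user agent can recover after the fact. The variant table and URIs below are hypothetical examples, not anything from the thread.

```python
# Hypothetical variant table mapping each variant URI to its media type.
VARIANTS = [("/report.html", "text/html"),
            ("/report.atom", "application/atom+xml")]

def alternate_links(current_uri):
    """Build the rel='alternate' link list for a served variant. The
    variant currently being viewed is skipped, since (as noted above)
    browsers ignore alternates whose href equals the request URI."""
    links = []
    for href, mtype in VARIANTS:
        if href == current_uri:
            continue
        links.append('<link rel="alternate" type="%s" href="%s"/>'
                     % (mtype, href))
    return links

links = alternate_links("/report.html")
```

Because each alternate has its own URI, a browser shown the HTML variant can still surface the Atom variant (e.g. as a feed icon) without any client-side negotiation.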
David Stanek wrote: > > Eric J. Bowman wrote: > > > > > > > > Given that URIs are opaque, the following resources: > > > > > > /sales-order/123 > > > /sales-order/123.html > > > /sales-order/123.pdf > > > > > > Could share a dependency that is not visible to an intermediary if > > > you avoid conneg. > > > > > > > Why would the intermediary need to know about resource > > relationships? Origin servers manage those. Caches only deal with > > the caching parameters of representations. You'll notice my demo > > doesn't implement conneg, but sends Vary and Content-Location as if > > it did. This doesn't matter to a cache, which just maps all Accept > > headers to one single representation, effectively. Activating > > conneg won't impact cache performance, because of the > > layered-system constraint. > > > > If 123.html and 123.pdf were representations of the same data > wouldn't you want them to both reflect changes? > Sure, that's what the origin server does when an update request is received and accepted. But, the origin server has no way to alert caches that /123 has changed because /123.pdf has changed. Really, the only thing you can feasibly do is set must-revalidate on negotiated resources, when using large TTL values. You can also decrease the TTL; if set to a minute, all variants will expire within a minute of any update. Finer control would be nice, but I don't see a compelling need to optimize PUT to save an origin server maybe a handful of 304s. > > What if order #123 > changed? My understanding of caching was that you can let the Expires > or Cache-control headers take care of business, but you also > have/want the ability to flush all representations of a resource if > the resource changes. > Not exactly. I can always add a 'max-age=0' directive to a request, to get a response from an origin server. 
But there's no way to 'flush' caches, in fact there's no requirement that caches obey expiration times or must-revalidate, say in the case of the origin server being unavailable. To cache, is to cede control. -Eric
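The trade-off Eric describes above (caches can't be flushed, so you control staleness with TTL and must-revalidate) can be sketched as a cache's serve/revalidate decision. This is a minimal illustration with hypothetical field names, not a model of any particular cache.

```python
import time

def cache_decision(entry, now=None):
    """Decide what a cache may do with a stored variant. While fresh it
    can be served directly; once the TTL passes, must-revalidate forbids
    serving it stale and forces a conditional request to the origin."""
    now = time.time() if now is None else now
    fresh = (now - entry["stored_at"]) < entry["max_age"]
    if fresh:
        return "serve-from-cache"
    if entry["must_revalidate"]:
        return "revalidate-with-origin"   # never serve stale
    return "may-serve-stale"              # e.g. origin unreachable

e = {"stored_at": 0, "max_age": 60, "must_revalidate": True}
early = cache_decision(e, now=30)   # within the one-minute TTL
late = cache_decision(e, now=90)    # TTL expired
```

With a one-minute max_age, every variant converges on the updated state within a minute of any change, which is the "decrease the TTL" option from the post.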
Yes. I am thinking of something a bit different though. Instead of having just lunch, what if we meet for two to three hours at the MS campus in London? This way we could have a brainstorming / design type discussion. I'll supply lunch :-) My thinking would be Thursday the week of the 12th of July. What do you guys think? Glenn On 6/4/10, Jan Algermissen <algermissen1971@...> wrote: > Glenn, > > sorry to be impatient - do you have news on the date for London? > > (My preferred airline has brilliant deals when booking before Sunday :-) > > Jan > -- Sent from my mobile device
On Jun 4, 2010, at 6:13 PM, Glenn Block wrote: > Yes > > I am thinking of something a bit different though. Instead of having > just lunch, what if we meet for two to three hours at the MS campus > in London? Great. The more time you can spend on this the better. > This way we could have a brainstorming / design type > discussion. I'll supply lunch :-) > Looking forward to it :-) > My thinking would be Thursday the week of the 12th of July. > That would be the 15th, yes? > What do you guys think? > Sounds fine. Jan > Glenn > > On 6/4/10, Jan Algermissen <algermissen1971@...> wrote: >> Glenn, >> >> sorry to be impatient - do you have news on the date for London? >> >> (My preferred airline has brilliant deals when booking before Sunday :-) >> >> Jan >> > > -- > Sent from my mobile device ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
On Fri, Jun 4, 2010 at 11:39 AM, Eric J. Bowman <eric@...> wrote: > Tim Williams wrote: >> >> >> If @type was changed to amend the accept header - what do >> >> you envisage breaking? How would this prevent your approach from >> >> working? >> >> >> > >> > There exists a separation of concerns between client and server in >> > Web architecture and in REST. Another example is URI fragments. >> > Your question could just as easily be, "Why can't we change the >> > Web such that clients send URI fragments to servers?" Because all >> > existing content was written based on the layered-system >> > constraint, when it comes to things like URI fragments and @type. >> > When you change the semantics of existing markup, the existing >> > markup (and there's an awful lot of it on the Web) can't be >> > expected _not_ to break because of that paradigm shift, can it? >> > >> > Aside from what would break, changing @type introduces a coupling >> > between client and server which goes against the layered-system >> > constraint. Solving problems by violating REST constraints is not >> > how I operate. Building coupled systems, or being forced to build >> > coupled systems if this change is forced through, is what I seek to >> > avoid. >> >> I've tried to follow this thread as best I can, but I don't see the >> coupling. I thought the @type was simply a hint from the server to >> the client about what representation is available at the other end. >> > > You're exactly right. I'm saying that changing the semantics of @type > from annotation to instruction, introduces coupling. Ah, I missed the distinction, thanks. I don't see how it's necessarily harmful though. I mean, if I've got two representations of a given resource and I give you links to both - it seems to me the result will be the same whether you used a @type attribute to populate your Accept header or used your own quality-based Accept header. For example, if I give you a link to <a href="..." 
type="text/html" - wouldn't the result be the same whether it's a hint or an instruction? >> The server could have: >> <a rel=next type=text/html href="...">Search Me</a> >> <a rel=next type=application/atom+xml href="...">Search Me</a> >> > > Well, it would be more RESTful for a representation to use as rel= > 'next' the same media type as itself. Then, make a list of rel= > 'alternate' links for each negotiated representation, I think either of these lives within the constraints of REST - it seems more of a discussion about good media type design than RESTfulness to me. The link relations aren't exclusive though, so perhaps: <a rel="next" type="text/html" href="...">Search Me</a> <a rel="next alternate" type="application/atom+xml" href="...">Search Me</a> > ... with separate > URIs, and give their @types, so you're responding with the most optimal > representation first, then allowing the user agent to recover, instead > of making the user agent pre-negotiate rel='next'. Why separate URIs? If it weren't for some UAs having mostly static Accept headers, my preference would firmly be for the same URI and the @type only used as a hint for the UA to perhaps adjust its Accept header. You're saying this for practical reasons of UA behavior? >> In this case, @type is simply used as link selection criteria in >> concert with @rel. The URI may or may not be exactly the same. >> > > Exactly. Although in practice, where the URI is the same for a list of > alternates, user agents won't see it that way. For example, browsers > have no problem displaying a feed icon in the presence of a link rel= > 'alternate' with type='application/atom+xml', provided that the href > isn't the same as the request URI for the current application state. Didn't know that; I was thinking more of the static Accept header limitation of UAs, not this, but that's interesting. > So assigning URIs to variants not only clears things up for caches, but > for browsers as well. Just when I thought I got it. 
It seems to me, we should promote fixing the UAs rather than assigning URIs to each representation. On this list, sometimes it's tough for me to determine whether folks are arguing a particular position because of the merits of the idea or because of the practical conditions of existing, running software in the wild. In this case, on a practical note, I agree that this is what we're stuck with but would you agree that the ideal is to fix the source (e.g. UAs)? That's my interpretation of section 5.2.1.1 too: "This abstract definition of a resource enables key features of the Web architecture. First, it provides generality by encompassing many sources of information without artificially distinguishing them by type or implementation. Second, it allows late binding of the reference to a representation, enabling content negotiation to take place based on characteristics of the request. Finally, it allows an author to reference the concept rather than some singular representation of that concept, thus removing the need to change all existing links whenever the representation changes (assuming the author used the right identifier)." Thanks, --tim
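The hint-versus-instruction distinction debated above can be made concrete with a sketch of a user agent that treats @type purely as an annotation: it dereferences the link regardless of the hint and trusts the response's Content-Type, recovering after the fact if it can't render the result. All names below (SUPPORTED, follow_link, the fake fetch) are hypothetical illustrations, not browser internals.

```python
# Media types this hypothetical UA can render.
SUPPORTED = {"text/html", "application/xhtml+xml"}

def follow_link(href, type_hint, fetch):
    """Treat @type as a hint only: it may steer UI (say, a feed icon),
    but it never blocks the request or overrides the wire. The server can
    thus change a representation's media type without breaking old markup.
    `fetch` stands in for the UA's HTTP layer and returns (type, body)."""
    content_type, body = fetch(href)
    if content_type in SUPPORTED:
        return ("render", content_type, body)
    return ("offer-download", content_type, body)   # recover after the fact

def fake_fetch(href):
    # The stale hint below says text/html, but the server has moved on.
    return ("application/xhtml+xml", b"<html/>")

action, ctype, _ = follow_link("/doc", "text/html", fake_fetch)
```

Under @type-as-instruction the stale hint would have forced a mismatch (or a 406); as a hint, the UA simply renders what the server actually sent.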
Suits me. Alan On Fri, Jun 4, 2010 at 17:13, Glenn Block <glenn.block@gmail.com> wrote: > > > Yes > > I am thinking of something a bit different though. Instead of having > just lunch, what if we meet for two to three hours at the MS campus > in London? This way we could have a brainstorming / design type > discussion. I'll supply lunch :-) > > My thinking would be Thursday the week of the 12th of July. > > What do you guys think? > > Glenn > > On 6/4/10, Jan Algermissen <algermissen1971@...> > wrote: > > Glenn, > > > > sorry to be impatient - do you have news on the date for London? > > > > (My preferred airline has brilliant deals when booking before Sunday :-) > > > > Jan > > > > -- > Sent from my mobile device > >
Tim Williams wrote: > > > > > You're exactly right. I'm saying that changing the semantics of > > @type from annotation to instruction, introduces coupling. > > Ah, I missed the distinction, thanks. I don't see how it's necessarily > harmful though. I mean, if I've got two representations of a given > resource and I give you links to both - it seems to me the result will > be the same whether you used a @type attribute to populate your Accept > header or used your own quality-based Accept header. For example, if > I give you a link to <a href="..." type="text/html" - wouldn't the > result be the same whether it's a hint or an instruction? > XHTML 1.0 is a polyglot media type which may be served as text/html or as application/xhtml+xml. If you change your Content-Type from text/html to application/xhtml+xml, and @type says text/html: if it's a hint, then a browser will retrieve application/xhtml+xml; if it's an instruction, then you have to update all your HTML to reflect the change instead of having user agents automatically adapt to that change. If you have to update your markup to change a media type, you're coupled. REST allows us to change a GIF to a PNG without changing its *.gif URI, so there's no need to edit the markup to change that file extension. Unless @type is an instruction, in which case all links with type='image/gif' go 406, instead of letting the user agent just grab the file regardless of its media type, or negotiate for it. Conversely, how should a browser that doesn't handle application/xhtml+xml handle an @type instruction that specifies application/xhtml+xml? Forcing the user agent to ask the user to download the wrong variant instead of negotiating for the correct variant, kinda defeats the whole purpose of content negotiation. If you need clients to dereference a specific variant, then you send the client the URI for that variant, not the negotiated URI. 
I just don't understand why this solution, a well-established best-practice which works perfectly well in practice, is somehow inadequate and requires us to change the semantics of @type to accomplish exactly the same thing (by violating REST constraints, no less)... > > >> The server could have: > >> <a rel=next type=text/html href="...">Search Me</a> > >> <a rel=next type=application/atom+xml href="...">Search Me</a> > >> > > > > Well, it would be more RESTful for a representation to use as rel= > > 'next' the same media type as itself. Then, make a list of rel= > > 'alternate' links for each negotiated representation, > > I think either of these lives within the constraints of REST - it > seems more of a discussion about good media type design than > RESTfulness to me. The link relations aren't exclusive though, so > perhaps: > Yes, it is a discussion about proper media type design, but it's a discussion of proper media type design within the context of REST. Presumably, the representation containing the alternate links was returned as the result of content negotiation. So it's safe to assume that the browser's next request will use the same Accept header (except for IE < 7 of course, sigh). So if you're presenting a rel='next' it makes sense that, if it doesn't just point to another negotiated resource, it would point to the URI of a variant with the same media type the server already determined was correct for the user agent. The concept in REST is that the most optimal representation is sent first, and that the representation contain enough hypertext that the user agent can inform the user of other representations in case of problems. It is not the REST concept for conneg to happen within the application steady-state, i.e. before the next request is even made, by performing non-HTTP-protocol-based negotiation (selecting the best option from a list). 
Error recovery in REST must be allowed to happen after-the-fact, not headed off at the pass by trying to infer the next transition from a list of possible media types -- conneg's a protocol-layer action, not a markup-rendering action, i.e. HTTP conneg occurs over-the-wire. In HTTP conneg, the server makes the decision by weighing the q values of the Accept header against its internal qs values for the available media types. It isn't a matter of returning what the client prefers, it's a matter of returning what the server deems to be the best combination of client preference and server quality. For a server to send a list of links to the client, and for conneg to occur within the application steady-state before a transition may be understood, would require @qs in the markup and some way to express the algorithm the server is using to make the determination. > > <a rel="next" type="text/html" href="...">Search Me</a> > <a rel="next alternate" type="application/atom+xml" href="...">Search > Me</a> > This is not correct, as it's only valid to have one rel='next'. While there's nothing wrong with the syntax of rel='next alternate', or with multiple alternates, 'next alternate' means the link is an alternate and is also the next transition in a series. It doesn't mean it's the "next" alternate, the semantics of "alternate" mean an alternate to the loaded representation, not the alternate for a 'next' or 'prev' or anything else. The list of alternate links is what allows a user agent to recover, usually through interaction with the user, in the case of an incorrect variant being sent. It is not meant to pre-negotiate a representation, that is not the REST style. This is a job for the server, not the client, according to the layered-system constraint of REST reflected in the HTTP protocol. > > > ... 
with separate > > URIs, and give their @types, so you're responding with the most > > optimal representation first, then allowing the user agent to > > recover, instead of making the user agent pre-negotiate rel='next'. > > Why separate URIs? If it weren't for some UAs having mostly static > Accept headers, my preference would firmly be for the same URI and the > @type only used as a hint for the UA to perhaps adjust Accept header. > You're saying this for practical reasons of UA behavior? > Yes, I'd like to give a list of alternates which have the same URI but different media types, and that's what I initially implemented. But it didn't work when I tested it, because real-world user agents just don't get it. Why should they? RFC 2616 says you SHOULD assign URIs to variants and send them in Content-Location, and I've never seen a good reason put forth for ignoring that SHOULD when caching or direct-referencing a variant is concerned. HTML media types say nothing about how to evaluate or choose from a list of alternates; they only describe what an alternate link *means*. There is no conneg algorithm for HTML. But there is one for HTTP. So do your conneg in HTTP where it's specified, not HTML. So I'm saying what I'm saying for practical reasons of UA behavior, yes, but I'm also saying there's nothing wrong with that UA behavior since it follows what the specs and REST both say. I'm saying assign URIs to your variants because that's Web architecture, which is why browsers work the way they do, as specced in RFC 2616 with SHOULD, and because it works. It is best practice to assign URIs to variants, because *that's how the Web works*, not because UAs are broken and we must work with these broken UAs. > > >> In this case, @type is simply used as link selection criteria in > >> concert with @rel. The URI may or may not be exactly the same. > >> > > > > Exactly. 
Although in practice, where the URI is the same for a > > list of alternates, user agents won't see it that way. For > > example, browsers have no problem displaying a feed icon in the > > presence of a link rel='alternate' with type='application/atom+xml', > > provided that the href isn't the same as the request URI for > > the current application state. > > Didn't know that, I was more thinking of the static Accept header > limitation of UAs, not this, but that's interesting. > There's nothing in the HTML media types which says anything about a relationship between alternate links and Accept headers. So I don't see how a user agent can be expected to infer that a list of alternates with the same URI but different media types correlates in any way with its Accept header, or conneg, or anything else but to present the user with some options if it can't render the dereferenced representation. Or display a feed icon, if a link exists with one of a set of specific media types, which is presenting the user with an alternative in case the representation they're viewing is inadequate or unreadable due to its styling, or if the user wants to subscribe to a related feed. > > > So assigning URIs to variants not only clears things up for caches, > > but for browsers as well. > > Just when I thought I got it. It seems to me, we should promote > fixing the UAs rather than assigning URIs to each representation. > I don't agree that UAs are broken -- they're interpreting RFC 2616 the way it should be interpreted, which is that variants of negotiated resources are expected to have their own URIs. Again, since assigning a URI is just no big deal, has no downside, and has no interoperability issues, I am still confused as to why following that SHOULD would cause any problems for anyone, let alone lead to this much debate. Follow best practice, and user agents will work fine. 
Deviating from best practice is shaky ground from which to claim that user agents, or HTML, or HTTP or REST are broken, or inadequate, or anything else. What do you see as the downside to assigning URIs to variants, as the spec calls for? Nobody has explained this to me yet, except in FUD terms. I build systems this way. They work. I have no complaints. Others refuse to build their systems this way, their systems don't work, but the fault lies elsewhere? Seriously, I just don't get it. > > this list, sometimes it's tough for me to determine whether folks are > arguing a particular position because of the merits of the idea or > because of the practical conditions of existing, running software in > the wild. > Yeah, that's bound to happen since REST is an architectural style. In most cases, people here are trying to build systems for the real-world Web as-is, and REST is all about hitting the scaling sweet-spot there. But, REST also applies to other systems, and may be used to analyze any changes proposed to the existing system. Building a REST system using HTTP conneg for use on the as-is real-world Web, requires you to assign URIs to your variants just like RFC 2616 says you should. Protesting that you don't want to do it that way but would prefer @type to mandate the Accept header just doesn't seem very productive. The Web doesn't work like that, REST's constraints argue against such change, and again, it's a non-problem since you can always follow HTTP and assign URIs to variants. What real-world problem that causes is still a complete mystery to me, left unarticulated by those who want to change @type's semantics. To me, it's the same as insisting that because you want it to work that way, URI fragments should be sent to the server, when anything that can be a fragment can also be sent as a query. Yet some insist that query isn't right, fragment is, so we need to fix the broken UAs. I don't get either argument. 
If I want to drive a rough county road, I take my 4x4, instead of taking my Prelude and bitching about road maintenance when I get stuck. The truck works, use that. Query works, use that. Assigning URIs to variants *works*, so use that. Assigning URIs to variants is Pragmatic REST. > > In this case, on a practical note, I agree that this is > what we're stuck with but would you agree that the ideal is to fix the > source (e.g. UAs)? That's my interpretation of section 5.2.1.1 too: > But I don't agree that UAs are broken in this way. If you don't care about caching your variants, if you don't care about accessing them directly, then don't assign them URIs (it's only a SHOULD, after all). However, if you are expecting your variants to cache, and you desire direct variant access without a conneg layer, then a solution exists that's so simple I can't believe I'm spending this much time defending it as if it didn't work or something. Give your variants URIs. In REST, we call this the "identification of resources" constraint. That browsers and caches require you to abide by it, doesn't mean they're broken, it means they're expecting you to follow RFC 2616's SHOULD. > > "This abstract definition of a resource enables key features of the > Web architecture. First, it provides generality by encompassing many > sources of information without artificially distinguishing them by > type or implementation. Second, it allows late binding of the > reference to a representation, enabling content negotiation to take > place based on characteristics of the request. Finally, it allows an > author to reference the concept rather than some singular > representation of that concept, thus removing the need to change all > existing links whenever the representation changes (assuming the > author used the right identifier)." > Yes, REST allows the late binding of representation to resource. 
Which is why I can't understand why @type should be an instruction which prevents this from occurring. Having the browser perform some sort of @type juju amounts to binding the representation to the resource before the request has even been made! Such coupling is not copasetic to the REST style -- if you want to avoid conneg, don't instruct the client to use a negotiated URI. It's really that simple, which is why it amazes me that we keep having this debate here. No downside == no problem. -Eric
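The server-side weighing Eric describes in this post (the server multiplies the client's q values against its own qs values, rather than obeying either side alone) can be sketched as a tiny conneg routine. The Accept parser is deliberately minimal, and the qs table is a hypothetical example; this is an illustration of the algorithm, not a spec-complete implementation.

```python
def parse_accept(accept):
    """Tiny Accept parser (media-range;q=...) for illustration only;
    real parsers must handle wildcards, params, and malformed input."""
    prefs = {}
    for part in accept.split(","):
        bits = part.strip().split(";")
        q = 1.0
        for param in bits[1:]:
            name, _, value = param.strip().partition("=")
            if name == "q":
                q = float(value)
        prefs[bits[0].strip()] = q
    return prefs

# Hypothetical server-side source qualities (qs) for two variants.
SERVER_QS = {"application/xhtml+xml": 1.0, "text/html": 0.8}

def choose_variant(accept):
    """Pick the variant with the best client-q times server-qs product,
    so the result reflects both client preference and server quality."""
    prefs = parse_accept(accept)
    scored = [(prefs.get(mt, prefs.get("*/*", 0.0)) * qs, mt)
              for mt, qs in SERVER_QS.items()]
    score, best = max(scored)
    return best if score > 0 else None   # nothing acceptable -> 406

best = choose_variant("text/html;q=1.0, application/xhtml+xml;q=0.7")
```

Here the client slightly prefers text/html (1.0 vs 0.7) and the server slightly prefers XHTML (qs 1.0 vs 0.8); the products are 0.8 and 0.7, so text/html wins. This weighing happens over the wire at the protocol layer, which is the point of the post: a markup-level list of @types carries no q, no qs, and no algorithm.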
I wouldn't mind tagging along... Dave -----Original Message----- From: rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com] On Behalf Of Alan Dean Sent: 04 June 2010 18:25 To: Glenn Block Cc: rest-discuss@yahoogroups.com Subject: Re: [rest-discuss] Re: London Meeting Dates Suits me. Alan On Fri, Jun 4, 2010 at 17:13, Glenn Block <glenn.block@...> wrote: Yes. I am thinking of something a bit different though. Instead of having just lunch, what if we meet for two to three hours at the MS campus in London? This way we could have a brainstorming / design type discussion. I'll supply lunch :-) My thinking would be Thursday the week of the 12th of July. What do you guys think? Glenn On 6/4/10, Jan Algermissen <algermissen1971@...> wrote: > Glenn, > > sorry to be impatient - do you have news on the date for London? > > (My preferred airline has brilliant deals when booking before Sunday :-) > > Jan > -- Sent from my mobile device
> > To me, it's the same as insisting that because you want it to work > that way, URI fragments should be sent to the server, when anything > that can be a fragment can also be sent as a query. > Or the corollary from xsl-list today, insisting that there must be some way to use ampersands in an XML file without having to use &amp; or &#38;. Basic stuff. My frustration comes out in my posts, sorry. -Eric
> > Or the corollary from xsl-list today, insisting that there must be > some way to use ampersands in an XML file without having to use > &amp; or &#38;... > Or CDATA. -Eric
I'm looking for examples of MIME types/protocols that work with collections of things, but support batch updates on collection members rather than requiring separate updates for each collection member. The use case is supporting user-defined lists with an arbitrary number of columns. We have chosen to treat a list as a collection of row elements. A browser-based client will support editing of the list in a tabular view. In addition to adding and removing entire rows, users will also be able to edit individual fields within a row. We would like to support a Save button that saves all changes (possibly across multiple rows) on the current screen in one http request. Our current thinking is that we would send back a collection containing only the rows to be updated (or inserted/deleted). However (without getting down into the details) the back-end will need to be able to determine whether individual fields in each row actually need to be updated, so there is some question as to how to represent whether or not an individual field has been changed. We would like to use the same document type both for getting the list entries and posting changes back to the list. Does anyone have any pointers to some examples of MIME types or application protocols that support this sort of model? --Chuck
On Fri, Jun 4, 2010 at 4:15 PM, chucking24 <chuck.hinson@...> wrote: > I'm looking for examples of MIME types/protocols that work with collections of things, but support batch updates on collection members rather than requiring separate updates for each collection member. [...] One bit of prior art that would be worth taking a look at is how WebDAV[1] deals with "batch" type requests. Essentially, it packages up a "multi-status response" (with a 207 status code), with individual response elements (including an HTTP status) for each individual update transaction, sort of like what you would have received if you had submitted them individually. Craig [1] http://www.webdav.org/specs/rfc2518.html
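As a rough sketch of the mechanism Craig describes: a WebDAV 207 Multi-Status body carries one response element per member, each with its own HTTP status (the hrefs below are invented for illustration; the DAV: namespace and element names follow RFC 2518):

```xml
<?xml version="1.0" encoding="utf-8"?>
<D:multistatus xmlns:D="DAV:">
  <!-- one response element per updated member, each with its own status -->
  <D:response>
    <D:href>/lists/42/rows/7</D:href>
    <D:status>HTTP/1.1 200 OK</D:status>
  </D:response>
  <D:response>
    <D:href>/lists/42/rows/9</D:href>
    <D:status>HTTP/1.1 409 Conflict</D:status>
  </D:response>
</D:multistatus>
```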
Guilherme Silveira wrote: > > > Hello Bill, > > >> I'm not certain that today's JAX-RS offers much more than today's > WCF in > >> terms of REST support. If Glenn's team are going to do "REST like they > >> meant it" to paraphrase Guilherme, I don't think that JAX-RS is the > >> right way to go. > > > > But that's just an opinion. Or is there some technical criticism as well? > > It seems like the client part of a REST client was not so clear at that > time, and there were not so many attempts to create generic consumers. > Every service provided their "own specific REST APIs" for their > "specific REST services", i.e. twitter, facebook, and hundreds of others. > > The first JAX-RS spec did not take hypermedia into account, so if you > think about REST without hypermedia, it will not be a problem. But it > seems like REST depends on using hypermedia, right? > > If you believe so and want your consumers to use hypermedia, using a > Java framework, you have to rely on Restfulie, Jersey and Restlet, who > are trying to do so. > > As Paul mentioned, it's a matter of time for it to enter the JAX-RS specs. Exactly. JAX-RS doesn't have a client, so how can it be the wrong way to go yet? The server part that is specified deals well enough with the protocol elements and doesn't prevent me from using formats or object models that contain links. One thing the spec does do well is UriBuilder/UriInfo - regardless of whether the builder pattern is to taste, it helps solve a layering problem in Java between service code and HTTP code. The JAX-RS impls haven't gotten in my way yet when it comes to working with media types, HTTP, or just applying REST in general, which is more than I can say for most frameworks on the JVM. I'm free to figure out the data. When it comes to building a client I suspect the problem will be dealing with Java's type system and generics (I'm not sure C# would be much better).
To get around that in restfulie, it seems to be via hardcoding <link rel|href> into a class incorrectly called 'Resource', like this <http://github.com/caelum/restfulie-java/blob/master/core/src/main/java/br/com/caelum/restfulie/Resource.java> <http://github.com/caelum/restfulie-java/blob/master/core/src/main/java/br/com/caelum/restfulie/Relation.java> That's a car-subclass-of-carpark kind of error: resources aren't representations, and not all media types have relations. If that's the 'right way to go' for hateoas, then perhaps I don't understand the concept of hateoas. Bill
First, +1 to Craig's answer. Second, I'm answering Chuck backwards... Chuck Hinson wrote: > > Does anyone have any pointers to some examples of MIME types or > application protocols that support this sort of model? > Yes, XHTML + Xforms. It sounds like you've modeled your data as tabular, in which case HTML's <table> markup has all the machine-readable goodness you require. Putting lists inside of table cells is a powerful data structure, but editing it can be a pain. If you're able to consider Xforms, it becomes a breeze. The caveat is, the degree of difficulty to do batch requests in Xforms is significantly higher than it is for the separate-update paradigm. But, I don't think you need batch-update requests. If you insist, I still think you should build your system without that optimization, first. It'll be up and running faster, and give you a baseline to benchmark your PUT optimization against to prove that it doesn't do much... > > Our current thinking is that we would send back a collection > containing only the rows to be updated (or inserted/deleted). However > (without getting down into the details) the back-end will need to be > able to determine whether individual fields in each row actually need > to be updated, so there is some question as to how to represent > whether or not an individual field has been changed. We would like > to use the same document type both for getting the list entries and > posting changes back to the list. > I'm not sure I understand. Is your application steady-state made up of multiple resources, i.e. each row has its own URL? If a row has its own URL, then they're snippets of HTML, which are hopefully well-formed text/xml. You include those into the steady-state with Xforms. Each snippet is now a "Model" as in MVC architecture (within the browser). When you edit any row in the steady-state, Xforms is altering its Model. Xforms can submit each Model back to its own URI on the origin server as text/xml using PUT.
But you're saying that for whatever reason, the origin server must only receive the changed fields. In which case, what you want to send is a delta, in which case the proper HTTP method is PATCH (Xforms can do this, but the degree of difficulty of implementing your delta format is up there). But you're saying that you want to use the same media type both ways, which rules out PATCH. Note that borking PUT to have partial-update semantics is not the answer (in REST, anyway). But I'm not sure what you're saying, so maybe what I'm going to say at the end will help... > > I'm looking for examples of MIME types/protocols that work with > collections of things, but support batch updates on collection > members rather than requiring separate updates for each collection > member. > One of the harder things about learning Chemistry is wrapping your head around the notion that glass is a liquid, despite its obvious solidity. The Zen of REST is that by separating your application steady-states out into multiple sources (if each row indeed has its own URL), you have made it possible to optimize the hell out of GET. The result is what *appears* to be the gross inefficiency of multiple requests to update that steady-state. However, the reality is that these inefficiencies pale in comparison to the scalability gained by implementing the solution that wrought them. The exception to this is a system whose traffic is expected to be predominantly PUT/POST. Real-world systems are 99% GET traffic, so any optimizations you make to PUT (like batching) are severely limited in their overall impact. > > The use case is supporting user-defined lists with an arbitrary > number of columns. We have chosen to treat a list as a collection of > row elements. A browser-based client will support editing of the > list in a tabular view. In addition to adding and removing entire > rows, users will also be able to edit individual fields within a > row. 
We would like to support a Save button that saves all changes > (possibly across multiple rows) on the current screen in one http > request. > It's pretty trivial to set up Xforms to do a separate PUT for each row that's been changed, skipping the unchanged rows, triggered by a master save button. Making this save button batch, Xforms or otherwise, is nontrivial by comparison. Even if it were less difficult, ask yourself how much time you can cost-justify implementing an optimization to 1% of your traffic, making sure to weigh that against the fact that PUT is only less optimal due to your optimization of GET, which bought you a lot more for less. The *perceived* inefficiency is merely a tradeoff, one that buys you a lot more than it costs you (unless you fight it). -Eric
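To make the delta idea Eric mentions concrete: a PATCH request carrying a row delta might look like the sketch below. The URI, the media type name, and the markup are all invented for illustration; PATCH semantics are defined by the patch document's media type (per the then-new RFC 5789):

```http
PATCH /lists/42 HTTP/1.1
Host: example.org
Content-Type: application/x-example-row-delta+xml

<delta>
  <update row="7"><field name="status">shipped</field></update>
  <delete row="9"/>
</delta>
```

As Eric notes, the cost is that both client and server must implement this delta format, which is why the separate-PUT-per-row route is simpler.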
Guys, we would like to meet in the afternoon, from 1 to 4. Does that work? Also I need a list of folks who would like to attend. We have limited seating (probably 15 max). I need to know beforehand who is interested. I think about 5 people so far said yes. Any others? -- Sent from my mobile device
Also if the afternoon is a problem let me know. -- Sent from my mobile device
I'll have the whole day, as the opportunity to sleep in is too good to miss, so count me in. Beyond the current subscriber list, I really wish Ian R, Mammund and Jim W could make it. Someone needs to try and make sure they do. :) Seb
I'm beginning to realize that calling this a batch might not have been appropriate. While I may be sending multiple records to be updated, it would still be a single transaction - either all the records get updated or none do. Thanks for the pointer to WebDAV - I had forgotten about it and will take a look to see if I can find something useful. --Chuck
That would be great, at least in Michael's case I doubt he'll fly across the Atlantic for 3 hrs ;-) -- Sent from my mobile device
That's great to hear Seb. -- Sent from my mobile device
Maybe we could entice him with a weekend of drinking, and the opportunity of coding away on frameworks for another day. That'd surely make it worth it? :)
On Sat, Jun 5, 2010 at 7:03 AM, Eric J. Bowman <eric@...> wrote: > First, +1 to Craig's answer. Second, I'm answering Chuck backwards... > > Chuck Hinson wrote: >> >> Does anyone have any pointers to some examples of MIME types or >> application protocols that support this sort of model? >> > > Yes, XHTML + Xforms. It sounds like you've modeled your data as > tabular, in which case HTML's <table> markup has all the machine-readable goodness you require. It's not that we're modeling our data as a table - it is a table. We've been asked to provide users the ability to define and create tables with arbitrary numbers of columns (this presents some interesting but solvable database issues). Over time, they will need to update the values in various cells in these tables. When they do their updates, they are going to want to make all of their changes and then click save; they will not stand for having to press save after editing each cell or row in the table - hence the batch update. [Also, if I send the changes as a single batch, I get a sort of poor-man's transaction - either the request completes (and all of the changes in the batch succeed) or the request fails (and all of the changes in the batch fail).] > But, I don't think you need batch-update requests. If you insist, I > still think you should build your system without that optimization, > first. It'll be up and running faster, and give you a baseline to > benchmark your PUT optimization against to prove that it doesn't do > much... I don't think there's much question the batch update is needed. If a user wants to update a status column in every row in a 50-row table, I'm not sure sending 50 PUT requests when they click the save button is appropriate. But that's really a separate issue, and it's not the one I'm worried about. The harder part is the result of dealing with user-defined tables with arbitrary numbers of columns.
I don't want to get down in the weeds, but basically, rows from the user-defined tables do not map directly to records in the database. Instead, each field in a row is stored as a separate record in the database. A table with 10 columns and 50 rows is 500 records in the database. If a user is only updating values in a single column I'd rather only do 50 updates, and not 500. I could change my logical model to let clients deal with tables at the cell level, but that seems to me to be making clients deal with an unnecessarily complex model (though having the client mark which fields in a row have been changed is probably not much different). I would much rather stick with a logical model where clients can deal with the table at the row level and not burden them with complexity that's really a result of the back-end implementation. > > I'm not sure I understand. Is your application steady-state made up of > multiple resources, i.e. each row has its own URL? > At this point, the primary resource is the user-defined table. It is possible that other services may want to access individual rows, but more likely, they'll want to be able to retrieve one or two columns for all of the rows in the table. And even if they do want a single row, it'll likely be addressed as a query of the table resource, and not directly. So most GET requests will be for the table (or some portion of it) and not individual rows. > >> >> I'm looking for examples of MIME types/protocols that work with >> collections of things, but support batch updates on collection >> members rather than requiring separate updates for each collection >> member. >> > My main concern at this point is to come up with a document type that can be used for all interactions with these user-defined tables. I would prefer that clients not have to use a separate document type to do updates. --Chuck
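One way to carry "which fields changed" inside a single row-oriented document type, so that the same media type serves both GET and posting changes back, might look like this sketch (every element and attribute name here is invented for illustration, not an existing format):

```xml
<list id="42">
  <row id="7">
    <!-- hypothetical 'changed' flag; a server could omit or ignore it on GET -->
    <field name="status" changed="true">shipped</field>
    <field name="owner">chuck</field>
  </row>
</list>
```

A document like this stays row-level for clients while letting the back-end update only the flagged field records.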
Chuck Hinson wrote: > On Sat, Jun 5, 2010 at 7:03 AM, Eric J. Bowman <eric@bisonsystems.net> > wrote: > > Chuck Hinson wrote: > >> > >> Does anyone have any pointers to some examples of MIME types or > >> application protocols that support this sort of model? > >> > > > > Yes, XHTML + Xforms. It sounds like you've modeled your data as > > tabular, in which case HTML's <table> markup has all the machine-readable goodness you require. > > It's not that we're modeling our data as a table - it is a table. > We've been asked to provide users the ability to define and create > tables with arbitrary numbers of columns (this presents some > interesting but solvable database issues). "Tables with arbitrary numbers of columns" = "spreadsheet" > Over time, they will need > to update the values in various cells in these tables. When they do > their updates, they are going to want to make all of their changes and > then click save; they will not stand for having to press save after > editing each cell or row in the table - hence the batch update. "Not having to hit save" = AJAX "AJAX + spreadsheet" = http://www.socialtext.com/products/spreadsheets.php > I don't think there's much question the batch update is needed. If a > user wants to update a status column in every row in a 50-row table, > I'm not sure sending 50 PUT requests when they click the save button > is appropriate. If you distribute the 50 PUT requests out as they type, it's very appropriate. Seems to be working for SocialCalc (the successor to wikiCalc, created by Dan Bricklin, the inventor of VisiCalc). > But that's really a separate issue, and it's not the one I'm worried > about. The harder part is the result of dealing with user-defined > tables with arbitrary numbers of columns. I don't want to get down in > the weeds, but basically, rows from the user-defined tables do not map > directly to records in the database. Instead, each field in a row is > stored as a separate record in the database.
A table with 10 columns > and 50 rows is 500 records in the database. If a user is only > updating values in a single column I'd rather only do 50 updates, and > not 500. Give each cell its own URL and you'll have 50 updates, not 500. > I could change my logical model to let clients deal with tables at the > cell level, but that seems to me to be making clients deal with an > unnecessarily complex model (though having the client mark which > fields in a row have been changed is probably not much different). I > would much rather stick with a logical model where clients can deal > with the table at the row level and not burden them with complexity > that's really a result of the back-end implementation. > ... > At this point, the primary resource is the user-defined table. It is > possible that other services may want to access individual rows, but > more likely, they'll want to be able to retrieve one or two columns > for all of the rows in the table. And even if they do want a single > row, it'll likely be addressed as a query of the table resource, and > not directly. So most get requests will be for the table (or some > portion of it) and not individual rows. If clients are retrieving arbitrary subsets of the table based on both rows and columns, it seems odd, then, to standardize on row-based representations. > My main concern at this point is to come up with a document type that > can be used for all interactions with these user-defined tables. I > would prefer that clients not have to use a separate document type to > do updates. I think that document type is the cell. It fits the requirements, is RESTful, and has prior art now in the field. Or you could go buy SocialText off the shelf. Or hand Google Docs to your client. Or any of http://en.wikipedia.org/wiki/List_of_online_spreadsheets Robert Brewer fumanchu@...
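Robert's cell-per-URL suggestion, sketched with an invented URI layout: updating one column across 50 rows becomes 50 small requests, one per cell, each individually cacheable and retryable:

```http
PUT /tables/42/rows/7/cells/status HTTP/1.1
Host: example.org
Content-Type: text/plain

shipped
```

The cell URIs mirror the back-end's one-record-per-field storage, so each PUT maps to exactly one database update.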
Chuck Hinson wrote:
> It's not that we're modeling our data as a table - it is a table.

OK, let's call your table resource "Resource A" and stipulate that its representation is text/xml using the XHTML namespace and having <table> as the root element. We need to interact with this resource, so we need to create a hypertext interface for it. We'll call the hypertext API "Resource B".

Resource B's representation is an Xforms document, which uses <table> markup to display a bunch of form fields. These form fields are populated with the data from Resource A when the steady-state is rendered. The user can alter the form fields, which alters the loaded representation of Resource A. When the user submits changes, a PUT request is made to Resource A with the altered <table> as text/xml.

> We've been asked to provide users the ability to define and create
> tables with arbitrary numbers of columns (this presents some
> interesting but solvable database issues). Over time, they will need
> to update the values in various cells in these tables. When they do
> their updates, they are going to want to make all of their changes and
> then click save; they will not stand for having to press save after
> editing each cell or row in the table - hence the batch update.

I wouldn't say "hence the batch update," I'd say "hence the architectural choices" and avoid any discussion of batching.

Let's define Resources C-z as the fifty rows in the table. When a PUT is made to Resource A, the origin server's processing of that request will update those Resources C-z which need changing. So that PUT to Resource A was really a batch-update request.

Similarly, a change to a single row that's PUT to Resource C would amount to a partial update of Resource A, resulting in a different steady-state next time Resource B is dereferenced. In this way Resource A may be partially-updated without resorting to PATCH.
This is how we solve partial- or batch-update problems in REST: by assigning URIs and making architectural choices which avoid the need for batching.

> [Also, if I send the changes as a single batch, I get a sort of
> poor-man's transaction - either the request completes (and all of the
> changes in the batch succeed) or the request fails (and all of the
> changes in the batch fail).]

OK, but if each row is a separate request, it's easier to notify the user of where the error was which prevented the transaction, and it's more user-friendly to accept the good data instead of rejecting the entire change. Just a thought.

> > But, I don't think you need batch-update requests. If you insist, I
> > still think you should build your system without that optimization,
> > first. It'll be up and running faster, and give you a baseline to
> > benchmark your PUT optimization against to prove that it doesn't do
> > much...
>
> I dont think there's much question the batch update is needed. If a
> user wants to update a status column in every row in a 50 row table,
> I'm not sure sending 50 PUT requests when they click the save button
> is appropriate.

There's nothing inappropriate about it. Many real-world Web pages wind up making dozens of GET requests to render a steady-state. If those 50 requests result from breaking a large resource into more-manageable sub-resources, the size of the PUT requests is small, so 50 fly right by.

The fact that 50 separate PUT requests may result from some action only becomes a scalability concern if the action that triggers them occurs frequently enough to account for significant traffic. You just don't save that much overhead on your system overall by saving 49 PUT requests for a transaction that occurs once for every 100 GET requests for a steady-state that's built with 50 GET requests.

> But that's really a separate issue, and it's not the one I'm worried
> about.

No problem, just trying to enlighten.
> The harder part is the result of dealing with user-defined
> tables with arbitrary numbers of columns. I dont want to get down in
> the weeds, but basically, rows from the user-defined tables do not map
> directly to records in the database. Instead, each field in a row is
> stored as a separate record in the database. A table with 10 columns
> and 50 rows is 500 records in the database. If a user is only
> updating values in a single column I'd rather only do 50 updates, and
> not 500.
>
> I could change my logical model to let clients deal with tables at the
> cell level, but that seems to me to be making clients deal with an
> unnecessarily complex model (though having the client mark which
> fields in a row have been changed is probably not much different). I
> would much rather stick with a logical model where clients can deal
> with the table at the row level and not burden them with complexity
> that's really a result of the back-end implementation.

Abstracting away the complexities and limitations of the backend is what I'm trying to help you with; indeed, that's what REST is for. Specifically, I'm trying to get a feel for your resources. Obviously, your table is a resource, but your explanations lead me to consider that each row is also a resource, since that seems to be how you're treating them...

> > I'm not sure I understand. Is your application steady-state made
> > up of multiple resources, i.e. each row has its own URL?
>
> At this point, the primary resource is the user-defined table. It is
> possible that other services may want to access individual rows, but
> more likely, they'll want to be able to retrieve one or two columns
> for all of the rows in the table. And even if they do want a single
> row, it'll likely be addressed as a query of the table resource, and
> not directly. So most get requests will be for the table (or some
> portion of it) and not individual rows.

Extracting a table row doesn't require any query language.
It merely requires treating each row as a resource. This means assigning URIs to Resources C-z, which could of course be hash URIs. Instead of querying for a row, just identify it as /table#row2 by sticking an @id in, like <tr id='row2'>. This references a nodeset containing <th> and/or <td> elements. Columns, different story.

> >> I'm looking for examples of MIME types/protocols that work with
> >> collections of things, but support batch updates on collection
> >> members rather than requiring separate updates for each collection
> >> member.
>
> My main concern at this point is to come up with a document type that
> can be used for all interactions with these user-defined tables. I
> would prefer that clients not have to use a separate document type to
> do updates.

Right. That's exactly what my Xforms approach does. In fact, a submit button for the whole table, and submit buttons for each row, can coexist if Resources C-z are given (non-hash) URIs. The master submit button is a PUT to Resource A; each row's submit is a PUT to one Resource C-z. The master submit doesn't have to trigger the PUT for each row, nor does it need to be a batch request which addresses C-z from within a composite media type.

Instead of debating these intricacies, it would help if I could get my head wrapped around your problem a little better. Like, what are the shortcomings, in terms of what you're trying to do, with my Xforms Resource A / Resource B single-PUT update solution?

-Eric
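[Editor's note: the /table#row2 hash-URI idea above can be sketched offline. A minimal illustration assuming an XHTML-style <table> representation; the sample markup and function name are invented for the example.]

```python
# Sketch of the /table#row2 idea: a row is addressed by sticking an
# @id on its <tr>, so "querying" for a row is just a fragment lookup
# in the table representation. The sample markup is invented.
import xml.etree.ElementTree as ET

TABLE_XML = """
<table>
  <tr id='row1'><td>siteA</td><td>green</td></tr>
  <tr id='row2'><td>siteB</td><td>red</td></tr>
</table>
"""

def row_by_fragment(table_xml, fragment):
    """Resolve /table#rowN to the nodeset of that row's cells."""
    root = ET.fromstring(table_xml)
    for tr in root.iter("tr"):
        if tr.get("id") == fragment:
            return [td.text for td in tr]
    return None

print(row_by_fragment(TABLE_XML, "row2"))  # cells of <tr id='row2'>
```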
On Sun, Jun 6, 2010 at 10:36 AM, Robert Brewer <fumanchu@...> wrote:
> Chuck Hinson wrote:
>> It's not that we're modeling our data as a table - it is a table.
>> We've been asked to provide users the ability to define and create
>> tables with arbitrary numbers of columns (this presents some
>> interesting but solvable database issues).
>
> "Tables with arbitrary numbers of columns" = "spreadsheet"

Sort of. We've had that discussion, though we haven't come to agreement (the spreadsheet faction will be happy to have your vote). The sticking point centers around the fact that users define columns when they create the table and then give each column a datatype. Each row describes a logical entity - cells really don't mean much on their own. Aside from users editing individual cells in the table, most of the rest of the system will be querying these tables and dealing with them in a row-oriented fashion (give me a list of all of the radar sites with green status, with the results containing the site name, latitude and longitude). A spreadsheet model is still on the table (no pun intended), but we may end up having a hybrid model where the UI sees a spreadsheet model, but other services in the system see a table model.

>> Over time, they will need to update the values in various cells in
>> these tables. When they do their updates, they are going to want to
>> make all of their changes and then click save; they will not stand
>> for having to press save after editing each cell or row in the table
>> - hence the batch update.
>
> "Not having to hit save" = AJAX

It's not about having to hit save or not, it's about hitting save once, and having all of the changes applied in a single transaction.

> "AJAX + spreadsheet" =
> http://www.socialtext.com/products/spreadsheets.php
>
>> I dont think there's much question the batch update is needed.
>> If a user wants to update a status column in every row in a 50 row
>> table, I'm not sure sending 50 PUT requests when they click the save
>> button is appropriate.
>
> If you distribute the 50 PUT requests out as they type, it's very
> appropriate. Seems to be working for SocialCalc (the successor to
> wikiCalc, created by Dan Bricklin, the inventor of VisiCalc).

See above - changes shouldn't be saved until users are ready to save them all. (I'll have to check if that's really a requirement or just an assumption that we've made.)

>> But that's really a separate issue, and it's not the one I'm worried
>> about. The harder part is the result of dealing with user-defined
>> tables with arbitrary numbers of columns. I dont want to get down in
>> the weeds, but basically, rows from the user-defined tables do not map
>> directly to records in the database. Instead, each field in a row is
>> stored as a separate record in the database. A table with 10 columns
>> and 50 rows is 500 records in the database. If a user is only
>> updating values in a single column I'd rather only do 50 updates, and
>> not 500.
>
> Give each cell its own URL and you'll have 50 updates, not 500.

Yes - but I was hoping to have one network interaction, not 50.

>> At this point, the primary resource is the user-defined table. It is
>> possible that other services may want to access individual rows, but
>> more likely, they'll want to be able to retrieve one or two columns
>> for all of the rows in the table. And even if they do want a single
>> row, it'll likely be addressed as a query of the table resource, and
>> not directly. So most get requests will be for the table (or some
>> portion of it) and not individual rows.
>
> If clients are retrieving arbitrary subsets of the table based on both
> rows and columns, it seems odd, then, to standardize on row-based
> representations.

Maybe.
On the other hand, when you query a database table for a subset of the rows and columns, what you get back is a row-based representation.

>> My main concern at this point is to come up with a document type that
>> can be used for all interactions with these user-defined tables. I
>> would prefer that clients not have to use a separate document type to
>> do updates.
>
> I think that document type is the cell. It fits the requirements, is
> RESTful, and has prior art now in the field. Or you could go buy
> SocialText off the shelf. Or hand Google Docs to your client. Or any of
> http://en.wikipedia.org/wiki/List_of_online_spreadsheets

Thanks for the pointers - they should provide useful fodder... Unfortunately, those are options we wish we had. Our customers are military, so internet-based solutions are non-starters. If we were going to go off-the-shelf, it would be to use SharePoint lists (not because we like that, but because that's where an awful lot of our customers' data already is). Also, off-the-shelf solutions require a tedious security approval process, so any off-the-shelf solution would have to have a compelling advantage over SharePoint. We've had much debate about even providing this feature in the first place, because we really don't want to be re-inventing the wheel - but a few (important) customers don't have access to SharePoint and so need an alternative with equivalent functionality.

--Chuck
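[Editor's note: the storage layout described above - each field stored as a separate record - is the classic entity-attribute-value shape. Reassembling row-oriented query results from it can be sketched like this; the record format and function names are invented for the example.]

```python
# Sketch: each cell lives as its own (row_id, column, value) record,
# so a 10-column x 50-row table is 500 records. A row-oriented query
# result has to pivot those records back into rows.
from collections import defaultdict

records = [  # one DB record per cell; shape is invented
    (1, "site", "siteA"), (1, "status", "green"), (1, "lat", "40.1"),
    (2, "site", "siteB"), (2, "status", "red"),   (2, "lat", "41.7"),
]

def rows_where(records, column, value, wanted):
    """Row-oriented projection: rows whose `column` equals `value`."""
    by_row = defaultdict(dict)
    for row_id, col, val in records:
        by_row[row_id][col] = val
    return [{c: cells[c] for c in wanted}
            for cells in by_row.values() if cells.get(column) == value]

# "give me all radar sites with green status: site name and latitude"
print(rows_where(records, "status", "green", ["site", "lat"]))
```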
Darrel and I were chatting and we started to discuss the problem of using "RPC over HTTP" to mean "REST". The result of that conversation seemed to be that the problem is the usage of the term 'REST', in that nothing about RPC over HTTP is RESTful in nature. My argument is that the fact that folks want to use HTTP as a pure transport for loose XML/JSON is not in itself a problem, as long as one has proper expectations about the pros/cons.

That led me to think of a new term to represent the "RPC over HTTP" movement, along the spirit of the whole "No SQL" mantra: "No SOAP" - the meaning of which is "Not Only SOAP".

Like it or hate it?

Regards
Glenn
personally, i don't like negative labels in general (including NoSQL).

however, if you wanted to go that route, i'd be more inclined to use NoREST since that seems to be the point here.

keeping in mind that REST is not an HTTP thing, REST over HTTP is just as valid a phrase - possibly more accurate. in cases where i'm in an audience that may have inaccurate assumptions about the nature of REST, i usually use "REST/HTTP" or "HTTP using REST" or "RESTful HTTP."

there was some talk of this on the #REST IRC channel, too. nothing definitive came up there, IIRC.

mca
http://amundsen.com/blog/
http://mamund.com/foaf.rdf#me
Well I think the "No" hits a nerve on folks that have felt the pain. No SQL hits a nerve with folks that have struggled around making relational dbs fit for a set of scenarios. It does have an "I'm fed up" notion to it, though.

I don't think "No REST" would be appropriate here, as it is not that folks have tried REST in its purity and struggled with it; they have with SOAP, though.

"RESTful HTTP" implies it is close to REST; on inspection, though, I don't think it is. One is about exposing methods over the wire (RPC) and one isn't.

My $.02
Glenn

On Sun, Jun 6, 2010 at 3:45 PM, mike amundsen <mamund@...> wrote:
> personally, i don't like negative labels in general (including NoSQL).
G: yeah, i was a bit careless in my reply. what i meant to say is that when i am talking about REST i use REST/HTTP or REST over HTTP. thus, when people look at me a bit odd or ask what i mean by that i can say "Well, REST style using the HTTP protocol. not RPC style over HTTP."

i think much of this goes away if you use the word "style" or "architectural style." another way i talk about this is claiming that frameworks don't do arch style, they do protocol spec.

<unsolicited-advice>
IMO, the way to do this at MSFT is to author a class library that supports the HTTP protocol spec (httpClient.*) and, on a parallel track, work on P&P guides that show readers how to use the new class library to author server and client apps using the principles of a particular arch style (REST).

technically, you can do step two today by calling out the existing class libraries in System.* (WebRequest/Response, HttpContext.*, System.Net, etc.) along with a set of helpers to fill in the gaps (mime parsing, http-client calls, cache-tag aids, http-auth).

keeping the two items separated is the key to success, i think.
</unsolicited-advice>

mca
http://amundsen.com/blog/
http://mamund.com/foaf.rdf#me
Glenn,

On Jun 7, 2010, at 12:10 AM, Glenn Block wrote:
> Darrel and I were chatting and we started to discuss the problem of
> using "RPC over HTTP" to mean "REST". The result of that conversation
> seemed to be that the problem is the usage of the term 'REST' in that
> nothing about RPC over HTTP is RESTful in nature.

I think one of the big problems in this area is the lack of proper names for all the various abuses of REST. Having names helps to differentiate. I created a classification of mis-uses a while back:
http://nordsc.com/ext/classification_of_http_based_apis.html

> My argument, the fact that folks want to use HTTP as a pure transport
> for loose XML/JSON in itself is not a problem as long as one has
> proper expectations about the pros/cons.

Yes, but the problem being the end of the sentence: "as long as one has proper expectations". I seriously doubt that a significant number of people understand the trade-offs (if they did, they would just do true REST, because the gains outweigh the (IMHO small) additional effort).

While there might not be a problem if you understand what you are doing, it also makes no real sense to do it (it is not really justifiable).

Jan

-----------------------------------
Jan Algermissen, Consultant
NORD Software Consulting

Mail: algermissen@...
Blog: http://www.nordsc.com/blog/
Work: http://www.nordsc.com/
-----------------------------------
You really think people are going to say "Woo hoo I am using HTTP Type 1"? :-)

REST sounds cool... that's why they like it.

On your second point, I am not sure. That makes the argument that RPC over HTTP is absolutely bad. Even though I can see the value that a RESTful style provides, I don't see why a person building an AJAX-style application who uses RPC over HTTP as a means to improve responsiveness is in itself a bad thing.

I personally worked on quite a few RPC over HTTP style AJAX apps where the goal was improving responsiveness in the application and to offer a rich-client-type experience in a browser. Yes, it dishonors hypermedia constraints (we didn't even know what those were), but it worked and made paying customers pretty happy that they had a more interactive / responsive UI. That was the primary goal and the benefit. I don't think we were deluded into any other benefits.

Do you see that as "wrong"?

Personally I feel like calling that REST is wrong, in the way calling something that is not an implementation of MVC, MVC.

Glenn
REST places no restrictions on UI behavior, including responsiveness or interactivity.

I start many REST/HTTP talks w/ a very simple example app to illustrate this very point:
http://amundsen.com/examples/zipcheck/

Instant user feedback w/o AJAX (or RPC).

mca
http://amundsen.com/blog/
http://mamund.com/foaf.rdf#me
On Jun 7, 2010, at 6:51 AM, Glenn Block wrote:
> You really think people are going to say "Woo hoo I am using HTTP Type 1"? :-)

Well, no :-) But then.... why not? Anyhow - it helps to be able to send people a pointer to 'here is what your API is'.

> REST sounds cool... that's why they like it.
>
> On your second point, I am not sure. That makes the argument that RPC
> over HTTP is absolutely bad.

Yes, it is. Far worse than using traditional RPC mechanisms that use an IDL to define the interface. When using HTTP you always are at risk of not documenting the contract at all. In addition, RPC over HTTP makes it easy to (wrongly) assume that the benefits of HTTP (e.g. caching) are at your fingertips. They are all lost when treating HTTP as transport.

> Even though I can see the value that a RESTful style provides, I don't
> see why a person building an AJAX style application who uses RPC over
> HTTP as a means to improve responsiveness is in itself a bad thing.

See http://nordsc.com/ext/classification_of_http_based_apis.html#uri-rpc-effect

> I personally worked on quite a few RPC over HTTP style AJAX apps where
> the goal was improving responsiveness in the application and to offer
> a rich client type experience in a browser. Yes it dishonors hypermedia
> constraints (we didn't even know what those were), but it worked and
> made paying customers pretty happy that they had a more interactive /
> responsive UI. That was the primary goal and the benefit. I don't
> think we were deluded into any other benefits.

Hmm, I am not sure you did something unRESTful there. It sounds like you built an AJAX Web interface, yes? If your user agent is a browser and you use HTML and JSON, there is not really anything you violate (except for stuff like authentication, frames and other violations you commonly see on the human web). Did you have a non-browser user agent at all?

> Do you see that as "wrong"?

Not necessarily.
It would be wrong if you used a user agent that had hard-coded knowledge of out-of-band information (such as the meaning of some action=order parameter in the URI).

Note, though, that the issue with regard to REST's benefits is not how well one application works, but how well the overall architecture controls complexity, enables evolution, and allows for scalability (e.g. can you put a Web cache in your application to improve performance without knowing the semantics of the application?).

> Personally I feel like calling that REST is wrong,

Not sure - see above.

To strengthen my point again: Yes, some/many people abuse HTTP, and frameworks should somewhat support those (ab)uses. But it is IMHO a huge mistake to even cause the impression such (ab)uses are any good at all. I'd rather say: if you want to do RPC, use a proper technology (RMI, Corba, WS-*) and do it right. REST done wrong is IMHO worse than RPC done right (due to the out-of-band (often undocumented) information and the performance penalty you pay for HTTP-style interactions).

Jan

P.S. Using a ranting style to foster discussion (no offense intended).
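[Editor's note: Jan's point that HTTP's caching benefits are lost when treating HTTP as a transport can be sketched with a toy cache. A minimal illustration only; the class and endpoint names are invented, and this is not any real proxy implementation.]

```python
# Sketch: a shared HTTP cache can store GET responses keyed by URI,
# but RPC tunneled through POST to a single endpoint is opaque to it,
# so every RPC call reaches the origin. All names here are invented.

class ToyCache:
    """Caches only GET responses, like an HTTP intermediary would."""
    def __init__(self, origin):
        self.origin, self.store, self.origin_hits = origin, {}, 0

    def request(self, method, uri, body=None):
        if method == "GET" and uri in self.store:
            return self.store[uri]          # served from cache
        self.origin_hits += 1
        response = self.origin(method, uri, body)
        if method == "GET":
            self.store[uri] = response      # only safe GETs are cacheable
        return response

def origin(method, uri, body):
    return f"{method} {uri} -> data"

cache = ToyCache(origin)
cache.request("GET", "/tables/1/rows/2")      # resource-oriented: cacheable
cache.request("GET", "/tables/1/rows/2")      # second GET never hits origin
cache.request("POST", "/rpc", "getRow(1,2)")  # RPC tunnel: always hits origin
cache.request("POST", "/rpc", "getRow(1,2)")
print(cache.origin_hits)  # 3: one GET plus two POSTs reached the origin
```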
Neat demo / bookmarked. That is technically AJAX in that you are dynamically manipulating the DOM through JavaScript. AJAX does stand for "Asynchronous JavaScript and XML" doesn't it? How exactly would you say it's RESTful and not RPC? You are basically using zipcheck as a method with ziptext being passed along the query string. Regards Glenn On Sun, Jun 6, 2010 at 10:18 PM, mike amundsen <mamund@...> wrote: > REST places no restrictions on UI behavior including responsiveness or > interactivity. > > I start many REST/HTTP talks w/ a very simple example app to illustrate > this very point: > http://amundsen.com/examples/zipcheck/ > > Instant user feedback w/o AJAX > (or RPC). > > mca > http://amundsen.com/blog/ > http://mamund.com/foaf.rdf#me > > > > On Mon, Jun 7, 2010 at 00:51, Glenn Block <glenn.block@...> wrote: > >> >> >> You really think people are going to say "Woo hoo I am using HTTP Type 1" >> ? :-) >> >> REST sounds cool...that's why they like it. >> >> On your second point, I am not sure. That makes the argument that RPC over >> HTTP is absolutely bad. Even though I can see the value that a RESTful style >> provides, I don't see why a person building an AJAX style application who >> uses RPC over HTTP as a means to improve responsiveness is in itself a bad >> thing. >> >> I personally worked on quite a few RPC over HTTP style AJAX apps where the >> goal was improving responsiveness in the application and to offer a rich >> client type experience in a browser. Yes it dishonors hypermedia constraints >> (we didn't even know what those were), but it worked and made paying >> customers pretty happy that they had a more interactive / responsive UI. >> That was the primary goal and the benefit. I don't think we were deluded >> into any other benefits. >> >> Do you see that as "wrong"? >> >> Personally I feel like calling that REST is wrong, in the way calling >> something that is not an implementation of MVC, MVC. 
On Jun 7, 2010, at 7:34 AM, Glenn Block wrote: > > > Neat demo / bookmarked. > > That is technically AJAX in that you are dynamically manipulating the DOM through JavaScript. AJAX does stand for "Asynchronous JavaScript and XML" doesn't it? Yes. > > How exactly would you say it's RESTful and not RPC? You are basically using zipcheck as a method with ziptext being passed along the query string. > Does the user agent need to know about any kind of out-of-band information to do what it is intended to do? All the user agent does in Mike's example is to properly react to received hypermedia - there is no RPC involved. Beware that all that matters with regard to REST is the interaction between the components that constitute an application (user agent, origin server, intermediaries, data elements). IOW, REST as an architectural style constrains the architecture in which these components and data elements exist and interact. Whatever the user of the user agent thinks or understands is completely out of scope. UI-based user agents are somewhat misleading when learning REST because we take them and their behavior for granted. The server side seems to realize the application but in fact, the user agent component (the browser) plays an equally important part. Consider how the browser understands (by implementing HTML) how to construct the appropriate request when the user activates (submits) a form. It does so on the basis of standardized hypermedia semantics, not based on knowing some instruction specific to the server it currently interacts with. The former is REST, the latter would be RPC. Jan ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
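[Editor's note] Jan's form example — the browser constructing a request purely from standardized hypermedia semantics — can be sketched as follows. This is illustrative Python under stated assumptions: a dict stands in for an HTML form control, and the function applies HTML's GET-form semantics (urlencode the fields into the query string); no server-specific knowledge is involved.

```python
# Hypothetical sketch of a generic user agent submitting a hypermedia
# control. The agent knows only the standardized semantics of the control
# (method, action, urlencoded fields) -- it needs no instruction specific
# to the server it is talking to. The dict shape is made up for the example.
from urllib.parse import urlencode

def submit(form, user_input):
    """Build a request from an HTML-form-like control."""
    query = urlencode({**form["fields"], **user_input})
    if form["method"].upper() == "GET":
        # HTML GET-form semantics: fields become the query string
        return ("GET", f'{form["action"]}?{query}', None)
    # POST-form semantics: fields become a form-urlencoded body
    return ("POST", form["action"], query)

# Usage: a form like the one behind Mike's zipcheck demo (assumed shape).
form = {"method": "GET", "action": "/zipcheck", "fields": {}}
method, target, body = submit(form, {"ziptext": "90210"})
assert (method, target, body) == ("GET", "/zipcheck?ziptext=90210", None)
```

The point of the sketch: the request shape is derived entirely from the representation the server sent, which is what makes the interaction hypermedia-driven rather than RPC.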
yeah, it can look like RPC. however, zipcheck is a search resource, not a method. and responses are cached. while a single JS line is used, there are no async calls and XML is not returned; an image is returned (not an image URI). FWIW, there is another version that supports conneg for several media-types. and a version that returns the query document if no data is passed in the request. mca http://amundsen.com/blog/ http://mamund.com/foaf.rdf#me On Mon, Jun 7, 2010 at 01:34, Glenn Block <glenn.block@...> wrote: > Neat demo / bookmarked. > > How exactly would you say it's RESTful and not RPC? You are basically using > zipcheck as a method with ziptext being passed along the query string. > > Regards > Glenn 
On Jun 7, 2010, at 8:16 AM, mike amundsen wrote: > > > yeah, it can look like RPC. however, zipcheck is a search resource, not a method. and responses are cached. Yep, good point. Maybe it helps to think about it this way: it is not RPC if the method that is invoked by the connector is uniform. The browser invokes HTTP's GET, not some method that is application specific. And it means to invoke GET, not to invoke something else by means of a GET. Jan ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
If a series of RPCs follows the following guidelines, I would call it REST: 1. URIs and uniform methods for remote process calls 2. Explicit self-descriptive messages 3. HATEOAS Cheers, Dong On Mon, Jun 7, 2010 at 12:25 AM, Jan Algermissen <algermissen1971@...> wrote: > Maybe it helps to think about it this way: It is not RPC if the method that > is invoked by the connector is uniform. The browser invokes HTTP's GET, > not some method that is application specific. And it means to invoke GET, > not to invoke something else by means of a GET. > > Jan 
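[Editor's note] Dong's three guidelines can be illustrated with a small sketch. This is hypothetical Python, not any poster's code: the client knows one uniform method (GET), responses are self-describing in that they carry typed links, and the next URI always comes from the received representation (HATEOAS), never from client hard-coding. The resource paths and "rel" names are invented for the example.

```python
# Illustrative in-memory "server": each representation describes its own
# state and carries hypermedia links (rel + href). All names are made up.
RESOURCES = {
    "/orders/42": {
        "state": "open",
        "links": [{"rel": "payment", "href": "/orders/42/payment"}],
    },
    "/orders/42/payment": {"state": "unpaid", "links": []},
}

def get(uri):
    """The only method the client knows: the uniform GET."""
    return RESOURCES[uri]

def follow(representation, rel):
    """HATEOAS: the next URI is discovered in the message itself."""
    for link in representation["links"]:
        if link["rel"] == rel:
            return get(link["href"])
    raise LookupError(f"no link with rel={rel!r}")

# Usage: the client navigates by link relation, not by calling a named
# remote procedure such as getPaymentForOrder(42).
order = get("/orders/42")
payment = follow(order, "payment")
assert payment["state"] == "unpaid"
```

Contrast with RPC: an RPC client would bake `getPaymentForOrder` into its code; here that coupling lives in the representation instead.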
Just create a resource that abstracts the things being updated and manipulate that resource to get the same effect. See http://my.safaribooksonline.com/9780596809140/chapter-misc-writes for examples. Other solutions tend to reduce protocol visibility as well as introduce challenges such as poor scalability or even DoS attacks. Subbu On Jun 4, 2010, at 4:15 PM, chucking24 wrote: > I'm looking for examples of MIME types/protocols that work with collections of things, but support batch updates on collection members rather than requiring separate updates for each collection member. > > The use case is supporting user-defined lists with an arbitrary number of columns. We have chosen to treat a list as a collection of row elements. A browser-based client will support editing of the list in a tabular view. In addition to adding and removing entire rows, users will also be able to edit individual fields within a row. We would like to support a Save button that saves all changes (possibly across multiple rows) on the current screen in one http request. > > Our current thinking is that we would send back a collection containing only the rows to be updated (or inserted/deleted). However (without getting down into the details) the back-end will need to be able to determine whether individual fields in each row actually need to be updated, so there is some question as to how to represent whether or not an individual field has been changed. We would like to use the same document type both for getting the list entries and posting changes back to the list. > > Does anyone have any pointers to some examples of MIME types or application protocols that support this sort of model? > > --Chuck
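[Editor's note] Subbu's suggestion — model the batch edit itself as a resource and send one document describing all row changes — might look like the following sketch for Chuck's use case. The JSON shape (`changes`, `op`, `row`, `fields`) is entirely hypothetical, not taken from any spec or from the book Subbu cites; it only shows how "send only the changed fields" can be made explicit in the document.

```python
# Hypothetical server-side handler for a single batch-update document
# POSTed/PUT to a resource that abstracts the whole set of edits.
import json

# In-memory stand-in for the stored list: row id -> field values.
rows = {1: {"name": "a", "qty": 1}, 2: {"name": "b", "qty": 2}}

def apply_batch(table, batch_doc):
    """Apply one batch-update document to the collection in one request."""
    for change in json.loads(batch_doc)["changes"]:
        if change["op"] == "delete":
            table.pop(change["row"], None)
        elif change["op"] == "update":
            # only the fields the client actually changed are present
            table[change["row"]].update(change["fields"])
        elif change["op"] == "insert":
            table[change["row"]] = change["fields"]

# Usage: one Save click -> one document covering edits across rows.
batch = json.dumps({"changes": [
    {"op": "update", "row": 1, "fields": {"qty": 5}},
    {"op": "delete", "row": 2},
]})
apply_batch(rows, batch)
assert rows == {1: {"name": "a", "qty": 5}}
```

Making the change-set an explicit document keeps the interaction to a single visible HTTP request, which is the protocol-visibility point Subbu raises.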
What would you call just using HTTP as a transport if it doesn't satisfy those guidelines? Glenn On Mon, Jun 7, 2010 at 8:34 AM, Dong Liu <edongliu@gmail.com> wrote: > > > If a series of RPCs follows the following guidelines, I would call it > REST: > > 1. URIs and uniform methods for remote process calls > 2. Explicit self-descriptive messages > 3. HATEOAS > > Cheers, > > Dong 
On Jun 5, 2010, at 11:59 AM, Bill de hOra wrote: > JAX-RS doesn't have a client, how can it be the wrong way to go > yet. The server part that is specified, deals well enough with the > protocol elements and doesn't prevent me from using formats or object > models that contain links. One thing the spec does do well is > UriBuilder/UriInfo - regardless of whether the builder pattern is to > taste, it helps solves a layering problem in Java between service code > and http code. The JAX-RS impls haven't gotten in my way yet when it > comes to working media types, http, or just applying REST in general, > which is more than I can say for most frameworks on the JVM. I'm free to > figure out the data. +1. This is exactly why I feel claims that JAX-RS isn't RESTful aren't helping, they're misleading. And claiming JAX-RS is on the same level as WCF is somewhat insulting :-) Stefan -- Stefan Tilkov, http://www.innoq.com/blog/st/
On Mon, Jun 7, 2010 at 2:42 PM, Glenn Block <glenn.block@...> wrote: > > > What would you call just using HTTP as a transport if it doesn't satisfy > those guidelines? > > Glenn > > > Does it need to be called anything besides the monikers used today, such as XML over HTTP? For something to be identified as "REST", there are constraints that all need to be conformed to. Would the name be different if 2 (and not all) of the constraints were adhered to? Eb
On Mon, Jun 7, 2010 at 2:50 PM, Glenn Block <glenn.block@...> wrote: > It's also JSON.....it's not just about XML. > > > That's sort of my point actually. It could be a variety of types of data beyond both xml and json. So it's either RESTful or not. The "other" really doesn't need a moniker (IMHO).
I wish I had a succinct way to talk to folks about *style* vs. spec. It would make dealing with "negotiated REST" (adopting only selected constraints) much easier. What would you call Gothic architecture if you used flying buttresses but no pointed arches? Is that what we want to do? mca http://amundsen.com/blog/ http://mamund.com/foaf.rdf#me On Mon, Jun 7, 2010 at 14:49, Eb <amaeze@...> wrote: > > > > > On Mon, Jun 7, 2010 at 2:42 PM, Glenn Block <glenn.block@...> wrote: > >> >> >> What would you call just using HTTP as a transport if it doesn't satisfy >> those guidelines? >> >> Glenn >> >> >> > Does it need to be called anything besides the monikers used today such as > XML over HTTP? As has been identified to be "REST", there are a constrains > that all need to be conformed too. Would the name be different if 2 (and > not all) constraints were adhered too? > > Eb > > > >
On Jun 7, 2010, at 8:58 PM, mike amundsen wrote: > What would you call Gothic architecture if you used flying buttress, but no pointed arches? > > Is that what we want to do? ... no .... :-) Jan
If it became prolific there probably would be a term for it and you definitely wouldn't want to call it Gothic! On Mon, Jun 7, 2010 at 12:00 PM, Jan Algermissen <algermissen1971@...>wrote: > > > > On Jun 7, 2010, at 8:58 PM, mike amundsen wrote: > > > What would you call Gothic architecture if you used flying buttress, but > no pointed arches? > > > > Is that what we want to do? > > ... no .... :-) > > Jan > > >
My whole reasoning was to remove the confusion between those who today use the term REST without truly intending to apply RESTful principles (or even having the same goals in mind) and those who are intentionally applying a RESTful style. Glenn On Mon, Jun 7, 2010 at 11:53 AM, Eb <amaeze@...> wrote: > > > On Mon, Jun 7, 2010 at 2:50 PM, Glenn Block <glenn.block@...>wrote: > >> It's also JSON.....it's not just about XML. >> >> >> > That's sort of my point actually. It could be a variety of types of data > beyond both xml and json. So it's easier RESTful or not. The "other" > really doesn't need a moniker (IMHO). >
It would be XML-RPC if XML is used for messages. Just RPC over HTTP if no XML is involved? Cheers, Dong On Mon, Jun 7, 2010 at 12:42 PM, Glenn Block <glenn.block@...> wrote: > What would you call just using HTTP as a transport if it doesn't satisfy > those guidelines? > > Glenn > > On Mon, Jun 7, 2010 at 8:34 AM, Dong Liu <edongliu@...> wrote: > >> >> >> If a series of RPC's follows the following guidelines, I would call it >> REST: >> >> 1. URI and unifies methods for remote process calls >> 2. Explicit self-descriptive messages >> 3. HATEOAS >> >> Cheers, >> >> Dong >> >> >> On Mon, Jun 7, 2010 at 12:25 AM, Jan Algermissen <algermissen1971@... >> > wrote: >> >>> >>> >>> >>> On Jun 7, 2010, at 8:16 AM, mike amundsen wrote: >>> >>> > >>> > >>> > yeah, it can look like RPC. however, zipcheck is a search resource, not >>> a method. and responses are cached. >>> >>> Yep, good point. >>> >>> Maybe it helps to think about it this way: It is not RPC if the method >>> that is invoked by/of the connector is uniform. The browser invokes HTTP's >>> GET, not some method that is application specific. And it means to invoke >>> GET, not to invoke something else by means of a GET. >>> >>> Jan >>> >>> >>> > >>> > while a single JS line is used, there are no async calls and XML is not >>> returned, in image is returned (not an image URI). >>> > >>> > FWIW, there is another version that supports conneg for several >>> media-types. and a version that returns the query document if no data is >>> passed in the request. >>> > >>> > >>> > mca >>> > http://amundsen.com/blog/ >>> > http://mamund.com/foaf.rdf#me >>> > >>> > >>> > >>> > On Mon, Jun 7, 2010 at 01:34, Glenn Block <glenn.block@...<glenn.block%40gmail.com>> >>> wrote: >>> > Neat demo / bookmarked. >>> > >>> > That is technically AJAX in that you are dynamically manipulating the >>> dom through javascript. AJAX does stand for "Asynchronous Javvascript and >>> XML" doesn't it? 
>>> > >>> > How exactly would you say it's RESTful and not RPC? YOu are basically >>> using zipcheck as a method with ziptext being passed along the query string. >>> > >>> > Regards >>> > Glenn >>> > >>> > On Sun, Jun 6, 2010 at 10:18 PM, mike amundsen <mamund@...<mamund%40yahoo.com>> >>> wrote: >>> > REST places no restrictions on UI behavior including responsiveness or >>> interactivity. >>> > >>> > I start many REST/HTTP talks w/ a very simple example app to illustrate >>> this very point: >>> > http://amundsen.com/examples/zipcheck/ >>> > >>> > Instant user feedback w/o AJAX (or RPC). >>> > >>> > mca >>> > http://amundsen.com/blog/ >>> > http://mamund.com/foaf.rdf#me >>> > >>> > >>> > >>> > On Mon, Jun 7, 2010 at 00:51, Glenn Block <glenn.block@...<glenn.block%40gmail.com>> >>> wrote: >>> > >>> > >>> > You really think people are going to say "Woo hoo I am using HTTP Type >>> 1" ? :-) >>> > >>> > REST sounds cool...that's why they like it. >>> > >>> > On your second point, I am not sure. That makes the argument that RPC >>> over HTTP is absolutely bad. Even though I can see the value that a RESTful >>> style provides, I don't see why a person building an AJAX style application >>> who uses RPC over HTTP as a means to improve responsiveness is in itself a >>> bad thing. >>> > >>> > I personally worked on quite a few RPC over HTTP style AJAX apps where >>> the goal was improving responsiveness in the application and to offer a rich >>> client type experience in a brower. Yes it dishonors hypermedia constraints >>> (we didn't even know what those were), but it worked and made paying >>> customers pretty happy that they had a more interactive / responsive UI. >>> That was the primary goal and the benefit. I don't think we were deluded >>> into any other benefits. >>> > >>> > Do you see that as "wrong"? >>> > >>> > Personally I feel like calling that REST is wrong, in the way calling >>> some that is not an implementation of MVC, MVC. 
>>> > >>> > Glenn >>> > >>> > >>> > >>> > On Sun, Jun 6, 2010 at 9:04 PM, Jan Algermissen < >>> algermissen1971@... <algermissen1971%40mac.com>> wrote: >>> > Glenn, >>> > >>> > On Jun 7, 2010, at 12:10 AM, Glenn Block wrote: >>> > >>> > > >>> > > >>> > > Darrel and I were chatting and we started to discuss the problem of >>> using "RPC over HTTP' to mean "REST". The result of that conversation seemed >>> to be that the problem is the usage of the term 'REST' in that nothing about >>> RPC over HTTP is RESTful in nature. >>> > >>> > I think one of the big problems in this area is the lack of proper >>> names for all the various abuses of REST. Having names helps to >>> differentiate. I created a classification of mis-uses a while back: >>> http://nordsc.com/ext/classification_of_http_based_apis.html >>> > >>> > > My argument, the fact that folks want to use HTTP as a pure transport >>> for loose XML/JSON in itself is not a problem asd long as one has proper >>> expectations about the pros/cons. >>> > >>> > Yes, but the problem being the end of the sentence: "as long as one has >>> proper expectations". I seriously doubt that a significant number of people >>> understand the trade offs (if they did, they would just do true REST because >>> the gains outweigh (IMHO small) the additional effort). >>> > >>> > While there might not be a problem if you understand what you are >>> doing, it also makes no real sense to do it (it is not really justifiable). >>> > >>> > Jan >>> > >>> > >>> > > >>> > > That led me to think of a new term to represent the "RPC over HTTP" >>> movement and which is along the spirit of the whole "No SQL" mantra. "No >>> SOAP" - the meaning of which is "Not Only SOAP". >>> > > >>> > > LIke it or hate it? >>> > >>> > >>> > >>> > >>> > > >>> > > Regards >>> > > Glenn >>> > > >>> > > >>> > > >>> > >>> > ----------------------------------- >>> > Jan Algermissen, Consultant >>> > NORD Software Consulting >>> > >>> > Mail: algermissen@... 
<algermissen%40acm.org> >>> > Blog: http://www.nordsc.com/blog/ >>> > Work: http://www.nordsc.com/ >>> > ----------------------------------- >>> > >>> > >>> > >>> > >>> > >>> > >>> > >>> > >>> > >>> > >>> > >>> > >>> > >>> >>> ----------------------------------- >>> Jan Algermissen, Consultant >>> NORD Software Consulting >>> >>> Mail: algermissen@... <algermissen%40acm.org> >>> Blog: http://www.nordsc.com/blog/ >>> Work: http://www.nordsc.com/ >>> ----------------------------------- >>> >>> >> >> > >
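Jan's criterion above (the connector invokes HTTP's uniform GET, rather than an application-specific method smuggled through HTTP) can be made concrete. A minimal sketch, with invented URIs and payloads; the heuristic is deliberately crude and only illustrates the distinction, it is not a real conformance check:

```python
# Hypothetical raw HTTP requests (URIs and payloads invented for
# illustration) contrasting the two styles under discussion.

rpc_style = (
    "POST /endpoint HTTP/1.1\r\n"
    "Host: example.com\r\n"
    "Content-Type: application/json\r\n"
    "\r\n"
    '{"method": "zipCheck", "params": ["90210"]}'
)

rest_style = (
    "GET /zip/90210 HTTP/1.1\r\n"
    "Host: example.com\r\n"
    "Accept: image/png\r\n"
    "\r\n"
)

def looks_like_tunnelled_rpc(request: str) -> bool:
    """Crude heuristic for this sketch only: the application's own method
    name travels inside the body, so a generic intermediary cannot act on
    the request from the start-line alone (no caching, for instance)."""
    start_line_method = request.split(" ", 1)[0]
    body = request.split("\r\n\r\n", 1)[1]
    return start_line_method == "POST" and '"method"' in body

print(looks_like_tunnelled_rpc(rpc_style))   # True  - semantics hidden in the body
print(looks_like_tunnelled_rpc(rest_style))  # False - GET /zip/90210 says it all
```

The practical point in the zipcheck discussion is the second request: because the semantics live entirely in the uniform method and the URI, any cache along the way can serve it without understanding the application.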
On Mon, Jun 7, 2010 at 3:23 PM, Glenn Block <glenn.block@...> wrote: > My whole reasoning was to remove the confusion that exists around those who > today use the term REST though not truly intending to apply RESTful > principles or even having the same goals in mind, with those who are > intentionally applying a RESTful style. > > > I understand that, but do we really think they (including myself) will revert to this new naming convention, whatever it is? Additionally, a lot of these REST "APIs" have already been released into the wild for consumption. I think it's better to continue to educate people on what is (and is not) REST so as to help us all make the distinction when these claims are made.
On Mon, Jun 7, 2010 at 3:25 PM, Dong Liu <edongliu@...> wrote: > > > It would be XML-RPC if XML is used for messages. Just RPC over HTTP if no > XML is involved? > > Cheers, > > Dong > > > XML-RPC could be over TCP or FTP for all practical purposes, and JSON could be sent over TCP, so I don't think you're gaining much that way. In fact, as has been mentioned earlier, just because REST is the style used doesn't mean it has to be HTTP. If something is not REST, it's non-REST. Maybe that's the "other style" right there (or just don't categorize whatever it is you are doing as REST). :)
You may be right, it could be a fool's errand. I agree we can do more from an education standpoint. It just seems like the term has been "hijacked," so to speak, away from its original intent. Not the first time. Glenn On Mon, Jun 7, 2010 at 12:31 PM, Eb <amaeze@...> wrote: > > > On Mon, Jun 7, 2010 at 3:23 PM, Glenn Block <glenn.block@...>wrote: > >> My whole reasoning was to remove the confusion that exists around those >> who today use the term REST though not truly intending to apply RESTful >> principles or even having the same goals in mind, with those who are >> intentionally applying a RESTful style. >> >> >> > > I understand that, but do we really think they (including myself) will > revert to this new naming convention(s) whatever it is? Additionally a lot > of these REST "api's" have already been released to the wild for > consumption. I think its better to continue to educate people on what is > (and is not) REST so as to help us all making the distinctions when these > claims are made. >
Well that was why I was coming with "No SOAP" :-) SOAP doesn't have to be just HTTP either. Glenn On Mon, Jun 7, 2010 at 12:35 PM, Eb <amaeze@gmail.com> wrote: > > > > On Mon, Jun 7, 2010 at 3:25 PM, Dong Liu <edongliu@gmail.com> wrote: > >> >> >> It would be XML-RPC if XML is used for messages. Just RPC over HTTP if no >> XML is involved? >> >> Cheers, >> >> Dong >> >> >> > XML-RPC could be over TCP or FTP for all practical purposes and JSON could > be sent over TCP so I don't think you're gaining much that way. In fact, as > has been mentioned early, just because REST is the style used, doesn't mean > it has to be HTTP. > > If something is not REST, it's non-REST. Maybe that's the "other style" > right there (or just don't categorize whatever it is you are doing as REST). > :) > > >
I say focus on the spec (HTTP) and leave the style (REST) alone for now. Build a great library for MSFT devs that uses the HTTP name (not REST name) and you'll have lots of smart people talking about writing great web apps w/ HTTP. mca http://amundsen.com/blog/ http://mamund.com/foaf.rdf#me On Mon, Jun 7, 2010 at 15:37, Glenn Block <glenn.block@...> wrote: > > > You may be right, it could be a fool's errand. I agree we can do more from > an education stand point. It just seems like the term has been "hijacked" so > to speak off of it's original intent. Not the first time. > > Glenn > > On Mon, Jun 7, 2010 at 12:31 PM, Eb <amaeze@...> wrote: > >> >> >> On Mon, Jun 7, 2010 at 3:23 PM, Glenn Block <glenn.block@...>wrote: >> >>> My whole reasoning was to remove the confusion that exists around those >>> who today use the term REST though not truly intending to apply RESTful >>> principles or even having the same goals in mind, with those who are >>> intentionally applying a RESTful style. >>> >>> >>> >> >> I understand that, but do we really think they (including myself) will >> revert to this new naming convention(s) whatever it is? Additionally a lot >> of these REST "api's" have already been released to the wild for >> consumption. I think its better to continue to educate people on what is >> (and is not) REST so as to help us all making the distinctions when these >> claims are made. >> > > > > >
really? how about focusing on the spec with the style in mind? On Mon, Jun 7, 2010 at 3:40 PM, mike amundsen <mamund@...> wrote: > I say focus on the spec (HTTP) and leave the style (REST) alone for now. > > Build a great library for MSFT devs that uses the HTTP name (not REST name) > and you'll have lots of smart people talking about writing great web apps w/ > HTTP. > > mca > http://amundsen.com/blog/ > http://mamund.com/foaf.rdf#me > > > > On Mon, Jun 7, 2010 at 15:37, Glenn Block <glenn.block@...> wrote: > >> >> >> You may be right, it could be a fool's errand. I agree we can do more from >> an education stand point. It just seems like the term has been "hijacked" so >> to speak off of it's original intent. Not the first time. >> >> Glenn >> >> On Mon, Jun 7, 2010 at 12:31 PM, Eb <amaeze@...> wrote: >> >>> >>> >>> On Mon, Jun 7, 2010 at 3:23 PM, Glenn Block <glenn.block@...>wrote: >>> >>>> My whole reasoning was to remove the confusion that exists around those >>>> who today use the term REST though not truly intending to apply RESTful >>>> principles or even having the same goals in mind, with those who are >>>> intentionally applying a RESTful style. >>>> >>>> >>>> >>> >>> I understand that, but do we really think they (including myself) will >>> revert to this new naming convention(s) whatever it is? Additionally a lot >>> of these REST "api's" have already been released to the wild for >>> consumption. I think its better to continue to educate people on what is >>> (and is not) REST so as to help us all making the distinctions when these >>> claims are made. >>> >> >> >> >> >> > >
<snip> really? how about focusing on the spec with the style in mind? </snip> well, my comments here are mostly about tactics. they're based on my belief that too many folks conflate REST and HTTP or simply think REST is another name for HTTP, etc. thus, my notion of dropping reference to REST while MSFT builds a great library for HTTP and "sells" it as such to the community. just my blather, tho<g>. mca http://amundsen.com/blog/ http://mamund.com/foaf.rdf#me On Mon, Jun 7, 2010 at 15:44, Eb <amaeze@...> wrote: > really? how about focusing on the spec with the style in mind? > > > On Mon, Jun 7, 2010 at 3:40 PM, mike amundsen <mamund@...> wrote: > >> I say focus on the spec (HTTP) and leave the style (REST) alone for now. >> >> Build a great library for MSFT devs that uses the HTTP name (not REST >> name) and you'll have lots of smart people talking about writing great web >> apps w/ HTTP. >> >> mca >> http://amundsen.com/blog/ >> http://mamund.com/foaf.rdf#me >> >> >> >> On Mon, Jun 7, 2010 at 15:37, Glenn Block <glenn.block@...> wrote: >> >>> >>> >>> You may be right, it could be a fool's errand. I agree we can do more >>> from an education stand point. It just seems like the term has been >>> "hijacked" so to speak off of it's original intent. Not the first time. >>> >>> Glenn >>> >>> On Mon, Jun 7, 2010 at 12:31 PM, Eb <amaeze@...> wrote: >>> >>>> >>>> >>>> On Mon, Jun 7, 2010 at 3:23 PM, Glenn Block <glenn.block@...>wrote: >>>> >>>>> My whole reasoning was to remove the confusion that exists around those >>>>> who today use the term REST though not truly intending to apply RESTful >>>>> principles or even having the same goals in mind, with those who are >>>>> intentionally applying a RESTful style. >>>>> >>>>> >>>>> >>>> >>>> I understand that, but do we really think they (including myself) will >>>> revert to this new naming convention(s) whatever it is? 
Additionally a lot >>>> of these REST "api's" have already been released to the wild for >>>> consumption. I think its better to continue to educate people on what is >>>> (and is not) REST so as to help us all making the distinctions when these >>>> claims are made. >>>> >>> >>> >>> >>> >>> >> >> >
I do second, and triple the opinion that focusing on http implementations would help the community and users much more than anything that tries to follow any style either way. From: rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com] On Behalf Of mike amundsen Sent: 07 June 2010 20:48 To: Eb Cc: Glenn Block; REST Discuss Subject: Re: [rest-discuss] Thinking of a new term for RPC over HTTP <snip> really? how about focusing on the spec with the style in mind? </snip> well, my comments here are mostly about tactics. they're based on my belief that too many folks conflate REST and HTTP or simply think REST is another name for HTTP, etc. thus, my notion of dropping reference to REST while MSFT builds a great library for HTTP and "sells" it as such to the community. just my blather, tho<g>. mca http://amundsen.com/blog/ http://mamund.com/foaf.rdf#me On Mon, Jun 7, 2010 at 15:44, Eb <amaeze@...<mailto:amaeze@...>> wrote: really? how about focusing on the spec with the style in mind? On Mon, Jun 7, 2010 at 3:40 PM, mike amundsen <mamund@...<mailto:mamund@...>> wrote: I say focus on the spec (HTTP) and leave the style (REST) alone for now. Build a great library for MSFT devs that uses the HTTP name (not REST name) and you'll have lots of smart people talking about writing great web apps w/ HTTP. mca http://amundsen.com/blog/ http://mamund.com/foaf.rdf#me On Mon, Jun 7, 2010 at 15:37, Glenn Block <glenn.block@...<mailto:glenn.block@...>> wrote: You may be right, it could be a fool's errand. I agree we can do more from an education stand point. It just seems like the term has been "hijacked" so to speak off of it's original intent. Not the first time. 
Glenn On Mon, Jun 7, 2010 at 12:31 PM, Eb <amaeze@...<mailto:amaeze@...>> wrote: On Mon, Jun 7, 2010 at 3:23 PM, Glenn Block <glenn.block@...<mailto:glenn.block@...>> wrote: My whole reasoning was to remove the confusion that exists around those who today use the term REST though not truly intending to apply RESTful principles or even having the same goals in mind, with those who are intentionally applying a RESTful style. I understand that, but do we really think they (including myself) will revert to this new naming convention(s) whatever it is? Additionally a lot of these REST "api's" have already been released to the wild for consumption. I think its better to continue to educate people on what is (and is not) REST so as to help us all making the distinctions when these claims are made.
Our current focus is HTTP. We're thinking of this as "WCF HTTP Web evolution", i.e. the next phase of WCF support for HTTP. Sounds like building something with a RESTful style is completely dependent on good HTTP support. Beyond that it's interesting to explore other specific investments we could make to help those who want to specifically abide by the RESTful constraints. Not sure what that is yet, but some ideas are starting to form, like proper link support, resources, etc. It may sound like this whole question of "No SOAP" was connected to this new effort, but it wasn't. I as a complete noob was just looking at all the confusion / misconceptions over REST vs HTTP RPC styles and seeing if there's some way to clear the air. Regards Glenn On Mon, Jun 7, 2010 at 2:09 PM, Sebastien Lambla <seb@...> wrote: > I do second, and triple the opinion that focusing on http implementations > would help the community and users much more than anything that tries to > follow any style either way. > > > > *From:* rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com] > *On Behalf Of *mike amundsen > *Sent:* 07 June 2010 20:48 > *To:* Eb > *Cc:* Glenn Block; REST Discuss > *Subject:* Re: [rest-discuss] Thinking of a new term for RPC over HTTP > > > > > > <snip> > > really? how about focusing on the spec with the style in mind? > > </snip> > > > > well, my comments here are mostly about tactics. they're based on my belief > that too many folks conflate REST and HTTP or simply think REST is another > name for HTTP, etc. thus, my notion of dropping reference to REST while MSFT > builds a great library for HTTP and "sells" it as such to the community. > > > > just my blather, tho<g>. > > > > mca > http://amundsen.com/blog/ > http://mamund.com/foaf.rdf#me > > > On Mon, Jun 7, 2010 at 15:44, Eb <amaeze@...> wrote: > > really? how about focusing on the spec with the style in mind? 
> > > > On Mon, Jun 7, 2010 at 3:40 PM, mike amundsen <mamund@...> wrote: > > I say focus on the spec (HTTP) and leave the style (REST) alone for now. > > > > Build a great library for MSFT devs that uses the HTTP name (not REST name) > and you'll have lots of smart people talking about writing great web apps w/ > HTTP. > > > mca > http://amundsen.com/blog/ > http://mamund.com/foaf.rdf#me > > > On Mon, Jun 7, 2010 at 15:37, Glenn Block <glenn.block@...> wrote: > > > > You may be right, it could be a fool's errand. I agree we can do more from > an education stand point. It just seems like the term has been "hijacked" so > to speak off of it's original intent. Not the first time. > > > > Glenn > > On Mon, Jun 7, 2010 at 12:31 PM, Eb <amaeze@...> wrote: > > > > On Mon, Jun 7, 2010 at 3:23 PM, Glenn Block <glenn.block@...> wrote: > > My whole reasoning was to remove the confusion that exists around those who > today use the term REST though not truly intending to apply RESTful > principles or even having the same goals in mind, with those who are > intentionally applying a RESTful style. > > > > > > > I understand that, but do we really think they (including myself) will > revert to this new naming convention(s) whatever it is? Additionally a lot > of these REST "api's" have already been released to the wild for > consumption. I think its better to continue to educate people on what is > (and is not) REST so as to help us all making the distinctions when these > claims are made. > > > > > > > > > > > > > >
In terms of names, common usages I've seen on this list have been web API, http api and POD (although the latter is the one I use, and have had very little success getting it to stick). ________________________________ From: Glenn Block [glenn.block@...] Sent: 08 June 2010 01:26 To: Sebastien Lambla Cc: mike amundsen; Eb; REST Discuss Subject: Re: [rest-discuss] Thinking of a new term for RPC over HTTP Our current focus is HTTP. We're thinking of this as "WCF HTTP Web evolution" ie the next phase of WCF support for HTTP. Sounds like building something with a RESTful style is completely dependent on good HTTP support. Beyond that it's interesting to explore other specific investments we could do to help those who want to specifically abide by the RESTful constraints. Not sure what that is yet, but some ideas are starting to form like proper link support, resources, etc. It may sound like this whole question of "No SOAP" was connected to this new effort, but it wasn't. I as a complete noob was just looking at all the confusion / misconceptions over REST vs HTTP RPC styles and seeing if there's some way to clear up the air. Regards Glenn On Mon, Jun 7, 2010 at 2:09 PM, Sebastien Lambla <seb@...<mailto:seb@...>> wrote: I do second, and triple the opinion that focusing on http implementations would help the community and users much more than anything that tries to follow any style either way. From: rest-discuss@yahoogroups.com<mailto:rest-discuss@yahoogroups.com> [mailto:rest-discuss@yahoogroups.com<mailto:rest-discuss@yahoogroups.com>] On Behalf Of mike amundsen Sent: 07 June 2010 20:48 To: Eb Cc: Glenn Block; REST Discuss Subject: Re: [rest-discuss] Thinking of a new term for RPC over HTTP <snip> really? how about focusing on the spec with the style in mind? </snip> well, my comments here are mostly about tactics. they're based on my belief that too many folks conflate REST and HTTP or simply think REST is another name for HTTP, etc. 
thus, my notion of dropping reference to REST while MSFT builds a great library for HTTP and "sells" it as such to the community. just my blather, tho<g>. mca http://amundsen.com/blog/ http://mamund.com/foaf.rdf#me On Mon, Jun 7, 2010 at 15:44, Eb <amaeze@...<mailto:amaeze@...>> wrote: really? how about focusing on the spec with the style in mind? On Mon, Jun 7, 2010 at 3:40 PM, mike amundsen <mamund@...<mailto:mamund@...>> wrote: I say focus on the spec (HTTP) and leave the style (REST) alone for now. Build a great library for MSFT devs that uses the HTTP name (not REST name) and you'll have lots of smart people talking about writing great web apps w/ HTTP. mca http://amundsen.com/blog/ http://mamund.com/foaf.rdf#me On Mon, Jun 7, 2010 at 15:37, Glenn Block <glenn.block@...<mailto:glenn.block@...>> wrote: You may be right, it could be a fool's errand. I agree we can do more from an education stand point. It just seems like the term has been "hijacked" so to speak off of it's original intent. Not the first time. Glenn On Mon, Jun 7, 2010 at 12:31 PM, Eb <amaeze@...<mailto:amaeze@...>> wrote: On Mon, Jun 7, 2010 at 3:23 PM, Glenn Block <glenn.block@...<mailto:glenn.block@...>> wrote: My whole reasoning was to remove the confusion that exists around those who today use the term REST though not truly intending to apply RESTful principles or even having the same goals in mind, with those who are intentionally applying a RESTful style. I understand that, but do we really think they (including myself) will revert to this new naming convention(s) whatever it is? Additionally a lot of these REST "api's" have already been released to the wild for consumption. I think its better to continue to educate people on what is (and is not) REST so as to help us all making the distinctions when these claims are made.
> Yes, I'd like to give a link of alternates which have the same URI but > different media types, and that's what I initially implemented. But it > didn't work when I tested it, because real-world user agents just don't > get it. Why should they? RFC 2616 says you SHOULD assign URIs to > variants and send them in Content-Location, and I've never seen a good > reason put forth for ignoring that SHOULD when caching or direct- > referencing a variant are concerned. > > HTML media types say nothing about how to evaluate or choose from a list > of alternates, they only describe what an alternate link *means*, there > is no conneg algorithm for HTML. But there is one for HTTP. So do your > conneg in HTTP where it's specified, not HTML. > > So I'm saying what I'm saying for practical reasons of UA behavior, > yes, but I'm also saying there's nothing wrong with that UA behavior > since it follows what the specs and REST both say. I'm saying assign > URIs to your variants because that's Web architecture, which is why > browsers work the way they do, as specced in RFC 2616 with SHOULD, and > because it works. It is best practice to assign URIs to variants, > because *that's how the Web works* not because UAs are broken and we > must work with these broken UAs. I've trimmed the rest of this because I think I've finally realized the essence of our disconnect. I gather the SHOULD is: "A server SHOULD provide a Content-Location for the variant corresponding to the response entity; especially in the case where a resource has multiple entities associated with it, and those entities actually have separate locations by which they might be individually accessed, the server SHOULD provide a Content-Location for the particular variant which is returned. "[1] You read that particular SHOULD statement as prescriptive, as in "you SHOULD expose those entities at separate locations." And I read it to mean, "when the situation exists, you SHOULD do it this way". 
I don't interpret it as asserting whether the existence of the situation itself is good or not. Put another way, if we had a weather forecast: http://weather.example.com/zip/22180 You seem to suggest it's desirable to have: http://weather.example.com/zip/22180.txt http://weather.example.com/zip/22180.html http://weather.example.com/zip/22180.pdf and I think the former is desirable. I've read web architecture[2] and don't see the basis for your conclusion that identifying variants is particularly desirable. Unless I've misinterpreted Roy's thoughts, I gather he supports the former: "If the resource is a concept independent of representation format, then its URI must not have any aspect that is specific to the representation format."[3] Personally, I see the coupling introduced with a @type attribute as much less offensive than the coupling (resource<->representation) that occurs by having the variants in their own URIs. You asked a fair question: what does it break/what's the downside? I think it breaks the essential ideal in REST that URIs identify resources as concepts separate from their representation. If you're relating resources together (e.g. linked data), for consistency, it's important to link the concepts together, not a specific representation. I think the Cool URIs[4] document makes the case too. --tim [1] - http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html [2] - http://www.w3.org/TR/webarch/ [3] - http://thread.gmane.org/gmane.comp.web.services.rest/289/focus=319 [4] - http://www.w3.org/Provider/Style/URI
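For readers following along, the mechanics being debated can be sketched in a few lines. This is a hypothetical illustration using the weather URIs from the message above, not anyone's actual implementation; q-values and 406 handling are deliberately glossed over:

```python
# Map each negotiable media type to a variant-specific URI suffix, so the
# generic resource URI can answer conneg on Accept *and* advertise the
# chosen variant's own URI via Content-Location (the RFC 2616 SHOULD
# under discussion).
VARIANTS = {
    "text/plain": ".txt",
    "text/html": ".html",
    "application/pdf": ".pdf",
}

def negotiate(generic_uri, accept_header):
    """Pick the first acceptable variant (q-values ignored for brevity)
    and return (chosen media type, Content-Location header value)."""
    for offered in accept_header.split(","):
        media = offered.split(";")[0].strip()
        if media in VARIANTS:
            return media, generic_uri + VARIANTS[media]
    return None, None  # a real server would answer 406 Not Acceptable

media, location = negotiate("http://weather.example.com/zip/22180",
                            "text/html,application/xhtml+xml;q=0.9")
print(media)     # text/html
print(location)  # http://weather.example.com/zip/22180.html
```

Note that this sketch is compatible with either reading of the SHOULD: the point in dispute is whether the `.html`-style URIs it advertises are desirable Web architecture in themselves or merely a pragmatic concession to user-agent behavior.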
On Tue, Jun 8, 2010 at 11:21 AM, Tim Williams <williamstw@...> wrote:
>> Yes, I'd like to give a link of alternates which have the same URI but
>> different media types, and that's what I initially implemented. But it
>> didn't work when I tested it, because real-world user agents just don't
>> get it. Why should they? RFC 2616 says you SHOULD assign URIs to
>> variants and send them in Content-Location, and I've never seen a good
>> reason put forth for ignoring that SHOULD where caching or
>> direct-referencing a variant are concerned.
>>
>> HTML media types say nothing about how to evaluate or choose from a
>> list of alternates; they only describe what an alternate link *means*.
>> There is no conneg algorithm for HTML, but there is one for HTTP. So do
>> your conneg in HTTP, where it's specified, not in HTML.
>>
>> So I'm saying what I'm saying for practical reasons of UA behavior,
>> yes, but I'm also saying there's nothing wrong with that UA behavior,
>> since it follows what the specs and REST both say. I'm saying assign
>> URIs to your variants because that's Web architecture, which is why
>> browsers work the way they do, as specced in RFC 2616 with SHOULD, and
>> because it works. It is best practice to assign URIs to variants
>> because *that's how the Web works*, not because UAs are broken and we
>> must work with these broken UAs.
>
> I've trimmed the rest of this because I think I've finally realized
> the essence of our disconnect. I gather the SHOULD is:
>
> "A server SHOULD provide a Content-Location for the variant
> corresponding to the response entity; especially in the case
> where a resource has multiple entities associated with it, and those
> entities actually have separate locations by which they might be
> individually accessed, the server SHOULD provide a Content-Location
> for the particular variant which is returned."[1]
>
> You read that particular SHOULD statement as prescriptive, as in "you
> SHOULD expose those entities at separate locations," and I read it to
> mean "when the situation exists, you SHOULD do it this way". I don't
> interpret it as asserting whether the existence of the situation
> itself is good or not.
>
> Put another way, if we had a weather forecast:
>
> http://weather.example.com/zip/22180
>
> You seem to suggest it's desirable to have:
>
> http://weather.example.com/zip/22180.txt
> http://weather.example.com/zip/22180.html
> http://weather.example.com/zip/22180.pdf
>
> whereas I think the former is desirable. I've read web architecture[2]
> and don't see the basis for your conclusion that identifying variants
> is particularly desirable. Unless I've misinterpreted Roy's thoughts,
> I gather he supports the former:
>
> "If the resource is a concept independent of
> representation format, then its URI must not have any aspect
> that is specific to the representation format."[3]
>
> Personally, I see the coupling introduced with a @type attribute as
> much less offensive than the coupling (resource<->representation) that
> occurs by having the variants in their own URIs.
>
> You asked a fair question: what does it break, and what's the
> downside? I think it breaks the essential ideal in REST that URIs
> identify resources as concepts separate from their representation.
> If you're relating resources together (e.g. linked data), for
> consistency it's important to link the concepts together, not a
> specific representation. I think the Cool URIs[4] document makes the
> case too.

Now, I see that you previously wrote:

>> Absolutely NOT. URIs identify _resources_; the control data is used
>> to select _representations_, and the two are _not_ the same thing.

.. so, I dunno, I agree with you when you said this, but disagree with
you when you suggest that each of those representations should be its
own resource - at least when you assert that it's a desirable/web
arch/RESTful thing to do vs. a pragmatic necessity based on UA
behavior.

--tim

> [1] - http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html
> [2] - http://www.w3.org/TR/webarch/
> [3] - http://thread.gmane.org/gmane.comp.web.services.rest/289/focus=319
> [4] - http://www.w3.org/Provider/Style/URI
Tim Williams wrote:
>
> I've trimmed the rest of this because I think I've finally realized
> the essence of our disconnect. I gather the SHOULD is:
>
> "A server SHOULD provide a Content-Location for the variant
> corresponding to the response entity; especially in the case
> where a resource has multiple entities associated with it, and
> those entities actually have separate locations by which they might be
> individually accessed, the server SHOULD provide a Content-Location
> for the particular variant which is returned."[1]
>
> You read that particular SHOULD statement as prescriptive, as in "you
> SHOULD expose those entities at separate locations," and I read it to
> mean "when the situation exists, you SHOULD do it this way".
>

The English is plain: "A server SHOULD provide a Content-Location for
the variant corresponding to the response entity." It then goes on to
note that this is *especially* the case if certain conditions are met.
This does _not_ mean that the SHOULD only applies *if* those conditions
are met.

> I don't
> interpret it as asserting whether the existence of the situation
> itself is good or not.
>

We're talking about a situation where the server is responding with a
variant of a negotiated resource. Therefore, the SHOULD directly
applies to the situation, without passing any judgment on it.

> Put another way, if we had a weather forecast:
>
> http://weather.example.com/zip/22180
>
> You seem to suggest it's desirable to have:
>
> http://weather.example.com/zip/22180.txt
> http://weather.example.com/zip/22180.html
> http://weather.example.com/zip/22180.pdf
>
> whereas I think the former is desirable.
>

Yes, I agree, the former is desirable for the negotiated resource. The
latter are also desirable, as they meet the conditions of the SHOULD.

> I've read web architecture[2]
> and don't see the basis for your conclusion that identifying variants
> is particularly desirable.
>

Then refer to REST, where this desirability is called the
"identification of resources" constraint. If you are developing REST,
then all of REST's constraints are particularly desirable. If you are
not developing a REST system, then go ahead and violate the
identification of resources constraint; I won't stop you. But don't
violate that constraint and call the result REST.

> Unless I've misinterpreted Roy's thoughts,
> I gather he supports the former:
>
> "If the resource is a concept independent of
> representation format, then its URI must not have any aspect
> that is specific to the representation format."[3]
>

Roy is saying, don't negotiate on a URI ending in *.txt, basically.
This doesn't really have anything to do with the topic at hand.

> Personally, I see the coupling introduced with a @type attribute as
> much less offensive than the coupling (resource<->representation) that
> occurs by having the variants in their own URIs.
>

What coupling is that? Allowing the representation to be a resource in
its own right, by assigning it a URI, is decoupling. Only being able to
access that representation directly through @type is the coupling that
assigning URIs to variants avoids...

> You asked a fair question: what does it break/what's the downside? I
> think it breaks the essential ideal in REST that URIs identify
> resources as concepts separate from their representation.
>

The essential ideas in REST are called constraints; following the
identification of resources constraint does not violate any other
constraint. Identification of resources states that any resource you
wish to manipulate must first have its own URI. If you intend to
manipulate variants (or cache them), you must first identify those
variants as resources; otherwise you are breaking this constraint, as
has been pointed out 1000 times on this list as best practice.

> If you're
> relating resources together (e.g. linked data), for consistency, it's
> important to link the concepts together not a specific representation.
>

Then link to negotiated URIs instead of variant URIs. The ability to
link the concepts together, not specific representations, is provided
by content negotiation. Assigning URIs to variants does not negate this
ability.

> I think the Cool URIs[4] document makes the case too.
>

Quote, please. Cool URIs says nothing about representation vs.
resource. It discusses the maintenance of URIs once they've been
assigned, but has nothing to do with this topic.

None of which matters. All that matters here is RFC 2616's
admonishment: "A server SHOULD provide a Content-Location for the
variant corresponding to the response entity." Black-and-white,
cut-and-dried, no ifs, ands, or buts about it, for all the reasons of
making caches and user agents work that I've mentioned in this thread.
Best practice is best practice for valid reasons here.

The only reason that's a SHOULD and not a MUST is that a MUST would
require us to assign URIs to the compressed and the uncompressed
variants of a resource. For that reason, a MUST makes no sense. If
that isn't your reason, then abide by the SHOULD, by assigning URIs to
variants. Your REST system will thank you.

-Eric
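The server-side behaviour argued for here can be sketched briefly. This is an editorial illustration, not code from the thread: the variant map reuses Tim's weather-forecast URIs, and the Accept parsing is deliberately simplified.

```python
# Sketch of server-side conneg that honours RFC 2616's "SHOULD provide
# a Content-Location" by pairing each variant with its own URI.
# URIs and the variant table are illustrative only.

VARIANTS = {
    "text/html": "/zip/22180.html",
    "text/plain": "/zip/22180.txt",
    "application/pdf": "/zip/22180.pdf",
}

def negotiate(accept_header, variants=VARIANTS):
    """Return (media_type, response_headers) for the best variant, or None."""
    # Parse "type;q=0.8" pairs; the default quality is 1.0 per RFC 2616.
    prefs = []
    for part in accept_header.split(","):
        fields = part.strip().split(";")
        q = 1.0
        for f in fields[1:]:
            if f.strip().startswith("q="):
                q = float(f.strip()[2:])
        prefs.append((q, fields[0].strip()))
    # Walk preferences from highest quality down.
    for _, mtype in sorted(prefs, reverse=True):
        if mtype in variants:
            return mtype, {
                "Content-Type": mtype,
                "Content-Location": variants[mtype],  # the SHOULD in question
                "Vary": "Accept",
            }
    return None
```

A cache or user agent receiving these headers can then address the variant directly by its own URI, which is exactly what the SHOULD enables.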
Tim Williams wrote:
>
> .. so, I dunno, I agree with you when you said this, but disagree with
> you when you suggest that each of those representations should be its
> own resource - at least when you assert that it's a desirable/web
> arch/RESTful thing to do vs. a pragmatic necessity based on UA
> behavior.
>

This has nothing to do with compensating for UA behavior; that would
imply that all user agents are broken for not knowing how to deal with
variants that don't have URIs. The same goes for caches. The reason
that caches and user agents behave this way is that they are following
the spec. This is why the best practice here is to follow the spec.
The only pragmatic necessity here is following the same spec that the
user agents were built to, which says "SHOULD" loudly and clearly.

The reason, as everyone keeps repeating year in and year out on this
issue, for assigning URIs to variants is that it is required in order
to meet REST's identification of resources constraint. The mess that
results from not assigning URIs to variants only proves to me the
wisdom of the constraint whose application avoids that very mess.

This is a best-practice solution because it follows the identification
of resources constraint. Nobody who advocates this solution is telling
you to do it because browsers are borked; they're saying it because
it's how the Web, in reality, works -- because that aspect of the Web,
at least, was informed by REST. IOW, the real-world Web architecture
requires that the identification of resources constraint be applied in
order to work properly. So the best practice is to apply it, because
*that's* how the architecture we're dealing with works.

-Eric
All this discussion over how to instruct the client on what
representation to use for POST/PUT got me wondering about the role of
link relations outside of GET. AtomPub uses @rel="edit" to indicate an
editable link, which seems OK. Then I was reading the nifty new
RESTful Cookbook (thanks, Mike), where it has an example using
rel="add-review", and this struck me as a bit RPC-ish.

The book also suggests that the link relation documentation should:

1) Indicate valid methods for the target URI
2) Indicate expected media types on request/response for the target URI

This seemed to me to be mixing concerns. For example, it seems that
the resource should define what methods are allowed (all are valid?)
itself, independent of any link relation. Besides pointing me to
AtomPub, are there other good examples out there?

Thanks,
--tim
Re-sending on list...

On Tue, Jun 1, 2010 at 2:22 PM, Jan Algermissen <algermissen1971@...> wrote:
>
> On Jun 1, 2010, at 7:29 PM, Glenn Block wrote:
>
>> Adding the list.
>>
>> This is the problem with email. I started off asking about what to
>> post, then at some point we transitioned to discussions about conneg /
>> what to return, which didn't answer the first question :-)
>>
>> It sounds like for sure media type docs will tell you what you need
>> to post, and there is also the possibility for annotations within the
>> media type schema itself.
>>
>> Is that correct?
>
> No. If you place that information inside the media type specs, you
> couple the spec to the choice of formats.
>
> Such information should be provided at runtime, for example via the
> mechanism I sent in my first reply (e.g. HTML's enctype attribute).

So, for POST/PUT, a @type attribute is good and considered an
instruction, vs. it's OK but just a hint for GET?

--tim
Tim Williams wrote:
>
> So, for POST/PUT, a @type attribute is good and considered an
> instruction, vs. it's OK but just a hint for GET?
>

Exactly. REST's self-descriptive messaging constraint requires that a
sender of data clearly label that data with its media type. When you
GET, the media type returned is up to the server, because only the
server can authoritatively determine the result of conneg. When you
PUT, the media type must be set by the user agent, because only the
client can be authoritative as to what it is sending. So you must
specify in your hypertext what media type the user agent should label
the PUT request with. Media type is just a tagging-and-bagging thing.
In a PUT form, @type instructs the client how to bag and tag the
entity for transfer.

It remains the case that @type has no correlation to Accept headers,
so @type can only be a hint for GET.

-Eric
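The GET/PUT asymmetry described above can be sketched in a few lines. This is an editorial illustration, not from the thread; the helper names are invented.

```python
# Illustration of the @type asymmetry: for PUT the hypertext's @type
# dictates the label on what the client sends; for GET, @type is at
# most a hint and plays no part in building the request.

def put_headers(form_type_attr):
    # PUT: the form's @type instructs the client how to "bag and tag"
    # the entity it sends, per self-descriptive messaging.
    return {"Content-Type": form_type_attr}

def get_headers(client_prefs):
    # GET: the Accept header comes from the client's own preferences;
    # the server's conneg result is authoritative for what comes back.
    return {"Accept": ", ".join(client_prefs)}
```

Note that no @type value appears in `get_headers` at all, which is the point: @type has no correlation to Accept.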
I'd like to make a pseudo-argument from authority. Normally, arguing
from authority is a logical fallacy. However, in the case of REST,
there is one person whose word must be taken as fact, and that's Roy.
Over the years that I've participated here, this topic has come up
often. The list always explains that best practice is to follow the
identification of resources constraint by assigning URIs to variants.

Now, if I or anyone else had been wrong about this, I can guarantee
with certainty that Roy would've jumped in, called us out, and
explained how we were getting identification of resources wrong, or
why the follow-best-practice advice was wrong. But that hasn't
happened. Given all the opportunities to correct this information, if
it were truly wrong, one must assume that Roy would be leading the
charge to point out our error.

The fact that Roy hasn't stepped in and declared assigning URIs to
variants "bad practice" or a violation of REST, over the years,
implies that such advice is indeed correct. So I'm saying, don't take
my word for it, but do think twice about pointing out REST errors here
-- if that were the case, I highly doubt it would have escaped Roy's
notice and fallen to others to point out...

-Eric
If it helps, here are previous quotes from Roy about conneg [1] and
media types [2].

[1] http://delicious.com/alan.dean/conneg+Roy.Fielding
[2] http://delicious.com/alan.dean/Roy.Fielding+media-type

Regards,
Alan Dean
Here's an example of something I'm calling "hal":
GET /list
====
<link rel="self" href="/list">
<link rel="description" href="/list/description" />
<link rel="search" href="/list/search/{search_term}" />
<link rel="item" name="1" href="/items/some_item">
<title>Some Item</title>
<content>This is some item content</content>
</link>
<link rel="item" name="2" href="/foo/some_other_item">
<title>Some Other Item</title>
<content>This is content for some other item</content>
</link>
</link>
Hal just defines a standard way to express hyperlinks in xml via a simple
<link> element. The link element has the following attributes: @rel @href
@name
- Simple links can be written as solo/self-closing tags.
- Links used to indicate embedded representations from other resources
should be written with open and close tags, with the embedded representation
contained within.
- The root element must always be a link with an @rel of self and an
appropriate @href value.
- @name must be unique among all links in a document with the same @rel
value, but is not unique within the entire document; i.e., a link element
cannot be referred to by @name alone
- @href value may contain a URI template
I'd be interested to hear whether people think there's any value/legs in this,
problems with it, etc.
Cheers,
Mike
(see also:
http://restafari.blogspot.com/2010/06/please-accept-applicationhalxml.html )
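As an illustration of the rules above, the example document can be consumed with ordinary XML tooling, provided @name is quoted (it must be, for the document to be well-formed XML). The client code here is invented, not part of any hal proposal.

```python
import xml.etree.ElementTree as ET

# The example hal document from above, with @name values quoted so it
# parses as well-formed XML.
DOC = """
<link rel="self" href="/list">
  <link rel="description" href="/list/description" />
  <link rel="search" href="/list/search/{search_term}" />
  <link rel="item" name="1" href="/items/some_item">
    <title>Some Item</title>
    <content>This is some item content</content>
  </link>
  <link rel="item" name="2" href="/foo/some_other_item">
    <title>Some Other Item</title>
    <content>This is content for some other item</content>
  </link>
</link>
"""

def links_by_rel(doc, rel):
    """Return hrefs of the root's child links that carry the given @rel."""
    root = ET.fromstring(doc)
    return [el.get("href") for el in root.findall("link")
            if el.get("rel") == rel]
```

Per the rules, the root parses as a link with @rel of self, and a client drives itself purely off @rel values rather than element position.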
Mike,
What is the problem this is trying to solve? Why would I choose to use this
over HTML, Atom, JSON or (for that matter) RDF?
Regards,
Alan Dean
Yes, that was what I was about to ask, this is a solution to what problem?
I like its simplicity, but OTOH I don't understand what @name is needed
for. I would like to see some real use-case scenarios...
_________________________________________________
Melhores cumprimentos / Beir beannacht / Best regards
António Manuel dos Santos Mota
http://card.ly/amsmota
_________________________________________________
@name is needed for a situation in which there are several links with the
same rel that need to be distinguishable by the client.

The reason it's defined the way it is, is to stop the attribute being used
like @id in HTML to identify the link elements directly. It's important
that clients don't do this, and are first and foremost concerned with the
link relations. Darrel pointed this out to me yesterday.
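The lookup discipline described here, resolving a link by its (rel, name) pair rather than by @name alone, might look like this on the client side (an illustrative sketch, not part of any hal tooling):

```python
# Resolve a link by (rel, name); @name alone is deliberately not a key,
# mirroring the rule that it is only unique within a given @rel.

def find_link(links, rel, name):
    """links: list of dicts with 'rel', 'name', and 'href' keys."""
    for link in links:
        if link.get("rel") == rel and link.get("name") == name:
            return link["href"]
    return None
```

Because the pair is the key, a client stays anchored to link relations first, which is the point of the constraint.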
Hi Antonio,
I have continued to research this area. A good link I received from
Stefan (Tilkov) is [1]. It is a SOAP wrapper service enabling SOAP
clients to access RESTful Web Services.

Our model has several differences:

· In [1], Briggs is SOAP-enabling RESTful Web Services by wrapping them
with a SOAP interface, whereas we are REST(ful HTTP)-enabling SOAP
clients.

· In [1], the adapter is on the server; our adapter is on the client.
Thus, the on-the-wire client messages in [1] are opaque and tunneled,
whereas our messages are visible.

· In [1], the existing SOAP interface is not taken into account, there
is no intention to migrate existing clients from SOAP, and new clients
will be SOAP based. Our adapter is exactly the opposite: we do take the
existing SOAP interface into account, it is a seamless migration tool
to transition SOAP clients to RESTful HTTP, and new clients will be
RESTful HTTP based.

Regards,
Sean.

[1] http://dev.aol.com/rest_and_soap_sharing
________________________________
From: António Mota <amsmota@...>
To: Rest Discussion Group <rest-discuss@yahoogroups.com>
Sent: Mon, 31 May, 2010 16:40:10
Subject: Re: [rest-discuss] SOAP to REST
I was asked a very similar question - How can external services based
on SOAP to call REST based services - and searching the list I found
this post, but with no answers.
Note that the question is to assess if the services *to be
implemented* should use a REST approach or a WS-* approach, knowing
that the clients of those to be implemented services will be probably
disparate technologies, but including WS-*.
Does someone have any pointers?
On 5 May 2009 11:22, Sean Kennedy <seandkennedy@...> wrote:
>
>
> Hi,
> Any ideas on how to get a WS client to point to a completely different app while at the same time giving access to the XML section with minimal impact to the client? I am trying to map SOAP messages to RESTful URIs on the client prior to any message being issued.
>
> Thanks,
> Sean.
>
> PS I am trying to come up with a way of calling an application (on the client) which will be able to access the XML section of a SOAP message and then map that to a RESTful URI, with minimal impact on the client. I was hoping that changing the WSDL URI might work (i.e. no change to client code) but I don't think that will work as I would then be tied to the operations/parameters in the WSDL (which does not suit).
>
>
I must say I feel this point of view is way too extremist, and I hope
this is not a flaming thing to say.

Every written text needs interpretation, because it is an expression of
what the author thinks, not the thinking itself (I was almost going to
say that the written text is a representation of what the author
thinks, not the thinking itself), because no one can go into another
person's mind, and even if one could, the mental processes of reasoning
would probably be different. There is even a philosophical discipline
for it, hermeneutics (Heidegger, etc...). For me there is even an
additional layer of interpretation, since I'm a Portuguese native
reading in English...

In this issue, although I do understand the benefits regarding caching
that Eric points out, I fail to see why that - assigning a URI to each
and every representation of a resource - should be the norm and not the
exception, because that defeats the purpose of "manipulation of
resources through representations". If you assign a URI to each
representation, they are in fact resources, not representations...
Now, Roy Fielding clearly separates these two concepts in 5.2.1.1
Resources and Resource Identifiers and 5.2.1.2 Representations, and
again in 6.2.1 Redefinition of Resource and 6.2.2 Manipulating Shadows,
and as far as I remember he only advocates the attribution of a URI to
a specific representation when that representation is so important that
it should be considered as a "thing" on its own (I don't have a quote
for this, though, so I may be wrong).

Even in respect to caching - and I don't know much about it, since we
work in an intranet where that problem is not pressing - why would you
want to cache a specific representation? Doesn't that defeat the
purpose of content negotiation? Suppose a UA sends an Accept of
application/pdf, application/html, with the pdf having more quality,
and the server at that time is only capable of serving html. And
suppose that one week later the same client makes the same request, and
the server is now capable of producing pdf. If the html representation
has its own URL, wouldn't an intermediary send back the html
representation, when if the same request reached the server it would
come back with pdf? Isn't that the purpose of content negotiation?
Again, by no means do I have a good understanding of cache
mechanisms...

Since I know this is a subject easily "flammable", let me say in
advance that I'm trying to clarify my own thinking, because my
understanding does not match what people with more knowledge in this
area than me are saying (even if they often don't agree among
themselves) - not to flame anybody...
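For what it's worth, the caching question raised above is what HTTP's Vary header addresses: a cache keys a negotiated response by the request URI plus the request headers the server declared in Vary, so a stored html entry does not shadow a later pdf answer for a different Accept value. A simplified editorial sketch (field names illustrative):

```python
# Sketch of how an HTTP cache keys a negotiated response: the Vary
# header names the request headers that took part in conneg, so the
# same URI can hold one cache entry per distinct Accept value.

def cache_key(uri, request_headers, vary):
    """Build a cache key from the URI plus the headers listed in Vary."""
    varying = tuple(
        (h, request_headers.get(h, "")) for h in sorted(vary)
    )
    return (uri, varying)
```

Two requests for the same URI with different Accept headers thus produce different keys, so neither response is served in place of the other.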
JSON isn't hypermedia; HTML is human-oriented and isn't well suited to
machine processing.
AtomPub prescribes elements with specific link relations that could be
re-written with hal links and standardised rels. Hal's generic hypermedia
semantics allow for more generic client-side logic/tooling which lends
itself better to serendipitous re-use and extension of the protocol.
I don't think hal is equivalent to rdf/xml, but I could be wrong. Either
way, I find hal more intuitive.
Cheers,
Mike
On Wed, Jun 9, 2010 at 6:05 AM, Mike Kelly <mike@...> wrote:
>
> JSON isn't hypermedia, HTML is human-oriented and isn't well suited to
> machine processing.

It wasn't clear what the use case is here, but assuming it's meant to
be the sort of "bookmark URI," I've found XHTML very well suited for
this. I've basically asked our service developers to put up a little
description of their service in XHTML and simply include hyperlinks in
the narrative (using @rel). This turns out to be good user-oriented
service documentation and a machine-parseable bookmark URI. The only
additional constraint we ask is that they expose and link to a few
standard resources (e.g. rel=dependency, rel=status, rel=meta) so that
we can drive things like the Amazon Service Health Dashboard.

Is that what you're after with this?

--tim
On Tue, Jun 8, 2010 at 12:03 PM, Eric J. Bowman <eric@...> wrote:
> Tim Williams wrote:
>>
>> I've trimmed the rest of this because I think I've finally realized
>> the essence of our disconnect. I gather the SHOULD is:
>>
>> "A server SHOULD provide a Content-Location for the variant
>> corresponding to the response entity; especially in the case
>> where a resource has multiple entities associated with it, and
>> those entities actually have separate locations by which they might be
>> individually accessed, the server SHOULD provide a Content-Location
>> for the particular variant which is returned. "[1]
>>
>> You read that particular SHOULD statement as prescriptive, as in "you
>> SHOULD expose those entities at separate locations." And I read it to
>> mean, "when the situation exists, you SHOULD do it this way".
>>
>
> The English is plain. "A server SHOULD provide a Content-Location for
> the variant corresponding to the response entity." It then goes on to
> note that this is *especially* the case if certain conditions are met.
> This does _not_ mean that the SHOULD only applies *if* those conditions
> are met.
Right, I should have pasted in the introductory sentence from which my
position stemmed:
"The Content-Location entity-header field MAY be used to supply the
resource location for the entity enclosed in the message when that
entity is accessible from a location separate from the requested
resource's URI."
It's "...when the entity is accessible from a location separate..." -
that's what I meant when I said "if the situation exists". I'd think
the situation where a representation is accessible separately isn't
ideal.
>> I don't
>> interpret it as asserting whether or not the existence of the
>> situation itself is good or not.
>>
>
> We're talking about a situation where the server is responding with a
> variant of a negotiated resource. Therefore, the SHOULD directly
> applies to the situation, without passing any judgments on it.
My premise is that representations shouldn't be addressable
separately, so you'd never arrive at that condition - resources should
be addressable, not representations.
>>
>> Put another way, if we had a weather forecast:
>>
>> http://weather.example.com/zip/22180
>>
>> You seem to suggest it's desirable to have:
>>
>> http://weather.example.com/zip/22180.txt
>> http://weather.example.com/zip/22180.html
>> http://weather.example.com/zip/22180.pdf
>>
>> and I think the former is desirable.
>>
>
> Yes, I agree, the former is desirable for the negotiated resource. The
> latter are also desirable as they meet the conditions of the SHOULD.
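Read together, the two quoted clauses describe a response like the following sketch (headers per RFC 2616; the URIs come from the weather example above). The negotiated resource answers the request, while Content-Location names the separately addressable variant that was chosen:

```http
GET /zip/22180 HTTP/1.1
Host: weather.example.com
Accept: text/html

HTTP/1.1 200 OK
Content-Type: text/html
Content-Location: http://weather.example.com/zip/22180.html
Vary: Accept
```

Links keep pointing at /zip/22180, so the "concept" URI stays stable; the .html URI only shows up in the response metadata.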
Are you ascribing special meaning to the term "*negotiated* resource"
vs. "resource"? If the former is the resource and its representation is
negotiated through server-driven negotiation, then you'd never see the
latter; more specifically, the latter would never be separately
addressable, and the SHOULD statement would never apply?
>> I've read web architecture[2]
>> and don't see the basis for your conclusion that identifying variants
>> is particularly desirable.
>>
>
> Then refer to REST, where this desirability is called the
> "identification of resources constraint." If you are developing REST,
> then all REST's constraints are particularly desirable. If you are not
> developing a REST system, then go ahead and violate the identification
> of resources constraint, I won't stop you. But don't violate that
> constraint and call the result REST.
Well, I thought you'd give me some credit :) Of course I have, which
is why I'm asking the question. I haven't seen in the dissertation
either where he asserts that each representation should have its own
URI. I understand this to mean the opposite:
"Finally, it allows an author to reference the concept rather than
some singular representation of that concept, thus removing the need
to change all existing links whenever the representation changes
(assuming the author used the right identifier)."
That's where he talked about the benefit of the resource abstraction.
By suggesting that every representation should be addressable, it
seems you break that and I haven't found where Roy's said otherwise.
>> Unless I've misinterpreted Roy's thoughts,
>> I gather he supports the former:
>>
>> "If the resource is a concept independent of
>> representation format, then its URI must not have any aspect
>> that is specific to the representation format."[3]
>>
>
> Roy is saying, don't negotiate for a URI ending in *.txt, basically.
> This doesn't really have anything to do with the topic at hand.
Then I'm really confused, because I think it addresses the topic
squarely. I've said that <a href="/weather/22180"> is ideal. And you
say that the best practice is to give each variant its own URI, e.g.
<a href="/weather/22180.html">. That seems to be the situation that
Jan brought up and what Roy was advising on there?
>> Personally, I see the coupling introduced with a @type attribute as
>> much less offensive than the coupling (resource<->representation) that
>> occurs by having the variants in their own URIs.
>>
>
> What coupling is that? Allowing the representation to be a resource in
> its own right by assigning it a URI, is decoupling. Only being able to
> access that representation directly through @type is the coupling that
> assigning URIs to variants avoids...
I'm trying, I just don't get it. The dissertation goes to great lengths
to convince us of the importance of the distinction between Resources
and their Representations, and says URIs address Resources. I read Roy's
mail above as further clarification that you should only link to a
specific representation when "the resource semantics include the
format." To be honest, I have no clue what that quote means :)
>> You asked a fair question: what does it break/what's the downside? I
>> think it breaks the essential ideal in REST that URIs identify
>> resources as concepts separate from their representation.
>>
>
> The essential ideas in REST are called constraints; following the
> Identification of Resources constraint does not violate any other
> constraints. Identification of Resources states that any resource you
> wish to manipulate must first have its own URI. If you intend to
> manipulate variants (or cache them), you must first identify those
> variants as resources, otherwise you are breaking this constraint, as
> has been pointed out 1000 times on this list as best-practice.
A "variant" is a specific representation and what I'm missing is where
the dissertation says that representations themselves should be
resources - it seems to me that breaks the whole notion of
"manipulating resources through their representation"? Can you point
me to the part that says, "you must first identify those variants as
resources"?
>> If you're
>> relating resources together (e.g. linked data), for consistency, it's
>> important to link the concepts together not a specific representation.
>>
>
> Then link to negotiated URIs instead of variant URIs. The ability to
> link the concepts together, not specific representations, is provided
> by content negotiation. Assigning URIs to variants does not negate
> this ability.
I thought you were saying that best practice is to link to a specific
representation. To which, I said it'd create problems when linking
concepts together. But here you're saying the solution is not to link
to a specific representation?
>> I think the Cool URIs[4] document makes the case too.
>>
>
> Quote, please. Cool URIs says nothing about representation vs.
> resource. It discusses the maintenance of URIs once they've been
> assigned, but has nothing to do with this topic.
I gathered the advice about leaving out file extensions implied: link
to resources, not their representations?
"File name extension. This is a very common one. "cgi", even ".html"
is something which will change. You may not be using HTML for that
page in 20 years time, but you might want today's links to it to still
be valid. The canonical way of making links to the W3C site doesn't
use the extension"
--tim
Totally agree. But I distinguish even more between architecting (using the REST style), designing (defining resources and interactions), and lastly coding (using the HTTP libraries).

So, the problem is the REST term that was re-conceptualized to be just HTTP use, or even worse, RPC using HTTP media. (Which, BTW, was the original SOAP intent, but it was later adjusted by WSA with not much success.) I cheer the idea of creating a new term for HTTP design and development, and starting to talk about REST architecting.

As an out-of-place comment, the NoSQL movement started as "against SQL", with some critics saying the negative karma was not good, and someone came along saying it meant "Not Only SQL". NOSOAP will suffer the same destiny. And SOAP has nothing to do with all this discussion except for the RPC part. I would look for another name.

Cheers.
William Martinez Pomares

--- In rest-discuss@yahoogroups.com, mike amundsen <mamund@...> wrote:
>
> I say focus on the spec (HTTP) and leave the style (REST) alone for now.
>
> Build a great library for MSFT devs that uses the HTTP name (not REST name)
> and you'll have lots of smart people talking about writing great web apps w/
> HTTP.
>
> mca
> http://amundsen.com/blog/
> http://mamund.com/foaf.rdf#me
>
> On Mon, Jun 7, 2010 at 15:37, Glenn Block <glenn.block@...> wrote:
> >
> > You may be right, it could be a fool's errand. I agree we can do more from
> > an education standpoint. It just seems like the term has been "hijacked" so
> > to speak off of its original intent. Not the first time.
> >
> > Glenn
> >
> > On Mon, Jun 7, 2010 at 12:31 PM, Eb <amaeze@...> wrote:
> >>
> >> On Mon, Jun 7, 2010 at 3:23 PM, Glenn Block <glenn.block@...> wrote:
> >>>
> >>> My whole reasoning was to remove the confusion that exists around those
> >>> who today use the term REST though not truly intending to apply RESTful
> >>> principles or even having the same goals in mind, with those who are
> >>> intentionally applying a RESTful style.
> >>
> >> I understand that, but do we really think they (including myself) will
> >> revert to this new naming convention(s), whatever it is? Additionally, a
> >> lot of these REST "APIs" have already been released to the wild for
> >> consumption. I think it's better to continue to educate people on what is
> >> (and is not) REST so as to help us all make the distinctions when these
> >> claims are made.
Hi group,
I am a novice in web development and I have tried a lot to understand and implement REST, but I can't find how to implement REST in an application.
Could anyone point me in the right direction?
Thanks in advance,
Sasikumar
If I were you I'd ask this on the list of a REST-based framework; most notably, if you're in Java, on the Jersey list, which is the reference implementation of JAX-RS and has very good user support.

_________________________________________________
Melhores cumprimentos / Beir beannacht / Best regards
António Manuel dos Santos Mota
http://card.ly/amsmota
_________________________________________________

On 9 June 2010 13:15, sasi_gi <gi.sasi@...> wrote:
> Hi group,
>
> I am a novice in web development and I tried a lot to understand and
> implement REST, but I can't find how to implement REST in an application.
>
> Could anyone point me in the right direction.
>
> Thanks in advance,
> Sasikumar
One place to check out is http://implementing-rest.googlecode.com. There is also a discussion group associated with that wiki. Finally, I encourage you to drop by the #REST IRC channel on freenode. You can view logs for that channel, too: http://rest.hackyhack.net/

mca
http://amundsen.com/blog/
http://mamund.com/foaf.rdf#me

2010/6/9 António Mota <amsmota@...>:
> If I were you I'd ask this on the list of a REST-based framework; most
> notably, if you're in Java, on the Jersey list, which is the reference
> implementation of JAX-RS and has very good user support.
>
> _________________________________________________
> Melhores cumprimentos / Beir beannacht / Best regards
> António Manuel dos Santos Mota
> http://card.ly/amsmota
> _________________________________________________
>
> On 9 June 2010 13:15, sasi_gi <gi.sasi@...> wrote:
>> Hi group,
>>
>> I am a novice in web development and I tried a lot to understand and
>> implement REST, but I can't find how to implement REST in an application.
>>
>> Could anyone point me in the right direction.
>>
>> Thanks in advance,
>> Sasikumar
http://code.google.com/p/implementing-rest/

On Wed, Jun 9, 2010 at 1:15 PM, sasi_gi <gi.sasi@...> wrote:
> Hi group,
>
> I am a novice in web development and I tried a lot to understand and
> implement REST, but I can't find how to implement REST in an application.
>
> Could anyone point me in the right direction.
>
> Thanks in advance,
> Sasikumar
> +1. This is exactly why I feel claims that JAX-RS isn't RESTful aren't helping, they're misleading. And claiming JAX-RS is on the same level as WCF is somewhat insulting :-) Yes, you are right. JAX-RS allows restful services to be created, and you guys are probably doing that already. The problem that I see in 'our market' is that, unfortunately, developers who did not read anything about how hypermedia could improve their systems pick the tool and use it in the same way as they were doing before, not benefiting from the usage. Don't you see this problem happening in the market? Guilherme Silveira Caelum | Ensino e Inovao http://www.caelum.com.br/ 2010/6/7 Stefan Tilkov <stefan.tilkov@...>: > On Jun 5, 2010, at 11:59 AM, Bill de hOra wrote: > >> JAX-RS doesn't have a client, how can it be the wrong way to go >> yet. The server part that is specified, deals well enough with the >> protocol elements and doesn't prevent me from using formats or object >> models that contain links. One thing the spec does do well is >> UriBuilder/UriInfo - regardless of whether the builder pattern is to >> taste, it helps solves a layering problem in Java between service code >> and http code. The JAX-RS impls haven't gotten in my way yet when it >> comes to working media types, http, or just applying REST in general, >> which is more than I can say for most frameworks on the JVM. I'm free to >> figure out the data. > > +1. This is exactly why I feel claims that JAX-RS isn't RESTful aren't helping, they're misleading. And claiming JAX-RS is on the same level as WCF is somewhat insulting :-) > > Stefan > -- > Stefan Tilkov, http://www.innoq.com/blog/st/
> To get around that in restfulie, it seems to be via hardcoding <link > rel|href> into a class incorrectly called 'Resource', like this Thanks Bill, refactoring it right now into Representation. > That's a car subclass of carpark kind of error - resources aren't > representations and not all media types have relations Can you help me improving it? Now with link headers would it be fine to say that every response might contain relations? And *if* the representation contains relations then its also enhanced with those methods. Does it make sense? > If that's the 'right way to go' for hateoas, then perhaps I don't understand > the concept of hateaos. You are right, the first one was a misspelling that could confuse users. How can we improve the client further, apart from response enhancement being optional? Regards Guilherme Silveira Caelum | Ensino e Inovao http://www.caelum.com.br/ 2010/6/5 Bill de hOra <bill@...>: > Guilherme Silveira wrote: >> >> >> Hello Bill, >> >> >> I'm not certain that today's JAX-RS offers much more than today's WCF >> in >> terms of REST support. If Glenn's team are going to do "REST like >> they >> meant it" to paraphrase Guilherme, I don't think that JAX-RS is the >> >> right way to go. >> > > But that's just an opinion. Or is there some technical criticism as >> well? >> >> It seems like the client part of a REST client was not so clear at that >> time, and there were not so many attemps to create generic consumers. Every >> service provided their "own specific REST APIs" for their "specific REST >> services", i.e. twitter, facebook, and hundreds of others. >> >> The first JAX-RS spec did not take hypermedia in account, so if you think >> about REST without hypermedia, it will not be problem. But it seems like >> REST depends on using hypermedia, right? >> >> If you believe so and want your consumers to use hypermedia, using a Java >> framework, you have to rely on Restfulie, Jersey and Restlet, who are trying >> to do so. 
>> >> As Paul mentioned, its a matter of time for it to enter the JAX-RS specs. > > Exactly. JAX-RS doesn't have a client, how can it be the wrong way to go > yet. The server part that is specified, deals well enough with the protocol > elements and doesn't prevent me from using formats or object models that > contain links. One thing the spec does do well is UriBuilder/UriInfo - > regardless of whether the builder pattern is to taste, it helps solves a > layering problem in Java between service code and http code. The JAX-RS > impls haven't gotten in my way yet when it comes to working media types, > http, or just applying REST in general, which is more than I can say for > most frameworks on the JVM. I'm free to figure out the data. > > When it comes to building a client I suspect the problem will be dealing > with Java's type system and generics (I'm not sure C# would be much better). > To get around that in restfulie, it seems to be via hardcoding <link > rel|href> into a class incorrectly called 'Resource', like this > > <http://github.com/caelum/restfulie-java/blob/master/core/src/main/java/br/com/caelum/restfulie/Resource.java> > <http://github.com/caelum/restfulie-java/blob/master/core/src/main/java/br/com/caelum/restfulie/Relation.java> > > That's a car subclass of carpark kind of error - resources aren't > representations and not all media types have relations > > If that's the 'right way to go' for hateoas, then perhaps I don't understand > the concept of hateaos. > > Bill > >
--- Den ons 9/6/10 skrev Tim Williams <williamstw@...>:
> It wasn't clear what the use case is here, but assuming it's meant to
> be the sort of "bookmark URI," I've found XHTML very well suited for
> this. I've basically asked our service developers to put up a little
> description of their service in XHTML and simply include hyperlinks in
> the narrative (using @rel). This turns out to be good user-oriented
> service documentation and a machine-parseable bookmark URI. The only
> additional constraint we ask is that they expose and link to a few
> standard resources (e.g. rel=dependency, rel=status, rel=meta) so that
> we can drive things like Amazon Service Health Dashboard.

Sounds like a good approach. Do you have a public example of this documentation to show?

/Morten
Tim:

Yes, @rel="add-review" is probably a bad choice; @rel="review" would have been better.

As for indicating supported methods in the media-type documentation, AtomPub takes this same approach [1] and the example continues along the same lines. Also, when documenting support for protocol methods for a link relation, element, etc., media types are free to use MUST, SHOULD, MAY, etc. This allows implementors to make choices that best support the use case for resources that support that media type.

[1] http://tools.ietf.org/html/rfc5023#section-9

mca
http://amundsen.com/blog/
http://mamund.com/foaf.rdf#me

On Tue, Jun 8, 2010 at 12:55, Tim Williams <williamstw@gmail.com> wrote:
> All this discussion over how to instruct the client of what
> representation to use for POST/PUT got me wondering about the role of
> link relations outside of GET. AtomPub uses a @rel=edit to indicate
> an editable link, which seems ok. Then I was reading in the nifty
> new RESTful Cookbook (thanks mike), where it has an example using
> rel=add-review, and this struck me as a bit rpc-ish. The book also
> suggests that the link relation documentation should:
>
> 1) Indicate valid methods for the target URI
> 2) Indicate expected media types on request/response for the target URI
>
> This seemed to me to be mixing concerns. For example, it seems that
> the resource should define what methods are allowed (all are valid?)
> itself - independent of any link relation. Besides pointing me to
> AtomPub, are there other good examples out there?
>
> Thanks,
> --tim
On Wed, Jun 9, 2010 at 10:35 AM, Morten <mortench2004@...> wrote:
> --- Den ons 9/6/10 skrev Tim Williams <williamstw@...>:
>> It wasn't clear what the use case is here, but assuming it's meant to
>> be the sort of "bookmark URI," I've found XHTML very well suited for
>> this. I've basically asked our service developers to put up a little
>> description of their service in XHTML and simply include hyperlinks in
>> the narrative (using @rel). This turns out to be good user-oriented
>> service documentation and a machine-parseable bookmark URI. The only
>> additional constraint we ask is that they expose and link to a few
>> standard resources (e.g. rel=dependency, rel=status, rel=meta) so that
>> we can drive things like Amazon Service Health Dashboard.
>
> Sounds like a good approach. Do you have a public example of this
> documentation to show?

Sorry Morten, unfortunately not, it's an internal intranet services architecture.

--tim
António Mota wrote:
>
> I must say I feel this point of view is way too extremist, and I hope
> this is not a flaming thing to say.

Changing the semantics of @type is extremist, when the same problem it solves has already been solved, by assigning URIs to variants, which isn't extreme; it's how things are done. There is nothing wrong with, let alone extreme about, solving a REST problem by minting URIs. There is nothing extremist about assigning URIs to variants. Since you can't do what you want to do without it, and since that's what everyone who came before you has done to solve the same problem, it fits the very definition of mainstream.

> Every written text needs interpretation because it is an expression of
> what the author thinks, not the thinking itself (I was almost going to
> say that the written text is a representation of what the author
> thinks, not the thinking itself) because no one can go into other
> people's minds,

In this case, Roy isn't dead. So if assigning URIs to variants were wrong, he'd say something about it himself. One thing we all know Roy cares about is correcting misinterpretations of REST... which this isn't.

> In this issue, although I do understand the benefits regarding cache
> that Eric points out, I fail to see why that - assigning a URI to each
> and every representation of a resource - should be the norm and not
> the exception.

Because that's how the Web works. Failing to assign URIs to variant representations leads to clearly identifiable problems which are easily fixed by applying the Identification of Resources constraint, i.e. assigning URIs to variants. This solution is the norm because it works. Other solutions, like borking @type, are not the norm because they don't work. How much more proof of what I'm saying is required?

> Because that defeats the purpose of "manipulation of resources through
> representations". Because if you assign a URI to each representation
> they are in fact resources, not representations...

A representation is a representation, regardless of the resource it is representing. The same representation may result from dereferencing different resources; this is a key element of the REST style (author's preferred version). To change the value of a resource, a representation of that resource is manipulated, IOW I can change /A by manipulating /A.txt, if that's what the hypertext calls for. The fact that /A.txt is a different resource from /A doesn't mean /A.txt can't also be a representation of /A. Assigning URIs to variants makes those variants resources in their own right, but they're still also variants.

> Now Roy Fielding clearly separates these two concepts in 5.2.1.1
> Resources and Resource Identifiers and 5.2.1.2 Representations, and
> again in 6.2.1 Redefinition of Resource and 6.2.2 Manipulating
> Shadows, and as far as I remember he only advocates the attribution of
> a URI to a specific representation when that representation is so
> important that it should be considered as a "thing" on its own (I
> don't have a quote for this, though, so I may be wrong).

Right. If /A is to be manipulated via /A.txt, there needs to be a way to identify /A.txt, IOW, without assigning that variant a URI it can't be manipulated. If you fail to identify that variant with a URI then you've failed to assign a URI to a key resource, IOW, that variant is so important that you want to consider it as a "thing" on its own, in order that it may be manipulated or cached. Trying to manipulate or cache things that don't have URIs doesn't work. The solution is to assign these things URIs; then they will work.

> Even in respect to caching - and I don't know much about it since we
> work in an intranet where that problem is not pressing - why would you
> want to cache a specific representation? Doesn't that defeat the
> purpose of content-negotiation?

So if I want to use conneg, I can't use caching? What's cached in caching are representations, not resources -- a resource is an abstract concept. My origin server isn't going to change how it responds to a given Accept header from one request to the next, so once it's figured out what representation to return, it informs intermediaries of the parameters of the negotiation (Vary) and the URI to associate with those parameters. Now, a user agent making a subsequent request for the same representation can bypass the origin server -- the Accept header maps to a URI in the cache, so in the presence of the same Accept header string, the cache can return the proper representation without bothering the origin server (assuming we're talking about Accept-driven negotiation).

So caching representations by URI is how the Web works, not to defeat content-negotiation, but in order to cache despite content-negotiation. It would suck if, by using conneg, I couldn't use caching because somehow caching defeats the purpose of conneg -- but that isn't the case.

> Suppose a UA sends an Accept of
>     application/pdf, application/html
> with the pdf having more quality, and the server at that time is only
> capable of serving html. And that one week later the same client makes
> the same request and the server is now capable of producing pdf.

You're pointing out a problem that holds true for caching with any resources, not just negotiated resources. What if my browser caches an HTML representation of a site which then changes its output to PDF? There's the must-revalidate directive I discussed earlier; this will almost-ensure freshness. If you want to avoid this situation, then don't cache, because with caching you must cede this control.

> If the html representation has its own URL, wouldn't an intermediary
> send back the html representation when, if the same request reached
> the server, it would come back with pdf? Isn't that the purpose of
> content-negotiation? Again, by no means do I have a good understanding
> of cache mechanisms...

If you want to avoid the problem you describe, simply don't assign URIs to your variants, and caches won't be caching them to any significant extent. If you want your variants to cache, then like I said before, assign them URIs and use must-revalidate... that's what they're there for, they work, so use them. If you're running a server which uses conneg and you add a new media type, then you're changing the conneg algorithm, so I don't see why a user agent would still get that HTML instead of PDF. The must-revalidate directive tells caches to renegotiate the resource. The result may still be that HTML representation, which a cache can then serve, if it still has it, once the server has negotiated the request.

> Since I know this is a subject easily "flammable", let me say in
> advance that I'm trying to clarify myself, because my understanding
> does not match what people with more knowledge in this area than me
> are saying (even if often they don't agree themselves) - not flame
> anybody...

OK, but you folks are testing my patience, by asking the same questions over and over, that I've already answered, repeatedly saying the exact same thing in different ways. I'm at the point of throwing my hands up in the air and saying, "You know what? I don't care if your API is REST or not, so knock yourselves out." There's a difference between trying to understand why things are the way they are, vs. sticking to your guns until you're told what you want to hear. If you want to hear that assigning URIs to variants is bad, then by all means, continue to argue with everything I say, instead of explaining an actual downside of assigning URIs to variants...

There's no reason this subject should be flammable, and in fact, until this thread it hasn't been. Every other time this issue has come up, applying Identification of Resources by assigning URIs to variants has been the answer. Except every other time, the wisdom and accuracy of the best-practice solution has been accepted, and the thread ended. My frustration results from the fact that something that's understood beyond the need for further debate is resulting in further debate.

Identification of Resources is a critical REST constraint, but easy to follow. Assign URIs to your variants, people. It's that simple. Why people can't accept this is beyond me, because it's so fundamental to my understanding of REST (and it works). Arguing with me until I agree with you (not that that's likely) doesn't change the technology I'm trying to help you understand. What is the problem with assigning URIs to variants, other than that you folks "feel" it shouldn't be that way? Why is the evidence that this solution works, because that's how the Web works, not convincing? The sky will not fall and the sun refuse to shine if you assign URIs to your variants. The only result is increased interoperability with the real-world Web. Not doom.

If you don't understand this, then read up on it, and politely ask questions. Instead of challenging everything I say, try to learn why I'm saying it. Instead of challenging the fact that variants need URIs to work properly, learn *why* variants need URIs to work properly -- it's spelled out clearly in REST, nothing to do with my say-so.

-Eric
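The Vary-based caching Eric describes (cache entries keyed by request URI plus the request headers named in the response's Vary) can be modelled with a toy in-memory cache. This is an illustrative sketch only, not how any real cache is implemented; real caches also track the Vary value per stored entry rather than taking it as an argument:

```python
# Toy model of a shared cache that stores representations keyed by
# request-URI plus the request headers listed in the response's Vary.
def cache_key(uri, request_headers, vary):
    varying = tuple(sorted((h, request_headers.get(h, "")) for h in vary))
    return (uri, varying)

store = {}

def cache_put(uri, request_headers, response):
    vary = response.get("Vary", [])
    store[cache_key(uri, request_headers, vary)] = response

def cache_get(uri, request_headers, vary):
    return store.get(cache_key(uri, request_headers, vary))

resp = {"Vary": ["Accept"],
        "Content-Location": "/weather/22180.html",
        "body": "<html>...</html>"}
cache_put("/weather/22180", {"Accept": "text/html"}, resp)

# Same Accept header: served from cache, bypassing the origin server.
hit = cache_get("/weather/22180", {"Accept": "text/html"}, ["Accept"])
# Different Accept header: cache miss, must renegotiate with the origin.
miss = cache_get("/weather/22180", {"Accept": "application/pdf"}, ["Accept"])
```

The point of the model: conneg and caching coexist because the cache key includes the negotiated request headers, which is exactly what Vary tells intermediaries to do.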
Mike Kelly wrote:
>
> JSON isn't hypermedia,

It can be. Any media type I serve with, for example, a Link header, is by definition hypertext.

>
> HTML is human-oriented and isn't well suited to
> machine processing.

I think the entire accessibility community disagrees with you. The accessibility features of HTML are specifically designed to make the document machine-readable. The same goes for RDFa. The notion that m2m processing requires new media types is based on the fallacy that HTML isn't machine-readable, which is just that, a fallacy.

Also, why create <title> when @title already exists?

-Eric
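Eric's "Link header" point can be pictured with a sketch like the following; the URI, rel value, and JSON body are all invented for illustration, and the Link header follows the Web Linking draft syntax (later RFC 5988):

```http
HTTP/1.1 200 OK
Content-Type: application/json
Link: </orders/42/payment>; rel="payment"; title="Pay for this order"

{"id": 42, "status": "unpaid"}
```

Here the hypertext lives in the HTTP envelope rather than in the JSON itself, so a generic client can follow rel="payment" without the media type defining any link conventions of its own.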
On Wed, Jun 9, 2010 at 7:46 PM, Eric J. Bowman <eric@...> wrote:
>
> Mike Kelly wrote:
>>
>> JSON isn't hypermedia,
>
> It can be. Any media type I serve with, for example, a Link header, is
> by definition hypertext.

The media type itself isn't transformed into hypertext just because it's served with a Link header. It's fair to say the representation as a whole is, but I'm not clear on how that's relevant here.

>> HTML is human-oriented and isn't well suited to
>> machine processing.
>
> I think the entire accessibility community disagrees with you. The
> accessibility features of HTML are specifically designed to make the
> document machine-readable. The same goes for RDFa. The notion that
> m2m processing requires new media types is based on the fallacy that
> HTML isn't machine-readable, which is just that, a fallacy.

I didn't say it wasn't machine-readable, just that it isn't well suited, which it isn't. HAL is intended to be lightweight and as generic as possible.

> Also, why create <title> when @title already exists?

Exists where? Why not? It's just an example, I could've called it <foobar>.

Cheers,
Mike
Mike Kelly wrote: > > > Also, why create <title> when @title already exists? > > Exists where? Why not? It's just an example, I could've called it > <foobar> > Well, @title already exists as standard markup for expressing the title of a remote link, and <title> already exists as standard markup for expressing the title of the content it appears in. And REST is informed by the principle of generality, i.e. re-use, which is emphasized throughout. So for the same reasons REST requires that standard methods, link relations and media types be used, it frowns on re-inventing any other wheel (@title, @type). In RESTspeak: constrain yourself, instead of taking an unbounded-creativity approach. If you reverse @title and <title> then you can't possibly have a uniform interface, because the semantics of @title and <title> have already been widely agreed upon in practice, in multiple markup languages. So if you're creating a markup language, you should re-use the existing semantics of @title and <title> instead of changing them, unless you don't care about REST. -Eric
> > So if you're creating a markup language, you should re- > use the existing semantics of @title and <title> instead of changing > them, unless you don't care about REST. > The same goes for introducing new elements or attributes with semantics identical to those of widely-deployed elements or attributes. -Eric
Alan Dean wrote: > > If it helps, here are previous quotes from Roy about conneg [1] and > media types [2] > Roy and I definitely disagree on implementation, in that Roy almost always uses redirection with conneg, while I almost never do (only with language negotiation when I want the address bar to change). Since Roy's negotiated-resource variants are redirects, they don't get URIs, but of course their targets need them. So while I say, "assign URIs to your variants," and Roy says, "redirect to variant resources," there's no daylight between our positions, only a layer of indirection. Roy has his way, because he wants to avoid the caching tradeoffs. I have my way, because I don't want the address bar to change (which happens widely in reality even when it's against spec). Both solutions require the minting of an identical number of URIs, as both treat the variants as resources in their own right. Either solution negates any argument for changing @type semantics. Right, Roy? (I doubt he has time to answer -- congrats on the bundle o' joy, Roy!) -Eric
Tim Williams wrote: > > Right, I should have pasted in the introductory sentence from which my > position stemmed: > "The Content-Location entity-header field MAY be used to supply > the resource location for the entity enclosed in the message when > that entity is accessible > from a location separate from the requested resource's URI." > > It's "...when the entity is accessible from a location separate..." - > that's what I meant when I said, if the situation exists. I'd think > the situation where a representation is accessible separately isn't > ideal. > This is just explaining how to deal with the "author's preferred version" problem. If I have a resource, /preferred , which identifies one version of /article , then I need some way to express the relation. I MAY use 'Content-Location: /article' in response to a request for /preferred to express this relation. But this has nothing to do with conneg. In conneg, the SHOULD supersedes the MAY, without changing the semantics of the header. What RFC2616 says is that you MAY use Content-Location "when the entity is [also] accessible from a [separate] location," but you "SHOULD provide a Content-Location for the variant corresponding to the response entity." There's no conflict there. > > >> I don't > >> interpret it as asserting whether or not the existence of the > >> situation itself is good or not. > >> > > > > We're talking about a situation where the server is responding with > > a variant of a negotiated resource. Therefore, the SHOULD directly > > applies to the situation, without passing any judgments on it. > > My premise is that representations shouldn't be addressable > separately, so you'd never arrive at that condition - resources should > be addressable not representations. > First, resources are identifiable; I don't know what "addressable" means, because it implies dereferencing. Your premise is correct, where the variants are byte-for-byte identical at parse time, as is the case with compression. 
However, we're talking about negotiating for variants of some abstract concept as variant media types, which are not byte-for-byte identical at parse time. When taken individually, these variants describe different abstract concepts from that of the negotiated resource... If you just need a /resource then link to /resource . However, if the abstract concept you're after is "/resource formatted as PDF" rather than "server's optimal variant of /resource" then you have identified another resource. All that's left to meet the Identification of Resources constraint is to assign a URI to that other resource you've identified. Identify what your resources *are*, then give them URIs. If you never care about "/resource formatted as PDF" then you don't need to assign it a URI -- that's only a SHOULD. If you do care about "/resource formatted as PDF" for the purposes of caching or directly dereferencing, then obviously, it needs a URI, since "/resource formatted as PDF" and "server's optimal variant of /resource (for the requesting user agent)" are different abstractions, and therefore different resources. > >> > >> Put another way, if we had a weather forecast: > >> > >> http://weather.example.com/zip/22180 > >> > >> You seem to suggest it's desirable to have: > >> > >> http://weather.example.com/zip/22180.txt > >> http://weather.example.com/zip/22180.html > >> http://weather.example.com/zip/22180.pdf > >> > >> and I think the former is desirable. > >> > > > > Yes, I agree, the former is desirable for the negotiated resource. > > The latter are also desirable as they meet the conditions of the > > SHOULD. > > Are you ascribing special meaning to the term "*negotiated* resource" > vs. "resource"? > Is it so hard to follow? That a "negotiated resource" is for URIs which use conneg, vs. "resource" for URIs which don't? 
There are caching and other considerations which apply only to resources which use conneg, so it behooves us to make such a distinction when we're discussing conneg. > > If the former is the resource and it's representation > negotiated through server-driven negotiation, then you'd never see the > latter, > But you always "see" one of the latter as the returned entity-body. These variants aren't byte-for-byte the same after parsing, like with compression. So there's obviously a need to somehow identify the variants such that a cache can distinguish between them, otherwise a cache has no way of telling that the response for one Opera user is the same as the response for any other Opera user (assuming we're sniffing User-Agent and sending Vary: User-Agent). As I've said, fancy caches have fancy metrics to overcome this, for cases where the server isn't following the SHOULD. But everything else assumes that you aren't violating RFC2616. Wouldn't it be neat if we could stipulate to the cache that anytime an Opera browser requests /22180 it should respond with /22180.html? Of course, we *can* do *exactly* that, by simply minting /22180.html . Without minting /22180.html, how can we accomplish this for /22180 aside from relying on nonstandard caches to sniff out that the same HTML code is being returned to any Opera user agent in the presence of Vary: User-Agent? > > more specifically, the latter would never be separately > addressable, and the SHOULD statement would never apply? > If the only reason variants aren't separately addressable is because you stubbornly refuse to mint those URIs, it hardly negates the SHOULD. When you deliberately refuse to follow the SHOULD, then point to your refusal to follow it as the reason the SHOULD doesn't apply, my eyeballs start a-rollin'. That is _not_ how standards are followed. > > Well, I thought you'd give me some credit:) Of course, I have, which > is why I'm having the question. 
I haven't seen in the dissertation > either where he asserts that each representation should have its own > URI. > It's true, the dissertation doesn't come out and say "assign URIs to variants" because REST isn't about the implementation specifics of HTTP. What REST does clearly define is the Identification of Resources constraint, which we pragmatically apply to HTTP by assigning URIs to variants. The consequences of failing to assign URIs to variants are exactly the consequences that REST predicts for failing to apply the Identification of Resources constraint, whereas the benefits of assigning URIs to variants are exactly the benefits that REST predicts for successfully applying said constraint. So an analysis of caching in an HTTP conneg system where variants aren't assigned URIs (excepting compression) against an HTTP conneg system where the variants are resources in their own right, will serve to prove that REST is correct, and has pragmatic benefits for real-world systems, since only one of those systems will cache properly over the Web. The explanation for *why* that is, is called REST. When a system doesn't work as REST anticipates, we go looking for mismatches. In this case, the inferior cache performance of the system that doesn't assign URIs to variants, is the direct result of failing to identify its resources, i.e. a REST mismatch. > > I understand this to mean the opposite: > > "Finally, it allows an author to reference the concept rather than > some singular > representation of that concept, thus removing the need to change > all existing links > whenever the representation changes (assuming the author used the > right identifier)." > > That's where he talked about the benefit of the resource abstraction. > That's right, you're mistakenly assuming that the "author's preferred version" has something to do with conneg. 
Perhaps that's my fault, for pointing out that a bunch of variants with different URIs is no different from a single representation served at different URIs, i.e. "author's preferred version". I do this because folks keep insisting that for one representation to have multiple URIs identifying it, is somehow a REST violation in and of itself. IOW, if we can't assign URIs to variants because then each variant has multiple URIs, then why does REST advocate doing exactly this in the discussion of "author's preferred version?" > > By suggesting that every representation should be addressable, it > seems you break that and I haven't found where Roy's said otherwise. > Pragmatically, I'm saying "assign URIs to variants." Theoretically, I'm saying "apply the Identification of Resources constraint." Not assigning URIs to variants, then attempting to directly access those variants through some @type mechanism, is a clear and unequivocal REST violation, because you've failed to identify those variants as separate resources (theoretically) by assigning them URIs (pragmatically). > > >> Unless I've misinterpreted Roy's thoughts, > >> I gather he supports the former: > >> > >> "If the resource is a concept independent of > >> representation format, then its URI must not have any aspect > >> that is specific to the representation format."[3] > >> > > > > Roy is saying, don't negotiate for a URI ending in *.txt, basically. > > This doesn't really have anything to do with the topic at hand. > > Then I'm really confused, because I think it addresses the topic > squarely. > No. Discussion about cool URIs, like advocating for not using filename extensions, is not germane to discussion of what resources need to be identified and assigned URIs. To re-state Roy, if you have a resource that's a "dog" then it has nothing to do with HTML, so you must not assign "dog.html" as its URI. Don't try to apply this to another context. 
If /dog happens to have a /dog.html variant, it's irrelevant to what Roy said if the user only ever sees /dog in the location bar, or as links to /dog . If I specifically need the HTML representation of the /dog resource, then I want to link to a different abstraction/resource -- i.e. I'd want "/dog as HTML" not "/dog in whatever variant conneg decides," in which case the .html doesn't go against what Roy said, because HTML is part of the abstraction, so it's OK for "the resource semantics [to] include the format." > > I read Roy's > mail above to be further clarification that you should only link to a > specific representation when "the resource semantics include the > format." To be honest, I have no clue what that quote means:) > I hope that clarifies it for you? > > I've said that <a href="/weather/22180"> is ideal. And you > say that the best practice is to give each variant its own URI, e.g. > <a href="/weather/22180.html">. That seems to be the situation that > Jan brought up and what Roy was advising on there? > No. Jan and Roy are talking about what URIs you link to, and see in the address bar. Since /22180.html is never, ever seen by the user then its existence simply doesn't matter when the discussion context is what the end-user sees, i.e. /22180 . Besides, if I have a bunch of variants for /22180 and I call the HTML variant /22180.html, I'm not going against your Roy quote. The fact that it's HTML is exactly what we're trying to identify -- /22180 identifies a resource whose abstract concept has nothing to do with HTML, while /22180.html identifies a resource whose abstract concept is that it *is* HTML (or rather, something presented as HTML). I keep coming back to this, don't I? I don't care if /22180.html is a variant of /22180, they're still not the same resource. They can't be, they have different abstractions. Each abstraction on your system is a resource on your system, Identification of Resources means giving those resources URIs. 
You must accept that /22180 and /22180.html can be different resources, regardless of the one being a variant of the other. They must be different resources -- they have different URIs. ;-) > > >> Personally, I see the coupling introduced with a @type attribute as > >> much less offensive than the coupling (resource<->representation) > >> that occurs by having the variants in their own URIs. > >> > > > > What coupling is that? Allowing the representation to be a > > resource in its own right by assigning it a URI, is decoupling. > > Only being able to access that representation directly through > > @type is the coupling that assigning URIs to variants avoids... > > I'm trying, just don't get it. The dissertation goes to great lengths > to convince us the importance of the distinction between Resources and > their Representations, and says URIs address Resources. > Well, it says URIs *identify* resources. Perhaps you mean the same thing by "addressing," but please use the terms of REST to discuss REST. What you aren't understanding is that /22180.html can be, and indeed is, a resource in its own right, separate from /22180 . Just like a document about a concept "foo" can have different abstractions reflected in different URIs for "current version of foo" and "author's preferred version of foo". In this case, it's "negotiated variant of foo" vs. "specific representation of foo" but the constraint, called Identification of Resources, is the same. > > A "variant" is a specific representation and what I'm missing is where > the dissertation says that representations themselves should be > resources - it seems to me that breaks the whole notion of > "manipulating resources through their representation"? > I think you're making a common mistake, here. Nowhere does REST state that manipulating /foo must occur by calling some method of /foo . This way lies IDLs, not hypertext APIs... 
all REST requires is that hypertext is used to instruct a user agent how to manipulate /foo . It does not matter that a representation dereferenced at /foo is updated by a PUT to /bar of some other media type. It only matters that a user agent is capable of following these instructions it received from /foo . Sending a different media type to /bar than what was dereferenced at /foo in order to update /foo is not a REST violation, it's the hypertext constraint in action. The user agent receives a representation which instructs it how to build a representation to PUT or POST in order to arrive at the user's next desired application state. There is simply no restriction on what media types or URIs may be used to achieve this, only that it be expressed using standard media types, methods and link relations. > > Can you point > me to the part that says, "you must first identify those variants as > resources"? > No, I can only refer you to the Identification of Resources constraint, as explained above. REST doesn't care about your implementation details, only that its constraints are met. Heck, I'm not even saying you *need* to assign URIs to variants, I'm saying you need to do this to achieve specific results. If you're just using conneg for compression, then you certainly don't need, nor have I advocated, URIs for variants. If you don't care about caching or directly manipulating the variants, then they don't need URIs, so this would be a silly thing for REST to require -- just like with compression. If you want a real-world HTTP-driven REST system with conneg, but you also want those variants to be accessed directly (like by borking @type) and cache properly, *then* you need to assign URIs to your variants (not bork @type). The entire argument against assigning URIs to variants is based on the notion that the variants can't possibly be resources, or that it's a mistake to assign multiple URIs to the same representation. Neither is true. 
You can't learn REST without first accepting that glass is a liquid, then setting about figuring out why that is. I would've been wasting my Chemistry professor's and the class's time by first trying to prove to them that glass is a solid, instead of learning from them *why* it's actually a liquid. Which is not to state that I couldn't have made a pretty good case for the solidity of glass, which "feels" right and certainly sounds correct to a non-major, but would be useless to a discussion of the Chemistry or Physics of glass, where its liquidity must be accepted as fact even if it defies your quaint notions of what's a solid and what's a liquid. I'm beginning to understand that the solution to this thread, is for some folks to get over their quaint notions of "resource" and use the same definition as everyone else. The problem y'all are seeing with assigning URIs to variants, goes away once we get past the insistence that /22180 and /22180.html must be the same resource, because the one is a variant of the other. If dereferencing /22180 returns /22180.html as Content-Location, the resource in question is still /22180 and /22180.html is merely a label. Yeah, it identifies some other resource, but that doesn't have anything to do with the fact that /22180 is the identifier of the resource you dereferenced. All you're doing is assigning another URI to a representation, but as with "author's preferred version" you're not doing anything wrong, just identifying a new abstraction -- there's no limit to the number of URIs which point to a given representation. -Eric
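The hypertext constraint Eric describes in the message above — a representation of /foo carrying instructions to PUT a different media type to /bar, with the user agent simply following those instructions — can be sketched roughly as below. This is an editorial illustration: the form-like control structure, URIs, and field names are invented, not a real media type from the thread.

```python
# Sketch (invented control format): the representation of /foo tells the
# user agent where and how to submit an update -- here a PUT of text/plain
# to a different URI, /bar. The agent is driven by the hypertext control,
# not by out-of-band rules baked into client code.

representation = {
    'value': 'old value',
    'edit': {'href': '/bar', 'method': 'PUT', 'type': 'text/plain'},
}

def follow_edit(rep, new_value, transport):
    """Build and submit the update exactly as the edit control instructs."""
    control = rep['edit']
    return transport(control['method'], control['href'],
                     control['type'], new_value)

def fake_transport(method, uri, content_type, body):
    # Stand-in for a real HTTP client; echoes what would go on the wire.
    return (method, uri, content_type, body)

print(follow_edit(representation, 'new value', fake_transport))
```

The point being illustrated: nothing couples the update URI or media type to the one the representation was dereferenced from; the coupling lives entirely in the hypertext.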
Tim Williams wrote: > > I read Roy's > mail above to be further clarification that you should only link to a > specific representation when "the resource semantics include the > format." To be honest, I have no clue what that quote means:) > If you want the HTML variant, you link to the .html URI. Of course, URIs are opaque, so that URI could be anything, it doesn't have to end in .html. If we're assigning a .html URI to an HTML variant, we're only linking to the .html when we want to link to a resource whose semantics include the notion that it's HTML. So, the only time we'd be linking to .html, is when we've decided that we want to directly dereference HTML instead of using conneg, and used .html accordingly as the extension, because we want the URI to reflect that it's HTML, now and for all time. Still nothing to do with conneg, though. Conneg allows multiple representations to be served from the same URI, but conneg isn't required if we're just talking about cool URIs without file extensions. Which is all Roy was talking about, there -- advice on naming URIs, not advice on identifying resources which need URIs, not advice which translates into linking to one resource vs. another resource. > > > Then link to negotiated URIs instead of variant URIs. The ability > > to link the concepts together, not specific representations, is > > provided by content negotiation. Assigning URIs to variants does > > not negate this ability. > > I thought you were saying that best practice is to link to a specific > representation. To which, I said it'd create problems when linking > concepts together. But here you're saying the solution is not to link > to a specific representation? > No, the only thing I said was best practice, is to assign a URI to any variant you want to directly dereference (or cache). That way, if it's a specific variant you're after, you can link directly to that variant. If you're not after a specific variant, then use the negotiated URI in your links. 
If you are after a specific variant, then link to whatever URI you gave it, file extension or not. If you don't assign URIs to your variants, and you don't care about caching or directly accessing them (and we aren't talking about compression), then it's no big deal. If you do want to directly dereference a variant, then the *only* way you can do that is to give that variant a URI such that it's possible to link to it. Not by borking @type. So, assigning URIs to variants is best practice. What to link to depends on what you're trying to accomplish, not REST. What to call your URIs may be informed by cool URIs, but has nothing to do with REST where URIs are opaque. > > I gathered the advice about leaving out file extensions implied - link > to resources, not their representation? > No, URIs are opaque. Link to the resource which matches the abstraction you're after. Linking to the URI of a variant is still a link to a resource, you can't link to a representation... well, not without borking @type first... Given a resource A and a variant resource A', you link to A when you want conneg, but you link to A' if you're looking to bypass conneg and retrieve a specific variant. What URIs to assign to A and A' and whether or not file extensions are used, is neither here nor there -- it only matters that the variants *have* URIs to link *to*. -Eric
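The practice argued for throughout this thread — every variant gets its own URI, and the negotiated response names the variant it served via Content-Location (the RFC 2616 SHOULD discussed earlier) — can be sketched as follows. The variant table and negotiation logic are simplified inventions for illustration; a real server would negotiate over q-values and more headers than a bare Accept list.

```python
# Sketch (not from the thread): server-driven conneg where every variant of
# a negotiated resource has its own URI. The response carries Content-Location
# naming the variant served, plus Vary, so caches can tell variants apart --
# or a client can bypass conneg entirely by dereferencing the variant URI.

VARIANTS = {
    '/zip/22180': {
        'text/html': '/zip/22180.html',
        'text/plain': '/zip/22180.txt',
        'application/pdf': '/zip/22180.pdf',
    }
}

def negotiate(uri, accept):
    """Pick a variant of `uri` from the client's Accept media types
    (listed in preference order; q-values are ignored in this sketch)."""
    table = VARIANTS[uri]
    for media_type in accept:
        if media_type in table:
            return {'Content-Type': media_type,
                    'Content-Location': table[media_type],
                    'Vary': 'Accept'}
    # Nothing acceptable requested: fall back to the server's preferred variant.
    media_type, variant_uri = next(iter(table.items()))
    return {'Content-Type': media_type,
            'Content-Location': variant_uri,
            'Vary': 'Accept'}

print(negotiate('/zip/22180', ['application/pdf']))
```

In Eric's terms: /zip/22180 is the negotiated resource A, and each entry in the table is a variant resource A' that can be linked to and cached in its own right.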
Thanks, Subbu. That doesn't solve my original problem, but it did suggest a different approach that doesn't have that problem. (And I really do need to start reading the whole way to the end of the books I buy.) --Chuck On Mon, Jun 7, 2010 at 11:47 AM, Subbu Allamaraju <subbu@...> wrote: > Just create a resource that abstracts the things being updated and manipulate that resource to get the same effect. See http://my.safaribooksonline.com/9780596809140/chapter-misc-writes for examples. Other solutions tend to reduce protocol visibility as well as introduce challenges such as poor scalability or even DoS attacks. > > Subbu > > On Jun 4, 2010, at 4:15 PM, chucking24 wrote: > >> I'm looking for examples of MIME types/protocols that work with collections of things, but support batch updates on collection members rather than requiring separate updates for each collection member. >> >> The use case is supporting user-defined lists with an arbitrary number of columns. We have chosen to treat a list as a collection of row elements. A browser-based client will support editing of the list in a tabular view. In addition to adding and removing entire rows, users will also be able to edit individual fields within a row. We would like to support a Save button that saves all changes (possibly across multiple rows) on the current screen in one HTTP request. >> >> Our current thinking is that we would send back a collection containing only the rows to be updated (or inserted/deleted). However (without getting down into the details) the back-end will need to be able to determine whether individual fields in each row actually need to be updated, so there is some question as to how to represent whether or not an individual field has been changed. We would like to use the same document type both for getting the list entries and posting changes back to the list. >> >> Does anyone have any pointers to some examples of MIME types or application protocols that support this sort of model? 
>> --Chuck
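One possible shape for the batch document Chuck describes above — sending only the changed rows, with only the changed fields present in an update — is sketched below. The document format, operation names, and field names are invented for illustration; this is not an existing MIME type, just one way the use case could be modeled.

```python
# Sketch of a hypothetical batch-update document for a list-of-rows
# collection: each entry names an operation, and "update" entries carry only
# the fields that actually changed, so the back-end knows what to touch.

batch = [
    {'op': 'update', 'id': 3, 'fields': {'qty': 5}},
    {'op': 'insert', 'row': {'id': 7, 'name': 'new', 'qty': 1}},
    {'op': 'delete', 'id': 2},
]

def apply_batch(rows, changes):
    """Apply a batch document to a list of row dicts; returns the new rows."""
    table = {r['id']: dict(r) for r in rows}   # copy, keyed by row id
    for change in changes:
        if change['op'] == 'update':
            table[change['id']].update(change['fields'])
        elif change['op'] == 'insert':
            table[change['row']['id']] = dict(change['row'])
        elif change['op'] == 'delete':
            del table[change['id']]
    return sorted(table.values(), key=lambda r: r['id'])

rows = [{'id': 2, 'name': 'a', 'qty': 9}, {'id': 3, 'name': 'b', 'qty': 4}]
print(apply_batch(rows, batch))
```

Note this still leaves open the representation question raised in the thread: whether the same document type can serve both GET responses and batch POSTs, or whether the "delta" nature of the update document makes it a distinct media type.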
Hi,
I work in academia so would be grateful for the industry perspective. I am working on a thesis which includes both WS-* and REST. If UDDI is in fact dead http://www.innoq.com/blog/st/2010/03/uddi_rip.html how are enterprises communicating the WSDL files? Are we talking email, publishing on a Web site, inserting into a db??
Thanks,
Sean.
PS Would the same apply for WADL files?
OK, I think I understood what I was missing: when you say it is good practice to assign URIs to variants, you're not saying it is good practice *in general*, but *only* for the restricted situations where the server wants to enable users to directly reference a specific variant and/or wants to take advantage of caching capabilities, is that so? However, thinking about your example of /A and /A.html, that good practice makes sense, but what about resources that produce their content "on-the-fly", rather than a static page like that? And also in that example, suppose the client references /A.html as a representation of /A; then manipulation of /A.html has to be made through a representation of /A.html, so does it make sense to also assign URIs to the representation of that representation of /A? I suppose it depends on the "importance" of those representations? Nevertheless, I think calling this "best practice" is misleading (it misled me) because it's only applicable to restricted use case scenarios. _________________________________________________ Melhores cumprimentos / Beir beannacht / Best regards António Manuel dos Santos Mota http://card.ly/amsmota _________________________________________________ On 10 June 2010 01:00, Eric J. Bowman <eric@...> wrote: > > > Tim Williams wrote: > > > > I read Roy's > > mail above to be further clarification that you should only link to a > > specific representation when "the resource semantics include the > > format." To be honest, I have no clue what that quote means:) > > > > If you want the HTML variant, you link to the .html URI. Of course, > URIs are opaque, so that URI could be anything, it doesn't have to end > in .html. If we're assigning a .html URI to an HTML variant, we're only > linking to the .html when we want to link to a resource whose semantics > include the notion that it's HTML. 
> > So, the only time we'd be linking to .html, is when we've decided that > we want to directly dereference HTML instead of using conneg, and used > .html accordingly as the extension, because we want the URI to reflect > that it's HTML, now and for all time. > > Still nothing to do with conneg, though. Conneg allows multiple > representations to be served from the same URI, but conneg isn't > required if we're just talking about cool URIs without file extensions. > Which is all Roy was talking about, there -- advice on naming URIs, not > advice on identifying resources which need URIs, not advice which > translates into linking to one resource vs. another resource. > > > > > > > Then link to negotiated URIs instead of variant URIs. The ability > > > to link the concepts together, not specific representations, is > > > provided by content negotiation. Assigning URIs to variants does > > > not negate this ability. > > > > I thought you were saying that best practice is to link to a specific > > representation. To which, I said it'd create problems when linking > > concepts together. But here you're saying the solution is not to link > > to a specific representation? > > > > No, the only thing I said was best practice, is to assign a URI to any > variant you want to directly dereference (or cache). That way, if it's > a specific variant you're after, you can link directly to that variant. > > If you're not after a specific variant, then use the negotiated URI in > your links. If you are after a specific variant, then link to whatever > URI you gave it, file extension or not. > > If you don't assign URIs to your variants, and you don't care about > caching or directly accessing them (and we aren't talking about > compression), then it's no big deal. > > If you do want to directly dereference a variant, then the *only* way > you can do that, is to give that variant a URI such that it's possible > to link to it. Not by borking @type. 
> > So, assigning URIs to variants is best practice. What to link to > depends on what you're trying to accomplish, not REST. What to call > your URIs may be informed by cool URIs, but has nothing to do with REST > where URIs are opaque. > > > > > > I gathered the advice about leaving out file extensions implied - link > > to resources, not their representation? > > > > No, URIs are opaque. Link to the resource which matches the abstraction > you're after. Linking to the URI of a variant, is still a link to a > resource, you can't link to a representation... well, not without > borking @type first... > > Given a resource A and a variant resource A', you link to A when you > want conneg, but you link to A' if you're looking to bypass conneg and > retrieve a specific variant. What URIs to assign to A and A' and > whether or not file extensions is used, is neither here nor there -- it > only matters that the variants *have* URIs to link *to*. > > -Eric > >
On 09.06.2010, at 15:34, Guilherme Silveira wrote: > > +1. This is exactly why I feel claims that JAX-RS isn't RESTful aren't helping, they're misleading. And claiming JAX-RS is on the same level as WCF is somewhat insulting :-) > Yes, you are right. JAX-RS allows restful services to be created, and > you guys are probably doing that already. > > The problem that I see in 'our market' is that, unfortunately, > developers who did not read anything about how hypermedia could > improve their systems pick the tool and use it in the same way as they > were doing before, not benefiting from the usage. Don't you see this > problem happening in the market? > Don't get me wrong, I totally agree that hypermedia support should be added to frameworks, both on the client and server side, and I applaud your efforts to do that. I just believe that something like JAX-RS makes it a lot easier to build RESTful solutions than, say, the Servlet API or WCF; claiming it's not RESTful doesn't help anybody either. Stefan -- Stefan Tilkov, http://www.innoq.com/blog/st/
From what I have seen of Jersey it looks nice. Would be cool if it required fewer annotations and supported more conventions. GetOrder, for example, would map to GET, PostOrder to POST, etc. It did remind me a bit of WCF with the URI mapping stuff, though more flexible. Still much more limited than routes in Rails or MVC routes. Glenn On 6/10/10, Stefan Tilkov <stefan.tilkov@...> wrote: > On 09.06.2010, at 15:34, Guilherme Silveira wrote: > >> > +1. This is exactly why I feel claims that JAX-RS isn't RESTful aren't >> > helping, they're misleading. And claiming JAX-RS is on the same level as >> > WCF is somewhat insulting :-) >> Yes, you are right. JAX-RS allows restful services to be created, and >> you guys are probably doing that already. >> >> The problem that I see in 'our market' is that, unfortunately, >> developers who did not read anything about how hypermedia could >> improve their systems pick the tool and use it in the same way as they >> were doing before, not benefiting from the usage. Don't you see this >> problem happening in the market? >> > > Don't get me wrong, I totally agree that hypermedia support should be added > to frameworks, both on the client and server side, and I applaud your > efforts to do that. I just believe that something like JAX-RS makes it a lot > easier to build RESTful solutions than, say, the Servlet API or WCF; > claiming it's not RESTful doesn't help anybody either. > > Stefan > -- > Stefan Tilkov, http://www.innoq.com/blog/st/ > > > -- Sent from my mobile device
On Jun 10, 2010, at 6:04 PM, Glenn Block wrote: > From what I have seen of Jersey it looks nice. Would be cool if it > required less annotations and supported more conventions. GetOrder > for example would map to Get, PostOrder to post etc. IIRC, this was even in the spec in some stage; it was taken out because it didn't really fit Java's nature. I may recall this wrongly. > It did remind me > a bit of WCF with the URI mapping stuff, though more flexible. Still > much more limited than routes in rails or MVC routes. One of the things I don't like that much about JAX-RS is the fact that it spread the knowledge about the routing all over the place, I much prefer a central place for that (like in Rails or many other web frameworks). Stefan -- Stefan Tilkov, http://www.innoq.com/blog/st/
What's UDDI? ;) Subbu On Jun 10, 2010, at 3:39 AM, Sean Kennedy wrote: > > > Hi, > I work in academia so would be grateful for the industry perspective. I am working on a thesis which includes both WS-* and REST. If UDDI is in fact dead http://www.innoq.com/blog/st/2010/03/uddi_rip.html how are enterprises communicating the WSDL files? Are we talking email, publishing on a Web site, inserting into a db?? > > Thanks, > Sean. > > PS Would the same apply for WADL files? > > > >
Consider the path of least resistance. My guess is url to wsdl sent via email or some wiki with a link in it. On Jun 10, 2010, at 3:39 AM, Sean Kennedy wrote: > > > Hi, > I work in academia so would be grateful for the industry perspective. I am working on a thesis which includes both WS-* and REST. If UDDI is in fact dead http://www.innoq.com/blog/st/2010/03/uddi_rip.html how are enterprises communicating the WSDL files? Are we talking email, publishing on a Web site, inserting into a db?? > > Thanks, > Sean. > > PS Would the same apply for WADL files? > > > >
It's sad that UDDI still lives in web services textbooks and university courses as a key component of SOAP-based service oriented systems. Dong On Thu, Jun 10, 2010 at 3:22 PM, Subbu Allamaraju <subbu@...> wrote: > > > What's UDDI? > > ;) > > Subbu > > > On Jun 10, 2010, at 3:39 AM, Sean Kennedy wrote: > > > > > > > Hi, > > I work in academia so would be grateful for the industry perspective. I > am working on a thesis which includes both WS-* and REST. If UDDI is in fact > dead http://www.innoq.com/blog/st/2010/03/uddi_rip.html how are enterprises > communicating the WSDL files? Are we talking email, publishing on a Web > site, inserting into a db?? > > > > > Thanks, > > Sean. > > > > PS Would the same apply for WADL files? > > > > > > > > > > >
Not to mention in my job, unfortunately. I always like to promote the necessity of some sort of registry of UDDI registries. My proposed name is the "Multiverse of Universal Description Discovery and Integration" or MUDDI. No takers so far. StanD. Dong Liu wrote: > > > It's sad that UDDI still lives in web services text books and university > courses as a key component of SOAP-based service oriented systems. > > > Dong > > On Thu, Jun 10, 2010 at 3:22 PM, Subbu Allamaraju <subbu@... > <mailto:subbu@...>> wrote: > > > > What's UDDI? > > ;) > > Subbu > > > > On Jun 10, 2010, at 3:39 AM, Sean Kennedy wrote: > > > > > > > Hi, > > I work in academia so would be grateful for the industry > perspective. I am working on a thesis which includes both WS-* and > REST. If UDDI is in fact > deadhttp://www.innoq.com/blog/st/2010/03/uddi_rip.html how are > enterprises communicating the WSDL files? Are we talking email, > publishing on a Web site, inserting into a db?? > > > > > Thanks, > > Sean. > > > > PS Would the same apply for WADL files? > > > > > > > > > >
On Jun 10, 2010, at 10:54 PM, Stefan Tilkov wrote: > On Jun 10, 2010, at 6:04 PM, Glenn Block wrote: > >> From what I have seen of Jersey it looks nice. Would be cool if it >> required less annotations and supported more conventions. GetOrder >> for example would map to Get, PostOrder to post etc. > > IIRC, this was even in the spec in some stage; it was taken out > because it didn't really fit Java's nature. I may recall this wrongly. > Correct. Methods starting with "get" are very common in Java so there needs to be a way to distinguish between such methods that consume HTTP requests and those that do not (e.g. some developers reuse a JAXB bean as a resource class, whether that is the right thing to do or not is a different discussion!). Also we wanted to have one way and be explicit for both developers and tooling. >> It did remind me >> a bit of WCF with the URI mapping stuff, though more flexible. Still >> much more limited than routes in rails or MVC routes. > > One of the things I don't like that much about JAX-RS is the fact > that it spread the knowledge about the routing all over the place, I > much prefer a central place for that (like in Rails or many other > web frameworks). > Tooling can help in such matters (e.g. NetBeans, and I think IDEA?, help in this regard). Perhaps we could address this for a JAX-RS 2.0 effort? A few JAX-RS implementations, at least CXF and Jersey [1] for example, included some form of support. Paul. [1] https://jersey.dev.java.net/nonav/apidocs/latest/jersey/com/sun/jersey/api/core/ResourceConfig.html#getExplicitRootResources%28%29
António Mota wrote: > > OK, I think I understood what I was missing, when you say that is a > good practice to assign URI to variants, you're not saying that is a > good practice *in general* but *only* for the restricted situations > where the server wants to enable the users to directly reference a > specific variant and/or if it wants to take advantage of caching > capabilities, is that so? > No, this has nothing to do with my say-so on anything. I'm trying to explain why RFC 2616 says you SHOULD assign URIs to variants -- a SHOULD in a spec indicates a best practice. The most prevalent use of content negotiation is compression. As I've bent over backwards to state, you don't need to assign URIs to the variants involved in compression, which is why RFC 2616 doesn't say you MUST assign URIs to variants. So, assigning URIs to variants doesn't apply to the general case, but it does apply to all other cases. For any other use of content negotiation besides compression, assign URIs to variants. Unless you don't care about caching, etc. in which case you don't really care about REST or following RFC 2616. I am not advocating that you ignore the SHOULD, merely pointing out the consequences. Please don't make this overly complicated. For the fiftieth time, assign URIs to all your variants, except for compression. > > However, thinking about your example of /A and /A.html that good > practice make sense, but what about resources that produce their > content "on-the-fly", and not a static page like that? > URIs are opaque. I don't know how you can tell from just "/A" that it must be a static page? The answer is that it makes no difference whatsoever to anything I've said, whether either of those resources or any other resources I may have tossed out as examples, are static or dynamic. To a REST connector it's just a bunch of response bytes, as implementation details are opaque behind the uniform interface. 
> > And also in that example, suppose the client references the /A.html > as a representation of /A, then that manipulation of /A.html has to > be made thru a representation of /A.html, does it make sense to also > assign URI's to the representation of that representation of /A? I > suppose it depends on the "importance" of those representations? > Your terminology is, errr, not so good, so the only chance I have of answering that question is to rewrite it first: "If the user agent dereferences /A and the response is the /A.html variant, then that manipulation of /A.html..." What manipulation of /A.html? The user agent is dereferencing /A. The response is a variant with a Content-Location of /A.html, not a Location of /A.html. There's only one request-response here, the user agent knows nothing of /A.html because the user agent hasn't dereferenced /A.html. "If the user agent dereferences /A and the response is the /A.html variant, then that manipulation of /A.html has to be made by transferring a representation of /A.html..." I don't follow. URIs are opaque, you are deducing an awful lot from some hypothetical example not-really-even-URIs. The user agent dereferences /A and retrieves instructions on how to render a steady-state, which presents the user with options for transitioning to other application steady-states. In this case, dereferencing /A responds with the variant we've labeled /A.html in Content-Location to keep caching straight. This variant may instruct the user agent how to update /A using hypertext. You can't assume that this will involve manipulating /A.html -- the user agent may be instructed to PUT to /A.atom, with the result being that both /A and /A.html are updated. You can't assume that the user agent will ever be instructed to dereference or manipulate /A.html. Content-Location is only a label, it implies no behavior and is not any sort of instruction to client connectors. 
In order to have anything to put in Content-Location you must first assign URIs to your variants (with the exception of compression). "Does it make sense to also assign URI's to the variants of the variants of /A?" None whatsoever. Why would /A.html have any variants, except for compression? The entire purpose of assigning URIs to variants is to access them as resources in their own right, tied to a specific media type (which may or may not be expressed as a filename extension), or language, etc. So the only conneg left to do is compression, if /A.html is dereferenced, which of course is not a given. > > Nevertheless I think to call this "best practice" induces in error > (it did with me) because it's only applicable to restricted use-case > scenarios. > No, it applies to every use case except compression, as per the SHOULD in RFC 2616. Ignoring said SHOULD is a deviation from best practice. What I'm saying can't be put any more simply than "assign URIs to your variants, except for compression." That's best practice for the theoretical reason that it's what RFC 2616 says to do, and for the pragmatic reason that the real-world Web depends on your doing this because that's how the Web actually works in reality. Don't fight it. -Eric
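Eric's rule — the negotiated resource /A answers a GET with one of its variants, labeled by that variant's own URI in Content-Location — can be sketched as a small server-side lookup. The variant map, URIs, and function below are hypothetical, a minimal sketch of the mechanism rather than a real server:

```python
# Minimal sketch: /A is the negotiated resource; each variant has its own
# URI, and the response to GET /A names the chosen variant's URI in
# Content-Location as a label. VARIANTS and the URIs are invented.

VARIANTS = {
    "/A": {
        "text/html": "/A.html",
        "application/atom+xml": "/A.atom",
    }
}

def negotiate(request_uri, accept):
    """Pick a variant of request_uri by media type (accept is in client
    preference order) and return the response headers."""
    variants = VARIANTS[request_uri]
    for media_type in accept:
        if media_type in variants:
            return {
                "Content-Type": media_type,
                "Content-Location": variants[media_type],  # just a label
                "Vary": "Accept",
            }
    return {"Status": "406 Not Acceptable"}

headers = negotiate("/A", ["application/atom+xml", "text/html"])
# headers["Content-Location"] is "/A.atom"
```

Note that because each variant has its own URI, /A.html and /A.atom can also be fetched directly (and cached, and tested with curl) without going through the negotiation at /A at all.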
As I said, I think I understand the "principle", but not the necessity of applying it in all situations (except compression). Just some more notes: 2010/6/11 Eric J. Bowman <eric@...> > > So, assigning URIs to variants doesn't apply to the general case, but it > does apply to all other cases. > > I still don't see any other use cases except the client being able to dereference a specific variant or for use with a cache. Both of which are not that important inside an intranet. > > URIs are opaque. I don't know how you can tell from just "/A" that it > must be a static page? The answer is that it makes no difference > whatsoever to anything I've said, whether either of those resources or > any other resources I may have tossed out as examples, are static or > dynamic. To a REST connector it's just a bunch of response bytes, as > implementation details are opaque behind the uniform interface. > > I know URIs are opaque, I was just pointing to your examples. But my point is precisely that one. If "it's just a bunch of response bytes", how can a non-static resource be cached if each time it is dereferenced it will probably have a different bunch of bytes? For instance /currentime is always different and so is not cacheable, right? What's the importance then of having a fixed URI to variants of this resource (if you also consider that we should never allow the client to call specific variants)? > > > And also in that example, suppose the client references the /A.html > > as a representation of /A, then that manipulation of /A.html has to > > be made thru a representation of /A.html, does it make sense to also > > assign URI's to the representation of that representation of /A? I > > suppose it depends on the "importance" of those representations? > > > > Your terminology is, errr, not so good, so the only chance I have of > answering that question is to rewrite it first: > Yes, my English is far from good... 
> > "If the user agent dereferences /A and the response is the /A.html > variant, then that manipulation of /A.html..." > > What manipulation of /A.html? The user agent is dereferencing /A. The > response is a variant with a Content-Location of /A.html, not a > Location of /A.html. There's only one request-response here, the user > agent knows nothing of /A.html because the user agent hasn't > dereferenced /A.html. > OK, I see, the dereferencing of /A.html is made by the server itself, not the user agent, so the user agent never "sees" it? > > "If the user agent dereferences /A and the response is the /A.html > variant, then that manipulation of /A.html has to be made by > transferring a representation of /A.html..." > > I don't follow. URIs are opaque, you are deducing an awful lot from > some hypothetical example not-really-even-URIs. The user agent > dereferences /A and retrieves instructions on how to render a > steady-state, which presents the user with options for transitioning to other > application steady-states. > Well, by "manipulation" I was only thinking of GETting it, not of changing it. I was pointing out only that if the variant of /A that we assigned a URI of /A.html is a resource on its own, that implies that there is also (at least one) representation of /A.html that we could wish, or not, to assign its own URI, like /A.html.en, /A.html.pt... > > "Does it make sense to also assign URI's to the variants of the variants > of /A?" > > None whatsoever. Why would /A.html have any variants, except for > compression? The entire purpose of assigning URIs to variants is to > access them as resources in their own right, tied to a specific media > type (which may or may not be expressed as a filename extension), or > language, etc. So the only conneg left to do is compression, if /A.html > is dereferenced, which of course is not a given that it will be. > I was thinking about different languages for the same resource/variant as my previous example. 
> > > > > Nevertheless I think to call this "best practice" induces in error > > (it did with me) because it's only applicable to restricted use-case > > scenarios. > > > > No, it applies to every use case except compression, as per the SHOULD > in RFC 2616. Ignoring said SHOULD is a deviation from best practice. > > What I'm saying can't be put any more simply than "assign URIs to your > variants, except for compression." That's best practice for the > theoretical reason that it's what RFC 2616 says to do, and for the > pragmatic reason that the real-world Web depends on your doing this > because that's how the Web actually works in reality. Don't fight it. > > Well, that argument of "that's how the Web actually works" only goes so far. The Web actually works with cookies too, which by consensus are not RESTful...
Sean, In all seriousness, the transmission vector for the vast majority of WSDL URI's *by service count* is email as they are private or semi-private custom service endpoints within and between companies. For public services, yes, the transmission vector is the humble web page. Regards, Alan Dean On Thu, Jun 10, 2010 at 11:39, Sean Kennedy <seandkennedy@...>wrote: > > > Hi, > I work in academia so would be grateful for the industry perspective. I > am working on a thesis which includes both WS-* and REST. If UDDI is in fact > dead http://www.innoq.com/blog/st/2010/03/uddi_rip.html how are > enterprises communicating the WSDL files? Are we talking email, publishing > on a Web site, inserting into a db?? > > Thanks, > Sean. > > PS Would the same apply for WADL files? > > >
António Mota wrote: > > As I said, I think I understand the "principle", but not the > necessity of applying it in all situations (except compression). Just > some more notes: > It pushes my buttons when your reply comes so fast that it took you more time to write it, than you spent reading my reply. I am trying to impart some of the wisdom I've accumulated through a dozen years of experience with conneg and REST (back then REST was called HTTP Request Object). I am not trying to trick you into doing something that isn't in your best interests, I'm pointing out a best practice that is in your best interests, because it's also a REST constraint. How difficult is it to understand that there is one exception to the SHOULD, and that's compression? You can keep asking me about every possible exception out there, but it won't change my answer -- it will only annoy the crap out of me. If these possible exceptions aren't compression, my answer remains "no." Seriously, how much more concisely and unequivocally can I state my position? > > > > > So, assigning URIs to variants doesn't apply to the general case, > > but it does apply to all other cases. > > > > I still don't see any other use cases except the client being able to > dereference a specific variant or for use with a cache. Both of which > are not that important inside an intranet. > If the intranet context (or anything else) was a valid exception to the SHOULD, then I wouldn't be saying until I'm blue in the face that the only valid exception to the SHOULD is compression. Besides, this is not intranet-discuss, this is rest-discuss. I refuse to tailor my answers to the specific needs of those whose systems do not need REST's primary benefit of anarchic scalability over the real-world Web. That intranets have nowhere near the scaling requirements of Web systems, is simply not relevant to any discussion of REST, nor is it a reason not to implement REST. 
What I've learned from doing this for a dozen years, is that your life gets infinitely easier when dealing with conneg, if variants are assigned their own URIs. If for no other reason than to be able to test and maintain the system properly. Why develop any architecture, particularly a REST architecture, to be incompatible with caching just because it isn't an immediate need? Have you not been paying attention to anything I write about how REST is a goal for the long-term evolution of a system rather than a solution for its immediate needs? If it turns out after you've deployed an intranet system, that caching indeed would be nice, wouldn't it make a lot more sense to have followed the Identification of Resources constraint in the first place, such that you can just drop squid in where and as needed, instead of requiring a fully-coupled caching solution like cache channels? Following REST from the get-go prevents you from painting yourself into the corner like that. One benefit of the Identification of Resources constraint is caching. That does not mean that because you don't care about caching today, you can just ignore that constraint. OTOH, by applying that constraint, your system can evolve in a scalable fashion over the long term. Why bend over backwards to avoid that, for the sake of not minting some URIs? Your position makes no sense to me. > > > > > URIs are opaque. I don't know how you can tell from just "/A" that > > it must be a static page? The answer is that it makes no difference > > whatsoever to anything I've said, whether either of those resources > > or any other resources I may have tossed out as examples, are > > static or dynamic. To a REST connector it's just a bunch of > > response bytes, as implementation details are opaque behind the > > uniform interface. > > > > I know URIs are opaque, I was just pointing to your examples. But my > point is preciselly that one. 
If "it's just a bunch of response > bytes", how can a non-static resource be cached if each time it is > dereferenced it will probably have a diferent bunch of bytes? > Look at the demo I posted. The URIs you dereference are just stubs whose content (metadata) rarely changes. All steady-states are rendered using client-side XSLT to include other resources. Those other resources have different cache optimizations according to their nature. The caching of the initial representation is not coupled to the caching of any resource making up the steady-state. It just calls an XSLT transformation. This is no different than any HTML page which calls an external CSS file. Updating the CSS has absolutely no effect on the freshness of any representation linking to the CSS. When my system is fleshed out, it will implement XHR to update the number of replies in a thread, wherever that information is needed. That way, those pages dynamically update, without affecting the caching of the representation which calls that XHR. You are scraping the bottom of the barrel now, looking for edge cases and exceptions. Why? The answer remains assign URIs to variants, and architect your way around these issues you bring up, such that they don't matter. Nothing you mention is a showstopper, I doubt you will ever come up with anything that is or which shows best practice to be inherently flawed, nor will you convince me that the Identification of Resources constraint may be safely ignored in the intranet context... Just as you will not prove to me that glass is a solid. You need to learn why this is the way it is, instead of desperately seeking cases you think might disprove this, and confusing the rest of the class while bugging your professor, who has already been incredibly patient in pointing out time and again that the *only* exception here is compression. Especially since just minting the damn URIs is so simple and has no downside. 
> > For instance /currentime is always different and so is not cacheable, > right? What's the importance then of having a fixed URI to variants > of this resource (if you also consider that we should never allow the > client to call specific variants)? > If /currentime is a negotiated resource, then assigning URIs to its variants, aside from following the spec and applying the Identification of Resources constraint, makes it one heckuva lot easier to curl the variants for testing, independent of the conneg mechanism. I can't imagine how much harder you're making it to develop and maintain a system by only being able to access variants by using curl with Accept headers. This was the first thing I figured out a dozen years ago, when I started using conneg, and it's held true ever since -- trying to develop a conneg system without assigning URIs to variants is a thousand times more difficult than just minting the damn URIs. So please, just follow the spec and apply the REST constraint. It's so much easier than flogging a horse that's been dead since the last millennium, when this debate was SETTLED. Find all the edge cases you want, where you wouldn't want to cache or directly dereference variants. How does this override the SHOULD or the Identification of Resources constraint? As I've said a million times now, the exception to assigning URIs to variants is compression, not your desire to avoid applying a REST constraint or following RFC 2616, for reasons which still elude me entirely -- there's no downside to assigning URIs to variants, so why are you looking so hard for exceptions to this best practice? I already told you _the_ exception: compression. > > > > > > > And also in that example, suppose the client references > > > the /A.html as a representation of /A, then that manipulation > > > of /A.html has to be made thru a representation of /A.html, does > > > it make sense to also assign URI's to the representation of that > > > representation of /A? 
I suppose it depends on the "importance" of > > > those representations? > > > > > > > Your terminology is, errr, not so good, so the only chance I have of > > answering that question is to rewrite it first: > > > > Yes, my English is far from good... > Your grasp of REST terminology is a separate issue from your grasp of English. I couldn't care less about your grasp of English. > > > > > "If the user agent dereferences /A and the response is the /A.html > > variant, then that manipulation of /A.html..." > > > > What manipulation of /A.html? The user agent is dereferencing /A. > > The response is a variant with a Content-Location of /A.html, not a > > Location of /A.html. There's only one request-response here, the > > user agent knows nothing of /A.html because the user agent hasn't > > dereferenced /A.html. > > > > OK, I see, the dereferencing of /A.html is made by the server itself, > not the user agent, so the user agent never "sees" it? > The server isn't dereferencing anything. Perhaps /A.html is an actual file on the filesystem of the origin server, perhaps not, it does not matter. The server is responding to a request for /A with whatever response code, headers and entity the system's coding tells it to. One of those headers contains a URI which other components may use in order to distinguish between variants -- it's just a label. > > > > > "If the user agent dereferences /A and the response is the /A.html > > variant, then that manipulation of /A.html has to be made by > > transferring a representation of /A.html..." > > > > I don't follow. URIs are opaque, you are deducing an awful lot from > > some hypothetical example not-really-even-URIs. The user agent > > dereferences /A and retrieves instructions on how to render a > > steady-state, which presents the user with options for > > transitioning to other application steady-states. > > > > Well, by "manipulation" I was only thinking of GETting it, not to > change it. 
I was pointing only that if the variant of /A that we > assigned a URI of /A.html is a resource on ot's own that implies that > there is also (at least one) representation of /A.html that we could > wish or not to assign it's how URI, like /A.html.en, /A.html.pt... > No! Absolutely not! The appearance of a URI in a Content-Location header is just a label. It implies nothing, you can make no assertions based on its presence, it doesn't even imply that you can dereference /A.html let alone say anything about the number of representations of /A.html, and it certainly doesn't imply some additional negotiation layer -- which, if you were using transparent conneg, is actually a 506 Variant Also Negotiates error as per RFC 2295. If there were different languages to negotiate, and each language varies in possible media types, then the system would compute the language, then the media type, then send a response to 'GET /A' with the appropriate headers including Content-Location, whose URI says nothing about anything since it's just labelling a variant for the purpose of distinguishing it from other variants. Stop making this impossible for yourself to ever comprehend. If you have a resource /A which varies by media type and language, then you have a set of variants to which you can assign URIs. You don't take the variants of each language and make them negotiable URIs based on media type, that leads the user agent around in a circle. Just give a different URI to each variant -- pretend those URIs are random gobbledygook with no apparent relation to one another (i.e. opaque). They're just labels, not a Location where the user agent needs to conduct further content negotiation. > > > > > "Does it make sense to also assign URI's to the variants of the > > variants of /A?" > > > > None whatsoever. Why would /A.html have any variants, except for > > compression? 
The entire purpose of assigning URIs to variants is to > > access them as resources in their own right, tied to a specific > > media type (which may or may not be expressed as a filename > > extension), or language, etc. So the only conneg left to do is > > compression, if /A.html is dereferenced, which of course is not a > > given that it will be. > > > > I was thinking about diferent languages for the same resource/variant > as my previous example. > The answer does not change based on the number of different headers you're considering for the negotiation. Resource /A has a set of variants, it doesn't matter whether they're by media type, language, or both media type and language, or compressed, or not compressed, the result is a set of variants for /A which need URIs assigned to them. What you're saying, is that you were thinking that the user agent would dereference the Content-Location URI to conduct further negotiation. No! This would never happen, because Content-Location is not an instruction to dereference anything. That's what Location does. So if /A.html were negotiable, how would the user agent ever know about it? The negotiated resource is /A , because I said that my example /A is a negotiated resource. How you can assume that means more negotiation would occur at /A.html because it's in Content-Location, when Content-Location is just a label containing an opaque URI, escapes me. You're making this a million times more difficult than it would be if you could just accept for a fact, that it's best practice to assign URIs to your variants... Trying to escape that reality is leading you into some incredibly convoluted hypotheticals, whose rebuttals are only making yourself and others more confused. Why can't you just assign URIs to your variants, and learn from the experience why it's desirable? 
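Eric's point that negotiation happens once — on the negotiated resource, yielding one flat set of variants rather than nested rounds of conneg at the variant URIs — can be sketched like this. The variant table and URIs below are hypothetical; assume a simplified server that considers language and media type together in a single pass:

```python
# Sketch of the "flat variant set": /A varies by both media type and
# language, but one round of negotiation picks one variant, labeled with
# one opaque URI. FLAT_VARIANTS and the URIs are invented examples.

FLAT_VARIANTS = {
    ("text/html", "en"): "/A.html.en",
    ("text/html", "pt"): "/A.html.pt",
    ("application/atom+xml", "en"): "/A.atom.en",
}

def negotiate_flat(accept, accept_language):
    """Compute language and media type together (both lists are in
    client preference order); label the response with a single
    variant URI -- no further negotiation happens at that URI."""
    for lang in accept_language:
        for media_type in accept:
            uri = FLAT_VARIANTS.get((media_type, lang))
            if uri is not None:
                return {
                    "Content-Type": media_type,
                    "Content-Language": lang,
                    "Content-Location": uri,
                    "Vary": "Accept, Accept-Language",
                }
    return None  # would be a 406 Not Acceptable in practice
```

The point of the flat table is that /A.html.pt never itself negotiates language or media type; as the thread says, anything else would be a 506 Variant Also Negotiates situation under RFC 2295's transparent conneg.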
Surely that would be more productive than convoluted theoretical debate seeking for exceptions using edge-case examples, which will only serve to ensure that you never learn REST? > > > > > > > > > Nevertheless I think to call this "best practice" induces in error > > > (it did with me) because it's only applicable to restricted use > > > case. scenarios. > > > > > > > No, it applies to every use case except compression, as per the > > SHOULD in RFC 2616. Ignoring said SHOULD is a deviation from best > > practice. > > > > What I'm saying can't be put any more simply than "assign URIs to > > your variants, except for compression." That's best practice for > > the theoretical reason that it's what RFC 2616 says to do, and for > > the pragmatic reason that the real-world Web depends on your doing > > this because that's how the Web actually works in reality. Don't > > fight it. > > > > Well, that argument of "that's how the Web actually works" goes as > far as it goes. The web actually works with cookies too, that are > consensually not RESTfull... > Sigh. Roy's thesis clearly explains that cookies are a REST mismatch, as most commonly used (although there are uses of cookies which don't amount to storing application state, which aren't REST mismatches). Are you seriously trying to rebut the explanation of a constraint, by comparing that constraint to a known REST mismatch? Given the congruent development of REST and the Web, the way conneg works on the real-world Web is both the basis for, and the expression of, the Identification of Resources constraint. This is a constraint, not a mismatch. Resorting to bringing up cookies is something I can't take seriously. I have done everything I can in this thread to explain that the SHOULD requirement for assigning URIs to variants works on the real-world Web, because that aspect of the real-world Web is behaving according to the constraints of REST. Your response to that is that cookies are a REST mismatch? 
What does that even mean, except that there's really no point in furthering this discussion with you, because you'll apparently stop at nothing, no matter how patently absurd, in an effort to dispute what I'm saying? I'm done here, as there's obviously no point in continuing. Come back when you've decided that you want to learn REST instead of wasting my time. -Eric
> > Given the congruent development of REST and the Web, the way conneg > works on the real-world Web is both the basis for, and the expression > of, the Identification of Resources constraint. > That's too strong; conneg is a partial basis for, and assigning URIs to variants a partial expression of, Identification of Resources. Still, this thread is entirely about real-world Web behavior that's consistent with REST's constraints, which is not to be confused with discussion about real-world Web behavior (like stateful cookies) that's the result of a REST mismatch. Apples and oranges. -Eric
2010/6/12 Eric J. Bowman <eric@...>:
> António Mota wrote:
>>
>> As I said, I think I understand the "principle", but not the
>> necessity of applying it in all situations (except compression).
>> Just some more notes:
>>
>
> It pushes my buttons when your reply comes so fast that it took you
> more time to write it than you spent reading my reply. I am trying
> to impart some of the wisdom I've accumulated through a dozen years
> of experience with conneg and REST (back then REST was called HTTP
> Request Object). I am not trying to trick you into doing something
> that isn't in your best interests; I'm pointing out a best practice
> that is in your best interests, because it's also a REST constraint.

Please can you stop repeating this 'point'? You've been asked multiple times now to *explain* how identification of resources stands as justification for a blanket undermining of the distinction between resource and representation.

Does the current overall web arch interpretation of resource and representation consider it an incorrect approach? Possibly. Is it possible for this 'alternative' interpretation to co-exist? Probably - the best FUD you could come up with was some bizarre analogy with fragment identifiers. Does the REST style in its essence have anything to say about these kinds of implementation specifics? It would appear not.

You can't keep jumping from arguing violation of REST principles to arguing over feasibility in practice on the web. Pick one, and stick to it please.

> How difficult is it to understand that there is one exception to the
> SHOULD, and that's compression?

Because you're not articulating it well enough? Because it's far more subjective than you would have us believe? Because you're wrong?

> You can keep asking me about every possible exception out there, but
> it won't change my answer -- it will only annoy the crap out of me.
> If these possible exceptions aren't compression, my answer remains
> "no." Seriously, how much more concisely and unequivocally can I
> state my position?

With all due respect, I don't think concise is a word I'd associate with your postings here. What your position is, is not particularly interesting - why it is held, is, though. Try focusing your efforts to be concise on why instead of what; then we might get somewhere. Having to trawl through long-winded, meandering explanations is annoying the crap out of me, personally.

If it bothers you that much, perhaps it's time to Accept that you aren't capable of making the point coherently enough for the intended audience, and move on.

Cheers,
Mike
In general, I feel claiming any toolchest/framework is not RESTful does not help at all. It may be better to have a study that explains:

1. What features may be useful for creating something REST compliant.
2. What features may mislead the programmer into creating something that looks RESTful but is not.
3. What features are missing.
4. How well the features are designed to allow for natural REST development.
5. How well the features blend into the language.

Cheers!

William Martinez.

--- In rest-discuss@yahoogroups.com, Stefan Tilkov <stefan.tilkov@...> wrote:
>
> On Jun 5, 2010, at 11:59 AM, Bill de hOra wrote:
>
> > +1. This is exactly why I feel claims that JAX-RS isn't RESTful
> > aren't helping, they're misleading. And claiming JAX-RS is on the
> > same level as WCF is somewhat insulting :-)
>
> Stefan
> --
> Stefan Tilkov, http://www.innoq.com/blog/st/
>
Mike Kelly wrote:
>
> Please can you stop repeating this 'point', you've been asked multiple
> times now to *explain* how identification of resources stands as
> justification for a blanket undermining of the distinction between
> resource and representation.
>

Every time I'm asked to explain how a REST constraint violates REST, my response will be the same -- it doesn't. What undermining of distinction? GET /A responds with one of a set of variant representations. When conneg is utilized, aside from compression, these variants are also resources in their own right (whether you accept this and give them URIs or not). This doesn't stop them from being variant representations of the negotiated resource, because there's no constraint which limits a representation to having only one identifier. Can you explain why you think that a representation having multiple URIs undermines the definition of resource or representation, in terms of Roy's thesis?

If your understanding of resource and representation leads you to believe that the established best practice (the SHOULD in RFC 2616) is a REST mismatch that's escaped Roy's notice, then the burden of explanation is on you -- you can't ask me to explain it for you when I disagree with the notion entirely. Assigning URIs to variants (aka following RFC 2616) works in practice because it meets the Identification of Resources constraint. If your understanding of REST is at odds with this reality, i.e. you see a REST mismatch there when nobody else does, then I suggest that the solution for you is to change your understanding of REST.

REST's definitions of representation and resource are grounded in the need to explain how one network address can house multiple entities based on user or user agent preference, and how one entity can reside at more than one network address. It does so nicely, in a way which encompasses the conneg requirement of assigning URIs to variants such that components have some way to distinguish one variant from another. (Except for compression.) This entire notion is required in order to understand the concept of the late binding of a representation to a resource. You're having trouble grasping conneg the way everyone else does (this debate was settled a dozen years ago by the consensus of RFC 2616), i.e. you're having trouble with the late-binding bit, I suspect because you're having trouble with the resource/representation bit.

When my quaint notion of some REST concept leads me to think there's a REST mismatch in HTTP nobody else has noticed, I personally take that as a sign that the problem lies in my quaint notion of that concept. There's no shame in that. I've freely admitted here that it took me over a year to understand Roy's weblog entry about REST APIs being hypertext-driven, because it meant I had to accept that many of my own quaint notions of REST were wrong. In most cases, my results were still RESTful in spite of myself, but the 'aha!' moment last December led to beneficial changes I'd never have seen had I not been able to admit that I didn't know what I was talking about.

It took me a few years to figure out what REST was. It then took me a few more years to know what I was talking about. A few years later, I discovered to my chagrin that I didn't know, which was the key to my finally learning REST enough that I finally do know what I'm talking about, now. It's all so clear I can't believe it was ever so hard. So I mean no disrespect when I suggest that someone may be holding themselves back by holding a quaint notion of some REST concept. REST is hard to learn, but once you have, everything falls into place and it's easy to see why things are the way they are on the real-world Web.
> Does the current overall web arch interpretation of resource and
> representation consider it an incorrect approach? Possibly.

This is rest-discuss, not webarch-discuss. Roy's thesis explicitly names the REST mismatches in HTTP. Conspicuously absent from the list is any hint that assigning URIs to variants is a REST mismatch. The thesis is also explicit about the reasons for and effects of the Identification of Resources constraint. Failing to assign URIs to variants results in the problems the thesis predicts for breaking that constraint. Assigning URIs to variants results in the benefits the thesis predicts for applying that constraint. So REST obviously does not consider that SHOULD in RFC 2616 to be an incorrect approach; if WebArch does, then that's off-topic.

> Is it possible for this 'alternative' interpretation to co-exist?
> Probably - the best FUD you could come up with was some bizarre
> analogy with fragment identifiers.

You're sticking with changing the semantics of @type, and claiming that this 180-degree reversal of its semantics will "probably" co-exist. I still say it can't, but that it's irrelevant. What's relevant to your suggested borking of @type is the layered system constraint. There is no 'alternative interpretation' of REST where a media type may dictate the Accept header; this is a prima facie constraint violation. If you fail to see that, then the failure must lie with your quaint notion of the layered system constraint, because that constraint explains nicely the reality that browsers' Accept headers are hard-coded and no JavaScript API exists for changing them. One example of the layered system constraint in practice is URI fragments -- which have no effect on the parameters of the request to dereference. Another example is @type, which also has no effect on the parameters of the request to dereference the URL it annotates.
I'm sorry that this reality also fails to meet your quaint notions of how the Web should work, but it is not FUD to compare apples to apples. Also, as I keep explaining, nowhere in any markup language do we find any mention whatsoever of using any sort of tags to perform anything remotely resembling content negotiation. We do find conneg mentioned in multiple RFCs pertaining to the HTTP protocol. Clearly, then, the native architecture of the Web considers conneg to be a protocol-layer operation contained entirely within HTTP headers (or a variant list if using transparent negotiation, which is barely worth mentioning), not within the message body (borking @type). To change this is to fundamentally change the architecture of the Web, from being a known quantity as defined by both REST and WebArch into some untested, unknown quantity which lacks a definition. This would be a change away from established constraints, to an architectural style of unbounded creativity.

> Does the REST style in its essence have anything to say about these
> kinds of implementation specifics? It would appear not.

Of course not. REST is an architectural style, not an implementation guideline. Implementation specifics may be analyzed in terms of REST, however. Just like Roy's thesis does in Chapter 6, where HTTP REST mismatches are discussed, conspicuously not including any mention of HTTP's SHOULD requirement to assign URIs to variants. If, aside from compression, the only way HTTP conneg works (assign URIs to variants) is by undermining REST's definition of representation and resource, I'm quite sure Roy would have noticed, and mentioned it in Chapter 6, or here on rest-discuss, or on www-tag, or changed it in HTTPbis...

> You can't keep jumping from arguing violation of REST principles to
> arguing over feasibility in practice on the web. Pick one, and stick
> to it please.

We're on rest-discuss talking about conneg.
So of course I can talk about HTTP implementation specifics (which is what conneg is) in terms of REST (which is what we're here for). REST informed the design of HTML, HTTP and URI. REST is all about hitting the Web's sweet spot, which means leveraging those aspects of HTML, HTTP and URI which follow the constraints which informed their design, while ignoring those aspects which it identifies as mismatches. URI fragments and @type are the way they are because their design was informed by REST, specifically the layered system constraint. RFC 2616 says you SHOULD assign URIs to variants, because conneg's design was informed by REST, specifically the identification of resources constraint. Cookies are the way they are because their design went against REST, specifically the stateless messaging constraint. So it is a fact, that sometimes reality on the Web is a direct result of the pragmatic application of REST constraints to the design of HTTP, URI and HTML. So I can indeed frame my responses in terms of how the way certain things work is a reflection of some REST constraint or another, or more. The only argument to avoid, is pointing to a REST mismatch and declaring it isn't one because it works in practice. The arguments against borking @type to avoid assigning URIs to variants consist of explaining that the way the Web works in reality, results from its design following the constraints of REST. The argument against your way co-existing with the Web as it is, is that HTTP != REST -- just because we could do things your way is a spurious argument to make in favor of your way being RESTful. We could store state in cookies, but it's still a REST mismatch, not an 'alternate interpretation'. Your suggestion would require a state of denial, that there's any correlation between REST and how the Web works in practice, when the truth is that the two are inextricably intertwined. 
> > How difficult is it to understand that there is one exception to
> > the SHOULD, and that's compression?
>
> Because you're not articulating it well enough? Because it's far more
> subjective than you would have us believe? Because you're wrong?

Or because I was lurking there a dozen years ago when this debate was SETTLED with the adoption of RFC 2616 as consensus, and have never seen in all the years since a situation where it does not hold true? Or because assigning URIs to variants (aka following RFC 2616) solves every problem anyone has ever proposed an alternative solution for, so I don't see the point of re-inventing that wheel? Or because the fact that HTTPbis neither changes this bit of HTTP nor recognizes any valid reasons not to assign URIs to variants that may have cropped up over the last dozen years proves to me I'm right?

There are plenty of reasons why assigning URIs to variants is best practice. There is every reason to doubt that you or anyone else has discovered a valid reason to ignore this SHOULD (besides compression). Conneg isn't perfect, there's plenty of room for improvement, but not in HTTP 1.1 -- this is for a successor protocol to work out. Which means that for the foreseeable future, which is HTTPbis, you SHOULD assign URIs to your variants. Because, whether you like it or not, variants (except compression) are resources in their own right (just as glass is a liquid even if you insist on calling it a solid). So, to assign them URIs not only meets the identification of resources constraint, it also meets the SHOULD in HTTP which implements that constraint for conneg. Given that a RESTful best practice solves the problem you're proposing borking @type as a solution to, I have intense skepticism that it would be adopted even if it didn't violate any REST constraints.

> > You can keep asking me about every possible exception out there,
> > but it won't change my answer -- it will only annoy the crap out
> > of me. If these possible exceptions aren't compression, my answer
> > remains "no." Seriously, how much more concisely and unequivocally
> > can I state my position?
>
> With all due respect I don't think concise is a word I'd associate
> with your postings here.

I can't get any more concise than "assign URIs to your variants", but that isn't good enough for some, who keep asking for elaboration. That elaboration may not be concise, but that doesn't change the simple nature of the assertion, which is that the only exception is compression. Continuing to ask whether one thing after another might be an exception, after I keep repeating that the only exception is compression, will annoy me to the point where I stop responding, so I give fair warning, just in case the goal is to learn rather than annoy.

> What your position is, is not particularly interesting

Oh, I agree completely. Assigning URIs to variants is a debate that's been settled for over a decade; repeatedly having to explain it every third month on this list _is_ incredibly boring. What would be interesting is a discussion about whether or not using URNs in Content-Location works, and if so, whether it's RESTful. We can't get to that interesting discussion if we can't agree to assign URIs to variants, though.

> Having to trawl through long-winded, meandering explanations is
> annoying the crap out of me, personally.

You could always try, for once, accepting that maybe, just maybe, the answer you're getting (and not just from me), besides merely pointing out best practice, might actually be right? Of course, if you don't like what I write, you can always ignore it, or just not respond to it. If you're actually trying to learn REST by asking me questions, it's best not to annoy me. OTOH, if I annoy you to the point you stop asking me questions, well, I don't see the downside to that, so what's my motivation not to annoy the crap out of you?
> > If if bothers you that much, perhaps it's time to Accept that you > aren't capable of making the point coherently enough for the intended > audience, and move on. > I think everybody understands that REST's only real problem, is that it's hard to teach (thus, hard to learn). The evidence of this, is that if we were teaching it properly, then despite adoption being buzzword-driven the reality wouldn't resemble the current situation where 99% of REST claimants are nothing of the sort. Part of my interest in participating here, is figuring out how to teach REST. We've touched on something here that some people, instead of pushing their own agenda, are legitimately having difficulty understanding. The challenge is to translate Roy's thesis from its technical, concise, fewer-words-couldn't-be-used language into something more accessible to the masses. Any effort will obviously be less concise, but there are no guidelines for this translation. So my approach is deliberately tautological, the concept being to throw whatever I can think of out there, and see what sticks. Some people never learn, while others have actually come around. Enough come around over the years, that they start giving the same best-practice advice, and in this advice I often see my own points being re-stated. This is how I learn how to teach REST. If I never were to see any evidence that what I write sinks in, I'd quit as you suggest, but I don't see your failure to learn as evidence of my failure to teach. If you don't think there's anything I can teach you, don't ask me questions. If you honestly want to learn from me, don't ask me annoying questions for the sake of being argumentative. I'm more than happy to be generous with my time here, unless I feel it's being wasted. 
Which is where we're at in this thread: as I've said, this debate really was settled over a decade ago, and we're only flogging a dead horse by debating the merits of established best practice as if there were some actual problem with it. I'm more than happy to explain *why* it's best practice. But I'm very annoyed by people who insist that there's something *wrong* with RESTful best practice and can't be told otherwise by anybody, in the face of abundant evidence that the best practice is fundamentally sound -- and who refuse to accept criticism of their alternative solution, no matter how many people say it goes against REST while doing nothing that can't be done by following best practice which conforms to REST.

-Eric
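[Editorial aside: the practice defended above can be made concrete with a small, purely illustrative sketch. The URIs, dict layout, and function are hypothetical, not any real framework's API: a negotiated resource whose variants each also have their own URI, with Content-Location naming the chosen variant and Vary keeping caches honest.]

```python
# Each variant is a resource in its own right, with its own URI,
# per the SHOULD in RFC 2616; /report is the generic, negotiated resource.
REPRESENTATIONS = {
    "/report.html": ("text/html", "<h1>Report</h1>"),
    "/report.json": ("application/json", '{"title": "Report"}'),
}

def get(path, accept):
    """Serve a variant URI directly, or negotiate at the generic URI."""
    if path in REPRESENTATIONS:  # dereference a variant directly
        ctype, body = REPRESENTATIONS[path]
        return {"status": 200, "Content-Type": ctype, "body": body}
    if path == "/report":  # server-driven negotiation
        for variant, (ctype, body) in REPRESENTATIONS.items():
            if ctype in accept:
                return {"status": 200, "Content-Type": ctype,
                        "Content-Location": variant,  # names the chosen variant
                        "Vary": "Accept",             # tells caches what varied
                        "body": body}
        return {"status": 406}  # no acceptable variant
    return {"status": 404}
```

The same bytes are reachable both through negotiation and at the variant's own URI, which is the "representation with more than one identifier" point made above.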
Hello Glenn.

As I just posted a comment with some hints about how to evaluate a toolchest/framework, I may want to open another lead here.

One way is to actually look at how other frameworks (mostly Java, I hear) deal with the idea. The other way is to actually work on understanding the REST style, why the interaction is how it is, and what happens in the network.

Why do I say so? Because the successful API definition depends on that understanding and on the goal you are trying to achieve. Bear with me, please:

1. REST explains the constraints you impose on your architecture to allow better performance, reliability, visibility, and scalability for large-grain hypermedia transfer applications in networked solutions.

2. A RESTful service is an idea of using the guidelines of REST to expose services on the web. Some ways work better and attach to more constraints than others, but in general the idea is to have one initial URI, a starting point to a set of states, whose transitions are governed by hyperlinks, and whose actions are focused on manipulating resources using a standard interface (HTTP methods).

3. The constraints include self-description of messages, cache support, client/server separation of concerns, and possible code-on-demand support. And, of course, use of hypermedia to define and control application state.

4. In WCF, one important thing is the generic interface. That is, the service endpoint (address, binding and contract), down to the binding elements. Behind all that is a description of interaction, and an API definition. How to match that to the REST service? Interesting question.

5. Let's start on the server side.
a. One first thing is the possibility to define a serviceEndpoint as unique, meaning just one URI to start it all.
b. The other need is of course the ability to produce different kinds of media types to serve representations. If the idea is to avoid a bare-bones implementation for developers, we may want to abstract the content negotiation so it is somehow automatic. The client will request what can be served, or request some particular representation. On the server side we only need to define the representation source and transformation. (Say: Mr. WCF, this record in the database you can publish as JSON, XML or URL-encoded; here is the mapping, take care of it when requested, thanks.)
c. What about defining the resources and a possible state engine? Even setting a URI generation template. All that to add the generated URIs into the representations. Of course, for each resource type, define the HTTP operations. See next point.
d. We only support HTTP methods. So, no [OperationContract] String sayHello(String name); things. sayHello is a method you can use internally, but that RPC stuff may be heresy in the REST world (not for some, I know). But it may be [OperationContract(HTTPMethod=POST)] String sayHello(String body); where the String argument is the body sent in the POST and the returned String the response. All the other POST metadata and control may be defined with other artifacts. If you want to excel, you can design that to use any protocol, not just HTTP.
e. All that means all the HTTP plumbing is hidden, plus an easier way to provide automatic representation control, metadata control and response.

6. What about the client? Almost the same thing. In the ideal world, clients are given just one URI and build their path dynamically from there on. In the real world, they usually know what the steps are, and the dynamic URIs (hypermedia usage) serve to identify specific instances of already-known expected types. Any ideas to improve that? A nice client that runs by itself and starts following links and things, just stopping to ask me about data or decide on options/paths here and there, would be nice. Not surprisingly, that describes a browser.

7. But all I have said is too nailed to the ground. On the high side, the idea is to allow client and server independent evolution, since no coupling is done at interface level. That calls for an automatic interaction thing that allows me to focus on resource and representation definition, plus the state map, and on the client side to worry about a goal definition. Caching, gateways, all that is invisible.

8. There is something else, one last thing: the layered constraint in REST. The idea is that you can have interim nodes that may even parse and partially process the payload of the message. In this case, the client only sees the next layer, and that one may see the next one.

Again, I suggest you start by taking a look at the real implementations that are done bare-bones, understand the idea of the style, and see whether that fits into WCF's generic definition or not.

Cheers.

William Martinez Pomares

--- In rest-discuss@yahoogroups.com, Glenn Block <glenn.block@...> wrote:
>
> Hi guys
>
> I've been trolling for a few weeks :-) I work on the WCF team at
> Microsoft. We're currently in the very early stages of planning for
> new APIs for supporting pure REST and HTTP style development. Our
> goal is to create something simple, lightweight and true to form. We
> are looking to provide a natural API both for the service author and
> for the consumer. This is not an attempt to simply retrofit onto a
> SOAP based model.
>
> It would be great to hear thoughts you guys have on what would be
> the ideal developer experience for using REST. Also if you'd like to
> be involved we'd welcome the feedback.
>
> Regards
> Glenn
>
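[Editorial aside: point 5b -- one record, many on-demand representations -- can be sketched in a few lines. This is a hypothetical illustration of the idea, not WCF code; the function name and the supported media types are invented for the example.]

```python
import json
from urllib.parse import urlencode
from xml.etree.ElementTree import Element, SubElement, tostring

def represent(record, media_type):
    """Serialize one record into the requested representation."""
    if media_type == "application/json":
        return json.dumps(record)
    if media_type == "application/xml":
        root = Element("record")          # flat mapping: one child per field
        for key, value in record.items():
            SubElement(root, key).text = str(value)
        return tostring(root, encoding="unicode")
    if media_type == "application/x-www-form-urlencoded":
        return urlencode(record)
    raise ValueError("unsupported media type: " + media_type)
```

The framework's job, in William's description, is to drive this mapping automatically from the negotiated media type, so the developer only declares the record and the supported formats.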
Hi,

Are there well-known alternatives to HTTP for building REST services? When doing small-scale internal services, I still find a RESTful architecture useful; however, the overhead of HTTP seems to be noticeable. I was wondering if there are widely used alternatives that focus on performance in the same manner that some RPC tools do (Protocol Buffers, Thrift).

Also, on media types: are there well-known media types that are relatively cheap to parse? For one, I'm keeping my eye on BSON: http://bsonspec.org/ as an alternative to JSON.

Jan Vincent Liwanag
jvliwanag@...
Thanks William for the detailed response. I will read and digest. As far as service contract and such, I really want to think outside the box and not limit the possibilities to what we have today. Start with what the ideal experience would be and then work our way back. Then we can see what we can do and how that fares against our current infrastructure. Make sense? Glenn On 6/12/10, William Martinez Pomares <wmartinez@...> wrote: > Hello Glen. > As I just posted a comment with some hint about how to evaluate a > toolchest/framework, I may want to open another lead here. > > One way is to actually look at how other frameworks (mostly java I hear) > deal with the idea. The other way is to actually work on understanding the > REST style, why the interaction is how it is, and what happens in the > network. > > Why do I say so? Because the successful API definition depends on that > understanding and in the goal you are trying achieve. Bear with me please: > > 1. REST explains the constrains you impose to your architecture to allow > better performance, reliability, visibility, scalability for large grain > hypermedia transfer applications in a networked solutions. > > 2. A RESTFul service is an idea of using the guidelines of REST to expose > services on web. Some ways work better and attach to more constrains that > others, but in general the idea is to have one initial URI, starting point > to a set of states, which transitions are governed by hyperlinks, and whose > actions are focused in manipulating resources using a standard interface > (HTTP methods). > > 3. The constrains include self-description of messages, cache support, > client/server separation of concerns and possible code on demand support. > And, of course, use of Hypermedia to define and control application state > > 4. In WCF, one important thing is the generic interface. That is, the > service Endpoint (Address, binding and contract), down to the binding > elements. 
Behind all that, is a description of interaction, and API > definition. How to match that to the REST service? Interesting question. > > 5. Let's start on the server side. > a. One first thing is the possibility to define a serviceEndpoint as > unique, meaning just one URI to start it all. > b. The other need is of course the ability to produce different kinds of > media types to serve representations. If the idea is to avoid a bare-bones > implementation for developers, we may want to abstract the content > negotiation so it is somehow automatic. The client will request what can be > served, or request some particular representation. On server side we only > need to define the representation source and transformation. (say, Mr. WCF, > this record in database you can publish it as JSON, XML or URLEncoded, here > is the mapping, take care of it when requested, thanks). > c. What about defining the resources and possible state engine? Even > setting an URI generation template. All that to add the generated URIs into > the representations. Of course, for each resource type, define the HTTP > operations. See next point. > d. We only support HTTP Methods. So, no [OperationContract] String > sayHello(String name); things. sayHellow is an internal method you can use > internally, but that RPC stuff may be heresy in REST world (nor for some, I > know). But it may be [OperationContract(HTTPMethod=POST)] String > sayHello(String body); where the String argument is the body sent in the > post and the returned String the response. All the other POST metadata and > control may be defined with other artifacts. If you want to excel, you can > design that to use any protocol, not just HTTP. > e. All that means, all the HTTP plumbing is hidden, plus an easier way to > provide automatic representations control, metadata control and response. > > 6. What about the client? Almost same thing. 
In the ideal world clients are > given just one URI, and build dynamically their path from there on. In real > world, they usually know what are the steps and the dynamic URIs (hypermedia > usage) is to identify specific instances of already known expected types. > Any Ideas to improve that? A nice client that runs by itself and starts > following links and things, just stopping for asking me about data or decide > on options/path here and there, would be nice. Not surprisingly that > describes a browser. > > 7. But all I have said is too nailed to the grown. On the high side, the > idea is to allow client and server independent evolution since no coupling > is done at interface level. That calls for an automatic interaction thing > that allows me to focus on resource and representation definition, plus the > state map, and on client side to worry about a goal definition. Caching, > gateways, all that is invisible. > > 8. There is something else, last thing. The layered constrain in REST. The > idea is that you can have interim nodes, that may even parse and process > partially the payload of the message. In this case, client only sees the > next layer, and that one may see the next one. > > Again, I suggest you to start taking a look at the real implementations that > are done bare-bones, understanding the idea of the style, and see if that > fits into WCF generic definition or not. > > Cheers. > > William Martinez Pomares > > > > --- In rest-discuss@yahoogroups.com, Glenn Block <glenn.block@...> wrote: >> >> Hi guys >> >> I've been trolling for a few weeks :-) I work on the WCF team at >> Microsoft. >> We're currently in the very early stages of planning for new apis for >> supporting pure REST and HTTP style development. Our goal is to create >> something simple, lightweight and true to form. We are looking provide a >> natural API both for the service author and for the consumer. This not an >> attempt to simply retrofit onto a SOAP based model. 
>> >> It would be great to hear thoughts you guys have on what would be the >> ideal >> developer experience for using REST. Also if you'd like to be involved >> we'd >> welcome the feedback. >> >> Regards >> Glenn >> > > > -- Sent from my mobile device
Thanks Alan. Have you worked with WADL - or are (nearly) all RESTful Web Services described with XHTML pages?
Sean.
________________________________
From: Alan Dean <alan.dean@...>
To: Sean Kennedy <seandkennedy@yahoo.co.uk>
Cc: Rest Discussion Group <rest-discuss@yahoogroups.com>; Sean Kennedy <skennedy@...>
Sent: Sat, 12 June, 2010 5:14:39
Subject: Re: [rest-discuss] UDDI dead?
Sean,
In all seriousness, the transmission vector for the vast majority of WSDL URIs *by service count* is email, as they are private or semi-private custom service endpoints within and between companies.
For public services, yes, the transmission vector is the humble web page.
Regards,
Alan Dean
On Thu, Jun 10, 2010 at 11:39, Sean Kennedy <seandkennedy@...> wrote:
>
>Hi,
> I work in academia so would be grateful for the industry perspective. I am working on a thesis which includes both WS-* and REST. If UDDI is in fact dead http://www.innoq.com/blog/st/2010/03/uddi_rip.html how are enterprises communicating the WSDL files? Are we talking email, publishing on a Web site, inserting into a db??
>
>Thanks,
>Sean.
>
>PS Would the same apply for WADL files?
>
>
>
Hello Glenn

You seem to start pretty good discussions around here! And I want to side-step again, to try to understand your question. You are looking for a commonly used way to tell, in a link or when consuming a link, what media is expected/requested? Well, media negotiation may be complex.

1. We have media types, standard and custom. Using HTTP headers we are able to say which media I want and which media the server can serve. That is what the HTTP application protocol gives us.

2. Another way of getting the media is using embedded metadata in the media itself. That is done, for instance, using the link. The mediatype attribute will tell you specifically which media you should expect to use for a particular link and operation. Of course, that reduces visibility a bit (since an interim node would need to know the media the link is embedded in to be able to "see" it) and also limits the negotiation (the link "suggests" the media).

3. Mike already mentioned the type attribute, which is somewhat the same as above.

4. Now, media is just a format you use to encode data, and its semantics are usually about the media that is encoded. You also need to know what the media is for, and what information that type encompasses, in order to use it in your app (meaning the app semantics). An image may be encoded in XML if you wish, and it is highly possible you want it for display. To process media you need a media parser. There are media encodings used for definition (tunneling, some may say) of app data, for example XML. Since this is a generic encoding, it may have a type designator telling the reader what type is actually encoded. Thus, you may have a media type of application/xml but you are getting your image in XML encoding. The image type is basically detected by the app by looking at the XML schema or namespace. This typing is not artificial, but part of the XML standard itself. The problem with this other approach is, again, visibility: the header will tell you the caller is requesting XML, with no other indication of what is in it. That is why it is like tunneling. Using custom media is then one option, where you use user-defined media types, usually XML, with a vnd. in the media name telling you what to expect in the XML. The advantage of custom media is that a consumer may inspect the supported media to determine whether it can consume it, just by looking at the media supported by the server (protocol level). The advantage of generic media (XML) is that there are lots of parsers for it, and the media it contains can also be discovered and processed, even dynamically, by means of more complex media descriptors like XSD. E.g., I may ask a server what representations it can serve for a resource, and it will tell me it can send an XML representation. That is too generic, but I accept, and when I have the XML, I request the XSD (which is a link in the XML, BTW, which nicely works as HATEOAS) and by reading it I can dynamically fill in the blanks and perform my operation. A more standard way of doing that is using forms.

5. Finally (and someone may say I'm crazy for saying this), another way to describe a type is using a descriptor language, yes, like WSDL. This may break the badly nicknamed HATEOAS if you use it as an all-containing static contract at design time. WSDL 2.0 now includes extensions to describe not only the workflow, but also the HTTP methods to be used and the schema bits to form the message. (One approach is explained, along with an intro to WSDL 2.0, by Lawrence Mandel from IBM here http://www.ibm.com/developerworks/webservices/library/ws-restwsdl/, and I have a couple of posts mentioning it can be done.) The idea applied to REST is that each resource may have a WSDL you can request prior to requesting anything else, and it will tell you what you can do in that resource and how (and the resource is depicted as a service, of course). Similar to point 4, it then describes the type in detail, not by name, embedded in another type. It also has visibility problems, but with the advantage that WSDL is a standard and there are plenty of parsers for it. So, you request the types for a resource and the server tells you it supports WSDL; you request that representation and it tells you what you can do to the resource, and as a side effect it also tells you the resource is a service. Any interim node may view the message and can actually understand it, since it is a WSDL-formatted thing. Lastly, it helps you separate the concerns of interaction and type definition from the real data. Note, of course, that WSDL must not be used to tunnel RPC, but to depict the type of messages, the interaction flow, and the HTTP methods to invoke.

Hope, again, all this helps.

William Martinez Pomares.

--- In rest-discuss@yahoogroups.com, Glenn Block <glenn.block@...> wrote:
>
> Aside from the specification / documentation for a media type is there some
> common accepted practice for indicating which media type is expected to be
> passed when calling a link?
>
> Thanks
> Glenn
>
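[Point 1 above, server-driven negotiation via HTTP headers, can be sketched in a few lines. This is a deliberately naive illustration, not part of the original post: the media-type names are hypothetical, and real servers must also honor q-values and type/* wildcards per RFC 2616 section 14.1.]

```python
# Sketch of server-driven content negotiation with a custom (vnd.) media
# type alongside generic XML. Hypothetical types; q-values are ignored.
SUPPORTED = [
    "application/vnd.example.order+xml",  # custom type: self-describing name
    "application/xml",                    # generic XML: payload identified by schema/namespace
]

def negotiate(accept_header):
    """Return the first server-supported type the client accepts, else None."""
    accepted = [part.split(";")[0].strip() for part in accept_header.split(",")]
    for media_type in SUPPORTED:
        if media_type in accepted or "*/*" in accepted:
            return media_type
    return None  # a real server would answer 406 Not Acceptable

print(negotiate("application/vnd.example.order+xml"))  # the custom type
print(negotiate("text/html"))                          # None: nothing matches
```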
In our platform we have only one use case where we use WADL (taking advantage of the automatic WADL generation of Jersey). Basically, we have one client dealing with different types of reports that have different numbers and types of parameters. Some have one date, some two dates, some a text code, etc. So that specific client (it's a rich client, not a browser) asks for the WADL for the type of report chosen by the user, and from that extracts the number and type of the params to present to the user for input.

My personal opinion is *not* to use WADL and to rely only on HATEOAS, but sometimes we have to sacrifice "purity" for "ease of development"...

_________________________________________________
Melhores cumprimentos / Beir beannacht / Best regards
António Manuel dos Santos Mota
http://card.ly/amsmota
_________________________________________________

On 14 June 2010 09:41, Sean Kennedy <seandkennedy@...> wrote:
>
> Thanks Alan. Have you worked with WADL - or are (nearly) all RESTful Web
> Services described with XHTML pages?
>
> Sean.
>
> ------------------------------
> *From:* Alan Dean <alan.dean@...>
> *To:* Sean Kennedy <seandkennedy@....uk>
> *Cc:* Rest Discussion Group <rest-discuss@yahoogroups.com>; Sean Kennedy <skennedy@...>
> *Sent:* Sat, 12 June, 2010 5:14:39
> *Subject:* Re: [rest-discuss] UDDI dead?
>
> Sean,
>
> In all seriousness, the transmission vector for the vast majority of WSDL
> URI's *by service count* is email as they are private or semi-private custom
> service endpoints within and between companies.
>
> For public services, yes, the transmission vector is the humble web page.
>
> Regards,
> Alan Dean
>
> On Thu, Jun 10, 2010 at 11:39, Sean Kennedy <seandkennedy@...> wrote:
>
>> Hi,
>> I work in academia so would be grateful for the industry perspective.
>> I am working on a thesis which includes both WS-* and REST. If UDDI is in
>> fact dead http://www.innoq.com/blog/st/2010/03/uddi_rip.html how are
>> enterprises communicating the WSDL files? Are we talking email, publishing
>> on a Web site, inserting into a db??
>>
>> Thanks,
>> Sean.
>>
>> PS Would the same apply for WADL files?
Sean,

I'm not in the WADL camp. (X)HTML, RDF (XML / n3) and AtomPub between them probably account for almost all the live RESTful activity over HTTP today. Couldn't guess what the proportions are.

Regards,
Alan Dean

On Mon, Jun 14, 2010 at 09:41, Sean Kennedy <seandkennedy@...> wrote:
>
> Thanks Alan. Have you worked with WADL - or are (nearly) all RESTful Web
> Services described with XHTML pages?
>
> Sean.
>
> ------------------------------
> *From:* Alan Dean <alan.dean@...>
> *To:* Sean Kennedy <seandkennedy@...>
> *Cc:* Rest Discussion Group <rest-discuss@yahoogroups.com>; Sean Kennedy <skennedy@...>
> *Sent:* Sat, 12 June, 2010 5:14:39
> *Subject:* Re: [rest-discuss] UDDI dead?
>
> Sean,
>
> In all seriousness, the transmission vector for the vast majority of WSDL
> URI's *by service count* is email as they are private or semi-private custom
> service endpoints within and between companies.
>
> For public services, yes, the transmission vector is the humble web page.
>
> Regards,
> Alan Dean
>
> On Thu, Jun 10, 2010 at 11:39, Sean Kennedy <seandkennedy@...> wrote:
>
>> Hi,
>> I work in academia so would be grateful for the industry perspective.
>> I am working on a thesis which includes both WS-* and REST. If UDDI is in
>> fact dead http://www.innoq.com/blog/st/2010/03/uddi_rip.html how are
>> enterprises communicating the WSDL files? Are we talking email, publishing
>> on a Web site, inserting into a db??
>>
>> Thanks,
>> Sean.
>>
>> PS Would the same apply for WADL files?
>>> It did remind me a bit of WCF with the URI mapping stuff, though more
>>> flexible. Still much more limited than routes in rails or MVC routes.
>>
>> One of the things I don't like that much about JAX-RS is the fact
>> that it spread the knowledge about the routing all over the place, I
>> much prefer a central place for that (like in Rails or many other
>> web frameworks).

I really like Rails router centralization also. We have tried to mimic it in a way that allows clients to create their own routing processes, so you could create patterns such as CRUD, state machine and so on, that could be easily mapped with the DSL. Of course, sticking to conventions whenever possible might help minimize it. Perhaps we could address this in a JAX-RS 2.0 effort?

> That would be great... programmatic configuration on routes allows us
> to TDD our code...

Regards

> A few JAX-RS implementations, at least CXF and Jersey [1] for example,
> included some form of support.
>
> Paul.
>
> [1] https://jersey.dev.java.net/nonav/apidocs/latest/jersey/com/sun/jersey/api/core/ResourceConfig.html#getExplicitRootResources%28%29
Stefan,

> One of the things I don't like that much about JAX-RS is the fact
> that it spread the knowledge about the routing all over the place,
> I much prefer a central place for that (like in Rails or many other
> web frameworks).

When I first started using JAX-RS I thought the same thing. After using it for a while, I really find that it's not that big of an issue for me. When I'm in a particular piece of code in Rails, I generally know the URL due to convention over configuration and only need to resort to the routes file occasionally. Since we generally don't have such a thing in JAX-RS apps, I've personally found that I like having the URL in the code with my class rather than having to go look elsewhere for it.

Thanks!
Brandon

On Thu, Jun 10, 2010 at 3:54 PM, Stefan Tilkov <stefan.tilkov@...> wrote:
> On Jun 10, 2010, at 6:04 PM, Glenn Block wrote:
>
>> From what I have seen of Jersey it looks nice. Would be cool if it
>> required fewer annotations and supported more conventions. GetOrder
>> for example would map to Get, PostOrder to post etc.
>
> IIRC, this was even in the spec at some stage; it was taken out because it didn't really fit Java's nature. I may recall this wrongly.
>
>> It did remind me a bit of WCF with the URI mapping stuff, though more
>> flexible. Still much more limited than routes in rails or MVC routes.
>
> One of the things I don't like that much about JAX-RS is the fact that it spread the knowledge about the routing all over the place, I much prefer a central place for that (like in Rails or many other web frameworks).
>
> Stefan
> --
> Stefan Tilkov, http://www.innoq.com/blog/st/
>
> ------------------------------------
>
> Yahoo! Groups Links
>
Hi Eric,

On Sat, Jun 12, 2010 at 10:32 AM, Eric J. Bowman <eric@...> wrote:
>
> I'm more than happy to explain *why* it's best practice, but very
> annoyed by people who insist that there's something *wrong* with
> RESTful best practice and can't be told otherwise by anybody, in the
> face of abundant evidence showing the best practice to be absolutely
> fundamentally sound, and who refuse to accept criticism of their
> alternative solution, no matter how many people say that it goes
> against REST while not doing anything that can't be done by following
> best practice which conforms to REST.

I hesitate to get involved in this thread for fear of getting flamed, but I find myself a little confused about your position. It is obvious that you believe that all representations should also be resources in their own right. But reading the thread, I am unsure exactly why you think that. From your posts I think one of the following might be the reason:

1. HTTP is designed for every representation to be a resource in its own right. The RFC strongly encourages this approach (by using "should" to describe this behavior). Doing so does not violate any of the REST constraints, and intermediates expect it. Therefore we should follow the RFC.

2. Having representations that are not resources in their own right violates a REST constraint. Therefore, to follow the REST style, you must identify each representation as a resource.

3. I have completely misunderstood your position.

If your position is 1, then I think I understand, but I could use a bit of clarification about how Content-Location helps proxy caching. If your position is closer to 2, then I guess I need to go read the dissertation again. If 3, a short clarification might be helpful.

Peter Williams
http://barelyenough.org
"William Martinez Pomares" wrote: > > 2. Another way of getting the media is using embedded metadata in the > media itself. That is done, for instance, using the link. The > mediatype attribute will tell you specifically which media you should > expect to use for a particular link and operation. Of course, that > reduces a bit the visibility (since an interim node should know the > media the link is embedded in to be able to "see" it) and also limits > the negotiation (the link "suggests" the media). > Once more: @type has absolutely nothing remotely to do with content negotiation, it does not "limit" it, it does not "drive" it, and it does not reduce visibility (not using @type reduces visibility). It is simply an annotation. Not any sort of instruction. -Eric
Hi,
say I have a resource like
/mySets/{set}/{item}
which provides a DELETE.
If this resource is found and successfully deleted, a 204 will be returned. If this resource is not found (e.g. because it has been deleted before), a 404 will be returned.
Now I wonder which status code should be returned if the resource
/mySets/{set}
is not found.
Also a 404? A 400?
I'm not so happy about a 404 because it lacks some semantics: a DELETE on the subresource (or sub-subresource) /mySets/{set}/{item} implies that /mySets/{set} exists. If that is not the case, it might indicate a conflict or that the client is wrong in some way.
While looking for appropriate response codes based on their names, 409 (Conflict), 410 (Gone) and 412 (Precondition Failed) sound interesting (see http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4 ), but their semantics do not really match.
My current choice would be to return 400 (Bad Request), but this also has different semantics.
Do you have suggestions how to handle this?
Thanx && cheers,
Martin
Peter Williams wrote:
>
> It is obvious that you believe that all representations should also be
> resources in their own right.

The variants involved in compression are not resources, so don't assign them URIs. Aside from compression, they're resources whether you want to call them that or not (just as glass remains a liquid even when you call it a solid).

> 1. HTTP is designed for every representation to be a resource in its
> own right.

No, HTTP supports compression using content negotiation; such variants are not resources. They are byte-for-byte identical except for transfer coding, so they can't be different resources.

An XML and a YAML variant of the same resource may contain the same data, but they are not byte-for-byte identical except for transfer coding. Therefore, while they can be variants of a resource, they are also resources themselves, whether you assign them URIs or not.

> The RFC strongly encourages this approach (by using
> "should" to describe this behavior). Doing so does not violate any
> of the REST constraints and intermediates expect it. Therefore we
> should follow the RFC.

Doing so is to apply a REST constraint to the real-world Web.

> 2. Having representations that are not resources in their own right
> violates a REST constraint. Therefore to follow the REST style you
> must identify each representation as a resource.

No, compressed/not-compressed variants are not resources, so to assign them URIs and treat them as such would not be correct.

Assuming no Content-Location:

Say I have a negotiated resource which returns both application/xhtml+xml and text/html variants. We send text/html to IE and application/xhtml+xml to Ffx. Let's say the first time a cache encounters this resource, the user agent is IE. The cache stores the text/html variant.

Let's say the next request the cache encounters for the resource is from another IE. The cache responds with the cached text/html variant. Next, the cache gets the same request from Ffx. Vary: User-Agent tells the cache to check the origin server. The origin responds with the application/xhtml+xml variant; the cache stores that variant, overwriting the text/html variant, because that's what the cache is being told to do.

Now, another request for the resource comes to the cache from IE. The cache only has an application/xhtml+xml variant. Vary: User-Agent tells the cache to check the origin server. The origin server responds with the text/html variant, which the cache stores, overwriting the application/xhtml+xml variant.

This undesirable behavior results from not following the SHOULD. By including a URI in Content-Location, the cache has the additional information it needs to tag-and-bag variants. The cache can now associate URI A with text/html, and URI B with application/xhtml+xml, such that a series of IE-Ffx-IE requests will return the proper variant, instead of only caching the most-recent variant.

If the only way that conneg works properly (except compression) is by treating those variants as resources in their own right, it tells us one of two things. Either those variants really *are* resources, or there's a REST mismatch in HTTP that's escaped Roy's notice.

-Eric
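[The overwrite behaviour described in this scenario can be simulated. The sketch below is not from the thread: the cache model is deliberately naive (one storage slot per request-URI), the URIs are made up, and header handling is far simpler than in any real cache. It only illustrates why a per-variant URI from Content-Location changes what a cache can retain.]

```python
# Naive cache with one slot per request-URI: each newly fetched variant
# overwrites the previous one, as in the IE/Firefox scenario above.
naive_cache = {}

# With Content-Location, the cache can keep one slot per variant URI
# ("tag-and-bag"), so alternating user agents no longer evict each other.
variant_cache = {}

def origin(user_agent):
    """Origin server negotiating on User-Agent (response carries Vary: User-Agent)."""
    if user_agent == "IE":
        return ("text/html", "/page.html")           # (body type, Content-Location)
    return ("application/xhtml+xml", "/page.xhtml")

def fetch_naive(uri, user_agent):
    body, _ = origin(user_agent)
    naive_cache[uri] = body  # overwrites whatever variant was stored before
    return body

def fetch_tagged(uri, user_agent):
    body, content_location = origin(user_agent)
    variant_cache[content_location] = body  # each variant keeps its own slot
    return body

for ua in ("IE", "Ffx", "IE"):
    fetch_naive("/page", ua)
    fetch_tagged("/page", ua)

print(len(naive_cache))    # 1: only the most recent variant survives
print(len(variant_cache))  # 2: both variants retained
```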
404 does not mean "deleted before." 410 Gone may be closer to what you
want to return for resources that have already been deleted.
mca
http://amundsen.com/blog/
http://mamund.com/foaf.rdf#me
On Mon, Jun 14, 2010 at 11:18, Martin <martin.grotzke@...> wrote:
> Hi,
>
> say I have a resource like
> /mySets/{set}/{item}
> which provides a DELETE.
>
> If this resource is found and successfully deleted a 204 will be returned. If this resource is not found (e.g. because it has been deleted before) a 404 will be returned.
>
> Now I wonder which status code should be returned if the resource
> /mySets/{set}
> is not found.
> Also a 404? A 400?
>
> I'm not so happy about a 404 because it lacks some semantics - the DELETE on the subresource (or subsubresource) /mySets/{set}/{item} implies that /mySets/{set} exists. If this is not given it might indicate a conflict or that the client is wrong in some way.
>
> While looking for appropriate response codes based on their name 409 (Conflict), 410 (Gone) or 412 (Precondition Failed) sound interesting (see http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4 )) but their semantics does not match really.
>
> My current choice would be to return 400 (Bad Request), but this has also different semantics.
>
> Do you have suggestions how to handle this?
>
> Thanx && cheers,
> Martin
>
There is no such thing as a subresource; those are models that exist only on the server. As such, there is absolutely no inherent relationship between /mySets/{set} and /mySets/{set}/{item}. Those are two resources that live independently of one another, the existence of each being completely independent of the other.
So 404 is perfectly accurate.
________________________________________
From: rest-discuss@yahoogroups.com [rest-discuss@yahoogroups.com] on behalf of Martin [martin.grotzke@googlemail.com]
Sent: 14 June 2010 16:18
To: rest-discuss@yahoogroups.com
Subject: [rest-discuss] Which status code to return for DELETE if parent resource is not found?
Hi,
say I have a resource like
/mySets/{set}/{item}
which provides a DELETE.
If this resource is found and successfully deleted a 204 will be returned. If this resource is not found (e.g. because it has been deleted before) a 404 will be returned.
Now I wonder which status code should be returned if the resource
/mySets/{set}
is not found.
Also a 404? A 400?
I'm not so happy about a 404 because it lacks some semantics - the DELETE on the subresource (or subsubresource) /mySets/{set}/{item} implies that /mySets/{set} exists. If this is not given it might indicate a conflict or that the client is wrong in some way.
While looking for appropriate response codes based on their name 409 (Conflict), 410 (Gone) or 412 (Precondition Failed) sound interesting (see http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4 )) but their semantics does not match really.
My current choice would be to return 400 (Bad Request), but this has also different semantics.
Do you have suggestions how to handle this?
Thanx && cheers,
Martin
Eric J. Bowman wrote:
> Peter Williams wrote:
> > 1. HTTP is designed for every representation to be a resource in
> > it's own right.
>
> No, HTTP supports compression using content negotiation, such variants
> are not resources. They are byte-for-byte identical except for
> transfer coding, so they can't be different resources.
>
> An XML and a YAML variant of the same resource may contain the same
> data, but they are not byte-for-byte identical except for transfer
> coding. Therefore, while they can be variants of a resource, they are
> also resources themselves, whether you assign them URIs or not.

I thought I was understanding until this point; perhaps it's just stronger here than you intend in order to fend off "borking @type". Roy said, "...a resource R is a temporally varying membership function MR(t), which for time t maps to a set of entities, or values, which are equivalent. The values in the set may be resource representations and/or resource identifiers." [1]

I take that to mean you can Vary representations or, as you advocate, make them into resources and assign them URI's. But it doesn't seem to prefer one over the other, nor does it indicate that the variants are independent resources whether you assign them URI's or not.

> Assuming no Content-Location:
>
> Say I have a negotiated resource which returns both application/xhtml+xml
> and text/html variants. We send text/html to IE and application/xhtml+xml
> to Ffx. Let's say the first time a cache encounters this resource, the
> user agent is IE. The cache stores the text/html variant.
>
> Let's say the next request the cache encounters for the resource is
> from another IE. Cache responds with cached text/html variant. Next,
> the cache gets the same request from Ffx. Vary: User-Agent tells the
> cache to check the origin server. Origin responds with application/xhtml+xml
> variant, cache stores that variant, overwriting the text/html variant,
> because that's what the cache is being told to do.
>
> Now, another request for the resource comes to the cache from IE. The
> cache only has an application/xhtml+xml variant. Vary: User-Agent
> tells the cache to check the origin server. Origin server responds
> with text/html variant, which the cache stores, overwriting the
> application/xhtml+xml variant.
>
> This undesirable behavior results from not following the SHOULD. By
> including a URI in Content-Location, the cache has the additional
> information it needs to tag-and-bag variants. The cache can now
> associate URI A with text/html, and URI B with application/xhtml+xml,
> such that a series of IE-Ffx-IE requests will return the proper
> variant, instead of only caching the most-recent variant.

I was under the impression that the cache needed three things to tag-and-bag variants: 1) the URI, 2) the Vary response header, and 3) the request headers. Since HTTP, at least, has synchronous requests and responses, a cache should have all of those for any given conversation. We recently upgraded CherryPy's cache to this model [2], and it seems to work well. Django apparently does the same [3]. Squid seems to have extensions to do this [4]. Varnish is at least discussing the issue [5].

I was a bit surprised to hear the idea that "most caches...simply won't cache responses whose Vary header consists of anything more than 'Accept-Encoding'" [6]. If that's true, then I see it as an unfortunate limitation of the ideal: an engineering problem to be suffered through, not part and parcel of the REST style. That suffering takes the form of assigning URI's to variants via Content-Location, as you say, but also of working to improve proper support for Vary in caches [7] (IE6 seems the worst offender, but note the comment from Eric Law regarding IE7 improvements). I would very much like to see Mark fix or put up a new version of that resource [8] to see how the state of browser caching in 2010 compares to 2006. Firefox 3.5.9, at least, passes the Vary tests therein.

User-Agent is something of an extreme case: it has perhaps the most diverse set of values of any header save Cookie. Even proper Vary support in caches doesn't help that situation; assigning URI's to variants does. But that alone doesn't elevate URI's-per-variant to a preferred style in my (very humble, willing to learn) opinion. I'm still not convinced that those practical realities inform REST; rather, they seem to be mismatches between REST and the implementation of caches on the one hand and the design of the User-Agent header on the other.

None of which means "borking @type" (using "type" attributes to indicate the type of the response) is a good idea, but the OP's question (and mca's initial answer) seemed to me to be more about the type of the *request* entity, which has little to do with all of the above.

Robert Brewer
fumanchu@...

[1] http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm#sec_5_2
[2] http://www.cherrypy.org/browser/trunk/cherrypy/lib/caching.py#L128
[3] http://docs.djangoproject.com/en/dev/topics/cache/#using-vary-headers
[4] http://squid.sourceforge.net/vary/
[5] http://varnish-cache.org/wiki/ArchitectureVary
[6] http://tech.groups.yahoo.com/group/rest-discuss/message/15532
[7] http://www.mnot.net/blog/2006/05/11/browser_caching
[8] http://www.mnot.net/javascript/xmlhttprequest/cache.html
On Mon, Jun 14, 2010 at 12:06 PM, Eric J. Bowman <eric@...> wrote:
>
> Assuming no Content-Location:
>
> Say I have a negotiated resource which returns both application/xhtml+xml
> and text/html variants. We send text/html to IE and application/xhtml+xml
> to Ffx. Let's say the first time a cache encounters this resource, the
> user agent is IE. The cache stores the text/html variant.
>
> Let's say the next request the cache encounters for the resource is
> from another IE. Cache responds with cached text/html variant. Next,
> the cache gets the same request from Ffx. Vary: User-Agent tells the
> cache to check the origin server. Origin responds with application/xhtml+xml
> variant, cache stores that variant, overwriting the text/html variant,
> because that's what the cache is being told to do.
>
> Now, another request for the resource comes to the cache from IE. The
> cache only has an application/xhtml+xml variant. Vary: User-Agent
> tells the cache to check the origin server. Origin server responds
> with text/html variant, which the cache stores, overwriting the
> application/xhtml+xml variant.
>
> This undesirable behavior results from not following the SHOULD. By
> including a URI in Content-Location, the cache has the additional
> information it needs to tag-and-bag variants. The cache can now
> associate URI A with text/html, and URI B with application/xhtml+xml,
> such that a series of IE-Ffx-IE requests will return the proper
> variant, instead of only caching the most-recent variant.

Maybe my misunderstanding is because of the experimenting I have been doing with Varnish. What Varnish seems to do is keep a hash of the URL and associate any variants (using the Vary header) to that hash. It will not go back to the origin server even after a different variant was accessed. Once I had the text/plain version of /test in cache, Varnish always returned it, even after I curl-ed for the text/html variant of the resource. I was then able to purge /test to rinse and repeat.

Is this non-compliant behavior?

--
David
blog: http://www.traceback.org
twitter: http://twitter.com/dstanek
On Mon, Jun 14, 2010 at 10:06 AM, Eric J. Bowman <eric@...> wrote:
>
> No, compressed/not-compressed variants are not resources, so to assign
> them URIs and treat them as such would not be correct.
>
> Assuming no Content-Location:
>
> Say I have a negotiated resource which returns both application/xhtml+xml
> and text/html variants. We send text/html to IE and application/xhtml+xml
> to Ffx. Let's say the first time a cache encounters this resource, the
> user agent is IE. The cache stores the text/html variant.
>
> Let's say the next request the cache encounters for the resource is
> from another IE. Cache responds with cached text/html variant. Next,
> the cache gets the same request from Ffx. Vary: User-Agent tells the
> cache to check the origin server. Origin responds with application/xhtml+xml
> variant, cache stores that variant, overwriting the text/html variant,
> because that's what the cache is being told to do.
>
> Now, another request for the resource comes to the cache from IE. The
> cache only has an application/xhtml+xml variant. Vary: User-Agent
> tells the cache to check the origin server. Origin server responds
> with text/html variant, which the cache stores, overwriting the
> application/xhtml+xml variant.
>
> This undesirable behavior results from not following the SHOULD. By
> including a URI in Content-Location, the cache has the additional
> information it needs to tag-and-bag variants. The cache can now
> associate URI A with text/html, and URI B with application/xhtml+xml,
> such that a series of IE-Ffx-IE requests will return the proper
> variant, instead of only caching the most-recent variant.
>
> If the only way that conneg works properly (except compression) is by
> treating those variants as resources in their own right, it tells us
> one of two things. Either those variants really *are* resources, or
> there's a REST mismatch in HTTP that's escaped Roy's notice.

Any cache that works as described above is broken, assuming all the components are behaving properly (i.e., setting request and response header fields appropriately). In the scenario above, even compressed variants suffer from pollution issues. For example, a user agent that supports `gzip` content encoding (for example, a browser) would pollute the cache for any user agents that do not (for example, the quick Ruby script I wrote that only supports identity encoding).

The cache key for a response is the union of the URI and the values of the request header fields listed in the Vary field of the response. If the server says `Vary: User-Agent`, caching intermediates had better cache responses for different user agents separately, or not at all.

Regardless, the caching argument is an implementation argument. Basically, HTTP needs you to individually name every entity transferred over it. Do you assert that the REST architectural style itself demands (perhaps with a compression exception) that every variant of every resource be named as a resource in its own right?

Peter Williams
http://barelyenough.org
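[The cache-key rule described here (the key is the request-URI plus the values of the request headers named by the response's Vary field) can be sketched directly. This is an illustrative simplification of RFC 2616 section 13.6, not from the thread and not a real cache implementation; the URI and header values are made up.]

```python
def cache_key(uri, request_headers, vary):
    """Build a cache key from the request-URI plus the values of the
    request header fields listed in the response's Vary field."""
    selecting = tuple(
        (name, request_headers.get(name, "")) for name in sorted(vary)
    )
    return (uri, selecting)

vary = ["User-Agent"]
k_ie = cache_key("/test", {"User-Agent": "IE"}, vary)
k_ffx = cache_key("/test", {"User-Agent": "Ffx"}, vary)

print(k_ie == k_ffx)  # False: different UAs must not share a cache entry
print(k_ie == cache_key("/test", {"User-Agent": "IE"}, vary))  # True
```

A cache built on such a key stores each selecting-header combination separately, which is exactly the behaviour the Varnish experiment above appeared not to exhibit.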
David Stanek wrote: > > Maybe my misunderstanding is because of the experimenting I have been > doing with Varnish. What Varnish seems to do is keep a hash of the > URL and associate any variants (using the Vary header) to that hash. > It will not go back to the origin server even after a different > variant was accessed. Once I had the text/plain version of /test in > cache Varnish always returned it even after I curl-ed for the > text/html variant of the resource. I was then able to purge /test to > rinse and repeat. > > Is this non-compliant behavior? > I say unspecified behavior. All caches behave the same way in the presence of Content-Location; such behavior is specified. Any cache developer attempting to work around a missing Content-Location header has to choose between different unspecified behaviors. -Eric
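One of the unspecified strategies under discussion, keeping a single most-recent variant per URI, can be simulated to show the thrashing from the IE/Firefox scenario upthread. A toy sketch; the origin logic and all names are invented for illustration:

```python
def origin(user_agent):
    """Stand-in origin server: negotiates the media type on User-Agent."""
    return "text/html" if "MSIE" in user_agent else "application/xhtml+xml"

class SingleSlotCache:
    """Stores at most one variant per URI, overwriting on every miss --
    one of the unspecified behaviors a cache might fall back on."""
    def __init__(self):
        self.slot = {}          # uri -> (user_agent, media_type)
        self.origin_hits = 0

    def get(self, uri, user_agent):
        stored = self.slot.get(uri)
        if stored and stored[0] == user_agent:
            return stored[1]    # cache hit
        self.origin_hits += 1   # miss: fetch from origin and overwrite
        media_type = origin(user_agent)
        self.slot[uri] = (user_agent, media_type)
        return media_type

cache = SingleSlotCache()
for ua in ["MSIE", "Firefox", "MSIE", "Firefox"]:
    cache.get("/page", ua)
print(cache.origin_hits)  # 4: every alternation is a miss
```

Alternating user agents defeat the cache entirely, which is the undesirable behavior the thread started with.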
On Mon, Jun 14, 2010 at 1:45 PM, Eric J. Bowman <eric@...> wrote: > David Stanek wrote: >> >> Maybe my misunderstanding is because of the experimenting I have been >> doing with Varnish. What Varnish seems to do is keep a hash of the >> URL and associate any variants (using the Vary header) to that hash. >> It will not go back to the origin server even after a different >> variant was accessed. Once I had the text/plain version of /test in >> cache Varnish always returned it even after I curl-ed for the >> text/html variant of the resource. I was then able to purge /test to >> rinse and repeat. >> >> Is this non-compliant behavior? >> > > I say unspecified behavior. All caches behave the same way in the > presence of Content-Location, such behavior is specified. Any cache > developer attempting to work around a missing Content-Location header, > has to choose between different unspecified behaviors. It doesn't seem particularly underspecified to me: [1] A server SHOULD use the Vary header field to inform a cache of what request-header fields were used to select among multiple representations of a cacheable response subject to server-driven negotiation. The set of header fields named by the Vary field value is known as the "selecting" request-headers. When the cache receives a subsequent request whose Request-URI specifies one or more cache entries including a Vary header field, the cache MUST NOT use such a cache entry to construct a response to the new request unless all of the selecting request-headers present in the new request match the corresponding stored request-headers in the original request. Peter Williams http://barelyenough.org [1]: http://www.w3.org/Protocols/rfc2616/rfc2616-sec13.html#sec13.6
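The quoted MUST NOT from RFC 2616 §13.6 can be expressed directly. This is a sketch of the rule as written, not of any real cache; the `Vary: *` branch follows the adjacent text of the same section:

```python
def may_reuse(vary, stored_request, new_request):
    """Apply RFC 2616 section 13.6: a cache entry carrying a Vary field may
    satisfy a new request only if every selecting request-header in the
    new request matches the one stored with the original request."""
    if vary.strip() == "*":
        return False  # "Vary: *" always fails to match
    return all(
        stored_request.get(name) == new_request.get(name)
        for name in (h.strip().lower() for h in vary.split(","))
    )

print(may_reuse("User-Agent", {"user-agent": "MSIE"}, {"user-agent": "MSIE"}))     # True
print(may_reuse("User-Agent", {"user-agent": "MSIE"}, {"user-agent": "Firefox"}))  # False
```

Note what the rule does and does not say: it forbids reusing a mismatched entry, but it is silent on how many entries a cache keeps per URI, which is where the two sides of this thread diverge.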
"Robert Brewer" wrote: > > Roy said, "...a resource R is a temporally varying membership function > MR(t), which for time t maps to a set of entities, or values, which > are equivalent. The values in the set may be resource representations > and/or resource identifiers." [1] I take that to mean you can Vary > representations or, as you advocate, make them into resources and > assign them URI's. But it doesn't seem to prefer one over the other, > nor does it indicate that the variants are independent resources > whether you assign them URI's or not. > Assigning URIs to variants is an application of the identification of resources constraint. That constraint relies on an understanding of "resource". A variant that has its own URI meets the definition of a resource. Besides compression, variants without URIs are still members of (at least) two sets, even if you fail to acknowledge this and never assign those URIs. If nothing could be considered a resource until it's been assigned a URI, then it would be by definition impossible to violate the identification of resources constraint. This constraint basically means that it's possible to fail to recognize something as a resource, i.e. your system may have resources that weren't considered at design time. This is why I say that REST is a tool for iterative design. How do you know exactly what your resources will be without implementing? How can you discover your resources at build-time if they're set in stone at design-time? Minting URIs is the most common workaround to REST problems; assigning URIs to variants is simply how conneg is done. > > I was under the impression that the cache needed three things to > tag-and-bag variants: 1) the URI, 2) the Vary response header, and 3) > the request headers. > In general, caches work by associating a representation with a URI. Content-Location allows this paradigm to apply even when there's more than one representation, by providing more than one URI. 
Failing to assign URIs to variants is like naming your twin sons "Sam." They represent the same resource, i.e. they're both your son. Even if they're identical twins, they aren't cell-for-cell the same person, so it makes sense to give them different names, even if you only ever say, "Boys! Come here!" > > Since HTTP, at least, has synchronous requests > and responses, a cache should have all of those for any given > conversation. We recently upgraded CherryPy's cache to this model > [2], and it seems to work well. Django apparently does the same [3]. > Squid seems to have extensions to do this [4]. Varnish is at least > discussing the issue [5]. > This is a moot point, no matter how many caches try to sniff their way around missing Content-Location, because in the presence of Content-Location they all exhibit exactly the same, specified, behavior. No matter how many caches implement such unspecified behaviors, or how they do it, it's still a REST violation for a system to ignore RFC 2616's SHOULD. > > I was a bit surprised to hear that the idea that "most caches...simply > won't cache responses whose Vary header consists of anything more than > 'Accept-Encoding'" [6]. If that's true, then I see it as an > unfortunate limitation of the ideal--an engineering problem to be > suffered through, not part and parcel of the REST style. > Nobody ever claimed conneg was perfect, or that there isn't room for improvement in a successor protocol to HTTP 1.1. That some caches don't respond well to complex Vary headers proves this, but has nothing to do with the REST style. Those other caches without such limitations all agree on what to do in the presence of Content-Location, because assigning URIs to variants *is* part of the REST style. 
> > suffering takes the form of assigning URI's to variants via > Content-Location, as you say, but also of working to improve proper > support for Vary in caches [7] (IE6 seems the worst offender, but > note the comment from Eric Law regarding IE7 improvements). I would > very much like to see Mark fix or put up a new version of that > resource [8] to see how the state of browser caching in 2010 compares > to 2006. Firefox 3.5.9, at least, passes the Vary tests therein. > How one cache behaves in relation to another isn't a REST argument. REST is all about hitting that sweet spot in the deployed Web, where most components behave predictably. This is done by applying REST constraints, including identification of resources, as expressed by assigning URIs to variants as per RFC 2616's SHOULD. > > User-Agent is something of an extreme case--it has perhaps the most > diverse set of values of any header save Cookie. Even proper Vary > support in caches doesn't help that situation; assigning URI's to > variants does. But that alone doesn't elevate URI's-per-variant to a > preferred style in my (very humble, willing to learn) opinion. > OK, that's a reasonable position. However, if assigning URIs to variants were a REST mismatch as you say, then surely Roy would've caught it by now? I still have not the slightest clue why this debate continues, because nobody has made a valid case that assigning URIs to variants breaks any REST constraints -- and it works in practice. > > I'm still not convinced that those practical realities inform REST; > rather, they seem to be mismatches between REST and the implementation > of caches on the one hand and the design of the User-Agent header on > the other. > A REST mismatch is something which violates a REST constraint. Assigning URIs to variants can't be considered a mismatch, if nobody can articulate what constraint it violates. 
Lacking any explanation as to why it's wrong, I assume there must be some explanation as to why it's right. Generally, the Web goes wrong where REST is violated. So when something works on the Web, it's reasonable to attempt to correlate it to one or more REST constraints, because it says right there in REST that such practical realities informed the design of REST. > > None of which means "borking @type" (using "type" attributes to > indicate the type of the response) is a good idea--but the OP's > question (and mca's initial answer) seemed to me to be more about the > type of the *request* entity, which has little to do with all of the > above. > This thread would've been a lot less confusing if more folks would recognize that very point. The media type of a request entity has nothing whatsoever to do with the media type of any previously-received response; it is what @type says it is. That conversation has evolved along the lines that assigning URIs to variants to achieve the same objective as borking @type is a REST mismatch in HTTP. -Eric
Eric J. Bowman wrote: > "Robert Brewer" wrote: > > User-Agent is something of an extreme case--it has perhaps the most > > diverse set of values of any header save Cookie. Even proper Vary > > support in caches doesn't help that situation; assigning URI's to > > variants does. But that alone doesn't elevate URI's-per-variant to a > > preferred style in my (very humble, willing to learn) opinion. > > OK, that's a reasonable position. However, if assigning URIs to > variants were a REST mismatch as you say, then surely Roy would've > caught it by now? I still have not the slightest clue why this debate > continues, because nobody has made a valid case that assigning URIs to > variants breaks any REST constraints -- and it works in practice. Er, I didn't say that. Assigning URI's to variants is not a mismatch. It can be quite useful. So can the specified use of Vary. Cache implementations (whether in-browser or not) which do not implement the "MUST NOT...unless" [1] that Peter pointed out in relation to the Vary header are in violation of the HTTP spec and should be fixed. > > I'm still not convinced that those practical realities inform REST; > > rather, they seem to be mismatches between REST and the > > implementation of caches on the one hand and the design of the > > User-Agent header on the other. > > A REST mismatch is something which violates a REST constraint. > Assigning URIs to variants can't be considered a mismatch, if nobody > can articulate what constraint it violates. Lacking any explanation as > to why it's wrong, I assume there must be some explanation as to why > it's right. Generally, the Web goes wrong where REST is violated. So > when something works on the Web, it's reasonable to attempt to > correlate it to one or more REST constraints, because it says right > there in REST that such practical realities informed the design of > REST. Agreed. Again, I didn't mean to imply that assigning URI's to variants violates any REST constraints. 
You said, "If the only way that conneg works properly (except compression) is by treating those variants as resources in their own right, it tells us one of two things. Either those variants really *are* resources, or there's a REST mismatch in HTTP that's escaped Roy's notice." But that's not the only way that conneg works properly. It's the only way it works for your example because A) it varied on User-Agent, which is too loosely-specified to be useful with Vary, and B) it assumed the public Web, which currently suffers from some non-compliant cache implementations. You therefore seemed to imply that proper use of the Vary header according to the HTTP spec always violates REST. I don't assume that's your position, but it's easy to read that way. Variants are resources. But the choice of whether to give them URI's or rely on Vary appears to have much more to do with environment and implementation than constraints of the REST style. Let's use each where appropriate. Robert Brewer fumanchu@... [1] http://www.w3.org/Protocols/rfc2616/rfc2616-sec13.html#sec13.6
>
> say I have a resource like
> /mySets/{set}/{item}
> which provides a DELETE.
>
If I understand, you would have some resource, say, <
http://example./mySets/stuff/thing>, whose origin server would do something
useful in response to requests with the method "DELETE" and a Request-URI
that identifies the resource.
> If this resource is found and successfully deleted a 204 will be returned.
> If this resource is not found (e.g. because it has been deleted before) a
> 404 will be returned.
>
Why should an origin server respond to a "DELETE" request with the status
code "404"? The party issuing the "DELETE" request is trying to empty the
set of representations of the resource that the Request-URI identifies. If
the set had one or more representations before the receipt of the request
and the origin server empties the set (the "resource is found and
successfully deleted"), great: respond with the status code "204" or some
other status code that indicates success. If the set had zero
representations before the receipt of the request and the origin server
simply leaves the set empty (the "resource is not found"), great again:
respond with the status code "204" or some other status code that indicates
success.
> Now I wonder which status code should be returned if the resource
> /mySets/{set}
> is not found.
> Also a 404? A 400?
>
Here, too, status codes indicating failure are inappropriate. The status
code "204" would be fine.
> I'm not so happy about a 404 because it lacks some semantics - the DELETE
> on the subresource (or subsubresource) /mySets/{set}/{item} implies that
> /mySets/{set} exists.
>
No, it doesn't. That a URI is a prefix of another URI implies nothing about
the relationships between the resources that the respective URIs identify.
Etan Wexler wrote: > > Why should an origin server respond to a "DELETE" request with the > status code "404"? > If the target URI of the DELETE request is nonexistent, then there's no resource to delete, and the proper response status is 404 to reflect that. 410 is also acceptable, if the resource has already been deleted. > > If the set had zero representations before the > receipt of the request and the origin server simply leaves the set > empty (the "resource is not found"), great again: respond with the > status code "204" or some other status code that indicates success. > No, if there's nothing to DELETE then there can't be a successful deletion, and 204 is wholly inappropriate, because the request failed to DELETE anything. The user agent must inform the user what actually occurred in response to the request the user made. -Eric
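The two positions in this exchange can be made concrete with a toy origin server. The storage and names below are invented; the sketch follows the 204/404/410 convention Eric argues for, while Etan's alternative would simply return 204 in all three cases (and, per Roy's later point, a client should be prepared for any status either way):

```python
resources = {"/mySets/stuff/thing": "representation"}  # toy resource store
tombstones = set()  # URIs whose representation was previously deleted

def handle_delete(uri):
    """Return a status code for a DELETE request, per the convention
    that a missing target is a failure (404/410), not a success."""
    if uri in resources:
        del resources[uri]
        tombstones.add(uri)
        return 204  # deleted successfully, no body to return
    if uri in tombstones:
        return 410  # Gone: we know it used to have a representation
    return 404      # never mapped to any representation we know of

print(handle_delete("/mySets/stuff/thing"))  # 204
print(handle_delete("/mySets/stuff/thing"))  # 410
print(handle_delete("/mySets/other"))        # 404
```

Etan's version collapses the last two branches into `return 204`, on the reasoning that the set of representations is empty afterward either way.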
Peter Williams wrote: > > > > > I say unspecified behavior. All caches behave the same way in the > > presence of Content-Location, such behavior is specified. Any cache > > developer attempting to work around a missing Content-Location > > header, has to choose between different unspecified behaviors. > > It doesn't seem particularly under specified to me:[1] > > A server SHOULD use the Vary header field to inform a cache of what > request-header fields were used to select among multiple > representations of a cacheable response subject to server-driven > negotiation. The set of header fields named by the Vary field value > is known as the "selecting" request-headers. > How is this different from what I am saying, which is that Vary only indicates which headers the origin server considered when negotiating? Vary is not a cache header. It merely informs other components that the URI is negotiated. The "selecting" request headers don't tell the cache which of its stored variants those headers relate to. That's what Content-Location is for. > > When the cache receives a subsequent request whose Request-URI > specifies one or more cache entries including a Vary header field, > the cache MUST NOT use such a cache entry to construct a response > to the new request unless all of the selecting request-headers > present in the new request match the corresponding stored > request-headers in the original request. > What does this have to do with using Content-Location or not? This is just telling caches not to assume that one IE User-Agent header is the same as another, and such. When the "selecting request headers" do match, how does the cache know what variant they match *with* unless you use Content-Location? -Eric
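What Eric describes, an origin naming each variant so caches can associate selecting headers with a stored entity, might look like this on the server side. A minimal sketch; the variant URIs and the negotiation rule are invented for illustration:

```python
def negotiate(request_headers):
    """Toy negotiated resource at /page: choose a variant on User-Agent
    and name that variant with Content-Location, per the RFC 2616 SHOULD
    discussed in this thread."""
    if "MSIE" in request_headers.get("user-agent", ""):
        variant_uri, ctype = "/page.html", "text/html"
    else:
        variant_uri, ctype = "/page.xhtml", "application/xhtml+xml"
    return {
        "Vary": "User-Agent",             # the resource is negotiated...
        "Content-Location": variant_uri,  # ...and this variant has its own URI
        "Content-Type": ctype,
    }

print(negotiate({"user-agent": "MSIE 6.0"})["Content-Location"])    # /page.html
print(negotiate({"user-agent": "Firefox/3.5"})["Content-Location"]) # /page.xhtml
```

Vary tells the cache *that* the response was selected; Content-Location tells it *which* variant it received, which is the distinction being argued here.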
On Jun 14, 2010, at 4:26 PM, Eric J. Bowman wrote: > Etan Wexler wrote: > > > > Why should an origin server respond to a "DELETE" request with the > > status code "404"? > > > > If the target URI of the DELETE request is nonexistent, then there's no > resource to delete, and the proper response status is 404 to reflect > that. 410 is also acceptable, if the resource has already been deleted. A resource is never deleted. The current representation is deleted. ....Roy
>>>>> "Roy" == Roy T Fielding <fielding@...> writes:
Roy> A resource is never deleted. The current representation is
Roy> deleted.
I hope you're implementing a REST-based system for a bank one day,
Roy. I'd love to delete some of my money using your system :-)
--
Cheers,
Berend de Boer
> > > Why should an origin server respond to a "DELETE" request with the status > code "404"? The party issuing the "DELETE" request is trying to empty the > set of representations of the resource that the Request-URI identifies. If > the set had one or more representations before the receipt of the request > and the origin server empties the set (the "resource is found and > successfully deleted"), great: respond with the status code "204" or some > other status code that indicates success. If the set had zero > representations before the receipt of the request and the origin server > simply leaves the set empty (the "resource is not found"), great again: > respond with the status code "204" or some other status code that indicates > success. > > > What does success indicate? That the DELETE happened based on your specific request or just that a DELETE at "some point" had happened? What if the system executes other behaviors based on that success (a success this particular request did not actually cause)?
"Robert Brewer" wrote: > > Er, I didn't say that. Assigning URI's to variants is not a mismatch. > It can be quite useful. So can the specified use of Vary. > I didn't realize this thread had anything to do with the Vary header. Sending Content-Location certainly isn't a substitute for Vary; nobody has advocated not using Vary to identify negotiated resources. But Vary is still not a caching header. It does only what RFC 2616 says it does: distinguishes negotiated resources from non-negotiated resources. > > Cache implementations (whether in-browser or not) which do not > implement the "MUST NOT...unless" [1] that Peter pointed out in > relation to the Vary header are in violation of the HTTP spec and > should be fixed. > That isn't the situation we're discussing. We're discussing how the cache knows what variant to send when the headers *do* match. Put simply, how do we inform the cache which header combinations yield the same variant? Multiple combinations of selecting headers MAY yield the same variant, but only if the Content-Location returned for each request is the same (and the response passes a validation check). > > But that's not the only way that conneg works > properly. > It's the only way it works properly except for compression. Any non-compression scenario will do: Accept, User-Agent, doesn't matter. Is there some real-world example where this does not hold true? > > with Vary, and B) it assumed the public Web, which currently suffers > from some non-compliant cache implementations. > Non-compliant implementations abound. REST is all about leveraging what *does* work on the real-world Web. Like Content-Location, which is more compatible with more caches than anything else. So if you want maximum interoperability of your conneg caching scheme, you use Content-Location. REST is all about that maximum interoperability, i.e. the sweet spot where the majority of implementations agree (because they're following spec). 
> > You therefore seemed > to imply that proper use of the Vary header according to the HTTP > spec always violates REST. > No, all I've said about Vary is that it isn't a caching header. Its sole purpose is to indicate negotiated resources, not how to cache them. Without sending Vary, the server is violating the self-descriptive messaging constraint. Vary, without Content-Location, is useless for caching, unless we're talking about compression. > > Variants are resources. But the choice of whether to give them URI's > or rely on Vary appears to have much more to do with environment and > implementation than constraints of the REST style. Let's use each > where appropriate. > Where following RFC 2616 implements a REST constraint, it needs to be followed. But, this is not a protocol concern. I can't imagine making any conneg implementation in any protocol work, without some means of distinguishing one variant from another. Given any protocol that uses URIs to identify negotiated resources, it only makes sense for the variants to have URIs; this has everything to do with REST and nothing to do with HTTP. The only means of achieving this constraint in HTTP 1.1 is Content-Location. The assertion that variants need URIs is therefore protocol-independent -- Content-Location is the implementation of the constraint. You can't implement REST's identification of resources constraint in HTTP 1.1, except by using Content-Location. So in HTTP 1.1, if you aren't using Content-Location, then you haven't assigned URIs to variants, which violates the identification of resources constraint. So I don't understand anyone's points about not recognizing the obvious correlation between Content-Location in non-compression conneg, and the identification of resources constraint. In HTTP 1.1 they're one and the same. 
I don't understand what you mean by "rely on Vary," as Vary is not a caching header, and except for compression (or unspecified cache behavior, but following that is hardly RESTful), I've never seen conneg work without Content-Location. -Eric
"Roy T. Fielding" wrote: > > > If the target URI of the DELETE request is nonexistent, then > > there's no resource to delete, and the proper response status is > > 404 to reflect that. 410 is also acceptable, if the resource has > > already been deleted. > > A resource is never deleted. The current representation is deleted. > How about this wording? If the target URI of the DELETE request maps to the empty set, then there's no representation to delete, and the proper response status is 404 to reflect that. 410 is also acceptable, if the resource mapping to the empty set *used* to have a representation. -Eric
Eb wrote: > > What does success indicate? That the DELETE happened based on your > specific request or just that a DELETE at "some point" had happened? > The objective is to inform the user of the results of the user's request. If someone else's DELETE succeeded first, then a success response would not be illustrative of the handling of the user's request, but of the handling of some other request. > > What if the system executes other behaviors based on that success > (did it not cause to happen)? > Implementation details are hidden behind the uniform interface. All we can inform the user about is the result of the specific request that the user made. The only method that guarantees the user agent isn't responsible for side-effects of the request is GET. Other than that, no assumptions may be made about "what else happened" as a result of a successful request. -Eric
On Jun 14, 2010, at 4:46 PM, Berend de Boer wrote: >>>>>> "Roy" == Roy T Fielding <fielding@...> writes: > > Roy> A resource is never deleted. The current representation is > Roy> deleted. > > I hope you're implementing a REST-based system for a bank one day, > Roy. I'd love to delete some of my money using your system :-) One day? http://www.day.com/day/en/customers.html I'd love to implement a system that would let me PUT my own bank balance, but that would be f'ing stupid on the part of the bank. I don't build stupid systems. REST does not require that every data item be mapped to a resource, let alone that every resource allow PUT and DELETE. So why on earth would I allow a client to DELETE money? REST is not distributed objects. Don't get stuck thinking that resources are like objects. They are not. The origin server is in complete control over what actions are allowed across that interface, and the nature of the results. ....Roy
On Jun 14, 2010, at 5:09 PM, Eric J. Bowman wrote: > "Roy T. Fielding" wrote: >> >>> If the target URI of the DELETE request is nonexistent, then >>> there's no resource to delete, and the proper response status is >>> 404 to reflect that. 410 is also acceptable, if the resource has >>> already been deleted. >> >> A resource is never deleted. The current representation is deleted. >> > > How about this wording? > > If the target URI of the DELETE request maps to the empty set, then > there's no representation to delete, and the proper response status is > 404 to reflect that. 410 is also acceptable, if the resource mapping > to the empty set *used* to have a representation. Any status code should be expected. If it is unclear what the status means, then maybe the server wants to be vague. Deal with it. Rigidity is the enemy of long-lived systems. ....Roy
Peter Williams wrote: > > Any cache that works as described above is broken, assuming all the > components are behaving properly (ie, setting requests and responses > header fields appropriately). > Not broken, just behaving in an unspecified fashion due to the origin server's failure to follow HTTP. While there are caches that will make assumptions, such behavior is not specified. If you don't explicitly tell caches how to distinguish between variants, then the cache only knows that you want to cache the most-recent variant it has encountered. > > In the scenario above even compressed > variants suffer from pollution issues. For example, a user agent that > supports `gzip` content encoding (for example, a browser) would > pollute the cache for any user agents that do not > Absolutely not, as anyone can see by engaging compression on any httpd and testing this notion. The cache gets a request for a resource, the user agent and origin server both support compression, so the cache either returns the compressed variant it has stored, or on-the-fly compresses the uncompressed variant it has stored. If the next request is from K-meleon (no compression), the cache either returns the uncompressed variant it has stored, or else it on-the-fly uncompresses the compressed variant it has stored. When dealing with compression, non-origin-server components are free to make this assumption, and they do. Unlike with variants used for other purposes, where non-origin-server components can't assume anything (we don't expect a cache to use its XML variant to generate an HTML variant, or to translate one human language to another, but these are distinctly different problems from compression). > > The cache key for a response is the union of the URI and the values of > the request header fields listed the Vary field of the response. > True for compression. Not true otherwise. 
Unless we're talking about caches which use sniffing to infer stuff we aren't telling it -- such caches exist, but go beyond spec to accomplish this. Caches which follow RFC 2616 aren't sniffing, because they're expecting you to follow the SHOULD. Any given version of IE has about a zillion different User-Agent strings depending on the OS and its configuration. Without the Content-Location header, a cache cannot distinguish one variant from another, except by sniffing content. A cache has four choices in the absence of Content-Location. One, it can store a variant for each IE User-Agent string it encounters. Two, it can store the most-recently-requested variant. Three, it can deduce by sniffing that the response to one User-Agent string is identical to another. Or four, it can opt not to cache that resource. None of these behaviors is specified. Assigning URIs to variants *is* specified, to keep just this problem from occurring. Only when you assign the same URI to each possible IE User-Agent string using Content-Location, can a cache safely deduce that each IE User-Agent string maps to the _same_ variant (without sniffing the content). Only by using Content-Location are you giving the cache enough information to associate multiple headers (in this case, multiple User-Agent headers) with the same variant, for any given negotiated URI (except compression). Without Content-Location, a cache will treat IE/XP, IE/Vista and IE/Win7 responses as unique variants even if they're the same, unless that cache is sniffing content. A failure to apply the identification of resources constraint leads to a failure to apply the self-descriptive messaging constraint (as exemplified by the necessity to sniff content), in HTTP conneg. 
Only with Content-Location (or some other means of distinguishing one variant from another), can you inform caches that the relationship between variants and request headers is one-to-many, not one-to-one, because the cache can recognize that it got the same variant for multiple request headers (rather than a different variant for each header) without resorting to sniffing. > > If the server says `Vary: User-Agent` caching intermediates had > better cache responses for different user agents separately, or not > at all. > But that isn't what RFC 2616 says. Instead of requiring each variant header/header combination to be associated with its own _unique_ stored entity, RFC 2616 says you SHOULD use Content-Location such that variant header/header combinations may be associated with a _shared_ stored entity. > > Regardless, the caching argument is an implementation argument. > Well, yeah, it is. But that doesn't mean it isn't a REST argument, since the implementation we're discussing is an expression of REST's constraints. I refuse to accept the argument that because cookies aren't RESTful, we can't point to real-world HTTP behavior as examples of REST in practice. Caching works correctly when REST is followed (by following RFC 2616's SHOULD) in HTTP implementations. It does not work correctly otherwise. > > Basically, HTTP needs you to individually name every entity > transferred over it. > No, it does not. Compression, again, is the exception which results in a SHOULD not a MUST in RFC 2616. Aside from compression, it's REST which requires you to assign URIs to your variants, to meet the identification of resources constraint. Content-Location is the means to implement that constraint, not something HTTP requires (not a MUST). > > Do you assert that the REST architectural style itself demands > (perhaps with a compression exception) that every variant of every > resource be named as a resource in its own right? 
> That is what I keep saying -- glass is technically a liquid even if you refuse to call it a liquid. Aside from compression, variants are resources, even if you refuse to acknowledge that by giving them URIs. The identification of resources constraint is about discovering and exposing your resources, not naming them. Which is why failing to follow the SHOULD (except for compression) is a REST violation -- you've failed to expose those resources with URIs. I'm still befuddled by all this pushback. What reason exists to NOT assign URIs to variants? Why is this solution so impractical, that it's even worth debating for weeks on end? Can someone please answer that? Wouldn't it be easier to just follow the spec? -Eric
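The compression exception Eric grants throughout this post rests on the fact that intermediaries may derive one encoding from the other without any new information. A sketch of that assumption, using only the standard library and not modeled on any particular cache:

```python
import gzip

STORED = b"<html>hello</html>" * 50  # the identity-encoded entity a cache holds

def respond(accept_encoding):
    """Serve the stored entity, compressing on the fly when the user
    agent advertises gzip support -- the one transformation that
    non-origin components are free to assume, as discussed above."""
    if "gzip" in accept_encoding:
        return gzip.compress(STORED), "gzip"
    return STORED, "identity"

body, enc = respond("gzip, deflate")
print(enc, gzip.decompress(body) == STORED)  # gzip True
body, enc = respond("identity")
print(enc, body == STORED)                   # identity True
```

No such mechanical derivation exists between, say, an XML variant and an HTML variant, which is why compression alone escapes the need for per-variant URIs.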
"Eric J. Bowman" wrote: > > The objective is to inform the user of the results of the user's > request. If someone else's DELETE succeeded first, then a success > response would not be illustrative of the handling of the user's > request, but its handling of some other request. > Or not, apparently, after reading Roy's last response... ;-) The gist of which seems to be that any response code is allowed when DELETE is aimed at a resource mapping to the empty set. -Eric
"Eric J. Bowman" wrote: > > Only with Content-Location (or some other means of distinguishing one > variant from another), can you inform caches that the relationship > between variants and request headers is one-to-many, not one-to-one, > because the cache can recognize that it got the same variant for > multiple request headers (rather than a different variant for each > header) without resorting to sniffing. > I meant selection headers, of course. If the purpose is to send text/html to IE and application/xhtml+xml to everything else: Not using Content-Location results in a one-to-one mapping of a variant to the selection headers that resulted in the variant, i.e. multiple variants cached for IE/XP, more for IE/Vista, more for IE/Win7, for each version of IE, for each possible OS configuration reflected in User-Agent by that browser. Using Content-Location, we can associate one application/xhtml+xml variant with multiple combinations of selection headers, i.e. a one-to-many mapping. This can't be done without some means of distinguishing one variant from another, without sniffing content. -Eric
On Jun 14, 2010, at 5:26 PM, Eric J. Bowman wrote: > "Eric J. Bowman" wrote: > > > > The objective is to inform the user of the results of the user's > > request. If someone else's DELETE succeeded first, then a success > > response would not be illustrative of the handling of the user's > > request, but its handling of some other request. > > > > Or not, apparently, after reading Roy's last response... ;-) The gist > of which seems to be that any response code is allowed when DELETE is > aimed at a resource mapping to the empty set. Well, no, the gist of it is that any response code is possible, no matter what the request. The client needs to deal with all of them. What is the client going to do when it receives a 204? A 404? A 410? How about a 456? The point of having a flexible interface is to be flexible. The more response codes you can map into "success", the better. ....Roy
> Well, no, the gist of it is that any response code is possible, > no matter what the request. The client needs to deal with all > of them. What is the client going to do when it receives a 204? > A 404? A 410? How about a 456? > > The point of having a flexible interface is to be flexible. > The more response codes you can map into "success", the better. > > ....Roy > Would there be a recommended response code to be returned in this particular case, or is that inconsequential (even if clients can handle whatever they get)?
On Jun 14, 2010, at 5:55 PM, Eb wrote: > >> Well, no, the gist of it is that any response code is possible, >> no matter what the request. The client needs to deal with all >> of them. What is the client going to do when it receives a 204? >> A 404? A 410? How about a 456? >> >> The point of having a flexible interface is to be flexible. >> The more response codes you can map into "success", the better. >> >> ....Roy >> > Would there be a recommended response code to be returned in this particular case, or is that inconsequential (even if clients can handle whatever they get)? Neither REST nor HTTP would care. I would send 200 or 204, because there is no equivalent of a "rm -f" (i.e., "remove even if it doesn't exist") in HTTP. If the client actually wants to check that it exists, then the request would include an If-match: * and the precondition failed response would then be required. ....Roy
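Roy's point can be sketched as a minimal server-side handler (the function and the backing store are invented for illustration): DELETE succeeds whether or not a current representation exists, and only a client that opts in with If-Match: * gets the 412 Precondition Failed response.

```python
# Hypothetical DELETE handler: 204 regardless of prior existence,
# 412 only when the client required existence via If-Match: *.

store = {"/widgets/1": b"..."}  # resource URI -> current representation

def handle_delete(uri, headers):
    exists = uri in store
    if headers.get("If-Match") == "*" and not exists:
        return 412  # Precondition Failed: client demanded an existing representation
    store.pop(uri, None)  # idempotent: removing twice is harmless
    return 204  # resource now has no current representation

assert handle_delete("/widgets/1", {}) == 204
assert handle_delete("/widgets/1", {}) == 204           # repeat is still success
assert handle_delete("/widgets/1", {"If-Match": "*"}) == 412
```

The second call returning 204 rather than an error is exactly the "resource mapping to the empty set" case discussed above.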
"Roy T. Fielding" wrote: > > Neither REST nor HTTP would care. I would send 200 or 204, > because there is no equivalent of a "rm -f" (i.e., remove even > if it doesn't exist") in HTTP. If the client actually wants > to check that it exists, then the request would include an > > If-match: * > > and the precondition failed response would then be required. > So 2xx/4xx only indicates that the request was accepted/rejected, without implying that any processing happened as a result of the request? -Eric
I've done a number of "REST" projects, and I didn't use any of of the technologies you listed. The Java frameworks and public APIs that I've seen all had custom XML schemas or JSon. My current application uses WADL for informational/documentation purposes -Solomon On Mon, Jun 14, 2010 at 9:56 AM, Alan Dean <alan.dean@...> wrote: > > > Sean, > > I'm not in the WADL camp. > > (X)HTML, RDF (XML / n3) and AtomPub between them probably account for > almost all the live RESTful activity over HTTP today. Couldn't guess what > the proportions are. > > Regards, > Alan Dean > > > On Mon, Jun 14, 2010 at 09:41, Sean Kennedy <seandkennedy@...>wrote: > >> Thanks Alan. Have you worked with WADL - or are (nearly) all RESTful Web >> Services described with XHTML pages? >> >> Sean. >> >> >> ------------------------------ >> *From:* Alan Dean <alan.dean@...> >> *To:* Sean Kennedy <seandkennedy@...> >> *Cc:* Rest Discussion Group <rest-discuss@yahoogroups.com>; Sean Kennedy >> <skennedy@...> >> *Sent:* Sat, 12 June, 2010 5:14:39 >> *Subject:* Re: [rest-discuss] UDDI dead? >> >> Sean, >> >> In all seriousness, the transmission vector for the vast majority of WSDL >> URI's *by service count* is email as they are private or semi-private custom >> service endpoints within and between companies. >> >> For public services, yes, the transmission vector is the humble web page. >> >> Regards, >> Alan Dean >> >> On Thu, Jun 10, 2010 at 11:39, Sean Kennedy <seandkennedy@...>wrote: >> >>> >>> >>> Hi, >>> I work in academia so would be grateful for the industry perspective. >>> I am working on a thesis which includes both WS-* and REST. If UDDI is in >>> fact dead http://www.innoq.com/blog/st/2010/03/uddi_rip.html how are >>> enterprises communicating the WSDL files? Are we talking email, publishing >>> on a Web site, inserting into a db?? >>> >>> Thanks, >>> Sean. >>> >>> PS Would the same apply for WADL files? >>> >>> >> >> > >
On Mon, Jun 14, 2010 at 6:36 PM, Eric J. Bowman <eric@...> wrote: > "Eric J. Bowman" wrote: >> >> Only with Content-Location (or some other means of distinguishing one >> variant from another), can you inform caches that the relationship >> between variants and request headers is one-to-many, not one-to-one, >> because the cache can recognize that it got the same variant for >> multiple request headers (rather than a different variant for each >> header) without resorting to sniffing. >> > > I meant selection headers, of course. > > If the purpose is to send text/html to IE and application/xhtml+xml to > everything else: > > Not using Content-Location results in a one-to-one mapping of a variant > to the selection headers that resulted in the variant, i.e. multiple > variants cached for IE/XP, more for IE/Vista, more for IE/Win7, for each > version of IE, for each possible OS configuration reflected in > User-Agent by that browser. > > Using Content-Location, we can associate one application/xhtml+xml > variant with multiple combinations of selection headers, i.e. a > one-to-many mapping. This can't be done without some means of > distinguishing one variant from another, without sniffing content. Providing a `content-location` allows more efficient caching by allowing mapping a variety of selection headers to a single entity in caches. Agreed. On the other hand, vigorous use of `etag` would provide similar improvements to the cache hit rate. It is a big step from "Content-Location can improve cache hit rates" to, "conneg is useless without Content-Location". A conforming cache will not respond with an inappropriate representation if the server sends an appropriate `vary` header. (Though it might miss a valid chance to serve a cached entity.) Private caches at the user agent are less susceptible to selection criteria explosion. Repeated requests from a single user agent are likely to all be quite similar. 
In my experience private caches are far more important than caching intermediates, anyway. `content-location` is a terribly useful header. Using it does increase the cache hit rates for negotiated resources. However, skipping `content-location` in a negotiated response does not violate any of the REST constraints that I can see. Peter Williams http://barelyenough.org
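The ETag route Peter alludes to can be sketched like this (function names and the in-memory origin are invented; a real cache would also track freshness lifetimes): a private cache revalidates with If-None-Match and reuses its stored body on 304, recovering much of the hit-rate benefit without Content-Location.

```python
# Toy revalidation: the origin answers 304 when the cached ETag still matches.

origin = {"/A": ('W/"v1"', b"<html>...</html>")}  # URI -> (etag, body)

def origin_get(uri, if_none_match=None):
    etag, body = origin[uri]
    if if_none_match == etag:
        return 304, etag, None  # Not Modified: client may reuse its copy
    return 200, etag, body

def cached_get(cache, uri):
    if uri in cache:
        etag, body = cache[uri]
        status, _, _ = origin_get(uri, if_none_match=etag)
        if status == 304:
            return body  # served from cache after a cheap revalidation
    status, etag, body = origin_get(uri)
    cache[uri] = (etag, body)  # populate (or refresh) the private cache
    return body

cache = {}
first = cached_get(cache, "/A")   # full transfer, populates the cache
second = cached_get(cache, "/A")  # If-None-Match revalidation, 304 path
assert first == second
```

The trade-off versus Content-Location is that revalidation still costs a round trip per request, but it never requires the server to name the variant.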
Hello Eric. It's been a while. OK. First, let me say I didn't want to enter the discussion about @type, since the ideas depend entirely on what you use @type for. I'm sorry my text made you infer I was saying @type forces us to do bad things. So, in what I wrote, please note that I was answering a question about where one can find information on media types. I have seen @type used for that. Please let me know if @type does not provide that info. Also note that I'm not saying, although it may seem I am, that I MUST use the type indicated in @type. I say that the info is there, and that it indicates what you should expect! If you do rely on it, note that you will have less visibility and it will limit your negotiation. That does not mean that if you put @type on any media you will hopelessly be unable to negotiate any other type, or that subsequent operations won't be visible; that is not the idea. You may ignore it altogether! @type can also be used to define application semantics. So, if you have XML as the media type negotiated between agent and server, @type can tell you that the XML is a PO, or a serialized person object. So, again, @type is optional and, as you mention, is about documentation. There you can find info about the links and hypermedia. Regards. William Martinez, --- In rest-discuss@yahoogroups.com, "Eric J. Bowman" <eric@...> wrote: > > "William Martinez Pomares" wrote: > > > > 2. Another way of getting the media is using embedded metadata in the > > media itself. That is done, for instance, using the link. The > > mediatype attribute will tell you specifically which media you should > > expect to use for a particular link and operation. Of course, that > > reduces a bit the visibility (since an interim node should know the > > media the link is embedded in to be able to "see" it) and also limits > > the negotiation (the link "suggests" the media). 
> > > > Once more: @type has absolutely nothing remotely to do with content > negotiation, it does not "limit" it, it does not "drive" it, and it > does not reduce visibility (not using @type reduces visibility). > > It is simply an annotation. Not any sort of instruction. > > -Eric >
"William Martinez Pomares" wrote: > > I'm sorry my text made you infer I was saying type is forcing us to > do bad things. > No apologies necessary; other than from me to you. In retrospect, my post seems like I'm jumping down your throat... I should've phrased it differently, like by asking if you were really saying what I thought you were saying, which you apparently weren't... my bad. :-) -Eric
Peter Williams wrote: > > Providing a `content-location` allows more efficient caching by > allowing mapping a variety of selection headers to a single entity in > caches. Agreed. On the other hand, vigorous use of `etag` would > provide similar improvements to the cache hit rate. It is a big step > from "Content-Location can improve cache hit rates" to, "conneg is > useless without Content-Location". > > A conforming cache will not respond with an inappropriate > representation if the server sends an appropriate `vary` header. > (Though it might miss a valid chance to serve a cached entity.) > Private caches at the user agent are less susceptible to selection > criteria explosion. Repeated requests from a single user agent are > likely to all be quite similar. In my experience private caches are > far more important than caching intermediates, anyway. > > `content-location` is a terribly useful header. Using it does > increase the cache hit rates for negotiated resources. However, > skipping `content-location` in a negotiated response does not violate > any of the REST constraints that i can see. > OK, now we're getting somewhere, in that I now understand the point you're making (which is not to say I agree with it). I will carefully consider what you're saying, before responding, but my first thought is that Etag is not a substitute for Content-Location to meet the identification of resources constraint. Let me think on this some more... -Eric
--- In rest-discuss@yahoogroups.com, "Roy T. Fielding" <fielding@...> wrote: > > On Jun 14, 2010, at 5:26 PM, Eric J. Bowman wrote: > > > "Eric J. Bowman" wrote: > > > > > > The objective is to inform the user of the results of the user's > > > request. If someone else's DELETE succeeded first, then a success > > > response would not be illustrative of the handling of the user's > > > request, but its handling of some other request. > > > > > > > Or not, apparently, after reading Roy's last response... ;-) The gist > > of which seems to be that any response code is allowed when DELETE is > > aimed at a resource mapping to the empty set. > > Well, no, the gist of it is that any response code is possible, > no matter what the request. The client needs to deal with all > of them. What is the client going to do when it receives a 204? > A 404? A 410? How about a 456? > > The point of having a flexible interface is to be flexible. > The more response codes you can map into "success", the better. > > ....Roy > Roy, I'm with you that the client should definitely be written this way. At the same time though, while the server has many options, it isn't free to return absolutely anything no matter what happened to the request. Otherwise the response code and the client's mapping of it to a disposition are meaningless. I know that's perhaps an obvious statement, but there are two ways to read "... any response code is allowed ..." (one for each actor) and I just think its worth clearly pointing out as its a key part of the mental leap to get this stuff right. The way I like to think of it (and I'd be interested in your take on this) is that each response code imposes a set of constraints on the server's request processing, its state and the response. The server is free to return any response code as long as the associated constraints are not violated. 
The server constraints defined for the various response codes are really light (it's quite surprising when you skim through 2616 Sec. 10 and look for them) so given any request, any response code is possible. The response code used also implies constraints on the client processing (e.g. reset content on a 205). So the response code is also partially about telling the client what to do -- it's not just a reflection of what happened during request processing. Anyways, it's certainly very different from object method call responses which tend to have much more rigid constraints on the method processing (and usually little or none on the caller) and the set of outcomes that can be represented by each response code usually don't intersect. Hope I'm making sense. Regards, Andrew Wahbe http://linkednotbound.net/
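One way to picture the client-side flexibility being discussed is a disposition map for a DELETE client (the grouping below is a design choice of the example, not something HTTP mandates): several codes, including 404 and 410, can fold into "the resource is gone", and anything unrecognized falls back to its status class.

```python
# Hypothetical disposition mapping for a DELETE client. Which codes count
# as "success" is an application decision, not something HTTP dictates.

def delete_disposition(status: int) -> str:
    if status in (200, 202, 204):
        return "deleted"
    if status in (404, 410):
        return "already-gone"  # the outcome the client wanted anyway
    if status == 412:
        return "precondition-failed"
    # Unknown codes fall back to their class, per HTTP's extensibility:
    if 200 <= status < 300:
        return "deleted"
    if 400 <= status < 500:
        return "client-error"
    return "server-error"

assert delete_disposition(204) == "deleted"
assert delete_disposition(410) == "already-gone"
assert delete_disposition(456) == "client-error"  # unregistered code, class 4xx
```

Handling the 456 case by class rather than by table is exactly the "deal with any status code" posture Roy describes.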
On Jun 14, 2010, at 6:17 PM, Eric J. Bowman wrote: > "Roy T. Fielding" wrote: > > > > Neither REST nor HTTP would care. I would send 200 or 204, > > because there is no equivalent of a "rm -f" (i.e., remove even > > if it doesn't exist") in HTTP. If the client actually wants > > to check that it exists, then the request would include an > > > > If-match: * > > > > and the precondition failed response would then be required. > > > > So 2xx/4xx only indicates that the request was accepted/rejected, > without implying that any processing happened as a result of the > request? Processing did happen. The resource was identified, the existence of representations was checked, and the end result is what the client requested (that the resource be placed in a state of having no current representation). DELETE is idempotent. ....Roy
Hi, Are there well-known alternatives to HTTP for building REST services? When doing small-scale internal services, I still find a RESTful architecture useful; however, the overhead of HTTP seems noticeable. I was wondering if there are widely used alternatives that focus on performance in the same manner that some RPC tools do (Protocol Buffers, Thrift). Also, on media types, are there well-known media types that are relatively cheap to parse? For one, I'm keeping my eye on BSON: http://bsonspec.org/ as an alternative to JSON. Jan Vincent Liwanag
On Jun 14, 2010, at 9:55 PM, wahbedahbe wrote: > I'm with you that the client should definitely be written this way. At the same time though, while the server has many options, it isn't free to return absolutely anything no matter what happened to the request. Otherwise the response code and the client's mapping of it to a disposition are meaningless. Insanity is a relative thing. Servers like to provide useful services, so they tend to provide sane responses to sane requests, but they will also lie to a client that they suspect is evil (such as a DoS attacker or misbehaving robot). I didn't say that the server will return any status code -- I just said that the client needs to handle any status code. The problem with providing a specific list of status codes to give is that there are many orthogonal errors that might be indicated (e.g., 401) and the set of status codes itself is extensible. Besides, it just leads to poor assumptions on the part of clients. ....Roy
"Roy T. Fielding" wrote: > > Eric J. Bowman wrote: > > > "Roy T. Fielding" wrote: > > > > > > Neither REST nor HTTP would care. I would send 200 or 204, > > > because there is no equivalent of a "rm -f" (i.e., remove even > > > if it doesn't exist") in HTTP. If the client actually wants > > > to check that it exists, then the request would include an > > > > > > If-match: * > > > > > > and the precondition failed response would then be required. > > > > > > > So 2xx/4xx only indicates that the request was accepted/rejected, > > without implying that any processing happened as a result of the > > request? > > Processing did happen. The resource was identified, the existence > of representations was checked, and the end result is what the client > requested (that the resource be placed in a state of having no > current representation). DELETE is idempotent. > OK, thanks. My quaint notion was that if a GET responds 404, then so would all other methods except PUT. -Eric
Your aggressive tone and personal remarks make it a little difficult for me to keep trying to understand this issue, but because I don't want to be misunderstood, let me add a few more notes. On the personal remarks you made: I'm not asking questions or raising extreme cases (to me they are not extreme) just to annoy you. Believe it or not, I don't care about your inflated ego, but I do care about the useful answers that you and others on this list usually provide. The reason I ask the questions I ask is that if I want to implement something, I have to justify to my bosses why I want to do it. I can't just say "because someone told me 'just do it'" or "because that's how the web works"... I have to show it is worth the money and/or time, even if only in the future. I think in the real world this is understandable, as money and time are scarce resources. About all the fuss you made over my example of cookies: I wasn't talking about cookies per se, but only using them as an example that your argument of "that's how the web works" isn't an argument at all, or rather, it's an argument for those who have no clear arguments... Finally, I'm learning REST fine, with or without your answers, and again, I don't care about the inflated ego that makes you see yourself as a "professor" so kind as to share 12 years of experience (!) with us poor ignorant guys... I can say the same thing: if you can't answer my questions without that tone of paternalism and arrogance, please don't. It's you who are wasting my time and patience. Now for the technical questions, which are what we should be talking about: the rest of this thread shows clearly that I'm not the only one who doesn't follow your expertise as clearly as you think. And there was no evidence whatsoever that a variant should be an identifiable resource on its own, for cases other than caching (or when the server-side developer decides to do so for business reasons). 
So I'm going to look for answers elsewhere, because "it's just the way it is" or "because that's how the web works", even if they are true statements, can't serve as justification to implement something. And please, I also think there's no point in continuing to read your confusing opinions, so I'm also done here... Maybe when you come down from your high horse I can learn a few more things from your answers, as has actually happened before. 2010/6/12 Eric J. Bowman <eric@bisonsystems.net>: > António Mota wrote: >> >> As I said, I think I understand the "principle", but not the >> necessity of applying it in all situations (except compression). Just >> some more notes: >> > > It pushes my buttons when your reply comes so fast that it took you > more time to write it, than you spent reading my reply. I am trying to > impart some of the wisdom I've accumulated through a dozen years of > experience with conneg and REST (back then REST was called HTTP Request > Object). I am not trying to trick you into doing something that isn't > in your best interests, I'm pointing out a best practice that is in > your best interests, because it's also a REST constraint. > > How difficult is it to understand that there is one exception to the > SHOULD, and that's compression? You can keep asking me about every > possible exception out there, but it won't change my answer -- it will > only annoy the crap out of me. If these possible exceptions aren't > compression, my answer remains "no." Seriously, how much more concisely > and unequivocally can I state my position? > >> >> > >> > So, assigning URIs to variants doesn't apply to the general case, >> > but it does apply to all other cases. >> > >> >> I still don't see any other use cases except the client being able to >> dereference a specific variant or for use with cache. Both of which >> are not that important inside an intranet. 
>> > > If the intranet context (or anything else) was a valid exception to the > SHOULD, then I wouldn't be saying until I'm blue in the face that the > only valid exception to the SHOULD is compression. > > Besides, this is not intranet-discuss, this is rest-discuss. I refuse > to tailor my answers to the specific needs of those whose systems do > not need REST's primary benefit of anarchic scalability over the real- > world Web. That intranets have nowhere near the scaling requirements > of Web systems, is simply not relevant to any discussion of REST, nor > is it a reason not to implement REST. > > What I've learned from doing this for a dozen years, is that your life > gets infinitely easier when dealing with conneg, if variants are > assigned their own URIs. If for no other reason than to be able to > test and maintain the system properly. > > Why develop any architecture, particularly a REST architecture, to be > incompatible with caching just because it isn't an immediate need? > Have you not been paying attention to anything I write about how REST > is a goal for the long-term evolution of a system rather than a solution > for its immediate needs? > > If it turns out after you've deployed an intranet system, that caching > indeed would be nice, wouldn't it make a lot more sense to have followed > the Identification of Resources constraint in the first place, such > that you can just drop squid in where and as needed, instead of > requiring a fully-coupled caching solution like cache channels? > > Following REST from the get-go prevents you from painting yourself into > the corner like that. One benefit of the Identification of Resources > constraint is caching. That does not mean that because you don't care > about caching today, you can just ignore that constraint. OTOH, by > applying that constraint, your system can evolve in a scalable fashion > over the long term. Why bend over backwards to avoid that, for the > sake of not minting some URIs? 
Your position makes no sense to me. > >> >> > >> > URIs are opaque. I don't know how you can tell from just "/A" that >> > it must be a static page? The answer is that it makes no difference >> > whatsoever to anything I've said, whether either of those resources >> > or any other resources I may have tossed out as examples, are >> > static or dynamic. To a REST connector it's just a bunch of >> > response bytes, as implementation details are opaque behind the >> > uniform interface. >> > >> >> I know URIs are opaque, I was just pointing to your examples. But my >> point is precisely that one. If "it's just a bunch of response >> bytes", how can a non-static resource be cached if each time it is >> dereferenced it will probably have a different bunch of bytes? >> > > Look at the demo I posted. The URIs you dereference are just stubs > whose content (metadata) rarely changes. All steady-states are rendered > using client-side XSLT to include other resources. Those other > resources have different cache optimizations according to their > nature. The caching of the initial representation is not coupled to > the caching of any resource making up the steady-state. It just calls > an XSLT transformation. > > This is no different than any HTML page which calls an external CSS > file. Updating the CSS has absolutely no effect on the freshness of > any representation linking to the CSS. When my system is fleshed out, > it will implement XHR to update the number of replies in a thread, > wherever that information is needed. That way, those pages dynamically > update, without affecting the caching of the representation which calls > that XHR. > > You are scraping the bottom of the barrel now, looking for edge cases > and exceptions. Why? The answer remains assign URIs to variants, and > architect your way around these issues you bring up, such that they > don't matter. 
Nothing you mention is a showstopper, I doubt you will > ever come up with anything that is or which shows best practice to be > inherently flawed, nor will you convince me that the Identification of > Resources constraint may be safely ignored in the intranet context... > > Just as you will not prove to me that glass is a solid. You need to > learn why this is the way it is, instead of desperately seeking cases > you think might disprove this, and confusing the rest of the class > while bugging your professor, who has already been incredibly patient > in pointing out time and again that the *only* exception here is > compression. Especially since just minting the damn URIs is so simple > and has no downside. > >> >> For instance /currentime is always different and so is not cacheable, >> right? What's the importance then of having a fixed URI to variants >> of this resource (if you also consider that we should never allow the >> client to call specific variants)? >> > > If /currentime is a negotiated resource, then assigning URIs to its > variants, aside from following the spec and applying the Identification > of Resources constraint, makes it one heckuva lot easier to curl the > variants for testing, independent of the conneg mechanism. I can't > imagine how much harder you're making it to develop and maintain a > system by only being able to access variants by using curl with Accept > headers. > > This was the first thing I figured out a dozen years ago, when I > started using conneg, and it's held true ever since -- trying to > develop a conneg system without assigning URIs to variants is a > thousand times more difficult than just minting the damn URIs. So > please, just follow the spec and apply the REST constraint. It's so > much easier than flogging a horse that's been dead since the last > millennium, when this debate was SETTLED. > > Find all the edge cases you want, where you wouldn't want to cache or > directly dereference variants. 
How does this override the SHOULD or > the Identification of Resources constraint? As I've said a million > times now, the exception to assigning URIs to variants is compression, > not your desire to avoid applying a REST constraint or following RFC > 2616, for reasons which still elude me entirely -- there's no downside > to assigning URIs to variants, so why are you looking so hard for > exceptions to this best practice? I already told you _the_ exception: > compression. > >> >> > > >> > > And also in that example, suppose the client references >> > > the /A.html as a representation of /A, then that manipulation >> > > of /A.html has to be made thru a representation of /A.html, does >> > > it make sense to also assign URI's to the representation of that >> > > representation of /A? I suppose it depends on the "importance" of >> > > those representations? >> > > >> > >> > Your terminology is, errr, not so good, so the only chance I have of >> > answering that question is to rewrite it first: >> > >> >> Yes, my english is far from good... >> > > Your grasp of REST terminology is a separate issue from your grasp of > English. I could care less about your grasp of English. > >> >> > >> > "If the user agent dereferences /A and the response is the /A.html >> > variant, then that manipulation of /A.html..." >> > >> > What manipulation of /A.html? The user agent is dereferencing /A. >> > The response is a variant with a Content-Location of /A.html, not a >> > Location of /A.html. There's only one request-response here, the >> > user agent knows nothing of /A.html because the user agent hasn't >> > dereferenced /A.html. >> > >> >> OK, I see, the dereferencing of /A.html is made by the server itself, >> not the user agent, so the user agent never "sees" it? >> > > The server isn't dereferencing anything. Perhaps /A.html is an actual > file on the filesystem of the origin server, perhaps not, it does not > matter. 
The server is responding to a request for /A with whatever > response code, headers and entity the system's coding tells it to. One > of those headers contains a URI which other components may use in order > to distinguish between variants -- it's just a label. > >> >> > >> > "If the user agent dereferences /A and the response is the /A.html >> > variant, then that manipulation of /A.html has to be made by >> > transferring a representation of /A.html..." >> > >> > I don't follow. URIs are opaque, you are deducing an awful lot from >> > some hypothetical example not-really-even-URIs. The user agent >> > dereferences /A and retrieves instructions on how to render a >> > steady- state, which presents the user with options for >> > transitioning to other application steady-states. >> > >> >> Well, by "manipulation" I was only thinking of GETting it, not to >> change it. I was pointing only that if the variant of /A that we >> assigned a URI of /A.html is a resource on its own that implies that >> there is also (at least one) representation of /A.html that we could >> wish or not to assign its own URI, like /A.html.en, /A.html.pt... >> > > No! Absolutely not! The appearance of a URI in a Content-Location > header is just a label. It implies nothing, you can make no assertions > based on its presence, it doesn't even imply that you can dereference > /A.html let alone say anything about the number of representations of > /A.html, and it certainly doesn't imply some additional negotiation > layer -- which, if you were using transparent conneg, is actually a 506 > Variant Also Negotiates error as per RFC 2295. 
> > If there were different languages to negotiate, and each language > varies in possible media types, then the system would compute the > language, then the media type, then send a response to 'GET /A' with the > appropriate headers including Content-Location, whose URI says nothing > about anything since it's just labelling a variant for the purpose of > distinguishing it from other variants. > > Stop making this impossible for yourself to ever comprehend. If you > have a resource /A which varies by media type and language, then you > have a set of variants to which you can assign URIs. You don't take > the variants of each language and make them negotiable URIs based on > media type, that leads the user agent around in a circle. Just give a > different URI to each variant -- pretend those URIs are random > gobbledygook with no apparent relation to one another (i.e. opaque). > They're just labels, not a Location where the user agent needs to > conduct further content negotiation. > >> >> > >> > "Does it make sense to also assign URI's to the variants of the >> > variants of /A?" >> > >> > None whatsoever. Why would /A.html have any variants, except for >> > compression? The entire purpose of assigning URIs to variants is to >> > access them as resources in their own right, tied to a specific >> > media type (which may or may not be expressed as a filename >> > extension), or language, etc. So the only conneg left to do is >> > compression, if /A.html is dereferenced, which of course is not a >> > given that it will be. >> > >> >> I was thinking about different languages for the same resource/variant >> as my previous example. >> > > The answer does not change based on the number of different headers > you're considering for the negotiation. 
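The flat set of variants being argued for here can be pictured as a single table: one opaque URI per (media type, language) pair, with nothing negotiable behind any of them. A sketch with hypothetical URIs and deliberately naive matching (no q-values):

```python
# One URI per variant of the negotiated resource /A. The URI that the
# selection returns goes into Content-Location as a label; there is no
# second round of negotiation behind it.

VARIANT_URIS = {
    ("text/html", "en"): "/A.html.en",
    ("text/html", "pt"): "/A.html.pt",
    ("text/plain", "en"): "/A.txt.en",
    ("text/plain", "pt"): "/A.txt.pt",
}

def select_variant(accept, accept_language):
    """First listed acceptable (media type, language) combination wins."""
    types = [t.split(";")[0].strip() for t in accept.split(",")]
    langs = [l.split(";")[0].strip() for l in accept_language.split(",")]
    for lang in langs:        # compute the language first,
        for mtype in types:   # then the media type
            uri = VARIANT_URIS.get((mtype, lang))
            if uri is not None:
                return mtype, lang, uri
    return None

print(select_variant("text/plain,text/html", "pt,en"))
# -> ('text/plain', 'pt', '/A.txt.pt')
```

Note there is no nesting: the table is one flat set, so negotiation completes in a single step.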
Resource /A has a set of > variants, it doesn't matter whether they're by media type, language, or > both media type and language, or compressed, or not compressed, the > result is a set of variants for /A which need URIs assigned to them. > > What you're saying is that you were thinking that the user agent would > dereference the Content-Location URI to conduct further negotiation. > No! This would never happen, because Content-Location is not an > instruction to dereference anything. That's what Location does. So > if /A.html were negotiable, how would the user agent ever know about > it? The negotiated resource is /A, because I said that my example /A > is a negotiated resource. How you can assume that means more > negotiation would occur at /A.html because it's in Content-Location, > when Content-Location is just a label containing an opaque URI, escapes > me. > > You're making this a million times more difficult than it would be if > you could just accept as a fact that it's best practice to assign > URIs to your variants... Trying to escape that reality is leading you > into some incredibly convoluted hypotheticals, whose rebuttals are only > making yourself and others more confused. Why can't you just assign > URIs to your variants, and learn from the experience why it's desirable? > > Surely that would be more productive than convoluted theoretical debate > seeking exceptions using edge-case examples, which will only serve > to ensure that you never learn REST? > >> >> > >> > > >> > > Nevertheless I think to call this "best practice" leads one into error >> > > (it did with me) because it's only applicable to restricted >> > > use-case scenarios. >> > > >> > >> > No, it applies to every use case except compression, as per the >> > SHOULD in RFC 2616. Ignoring said SHOULD is a deviation from best >> > practice. >> > >> > What I'm saying can't be put any more simply than "assign URIs to >> > your variants, except for compression." 
That's best practice for >> > the theoretical reason that it's what RFC 2616 says to do, and for >> > the pragmatic reason that the real-world Web depends on your doing >> > this because that's how the Web actually works in reality. Don't >> > fight it. >> > >> >> Well, that argument of "that's how the Web actually works" goes as >> far as it goes. The web actually works with cookies too, which are >> by consensus not RESTful... >> > > Sigh. > > Roy's thesis clearly explains that cookies are a REST mismatch, as most > commonly used (although there are uses of cookies which don't amount to > storing application state, which aren't REST mismatches). Are you > seriously trying to rebut the explanation of a constraint, by comparing > that constraint to a known REST mismatch? > > Given the congruent development of REST and the Web, the way conneg > works on the real-world Web is both the basis for, and the expression > of, the Identification of Resources constraint. This is a constraint, > not a mismatch. Resorting to bringing up cookies is something I can't > take seriously. > > I have done everything I can in this thread to explain that the SHOULD > requirement for assigning URIs to variants works on the real-world Web, > because that aspect of the real-world Web is behaving according to the > constraints of REST. Your response to that is that cookies are a REST > mismatch? > > What does that even mean, except that there's really no point in > furthering this discussion with you, because you'll apparently stop at > nothing, no matter how patently absurd, in an effort to dispute what > I'm saying? I'm done here, as there's obviously no point in continuing. > Come back when you've decided that you want to learn REST instead of > wasting my time. > > -Eric >
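Pulling the thread's points together, here is a self-contained sketch: a throwaway server exposes the negotiated resource /A plus per-variant URIs /A.html and /A.txt, and each variant can then be fetched with a plain GET, no Accept header crafting required. Everything here (URIs, bodies, the conneg logic) is hypothetical:

```python
# Toy origin server: /A negotiates on Accept and labels the chosen
# variant with Content-Location; /A.html and /A.txt are the variants'
# own URIs, fetchable directly.

import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

VARIANTS = {
    "/A.html": (b"<p>hi</p>", "text/html"),
    "/A.txt": (b"hi", "text/plain"),
}

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        path = self.path
        if path == "/A":  # negotiated resource: crude Accept matching
            accept = self.headers.get("Accept", "text/html")
            path = "/A.txt" if "text/plain" in accept else "/A.html"
        body, ctype = VARIANTS[path]
        self.send_response(200)
        self.send_header("Content-Type", ctype)
        if self.path == "/A":
            self.send_header("Content-Location", path)  # label the variant
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
base = "http://127.0.0.1:%d" % server.server_address[1]

# With its own URI, a variant is one plain GET -- easy to test:
direct = urllib.request.urlopen(base + "/A.html").read()

# The same bytes come back from the negotiated resource:
req = urllib.request.Request(base + "/A", headers={"Accept": "text/html"})
negotiated = urllib.request.urlopen(req)
assert negotiated.read() == direct
assert negotiated.headers["Content-Location"] == "/A.html"
server.shutdown()
```

The same checks could be run from the command line with curl against each variant URI directly, which is the testing convenience argued for in the thread.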
Eric's points are not opinions; they reflect the current state of affairs. You should thank him for the amount of time he's spent explaining this to you and abstain from showing disdain or feeling offended, for he indeed masters a subject in which you are still a student, and an angry one at that. Sent from my iPhone On 15 Jun 2010, at 11:59, "António Mota" <amsmota@gmail.com> wrote: > Your aggressive tone and personal considerations make it a little > difficult for me to continue to try to understand this issue, but > because I don't want to be misunderstood let me point out some more notes. > > On the personal considerations you made, I'm not asking questions or > pointing out extreme cases (for me they are not extreme) just to annoy > you. Believe it or not, I don't care about your inflated ego, but I do > care about some useful answers that you and others on this list most > of the time provide. The reason I ask the questions I ask is because if I > want to implement something I have to justify to my bosses why I want > to do it. And I can't just say "because someone told me 'just do it'" > or "because that's how the web works"... I have to prove they are > worth money and/or time, even if only in the future. I think in the real > world this is understandable, as money and time are scarce resources. > > About all the fuss you've made about my example of cookies, I wasn't > talking about cookies "per se" but only as an example that your > argument of "that's because how the web works" isn't an argument at all, or > rather, it's an argument for those who have no clear arguments... > > Finally, I'm learning REST OK, with or without your answers, and again > I don't care about your inflated ego that makes you see yourself as a > "professor" who is so kind as to be willing to share 12 years of > experience (!) with us poor ignorant guys... I can say the same > thing: if you don't want to answer my questions without that tone of > paternalism and arrogance, please don't. 
It's you who are wasting my > time and patience. > > Now for the technical questions, which should be what we talk > about: the rest of this thread shows clearly that I'm not the only one > who doesn't find your expertise as clear as you think it is. And > there was no evidence whatsoever that a variant should be an > identifiable resource on its own for cases other than caching (or when the > server-side developer decides to do so for business reasons). So I'm > going to look for answers elsewhere, because "it's just the way it is" > or "because that's how the web works", even if they are true > statements, can't serve as justification to implement something. > > And please, I also think there's no point in continuing to read your > confusing opinions, so I'm also done here... Maybe when you come down > from your high horse I can learn some more things from your answers, > as has actually happened before. > > > 2010/6/12 Eric J. Bowman <eric@bisonsystems.net>: >> António Mota wrote: >>> >>> As I said, I think I understand the "principle", but not the >>> necessity of applying it in all situations (except compression). >>> Just >>> some more notes: >>> >> >> It pushes my buttons when your reply comes so fast that it took you >> more time to write it than you spent reading my reply. I am >> trying to >> impart some of the wisdom I've accumulated through a dozen years of >> experience with conneg and REST (back then REST was called HTTP >> Request >> Object). I am not trying to trick you into doing something that >> isn't >> in your best interests, I'm pointing out a best practice that is in >> your best interests, because it's also a REST constraint. >> >> How difficult is it to understand that there is one exception to the >> SHOULD, and that's compression? You can keep asking me about every >> possible exception out there, but it won't change my answer -- it >> will >> only annoy the crap out of me. 
If these possible exceptions aren't >> compression, my answer remains "no." Seriously, how much more >> concisely >> and unequivocally can I state my position? >> >>> >>>> >>>> So, assigning URIs to variants doesn't apply to the general case, >>>> but it does apply to all other cases. >>>> >>> >>> I still don't see any other use cases except the client being able >>> to >>> dereference a specific variant or for use with cache. Both of which >>> are not that important inside an intranet. >>> >> >> If the intranet context (or anything else) was a valid exception to >> the >> SHOULD, then I wouldn't be saying until I'm blue in the face that the >> only valid exception to the SHOULD is compression. >> >> Besides, this is not intranet-discuss, this is rest-discuss. I >> refuse >> to tailor my answers to the specific needs of those whose systems do >> not need REST's primary benefit of anarchic scalability over the >> real-world Web. That intranets have nowhere near the scaling requirements >> of Web systems, is simply not relevant to any discussion of REST, nor >> is it a reason not to implement REST. >> >> What I've learned from doing this for a dozen years, is that your >> life >> gets infinitely easier when dealing with conneg, if variants are >> assigned their own URIs. If for no other reason than to be able to >> test and maintain the system properly. >> >> Why develop any architecture, particularly a REST architecture, to be >> incompatible with caching just because it isn't an immediate need? >> Have you not been paying attention to anything I write about how REST >> is a goal for the long-term evolution of a system rather than a >> solution >> for its immediate needs? 
>> >> If it turns out after you've deployed an intranet system, that >> caching >> indeed would be nice, wouldn't it make a lot more sense to have >> followed >> the Identification of Resources constraint in the first place, such >> that you can just drop squid in where and as needed, instead of >> requiring a fully-coupled caching solution like cache channels? >> >> Following REST from the get-go prevents you from painting yourself >> into >> the corner like that. One benefit of the Identification of Resources >> constraint is caching. That does not mean that because you don't >> care >> about caching today, you can just ignore that constraint. OTOH, by >> applying that constraint, your system can evolve in a scalable >> fashion >> over the long term. Why bend over backwards to avoid that, for the >> sake of not minting some URIs? Your position makes no sense to me. >> >>> >>>> >>>> URIs are opaque. I don't know how you can tell from just "/A" that >>>> it must be a static page? The answer is that it makes no >>>> difference >>>> whatsoever to anything I've said, whether either of those resources >>>> or any other resources I may have tossed out as examples, are >>>> static or dynamic. To a REST connector it's just a bunch of >>>> response bytes, as implementation details are opaque behind the >>>> uniform interface. >>>> >>> >>> I know URIs are opaque, I was just pointing to your examples. But my >>> point is precisely that one. If "it's just a bunch of response >>> bytes", how can a non-static resource be cached if each time it is >>> dereferenced it will probably have a different bunch of bytes? >>> >> >> Look at the demo I posted. The URIs you dereference are just stubs >> whose content (metadata) rarely changes. All steady-states are >> rendered >> using client-side XSLT to include other resources. Those other >> resources have different cache optimizations according to their >> nature. 
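Those per-resource cache lifetimes might be sketched like this; the paths and max-age values are hypothetical, chosen only to illustrate "different cache optimizations according to their nature":

```python
# Each resource carries its own Cache-Control; a variant with its own
# URI caches under that URI alone, while the negotiated resource must
# also send Vary so that shared caches key on the negotiation headers.

CACHE_CONTROL = {
    "/A": "max-age=3600",             # negotiated stub, rarely changes
    "/A.html.en": "max-age=86400",    # concrete variant, cacheable as-is
    "/site.css": "max-age=31536000",  # stylesheet, effectively immutable
    "/currentime": "no-store",        # always different, never cached
}

def response_headers(path):
    headers = {"Cache-Control": CACHE_CONTROL.get(path, "no-store")}
    if path == "/A":
        # Shared caches must vary the stored response on these headers.
        headers["Vary"] = "Accept, Accept-Language"
    return headers

print(response_headers("/A"))
```

With lifetimes attached per resource like this, an intermediary such as Squid can be dropped in later without restructuring anything.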
The caching of the initial representation is not coupled to >> the caching of any resource making up the steady-state. It just >> calls >> an XSLT transformation. >> >> This is no different than any HTML page which calls an external CSS >> file. Updating the CSS has absolutely no effect on the freshness of >> any representation linking to the CSS. When my system is fleshed >> out, >> it will implement XHR to update the number of replies in a thread, >> wherever that information is needed. That way, those pages >> dynamically >> update, without affecting the caching of the representation which >> calls >> that XHR. >> >> You are scraping the bottom of the barrel now, looking for edge cases >> and exceptions. Why? The answer remains: assign URIs to variants, >> and >> architect your way around these issues you bring up, such that they >> don't matter. Nothing you mention is a showstopper, I doubt you will >> ever come up with anything that is or which shows best practice to be >> inherently flawed, nor will you convince me that the Identification >> of >> Resources constraint may be safely ignored in the intranet context... >> >> Just as you will not prove to me that glass is a solid. You need to >> learn why this is the way it is, instead of desperately seeking cases >> you think might disprove this, and confusing the rest of the class >> while bugging your professor, who has already been incredibly patient >> in pointing out time and again that the *only* exception here is >> compression. Especially since just minting the damn URIs is so >> simple >> and has no downside. >> >>> >>> For instance /currentime is always different and so is not >>> cacheable, >>> right? What's the importance then of having a fixed URI to variants >>> of this resource (if you also consider that we should never allow >>> the >>> client to call specific variants)? 
>>> >> If /currentime is a negotiated resource, then assigning URIs to its >> variants, aside from following the spec and applying the >> Identification >> of Resources constraint, makes it one heckuva lot easier to curl the >> variants for testing, independent of the conneg mechanism. I can't >> imagine how much harder you're making it to develop and maintain a >> system by only being able to access variants by using curl with >> Accept >> headers. >> >> This was the first thing I figured out a dozen years ago, when I >> started using conneg, and it's held true ever since -- trying to >> develop a conneg system without assigning URIs to variants is a >> thousand times more difficult than just minting the damn URIs. So >> please, just follow the spec and apply the REST constraint. It's so >> much easier than flogging a horse that's been dead since the last >> millennium, when this debate was SETTLED. >> >> Find all the edge cases you want, where you wouldn't want to cache or >> directly dereference variants. 
His personal considerations about me are for sure opinions, and his technical considerations, looking at the rest of the thread, are debatable, not for being wrong - which I never assumed, quite the contrary, I repeat that I think Eric is a valuable person on this list, like others - but maybe they are not clearly exposed. What I question is someone questioning my motivations, which are technically and business oriented only. I thank him and everybody else on this list, assuming they do this of their own free will. If he, you or anybody else think they are wasting their time because they assume my intentions are not what they are, just ignore me. No hard feelings... _________________________________________________ 2010/6/15 Sebastien Lambla <seb@...>: 
>>> >>> Find all the edge cases you want, where you wouldn't want to cache or >>> directly dereference variants. How does this override the SHOULD or >>> the Identification of Resources constraint? As I've said a million >>> times now, the exception to assigning URIs to variants is >>> compression, >>> not your desire to avoid applying a REST constraint or following RFC >>> 2616, for reasons which still elude me entirely -- there's no >>> downside >>> to assigning URIs to variants, so why are you looking so hard for >>> exceptions to this best practice? I already told you _the_ >>> exception: >>> compression. >>> >>>> >>>>>> >>>>>> And also in that example, suppose the client references >>>>>> the /A.html as a representation of /A, then that manipulation >>>>>> of /A.html has to be made thru a representation of /A.html, does >>>>>> it make sense to also assign URI's to the representation of that >>>>>> representation of /A? I suppose it depends on the "importance" of >>>>>> those representations? >>>>>> >>>>> >>>>> Your terminology is, errr, not so good, so the only chance I have >>>>> of >>>>> answering that question is to rewrite it first: >>>>> >>>> >>>> Yes, my english is far from good... >>>> >>> >>> Your grasp of REST terminology is a separate issue from your grasp of >>> English. I could care less about your grasp of English. >>> >>>> >>>>> >>>>> "If the user agent dereferences /A and the response is the /A.html >>>>> variant, then that manipulation of /A.html..." >>>>> >>>>> What manipulation of /A.html? The user agent is dereferencing /A. >>>>> The response is a variant with a Content-Location of /A.html, not a >>>>> Location of /A.html. There's only one request-response here, the >>>>> user agent knows nothing of /A.html because the user agent hasn't >>>>> dereferenced /A.html. >>>>> >>>> >>>> OK, I see, the dereferencing of /A.html is made by the server >>>> itself, >>>> not the user agent, so the user agent never "sees" it? 
>>>> >>> >>> The server isn't dereferencing anything. Perhaps /A.html is an >>> actual >>> file on the filesystem of the origin server, perhaps not, it does not >>> matter. The server is responding to a request for /A with whatever >>> response code, headers and entity the system's coding tells it to. >>> One >>> of those headers contains a URI which other components may use in >>> order >>> to distinguish between variants -- it's just a label. >>> >>>> >>>>> >>>>> "If the user agent dereferences /A and the response is the /A.html >>>>> variant, then that manipulation of /A.html has to be made by >>>>> transferring a representation of /A.html..." >>>>> >>>>> I don't follow. URIs are opaque, you are deducing an awful lot >>>>> from >>>>> some hypothetical example not-really-even-URIs. The user agent >>>>> dereferences /A and retrieves instructions on how to render a >>>>> steady- state, which presents the user with options for >>>>> transitioning to other application steady-states. >>>>> >>>> >>>> Well, by "manipulation" I was only thinking of GETting it, not to >>>> change it. I was pointing only that if the variant of /A that we >>>> assigned a URI of /A.html is a resource on ot's own that implies >>>> that >>>> there is also (at least one) representation of /A.html that we >>>> could >>>> wish or not to assign it's how URI, like /A.html.en, /A.html.pt... >>>> >>> >>> No! Absolutely not! The appearance of a URI in a Content-Location >>> header is just a label. It implies nothing, you can make no >>> assertions >>> based on its presence, it doesn't even imply that you can dereference >>> /A.html let alone say anything about the number of representations of >>> /A.html, and it certainly doesn't imply some additional negotiation >>> layer -- which, if you were using transparent conneg, is actually a >>> 506 >>> Variant Also Negotiates error as per RFC 2295. 
>>> >>> If there were different languages to negotiate, and each language >>> varies in possible media types, then the system would compute the >>> language, then the media type, then send a response to 'GET /A' >>> with the >>> appropriate headers including Content-Location, whose URI says >>> nothing >>> about anything since it's just labelling a variant for the purpose of >>> distinguishing it from other variants. >>> >>> Stop making this impossible for yourself to ever comprehend. If you >>> have a resource /A which varies by media type and language, then you >>> have a set of variants to which you can assign URIs. You don't take >>> the variants of each language and make them negotiable URIs based on >>> media type, that leads the user agent around in a circle. Just >>> give a >>> different URI to each variant -- pretend those URIs are random >>> gobbledygook with no apparent relation to one another (i.e. opaque). >>> They're just labels, not a Location where the user agent needs to >>> conduct further content negotiation. >>> >>>> >>>>> >>>>> "Does it make sense to also assign URI's to the variants of the >>>>> variants of /A?" >>>>> >>>>> None whatsoever. Why would /A.html have any variants, except for >>>>> compression? The entire purpose of assigning URIs to variants is >>>>> to >>>>> access them as resources in their own right, tied to a specific >>>>> media type (which may or may not be expressed as a filename >>>>> extension), or language, etc. So the only conneg left to do is >>>>> compression, if /A.html is dereferenced, which of course is not a >>>>> given that it will be. >>>>> >>>> >>>> I was thinking about diferent languages for the same resource/ >>>> variant >>>> as my previous example. >>>> >>> >>> The answer does not change based on the number of different headers >>> you're considering for the negotiation. 
Resource /A has a set of >>> variants, it doesn't matter whether they're by media type, >>> language, or >>> both media type and language, or compressed, or not compressed, the >>> result is a set of variants for /A which need URIs assigned to them. >>> >>> What you're saying, is that you were thinking that the user agent >>> would >>> dereference the Content-Location URI to conduct further negotiation. >>> No! This would never happen, because Content-Location is not an >>> instruction to dereference anything. That's what Location does. So >>> if /A.html were negotiable, how would the user agent ever know about >>> it? The negotiated resource is /A , because I said that my >>> example /A >>> is a negotiated resource. How you can assume that means more >>> negotiation would occur at /A.html because it's in Content-Location, >>> when Content-Location is just a label containing an opaque URI, >>> escapes >>> me. >>> >>> You're making this a million times more difficult than it would be if >>> you could just accept for a fact, that it's best practice to assign >>> URIs to your variants... Trying to escape that reality is leading >>> you >>> into some incredibly convoluted hypotheticals, whose rebuttals are >>> only >>> making yourself and others more confused. Why can't you just assign >>> URIs to your variants, and learn from the experience why it's >>> desirable? >>> >>> Surely that would be more productive than convoluted theoretical >>> debate >>> seeking for exceptions using edge-case examples, which will only >>> serve >>> to ensure that you never learn REST? >>> >>>> >>>>> >>>>>> >>>>>> Nevertheless I think to call this "best practice" induces in error >>>>>> (it did with me) because it's only applicable to restricted use >>>>>> case. scenarios. >>>>>> >>>>> >>>>> No, it applies to every use case except compression, as per the >>>>> SHOULD in RFC 2616. Ignoring said SHOULD is a deviation from best >>>>> practice. 
>>>>> >>>>> What I'm saying can't be put any more simply than "assign URIs to >>>>> your variants, except for compression." That's best practice for >>>>> the theoretical reason that it's what RFC 2616 says to do, and for >>>>> the pragmatic reason that the real-world Web depends on your doing >>>>> this because that's how the Web actually works in reality. Don't >>>>> fight it. >>>>> >>>> >>>> Well, that argument of "that's how the Web actually works" goes as >>>> far as it goes. The web actually works with cookies too, that are >>>> consensually not RESTfull... >>>> >>> >>> Sigh. >>> >>> Roy's thesis clearly explains that cookies are a REST mismatch, as >>> most >>> commonly used (although there are uses of cookies which don't >>> amount to >>> storing application state, which aren't REST mismatches). Are you >>> seriously trying to rebut the explanation of a constraint, by >>> comparing >>> that constraint to a known REST mismatch? >>> >>> Given the congruent development of REST and the Web, the way conneg >>> works on the real-world Web is both the basis for, and the expression >>> of, the Identification of Resources constraint. This is a >>> constraint, >>> not a mismatch. Resorting to bringing up cookies is something I >>> can't >>> take seriously. >>> >>> I have done everything I can in this thread to explain that the >>> SHOULD >>> requirement for assigning URIs to variants works on the real-world >>> Web, >>> because that aspect of the real-world Web is behaving according to >>> the >>> constraints of REST. Your response to that is that cookies are a >>> REST >>> mismatch? >>> >>> What does that even mean, except that there's really no point in >>> furthering this discussion with you, because you'll apparently stop >>> at >>> nothing, no matter how patently absurd, in an effort to dispute what >>> I'm saying? I'm done here, as there's obviously no point in >>> continuing. 
>>> Come back when you've decided that you want to learn REST instead of >>> wasting my time. >>> >>> -Eric >>> >
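The practice Eric argues for above — a negotiated resource whose variants each carry their own URI, surfaced as a Content-Location label and directly dereferenceable for testing — can be sketched roughly as follows. This is a toy illustration, not his system: the resource `/A`, its variant table, and the naive Accept matching are all invented for the example.

```python
# Toy sketch of server-driven conneg where each variant has its own URI.
# The negotiated resource /A picks a variant by Accept header and labels
# the response with the variant's URI in Content-Location. The variant
# URIs are also plain resources in their own right, so they can be
# fetched directly (e.g. with curl) without constructing Accept headers.

VARIANTS = {
    "/A": {
        "text/html": ("/A.html", "<p>hello</p>"),
        "application/xml": ("/A.xml", "<greeting>hello</greeting>"),
    }
}

def get(uri, accept="*/*"):
    """Return (status, headers, body) for a GET on `uri`."""
    if uri in VARIANTS:
        # Negotiated resource: pick a variant by the Accept header
        # (falling back to the first variant for wildcards).
        table = VARIANTS[uri]
        media = accept if accept in table else next(iter(table))
        variant_uri, body = table[media]
        return 200, {"Content-Type": media, "Content-Location": variant_uri}, body
    # Variant URIs are ordinary, directly dereferenceable resources.
    for table in VARIANTS.values():
        for media, (variant_uri, body) in table.items():
            if uri == variant_uri:
                return 200, {"Content-Type": media}, body
    return 404, {}, ""
```

Because `/A.html` has its own URI, each variant can be tested in isolation, independent of the conneg mechanism — which is the maintenance benefit Eric describes.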
Hello Martin.
Maybe you have already found your answers in all the other comments.
Let me just say this: a URI is not, and should not be, more than an ID, a name for a resource.
Neither HTTP nor REST is aware that you have a mySets resource that you handle as a collection, containing a set that is itself a collection, which in turn contains several items. The URI you use is just the name of a resource; the URI does not tell you anything about structure (at least not as far as HTTP or REST is concerned).
When you use /mySets/{set}/{item} as a URI, you are referring to one resource that happens to be an item in a set. Nothing else should matter. The server will tell you whether that resource exists, and whether it was deleted. For practical reasons, sending a DELETE to a resource should leave the resource inaccessible at that URI. If the resource does not exist at all, that is the easiest job for the server, and it may even return a 200 saying: I did my job; you will not find a resource at that URI again. If the server is kind and sincere, it will tell you it did not find the resource, which also means the resource is inaccessible. That is as far as it should go. DELETE is not there to report on anything else, and even less on a structure in the URI that nobody should take into account. If you NEED to know whether the item existed, ask for it before issuing the DELETE; don't rely on the DELETE response.
If your app wants to know why an item in /mySets/{set} is not found, then you may suspect that /mySets/{set} itself does not exist, and ask about it. When you send the server a request for /mySets/{set}, a totally independent URI, you are referring to that resource, which, for REST, is just another resource, totally unrelated to the item ones.
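The DELETE semantics described above can be sketched as a tiny handler that reports only on the named resource's accessibility, without inferring anything from the structure of the URI. The store layout and names here are invented for illustration.

```python
# Sketch of the DELETE semantics described above: the handler only
# reports whether the named resource is accessible after the request,
# and deliberately does not distinguish "item missing" from "whole set
# missing" -- that question belongs to a separate GET on /mySets/{set}.

store = {("blue", "item1"): "data"}  # {(set, item): value} -- toy storage

def delete(set_name, item_name):
    """Handle DELETE /mySets/{set}/{item}; return an HTTP status code."""
    if store.pop((set_name, item_name), None) is not None:
        return 204  # deleted; no content to return
    return 404      # not found -- already gone, or never existed
```

A client that needs to know whether the set itself exists should GET /mySets/{set} as an independent request, rather than reading that meaning into the DELETE response.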
Does that way of seeing it make sense now?
Cheers!
William Martinez Pomares
--- In rest-discuss@yahoogroups.com, "Martin" <martin.grotzke@...> wrote:
>
> Hi,
>
> say I have a resource like
> /mySets/{set}/{item}
> which provides a DELETE.
>
> If this resource is found and successfully deleted a 204 will be returned. If this resource is not found (e.g. because it has been deleted before) a 404 will be returned.
>
> Now I wonder which status code should be returned if the resource
> /mySets/{set}
> is not found.
> Also a 404? A 400?
>
> I'm not so happy about a 404 because it lacks some semantics - the DELETE on the subresource (or subsubresource) /mySets/{set}/{item} implies that /mySets/{set} exists. If this is not given it might indicate a conflict or that the client is wrong in some way.
>
> While looking for appropriate response codes based on their name 409 (Conflict), 410 (Gone) or 412 (Precondition Failed) sound interesting (see http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4 )) but their semantics does not match really.
>
> My current choice would be to return 400 (Bad Request), but this has also different semantics.
>
> Do you have suggestions how to handle this?
>
> Thanx && cheers,
> Martin
>
2010/6/15 Sebastien Lambla <seb@...>: > Eric's points are not opinions; they reflect the current state of > affairs. You should thank him for the amount of time he's spent > explaining this to you and abstain from showing disdain or feeling > offended, for he indeed masters a subject in which you are still a > student, and an angry one at that. Given that Antonio is not the only person with issues over Eric's rationale and tone, I question whether "master" or "teacher" are good frames of reference. Either way, I don't think it's your place to chastise Antonio, or imply he is lacking gratitude(?!). That sort of pretense achieves nothing - if you have a point, then make it; if you aren't coherent enough, then either deal with it (by being more coherent) or stop communicating. Can I suggest that this now stays "on point" (whatever that actually is now) and the nonconstructive prattle finishes. If you have to do it, just take it off the list.
Can anyone suggest an existing media type that might be re-used to describe the availability of media type transformations? In other words, one that can adequately describe something like: transforms (+transform) transform (fromType, toType, type, href) where: fromType - the MIME Type of the source. toType - the desired MIME Type type - the MIME Type [hint] of the transformer itself href - the location of this particular transform Thanks, --tim
Hello Sean. Sadly (or joyfully, I guess), the UDDI promise was not met, and so the technology behind it was a failure. Several big players got together and put up a global repository you could search for web services. Once they decided the implementation and spec were strong enough, they shut down the repository and started selling their own implementations. UDDI is still present in some middleware, where the idea is for you to publish the enterprise's web services in a repository and track statistics, health and the like for governance. One great example of a public repository would be Amazon web services. They could have published all their services via a UDDI server, but the way you actually access them is by reading the documentation and downloading the SDKs. See? Still, all of that falls into developers' hands, and developers are not that happy with discovery and dynamic binding, so in practice everything was done statically. That means the WSDL is the least published thing in the world. Usually, you create a class with a method. Then you ask your IDE to generate a WSDL for that method to be called (RPC), then you copy the WSDL, ask your IDE to generate a client stub, and then you forget about WSDL! For published services, you may get the WSDL by appending the wsdl parameter to the service URI, or you can request it by email. I suggest you look at the current middleware products and do some quick research to list the current repository methods and protocols. Cheers. William Martinez Pomares --- In rest-discuss@yahoogroups.com, Sean Kennedy <seandkennedy@...> wrote: > > Hi, > I work in academia so would be grateful for the industry perspective. I am working on a thesis which includes both WS-* and REST. If UDDI is in fact dead http://www.innoq.com/blog/st/2010/03/uddi_rip.html how are enterprises communicating the WSDL files? Are we talking email, publishing on a Web site, inserting into a db?? > > Thanks, > Sean. > > PS Would the same apply for WADL files? >
I don't know of this explicitly expressed in any existing media type, but I suspect the use of XHTML DL(DT,DD) or UL(LI) and A elements along w/ a MicroFormat markup would do the trick. mca http://amundsen.com/blog/ http://mamund.com/foaf.rdf#me On Tue, Jun 15, 2010 at 08:55, Tim Williams <williamstw@...> wrote: > Can anyone suggest an existing media type that might be re-used to > describe the availability of media type transformations? In other > words, one that can adequately describe something like: > > transforms (+transform) > transform (fromType, toType, type, href) > > where: > > fromType - the MIME Type of the source. > toType - the desired MIME Type > type - the MIME Type [hint] of the transformer itself > href - the location of this particular transform > > > Thanks, > --tim > > > ------------------------------------ > > Yahoo! Groups Links > > > >
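A rough sketch of the suggestion above — rendering the transform descriptions as an XHTML UL with class names acting as a microformat, and an A element carrying the transform's location and type hint. The class names used here (`transforms`, `transform`, `fromType`, `toType`, `transformer`) are invented for illustration, not an existing microformat.

```python
# Hedged sketch: serialize Tim's transform descriptions
#   transform (fromType, toType, type, href)
# as an XHTML <ul> with microformat-style class names, as mca suggests.
# The markup vocabulary is made up for this example.

from xml.sax.saxutils import escape

def transforms_html(transforms):
    """Render [(fromType, toType, type, href), ...] as an XHTML <ul>."""
    items = []
    for from_type, to_type, mime, href in transforms:
        items.append(
            '<li class="transform">'
            f'<span class="fromType">{escape(from_type)}</span> to '
            f'<span class="toType">{escape(to_type)}</span>: '
            f'<a class="transformer" type="{escape(mime)}" '
            f'href="{escape(href)}">transform</a>'
            "</li>"
        )
    return '<ul class="transforms">' + "".join(items) + "</ul>"
```

A consuming client would scrape the class attributes back out with any HTML parser, which is the usual trade-off of the microformat approach: maximum reuse of an existing media type (XHTML) at the cost of an out-of-band agreement on the class vocabulary.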
Jan Vincent wrote: > > Are there well-known alternatives to HTTP for building REST services? > No. Well, no to the "well-known" part, anyway. If you want to take advantage of REST architectural strengths like caching or conneg, then there's only one protocol worth speaking of which supports them, and that is, of course, HTTP 1.1 -- no other well-known protocol has these features. You can build a REST system using FTP if you want, but it won't scale like HTTP, even with less overhead. > > When doing small-scale internal services, I still find a RESTful > architecture useful; however, the overhead of HTTP seems to be > noticeable. I was wondering if there are widely used alternatives that > focus on performance in the same manner that some RPC tools do > (Protocol Buffers, Thrift). > I've only very recently been pointed to NetKernel and NetKernel Protocol. Perhaps this is what you're after? > > Also, on media-types, are there well-known media types that are > relatively cheap to parse? For one, I'm keeping my eye on > BSON: http://bsonspec.org/ as an alternative to JSON. > Not that I know of; however, parse efficiency isn't a REST concern. -Eric
Hello, let's say I have a web shop with a product page that contains a panel with the shopping cart. The product page should be the same for all users, but the shopping cart should be individual for each user. But if the page is the same for all users, it cannot contain an individual URI for the user's shopping cart! My idea is to store the cart URI in a cookie and request the cart from the server with the content of that cookie. The process looks like this:
1. Client: GET /product
2. Server: Ok, product page
3. Client: POST /cart (requested with JavaScript)
4. Server: Created, Location: /cart/93a41fe545b, Set-Cookie: /cart/93a41fe545b
... on the next page:
5. Client: GET /another_product
6. Server: Ok, another product page
7. Client: GET /cart/93a41fe545b (requested with JavaScript after reading the cookie)
8. Server: Ok, user's shopping cart
Maybe the actual creation of the shopping cart can be deferred until the first item is placed into the cart, but that is not important for my example. What is important is the mechanism of storing the cart URI in a cookie and requesting and displaying the cart with JavaScript. Is there a way to do something similar without JavaScript? What do you think of this approach? Another idea: since this kind of shopping cart could be shared between users (e.g. configure a computer for someone else and send the URI of the cart), it would make sense to make shopping carts immutable. If the cart is modified, the user gets a new cart URI, so that all previous versions would still be available. The obvious shortcoming is that many carts will be stored over time ... but disk space is cheap, and old, unused carts could be removed after a while. Any opinion? Regards Christian -- http://scala-forum.org/
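Christian's flow can be simulated in a few lines: the server mints an unguessable cart URI, hands it back alongside a cookie, and later requests present the cookie to retrieve the cart. This is an in-process toy model, not a real HTTP server; the function names and cookie format are illustrative only.

```python
# Toy simulation of the cart-URI-in-a-cookie flow sketched above.
# POST /cart creates a cart with a random URI and returns it both as a
# Location header and as a cookie value; later GETs replay the cookie
# (as the page's JavaScript would) to find the cart again.

import secrets

carts = {}  # server-side cart storage: {cart_uri: list_of_items}

def post_cart():
    """POST /cart -> (status, Location header, Set-Cookie value)."""
    uri = "/cart/" + secrets.token_hex(6)  # unguessable cart URI
    carts[uri] = []
    return 201, uri, f"cart={uri}"

def get_cart(cookie):
    """GET the cart URI stored in the cookie; -> (status, cart or None)."""
    uri = cookie.split("=", 1)[1]
    if uri in carts:
        return 200, carts[uri]
    return 404, None
```

Note that the cookie here holds only a resource identifier, not application state, which is the distinction that keeps this use of cookies closer to REST than session cookies are.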
It's not exactly what you asked, but here is a series of blog posts from
2008 where I discuss my opinion on baskets ('carts' in american english):
http://alandean.blogspot.com/2008/11/twittering-about-restful-basket.html
<http://alandean.blogspot.com/2008/11/twittering-about-restful-basket.html>
http://alandean.blogspot.com/2008/11/when-basket-checkout-isn-restful.html
<http://alandean.blogspot.com/2008/11/when-basket-checkout-isn-restful.html>
http://alandean.blogspot.com/2008/11/what-restful-basket-checkout-might-look.html
<http://alandean.blogspot.com/2008/11/what-restful-basket-checkout-might-look.html>
http://alandean.blogspot.com/2008/11/on-restful-basket-state.html
<http://alandean.blogspot.com/2008/11/on-restful-basket-state.html>
Regards,
Alan Dean
On Wed, Jun 16, 2010 at 19:05, Christian Helmbold <
christian.helmbold@...> wrote:
>
>
> Hello,
>
> lets say I have a web shop with a product page that contains a panel with
> the shopping cart. The product page should be the same for all users, but
> the shopping cart should be individual for each user. But if the page is the
> same for all users, it cannot contain an individual URI for the user's
> shopping cart!
>
> My idea is to store the cart URI in a cookie and request the cart form the
> server with content of that cookie. The process looks like this:
>
> 1. Client: GET /product
> 2. Server: Ok, product page
> 3. Client: POST /cart (requested with JavaScript)
> 4. Server: Created, Location: /cart/93a41fe545b, set_cookie:
> /cart/93a41fe545b
>
> ... on the next page:
>
> 5. Client: GET /another_product
> 6. Server: Ok, another product page
> 7. Client: GET /cart/93a41fe545b (requested with JavaScript after reading
> the cookie)
> 8. Server: Ok, user's shopping cart
>
> Maybe the actual creation of the shopping cart can be deferred until the
> first item is placed into the cart, but that is not important for my
> example. Important is the mechanism to store the cart URI in a cookie and
> request and display the cart with JavaScript.
>
> Is there a way to do something similar without JavaScript?
>
> What do you think of this approach?
>
> Another idea:
>
> Since this kind of shopping cart could be shared between users (e. g.
> configure a computer for someone else and send the URI of the cart) it would
> make sense, to make shopping carts immutable. If the cart is modified, the
> user gets a new cart URI, so that all previous versions would still be
> available. The obvious shortcoming is that many carts will be stored over
> time ... But disk space is cheap and old, unused carts could be removed
> after a while. Any opinion?
>
> Regards
> Christian
>
> --
> http://scala-forum.org/
>
>
>
On Jun 16, 2010, at 8:05 PM, Christian Helmbold wrote: > Hello, > > lets say I have a web shop with a product page that contains a panel with the shopping cart. The product page should be the same for all users, but the shopping cart should be individual for each user. But if the page is the same for all users, it cannot contain an individual URI for the user's shopping cart! Why do you want the page to be the same for all users? Caching? Here is what you could do: Link to /cart from the product page and redirect /cart to /cart/tom for user Tom based on Authorization header. GET /cart Authorization: Thomas Magnum 307 Temporary Redirect Location: /cart/tom Even better: use client side storage (even if it is cookie-based) to store user state (the cart). Jan > > My idea is to store the cart URI in a cookie and request the cart form the server with content of that cookie. The process looks like this: > > 1. Client: GET /product > 2. Server: Ok, product page > 3. Client: POST /cart (requested with JavaScript) > 4. Server: Created, Location: /cart/93a41fe545b, set_cookie: /cart/93a41fe545b > > ... on the next page: > > 5. Client: GET /another_product > 6. Server: Ok, another product page > 7. Client: GET /cart/93a41fe545b (requested with JavaScript after reading the cookie) > 8. Server: Ok, user's shopping cart > > Maybe the actual creation of the shopping cart can be deferred until the first item is placed into the cart, but that is not important for my example. Important is the mechanism to store the cart URI in a cookie and request and display the cart with JavaScript. > > Is there a way to do something similar without JavaScript? > > What do you think of this approach? > > > Another idea: > > Since this kind of shopping cart could be shared between users (e. g. configure a computer for someone else and send the URI of the cart) it would make sense, to make shopping carts immutable. 
If the cart is modified, the user gets a new cart URI, so that all previous versions would still be available. The obvious shortcoming is that many carts will be stored over time ... But disk space is cheap and old, unused carts could be removed after a while. Any opinion? > > Regards > Christian > > > -- > http://scala-forum.org/ > > > > > > ------------------------------------ > > Yahoo! Groups Links > > > ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
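Jan's redirect idea — a stable /cart URI that 307-redirects to a per-user cart based on the Authorization header — might look like this in outline. The user-to-cart mapping is invented for the sketch; a real server would resolve it from its account store.

```python
# Sketch of Jan's suggestion: GET /cart redirects to the user's own cart
# resource, chosen from the Authorization header. The mapping below is a
# stand-in for whatever account lookup a real server would perform.

cart_of = {"Thomas Magnum": "/cart/tom"}

def get_cart_redirect(authorization=None):
    """GET /cart -> (status, Location header or None)."""
    target = cart_of.get(authorization)
    if target is None:
        return 401, None  # no credentials, so no personal cart to redirect to
    return 307, target    # 307 Temporary Redirect to the user's cart URI
```

Because the redirect response can be privately cached, the product page itself stays identical (and cacheable) for all users while each user still ends up at an individual cart URI.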
> Why do you want the page to be the same for all > users? Caching? Yes, I had caching in mind. > Here is what you could do: > Link to /cart from the > product page and redirect /cart to /cart/tom for user Tom based on Authorization > header. > > GET /cart > Authorization: Thomas Magnum > > 307 > Temporary Redirect > Location: /cart/tom A user is not necessarily authenticated. > Even better: use client > side storage (even if it is cookie-based) to store user state (the > cart). Ok, that would be an opportunity. Christian
On Wed, Jun 16, 2010 at 10:18 PM, Christian Helmbold <
christian.helmbold@yahoo.de> wrote:
>
>
>
>
> > Why do you want the page to be the same for all
> > users? Caching?
>
> Yes, I had caching in mind.
>
>
> > Here is what you could do:
>
> > Link to /cart from the
> > product page and redirect /cart to /cart/tom for user Tom based on
> Authorization
> > header.
> >
> > GET /cart
> > Authorization: Thomas Magnum
> >
> > 307
> >
> Temporary Redirect
> > Location: /cart/tom
>
> A user is not necessarily authenticated.
>
This seems like a good opportunity to make up a random cart identifier for
the unauthenticated user. In fact, you should make up a random cart
identifier for authenticated users too. That way, you're removing an attack
vector from a malicious hacker that figures out your "/cart/{username}" URI
pattern. Plus, it makes it easy to maintain the stateless constraint even
for authenticated users (the cart and its associated ID expires after some
timeout period).
So, for either authenticated or unauthenticated users, send them to
"/cart/{cartID}" where cartID is a randomly calculated value that has an
expiration -- basically this is the same kind of thing that Java servlet
containers do with session IDs, to reduce the chance that an attacker can
guess an appropriate value.
Craig
>
> > Even better: use client
> > side storage (even if it is cookie-based) to store user state (the
> > cart).
>
> Ok, that would be an option.
>
> Christian
>
>
>
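Craig's suggestion above, an unguessable cart ID with an expiry, much like a servlet session ID, might be sketched as follows. The module-level store, names, and timeout value are illustrative assumptions:

```python
# Unguessable cart identifiers with a timeout, so neither
# "/cart/{username}" guessing nor stale server state is a problem.
import secrets
import time

CART_TIMEOUT = 30 * 60  # seconds; illustrative value
_carts = {}  # cart_id -> (expires_at, items)

def new_cart_uri():
    # ~128 bits of randomness: infeasible for an attacker to guess.
    cart_id = secrets.token_urlsafe(16)
    _carts[cart_id] = (time.time() + CART_TIMEOUT, [])
    return "/cart/" + cart_id

def get_cart(cart_id):
    entry = _carts.get(cart_id)
    if entry is None or entry[0] < time.time():
        _carts.pop(cart_id, None)  # expired or unknown: clean up
        return None
    return entry[1]
```

The expiry is what keeps the server stateless in practice: abandoned carts simply age out, for authenticated and anonymous users alike.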
On Wed, Jun 16, 2010 at 7:05 PM, Christian Helmbold
<christian.helmbold@yahoo.de> wrote:
> Hello,
>
> let's say I have a web shop with a product page that contains a panel
> with the shopping cart. The product page should be the same for all
> users, but the shopping cart should be individual for each user. But
> if the page is the same for all users, it cannot contain an individual
> URI for the user's shopping cart!
>
> My idea is to store the cart URI in a cookie and request the cart from
> the server with the content of that cookie. The process looks like
> this:
>
> 1. Client: GET /product
> 2. Server: Ok, product page
> 3. Client: POST /cart (requested with JavaScript)
> 4. Server: Created, Location: /cart/93a41fe545b, Set-Cookie: /cart/93a41fe545b
>
> ... on the next page:
>
> 5. Client: GET /another_product
> 6. Server: Ok, another product page
> 7. Client: GET /cart/93a41fe545b (requested with JavaScript after reading the cookie)
> 8. Server: Ok, user's shopping cart
>
> Maybe the actual creation of the shopping cart can be deferred until
> the first item is placed into the cart, but that is not important for
> my example. What matters is the mechanism of storing the cart URI in a
> cookie and requesting and displaying the cart with JavaScript.
>
> Is there a way to do something similar without JavaScript?
>
> What do you think of this approach?

That approach is pretty clean - clients are allowed to maintain their
own state, it's driven by hypertext, and you're using code on demand
(because you have to).

I prefer an approach with a 3xx redirect as Jan described; you could
privately cache the 3xx response and you end up with the same effect as
the cookie approach.

I wouldn't bother trying to 'store the cart on the client side' though -
you'll gain virtually nothing from it (apart from a headache and a deep
hatred for the browser 'platform') and more than likely still end up
storing the cart state on the server anyway.

Cheers,
Mike
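Step 4 of the flow quoted above can be sketched as a server-side response builder. This is a minimal illustration; the function name and cookie attributes are assumptions:

```python
# Build the 201 Created response from step 4: the new cart URI travels
# back both as a Location header and inside a cookie, so later pages
# can read the cookie with JavaScript and fetch the cart directly.
import secrets

def create_cart_response():
    cart_uri = "/cart/" + secrets.token_hex(6)
    headers = [
        ("Location", cart_uri),
        # HttpOnly is deliberately omitted: the page's JavaScript must
        # be able to read this cookie to learn the cart URI.
        ("Set-Cookie", "cart=%s; Path=/; SameSite=Lax" % cart_uri),
    ]
    return 201, headers
```

Keeping the whole cart URI (not just an ID) in the cookie means the product pages stay identical for every user, which preserves their cacheability.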
> I prefer an approach with a 3xx redirect as Jan described; you could
> privately cache the 3xx response and you end up with the same effect
> as the cookie approach.

How does that "private caching" work? The server doesn't know which
cart to deliver when another product page asks the server for the
user's cart, since the product page doesn't know the {cart-id}.

Christian
--
http://scala-forum.org/
> This seems like a good opportunity to make up a random cart identifier for the unauthenticated user. In fact, you should make up a random cart identifier for authenticated users too. That way, you're removing an attack vector from a malicious hacker that figures out your "/cart/{username}" URI pattern. Plus, it makes it easy to maintain the stateless constraint even for authenticated users (the cart and its associated ID expires after some timeout period).
I agree that a secure {cart-id} makes sense in any case. The question, though, is where to store the {cart-id}. In a cookie? Embedded in the page that includes the cart and in all its links? The latter would prevent caching and lead to personalized links, much like session IDs in URIs.
Christian
On Thu, Jun 17, 2010 at 10:28 AM, Christian Helmbold
<christian.helmbold@...> wrote:
>
>> I prefer an approach with a 3xx redirect as Jan described; you could
>> privately cache the 3xx response and you end up with the same effect
>> as the cookie approach.
>
> How does that "private caching" work? The server doesn't know which
> cart to deliver when another product page asks the server for the
> user's cart, since the product page doesn't know the {cart-id}.
>
Each browser client can locally cache its 'unique' redirect response
from the /cart resource, which means that all pages can contain a
generic link e.g. <link rel="cart" href="/cart" /> , which renders the
pages themselves cacheable.
Each client can redirect to its actual cart without having to hit the
network since it can just take the redirect from its local cache.
Cheers,
Mike
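The privately cacheable redirect Mike describes might be built like this. A sketch; the status code and freshness lifetime are illustrative assumptions:

```python
# /cart answers with a redirect the browser may keep in its local
# cache, so every page can carry the same generic link
# (<link rel="cart" href="/cart" />) and stay cacheable itself.
def cart_redirect(cart_id):
    return 307, [
        ("Location", "/cart/" + cart_id),
        # "private" keeps shared caches out; max-age lets this one
        # browser reuse the redirect without touching the network.
        ("Cache-Control", "private, max-age=3600"),
    ]
```

The effect matches the cookie approach: the client ends up remembering its own cart URI, just in its HTTP cache instead of in a cookie jar.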
On Thu, Jun 17, 2010 at 11:05 AM, Mike Kelly <mike@...> wrote:
> On Thu, Jun 17, 2010 at 10:28 AM, Christian Helmbold
> <christian.helmbold@...> wrote:
>>
>>> I prefer an approach with a 3xx redirect as Jan described; you could
>>> privately cache the 3xx response and you end up with the same effect
>>> as the cookie approach.
>>
>> How does that "private caching" work? The server doesn't know which
>> cart to deliver when another product page asks the server for the
>> user's cart, since the product page doesn't know the {cart-id}.
>>
>
>
> Each browser client can locally cache its 'unique' redirect response
> from the /cart resource, which means that all pages can contain a
> generic link e.g. <link rel="cart" href="/cart" /> , which renders the
> pages themselves cacheable.
>
> Each client can redirect to its actual cart without having to hit the
> network since it can just take the redirect from its local cache.
>
> Cheers,
> Mike
>
There appears to be a bug in Chrome, as it won't cache a 307 response.
It works OK in Firefox; I haven't tried anything else yet.
You may be better off just using cookies to store the URI as a path of
least resistance :)
Cheers,
Mike
Peter Williams wrote:
>
> > Using Content-Location, we can associate one application/xhtml+xml
> > variant with multiple combinations of selection headers, i.e. a
> > one-to-many mapping. This can't be done without some means of
> > distinguishing one variant from another, without sniffing content.
>
> Providing a `content-location` allows more efficient caching by
> allowing mapping a variety of selection headers to a single entity in
> caches. Agreed. On the other hand, vigorous use of `etag` would
> provide similar improvements to the cache hit rate. It is a big step
> from "Content-Location can improve cache hit rates" to "conneg is
> useless without Content-Location".
>

My position is that assigning URIs to variants is both a REST
constraint and HTTP best practice. I haven't said "conneg is useless
without Content-Location," particularly as I've kept saying "except for
caching"... I get your meaning, though, but "Content-Location can
improve cache hit rates" is your strawman, not my position.

Over the course of the thread, I may have staked out too rigid a
position: that the only way to distinguish variants from one another is
by assigning Content-Location URIs to them. You are correct, Etag may
be used to distinguish variants, and this can increase cache hit rates
even when Content-Location is absent. But this does not follow REST, so
it does not change my advice...

>
> A conforming cache will not respond with an inappropriate
> representation if the server sends an appropriate `vary` header.
>

OK. I was giving one example of aberrant cache behavior, which doesn't
apply to the specifics of using Etag in combination with Vary. My way
of doing things is to make my systems compliant with HTTP 1.0 caches to
the fullest extent possible, because last I heard there were still
plenty of HTTP 1.0 caches deployed out there on the real-world Web.

So to my way of thinking, conneg should work independently of caching
scheme, i.e. Etag or Expires both work when Vary is combined with
Content-Location... which is probably another reason for that SHOULD.

>
> (Though it might miss a valid chance to serve a cached entity.)
>

The other drawback to relying on Etag to cover for a missing
Content-Location is that on the real-world, anarchically-scalable Web,
myriad cases exist where a cache may legitimately decide to serve a
stale representation. This loss of control is the tradeoff to caching.
By omitting Content-Location, you're preventing the cache from
identifying the proper variant to send, forcing it to contact the
origin server, which presumably it had good reason to avoid doing (for
example, if that server is unavailable from the cache's location). When
Content-Location is omitted, much uncertainty is introduced which is
otherwise avoided by following the SHOULD.

>
> Private caches at the user agent are less susceptible to selection
> criteria explosion. Repeated requests from a single user agent are
> likely to all be quite similar. In my experience private caches are
> far more important than caching intermediates, anyway.
>

My experience disagrees with your experience. When I first started
doing Web development in late 1993, it was by downloading Mosaic via my
Compuserve account and creating pages on my local filesystem. My first
experience with HTTP was in 1994, after I'd opened my own ISP. I was an
early member of the Colorado Internet Cooperative Association, whose
board consisted of most of the authors of "UNIX System Administration
Handbook".

One of whom was Evi (who had a second home in Steamboat Springs, but
went with my non-coop competition because I only offered PPP and she
demanded CSLIP), who, in her position as a professor at CU-Boulder, was
instrumental in the student-led development of squid. The first anyone
really ever heard of squid was at a coop meeting, to an ISP-dominated
audience. So in my (heavily ISP-weighted) experience, shared caches are
far more important than private.

But this is just one preference vs. another. I do not take the view
that REST constraints which don't apply to a particular system are
irrelevant. Thus, constraints intended to increase visibility to
intermediary components are still part of the style, even when we only
care about private caches which don't require us to follow such
constraints.

You are presenting an edge case of not caring about shared caches,
showing that Content-Location isn't required. I cannot be persuaded
that any edge case nullifies the best-practice advice I'm giving. I
only agree that your edge case exists, not that you're better off by
not meeting the identification of resources constraint.

REST is the Platonic ideal for the long-term development of a system --
just because you're setting Cache-Control: private today doesn't mean
you shouldn't be able to change it tomorrow, by just changing the
Cache-Control header. If your system wasn't designed with a long-term
view of REST, then you can't just change Cache-Control; you must also
add Content-Location.

So what I'm saying is, start with Content-Location even if you don't
see an immediate need for it. By making it your habit to follow this
best practice, you'll never regret having avoided it. Instead of
tailoring my solutions to the specific needs of the system I'm
developing, I follow REST and develop a uniform interface, because I
know that works in the present and will continue to work in the future,
so I won't have to re-architect any system in response to its evolving
needs. Tweaking an existing system's headers is easier than adding new
headers.

>
> `content-location` is a terribly useful header. Using it does
> increase the cache hit rates for negotiated resources. However,
> skipping `content-location` in a negotiated response does not violate
> any of the REST constraints that i can see.
>

Variants are resources. As such, REST requires them to be identified,
in order for one variant to be distinguishable from another. Etag does
not meet this constraint, because Etags are transient, in that they
change over time for any given representation. The purpose of assigning
a URI is to declare a static mapping. This is why assigning URIs to
variants is a best practice -- provide one URI for a set of Etagged
entities to map to.

In HTTP, REST's requirement of assigning URIs to variants is reflected
in the SHOULD about Content-Location. So to apply REST in HTTP, the
SHOULD is followed. You are pointing to an edge case where avoiding
Content-Location can still be made to work. But you haven't explained
why minting those URIs is undesirable, i.e. "works without it" does not
justify avoiding Content-Location. "Compression" justifies avoiding
Content-Location, i.e. ignoring the SHOULD, but I still haven't seen
any other case where that SHOULD shouldn't be taken as a MUST (if, that
is, you're following REST and applying the identification of resources
constraint).

I still wouldn't want to touch a non-compression conneg system that
avoids Content-Location with a ten-foot pole. There is no simpler way
to develop and maintain a conneg system than to assign URIs to variants
(except for compression), even if those URIs aren't exposed beyond the
firewall. I've developed enough conneg systems to know that at some
point, most likely more than one point, I will need to examine variants
directly, bypassing the negotiation mechanism entirely (as opposed to
testing the mechanism by altering selection headers).

To me, this is a stronger argument than any edge case where
Content-Location isn't technically needed by a caching scheme -- I
don't care, assign URIs to your variants anyway, because REST requires
it, and because it would be insane to develop and maintain a conneg
system without doing so (except for compression). Spoken from
experience.

There is still no downside to assigning URIs to variants, so I still
don't see the point in examining edge cases. Why *not* assign URIs to
variants? What is it we're so desperately trying to avoid here, that we
would disregard best practice by ignoring RFC 2616's SHOULD? Not caring
about shared caching isn't a reason, particularly given that this is
rest-discuss, where our concern is targeting the sweet spot in the
deployed Web which allows anarchic scalability (shared caching).

The identification of resources constraint, applied in HTTP by using
Content-Location to assign URIs to variants, allows for anarchic
scalability. Edge cases where that level of scalability isn't required
are not sufficient reason not to apply the constraint anyway, and don't
change best practice. Best practice in REST is to apply REST
constraints and follow HTTP. Assigning URIs to variants is required by
REST and strongly recommended as best practice by HTTP. Even if
avoiding this has no downside today, REST development means not
assuming that tomorrow's needs are the same as today's; design for the
future.

So the only advice I can give about assigning URIs to variants is to do
just exactly that. There is no REST argument *against* doing so, and a
key REST constraint will be met by following this best practice. This
really is as simple as the black-and-white clarity of the advice I keep
giving. Even if one doesn't understand it, I promise you that it's far
easier to learn REST by implementing best practices and learning from
them, than trying to learn REST by avoiding best practices in one's
implementations, then trying to rectify the results with REST
ex-post-facto.

REST should be any Web system's long-term goal. I don't fault a system
for not implementing a constraint, if applying the constraint carries
an immediate cost which outweighs the constraint's long-term benefits.
This is not such a case. Identification of resources is fundamental,
and has no costs to implement. I would even say that avoiding assigning
URIs to variants carries greater immediate costs (in terms of
development hours alone) than are incurred by assigning them. So I
still don't see any theoretical or cost-benefit reasons to avoid
assigning URIs to variants.

-Eric
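The practice argued for above, pairing Vary with a Content-Location that names the selected variant, can be sketched as follows. The URIs and the selection logic are simplified assumptions for illustration:

```python
# A negotiated response that both declares its selection dimension
# (Vary: Accept) and identifies the chosen variant with its own URI
# (Content-Location), so caches and clients can tell variants apart.
VARIANTS = {
    "application/xhtml+xml": "/paper.xhtml",
    "application/atom+xml": "/paper.atom",
}

def negotiate(accept_header):
    for media_type, variant_uri in VARIANTS.items():
        if media_type in accept_header:
            return [
                ("Content-Type", media_type),
                ("Content-Location", variant_uri),  # variant's own URI
                ("Vary", "Accept"),  # caches must key on Accept
            ]
    # No match: fall back to the first variant rather than 406,
    # purely to keep the sketch short.
    media_type, variant_uri = next(iter(VARIANTS.items()))
    return [("Content-Type", media_type),
            ("Content-Location", variant_uri),
            ("Vary", "Accept")]
```

Because each variant URI is stable, it can also be fetched directly, bypassing the negotiation mechanism, which is the maintainability point made above.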
Related discussion here: http://tech.groups.yahoo.com/group/rest-discuss/message/14395 -Eric
> Related discussion here:

Content negotiation involves architecting around some nasty-tricksy
tradeoffs. My solution is to use browser-resident XSLT to transclude
personalized content from a resource implementing authentication-based
conneg redirection, using HTTP Digest authentication.

GET /A responds with an XHTML stub, which calls a static XSLT file.
That XSLT file transforms Atom content from /B into XHTML, and also
transforms an XML personalization file from /C.

Resource /C does not initiate challenge-response in the absence of an
Authorization: header; it responds 200 OK with 'Cache-Control: public'
and its anonymous-user variant. When the request has an Authorization:
header, /C may use role- or user-based logic to redirect to role- or
user-based variants whose Location: is inside a directory which does
initiate challenge-response when no Authorization: header is present.
This redirect can also use HTTPS to secure HTTP Basic authentication,
but I use Digest and Cache-Control: private.

Thus, /A may securely expose an individual user's password in its
rendered steady-state, despite the fact that /A is a public-cached
resource. The drawback is that must-revalidate is required on /C, but
the browser-resident XSLT architecture effectively mitigates this,
allowing /A to scale anarchically, despite personalized content.

-Eric
Of course, there's nothing about my solution that requires XSLT; that's just "my" solution. The same architecture may be implemented using XHR in /A to request JSON personalization data from /C . -Eric
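The behaviour described for resource /C might be sketched like this. All paths and names are illustrative assumptions, and real Digest challenge handling is omitted:

```python
# /C: public anonymous variant when no credentials are presented,
# redirect to a per-user variant (in an access-controlled directory)
# when an Authorization header is present.
def personalization_resource(authorization_header, user):
    if authorization_header is None:
        # Anonymous: shared caches may keep this freely.
        return 200, [("Cache-Control", "public")], "anonymous variant"
    # Authenticated: redirect to the user-specific variant; the target
    # directory is assumed to enforce the Digest challenge on access.
    return 302, [("Location", "/private/%s/personalization.xml" % user),
                 ("Cache-Control", "private")], ""
```

The split lets the shell page stay publicly cacheable while only the small personalization resource pays the price of per-user handling.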
I want to leave the HTTP best practice part of this for a bit. That is
of less interest to me than fully understanding the REST aspects. Once
we have settled whether this is a requirement of the REST style we can
come back to the implementation details.

On Thu, Jun 17, 2010 at 2:16 PM, Eric J. Bowman <eric@...> wrote:
> ... (see below)
> Variants are resources.

I don't think this is required to be true. A variant is a resource if,
and only if, the server decides it is. If the server decides that a
variant is a resource it will assign the variant a resource identifier.

As the dissertation says,[1] a resource R is a temporally varying
membership function MR(t), which for time t maps to a set of entities,
or values, which are equivalent. If the server decides that a
particular XML entity is equivalent to a particular JSON entity, it is
perfectly within its rights to serve both of them as representations of
the same resource. The rules for defining equivalence seem left
entirely to the server's discretion.

I don't see any suggestion in the dissertation, though perhaps I just
missed it, that all non-byte-for-byte-identical entities must, or even
should, be treated as separate resources. Of course a server might
choose to make any entity a resource. Or even more than one resource.
That decision also seems to be left entirely up to the server.

Why do you think that all variants must also be resources in their own
right?

Peter Williams
http://barelyenough.org

[1]: http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm#sec_5_2_1_1

> Peter Williams wrote:
>>
>> > Using Content-Location, we can associate one application/xhtml+xml
>> > variant with multiple combinations of selection headers, i.e. a
>> > one-to-many mapping. This can't be done without some means of
>> > distinguishing one variant from another, without sniffing content.
>> >> Providing a `content-location` allows more efficient caching by >> allowing mapping a variety of selection headers to a single entity in >> caches. Agreed. On the other hand, vigorous use of `etag` would >> provide similar improvements to the cache hit rate. It is a big step >> from "Content-Location can improve cache hit rates" to, "conneg is >> useless without Content-Location". >> > > My position is that assigning URIs to variants is both a REST constraint > and HTTP best-practice. I haven't said "conneg is useless without > Content-Location," particularly as I've kept saying "except for > caching"... I get your meaning, though, but "Content-Location can > improve cache hit rates" is your strawman, not my position. > > Over the course of the thread, I may have staked out too rigid a > position, that the only way to distinguish variants from one another is > by assigning Content-Location URIs to them. You are correct, Etag may > be used to distinguish variants, and this can increase cache hit rates > even when Content-Location is absent. > > But, this does not follow REST, so it does not change my advice... > >> >> A conforming cache will not respond with an inappropriate >> representation if the server sends an appropriate `vary` header. >> > > OK. I was giving one example of aberrant cache behavior, which doesn't > apply to the specifics of using Etag in combination with Vary. My way > of doing things is to make my system compliant with HTTP 1.0 caches to > the fullest extent possible, because last I heard there were still > plenty of HTTP 1.0 caches deployed out there on the real-world Web. > > So to my way of thinking, conneg should work independently of caching > scheme, i.e. Etag or Expires both work when Vary is combined with > Content-Location... which is probably another reason for that SHOULD. > >> >> (Though it might miss a valid chance to serve a cached entity.) 
>> > > The other drawback to relying on Etag to cover for a missing Content- > Location, is that on the real-world, anarchically-scalable Web, myriad > cases exist where a cache may legitimately decide to serve a stale > representation. This loss of control is the tradeoff to caching. By > omitting Content-Location, you're preventing the cache from identifying > the proper variant to send, forcing it to contact the origin server, > which presumably it had good reason to avoid doing (like if that server > is unavailable from the cache's location). When Content-Location is > omitted, much uncertainty is introduced which is otherwise avoided by > following the SHOULD. > >> >> Private caches at the user agent are less susceptible to selection >> criteria explosion. Repeated requests from a single user agent are >> likely to all be quite similar. In my experience private caches are >> far more important than caching intermediates, anyway. >> > > My experience disagrees with your experience. When I first started > doing Web development in late 1993, it was by downloading Mosaic via my > Compuserve account, and creating pages on my local filesystem. My > first experience with HTTP was in 1994, after I'd opened my own ISP. I > was an early member of the Colorado Internet Cooperative Association, > whose board consisted of most of the authors of "UNIX System > Administration Handbook". > > One of whom was Evi (who had a second home in Steamboat Springs, but > went with my non-coop competition because I only offered PPP and she > demanded CSLIP), who, in her position as a professor at CU-Boulder, was > instrumental in the student-led development of squid. The first anyone > really ever heard of squid was at a coop meeting, to an ISP-dominated > audience. So in my (heavily-ISP-weighted) experience, shared caches > are far more important than private. > > But, this is just one preference vs. another. 
I do not take the view > that REST constraints which don't apply to a particular system, are > irrelevant. Thus, constraints intended to increase visibility to > intermediary components are still part of the style, even when we only > care about private caches which don't require us to follow such > constraints. > > You are presenting an edge case of not caring about shared caches, > showing that Content-Location isn't required. I cannot be persuaded > that any edge case nullifies the best-practice advice I'm giving. I > only agree that your edge case exists, not that you're better off by > not meeting the identification of resources constraint. > > REST is the Platonic Ideal for the long-term development of a system -- > just because you're setting Cache-Control: private today, doesn't mean > you shouldn't be able to change it tomorrow, by just changing the Cache- > Control header. If your system wasn't designed with a long-term view > of REST, then you can't just change Cache-Control, you must also add > Content-Location. > > So what I'm saying is, start with Content-Location even if you don't > see an immediate need for it. By making it your habit to follow this > best practice, you'll never regret having avoided it. Instead of > tailoring my solutions to the specific needs of the system I'm > developing, I follow REST and develop a Uniform Interface, because I > know that works in the present and will continue to work in the future, > so I won't have to re-architect any system in response to its evolving > needs. Tweaking an existing system's headers is easier than adding new > headers. > >> >> `content-location` is a terribly useful header. Using it does >> increase the cache hit rates for negotiated resources. However, >> skipping `content-location` in a negotiated response does not violate >> any of the REST constraints that i can see. >> > > Variants are resources. 
As such, REST requires them to be identified, > in order for one variant to be distinguishable from another. Etag does > not meet this constraint, because Etags are transient, in that they > change over time for any given representation. The purpose of > assigning a URI is to declare a static mapping. This is why assigning > URIs to variants is a best practice -- provide one URI for a set of > Etagged entities to map to. > > In HTTP, REST's requirement of assigning URIs to variants is reflected > in the SHOULD about Content-Location. So to apply REST in HTTP, the > SHOULD is followed. You are pointing to an edge case, where avoiding > Content-Location can still be made to work. But you haven't explained > why minting those URIs is undesirable, i.e. "works without it" does not > justify avoiding Content-Location. "Compression" justifies avoiding > Content-Location, i.e. ignoring the SHOULD, but I still haven't seen > any other case where that SHOULD shouldn't be taken as a MUST (if, that > is, you're following REST and applying the identification of resources > constraint). > > I still wouldn't want to touch a non-compression conneg system that > avoids Content-Location with a ten-foot pole. There is no simpler way > to develop and maintain a conneg system, than to assign URIs to > variants (except for compression), even if those URIs aren't exposed > beyond the firewall. I've developed enough conneg systems to know that > at some point, most likely more than one point, I will need to examine > variants directly, bypassing the negotiation mechanism entirely (as > opposed to testing the mechanism by altering selection headers). > > To me, this is a stronger argument than any edge case where Content- > Location isn't technically needed by a caching scheme -- I don't care, > assign URIs to your variants anyway, because REST requires it, and > because it would be insane to develop and maintain a conneg system > without doing so (except for compression). 
Spoken from experience. > > There is still no downside to assigning URIs to variants, so I still > don't see the point in examining edge cases. Why *not* assign URIs to > variants? What is it we're so desperately trying to avoid here, that we > would disregard best practice by ignoring RFC 2616's SHOULD? Not > caring about shared caching isn't a reason, particularly given that > this is rest-discuss, where our concern is targeting the sweet-spot in > the deployed Web which allows anarchic scalability (shared caching). > > The identification of resources constraint, applied in HTTP by using > Content-Location to assign URIs to variants, allows for anarchic > scalability. Edge cases where that level of scalability aren't > required, are not sufficient reason not to apply the constraint anyway, > and don't change best practice. Best practice in REST is to apply REST > constraints and follow HTTP. Assigning URIs to variants is required by > REST and strongly recommended as best practice by HTTP. Even if > avoiding this has no downside today, REST development means not assuming > that tomorrow's needs are the same as today's; design for the future. > > So the only advice I can give about assigning URIs to variants, is to > do just exactly that. There is no REST argument *against* doing so, > and a key REST constraint will be met by following this best practice. > This really is as simple as the black-and-white clarity of the advice I > keep giving. Even if one doesn't uderstand it, I promise you that it's > far easier to learn REST by implementing best practices and learning > from them, than trying to learn REST by avoiding best practices in one's > implementations, then trying to rectify the results with REST ex-post- > facto. > > REST should be any Web system's long-term goal. I don't fault a system > for not implementing a constraint, if applying the constraint carries > an immediate cost which outweighs the constraint's long-term benefits. 
> This is not such a case. Identification of resources is fundamental, > and has no costs to implement. I would even say that to avoid > assigning URIs to variants, carries greater immediate costs (in terms > of development hours alone) than are incurred by assigning them. So I > still don't see any theoretical or cost-benefit reasons to avoid > assigning URIs to variants. > > -Eric >
Peter Williams wrote:
>
> > Variants are resources.
>
> I don't think this is required to be true.
>

It's simply a fact. We don't require glass to be a liquid in casual conversation -- it would be confusing to speak of a liquid shattering into a million pieces. But this doesn't change the simple fact that glass is a liquid -- knowledge required for any discussion of the chemistry or physics of glass. Any discussion of content negotiation (except compression -- or, to be more technically accurate, aside from variance in Transfer-Encoding or similar variance in wrapping the entity) requires accepting the fact that variants are also resources in their own right.

> A variant is a resource if, and only if, the server decides it is. If
> the server decides that a variant is a resource it will assign the
> variant a resource identifier.

This view makes it impossible to violate the identification of resources constraint. A variant is a resource in its own right, whether the server assigns it a resource identifier or not. Failure to assign URIs to resources is not only possible, it violates that constraint. If resources are only what the server says they are, the constraint is inviolable by definition, but that's not actually the case.

> As the dissertation says,[1] a resource R is a temporally varying
> membership function MR(t), which for time t maps to a set of
> entities, or values, which are equivalent.

Exactly. A resource is simply an abstraction. R1 is a negotiated resource; V is a member of R1. R2 is not a negotiated resource, and also contains V as a member. Calling variant V a resource in its own right is merely acknowledging the existence of R2, whether R2 has a URI or not.

"In other words, any concept that might be the target of an author's hypertext reference must fit within the definition of a resource."

We wish to distinguish one variant from another by recognizing R2 and assigning it a URI. If the concept of R2 is to become a hypertext target like that, all REST says is that R2 must fit the definition of a resource, which it does. REST does not imply that once a variant has been identified as a member of R, it cannot also belong to R2.

> The rules for defining equivalence seem left entirely to the server's
> discretion.

Yes, they are. Whatever variants the server assigns to one URI also happen to be resources in their own right, whether the server identifies them as such or not -- just as glass is a liquid, whether we call it that or not. Being an equivalent variant of one resource does not restrict the variant from also representing some other resource:

"For example, the 'authors' preferred version' of an academic paper is a mapping whose value changes over time, whereas a mapping to 'the paper published in the proceedings of conference X' is static. These are two distinct resources, even if they both map to the same value at some point in time. The distinction is necessary so that both resources can be identified and referenced independently."

Note that Roy gives an example of a variant (nothing prevents the authors' preferred version from being a negotiated resource) that's a member of two different sets. Nothing is wrong with assigning multiple conceptual mappings to an entity. In fact, this distinction is necessary in order to distinguish negotiated variants from one another. ;-)

> I don't see any suggestion in the dissertation, though perhaps i just
> missed it, that all non-byte-for-byte-identical entities must, or
> even should, be treated as separate resources.

That's my wording; all I mean is varying by Transfer-Encoding, but that's HTTP, which you want to avoid. Unless we're talking about compression (or some other use of Transfer-Encoding I'm not aware of), we're talking about some variance between entities which makes them not byte-for-byte the same.

> Why do you think that all variants must also be resources in their
> own right?

If a resource varies between HTML and Atom representations, then those two variants, taken separately, must have some abstract definition that is different from one another, and is also different from the abstract definition of the negotiated resource whose URI they share. R = concept as HTML or Atom; R1 = concept as HTML and only HTML; R2 = concept as Atom and only Atom. This is an absolute given, except for the HTTP case of Transfer-Encoding, or any similar case where the variance is confined to a wrapper which does not alter the underlying entity (what I meant by byte-for-byte the same).

Abstractions R1 and R2 exist separately from abstraction R; therefore three resources exist. To assign URIs to R1 and R2 is to apply the identification of resources constraint. To fail is to violate the constraint (refusing to call glass a liquid).

-Eric
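To make the three-resource view concrete, here is an illustrative HTTP exchange (the /concept URIs are hypothetical examples, not from any system discussed in this thread): a GET on the negotiated resource R can label the selected variant with Content-Location, acknowledging that R2 is a resource in its own right.

```
GET /concept HTTP/1.1
Host: example.org
Accept: application/atom+xml

HTTP/1.1 200 OK
Vary: Accept
Content-Type: application/atom+xml
Content-Location: /concept.atom
```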
On Fri, Jun 18, 2010 at 8:23 AM, Eric J. Bowman <eric@...> wrote:
> Peter Williams wrote:
>>
>> > Variants are resources.
>>
>> I don't think this is required to be true.
>>
>
> It's simply a fact. We don't require glass to be a liquid in casual
> conversation -- it would be confusing to speak of a liquid shattering
> into a million pieces. But this doesn't change the simple fact that
> glass is a liquid -- knowledge required for any discussion of the
> Chemistry or Physics of glass.

[off-topic] Classifying glass as a liquid or solid is not simple. And either classification wouldn't be fact even if many physicists, our common sense, and Wikipedia concluded it's a solid. The "glass debate" itself seems analogous to this discussion, in that there are lots of legitimate, reasonable arguments and theories, and very little fact. [/off-topic]

...now back to your regularly scheduled programming... :)

--tim
Are arbitrary parameters allowed for all Media Types? Are the
allowable ones defined in the Media Type documentation itself, or
where? For some context, Jersey's mechanism for allowing the server
to have a preferred media type is to use a "quality of source" or "qs"
parameter in its Produces annotation[1]. Something like:
@Produces("application/xml;qs=2")
public Object getAsXml()
In the response, the "qs" parameter then comes along in the
Content-Type header. I think, as Paul suggested in that mail, clients
will ignore it, but it got me curious about the more general case.
Does section 3.7[2] mean that arbitrary parameters are allowed?
Thanks,
--tim
[1] - https://jersey.dev.java.net/servlets/BrowseList?list=users&by=thread&from=2968827
[2] - http://www.w3.org/Protocols/rfc2616/rfc2616-sec3.html#sec3.7
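As background to the question, the way a server-side source quality interacts with the client's Accept header can be sketched in plain Java. This is an illustration only: `ConnegSketch` and `pick` are hypothetical names, not Jersey API, and the sketch assumes the Apache-style rule of multiplying the client's q by the server's qs and letting the highest product win.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ConnegSketch {
    // Illustrative sketch (not Jersey's actual implementation):
    // combine the client's Accept q-values with server-side source
    // quality ("qs") by multiplying them; the highest product wins.
    static String pick(Map<String, Double> clientQ, Map<String, Double> serverQs) {
        String best = null;
        double bestScore = -1.0;
        for (Map.Entry<String, Double> e : serverQs.entrySet()) {
            // Fall back to the client's "*/*" preference when the
            // concrete media type isn't listed in Accept.
            double q = clientQ.getOrDefault(e.getKey(), clientQ.getOrDefault("*/*", 0.0));
            double score = q * e.getValue();
            if (score > bestScore) {
                bestScore = score;
                best = e.getKey();
            }
        }
        return best;
    }

    public static void main(String[] args) {
        Map<String, Double> serverQs = new LinkedHashMap<>();
        serverQs.put("application/xml", 2.0);   // as if @Produces("application/xml;qs=2")
        serverQs.put("application/json", 1.0);  // as if @Produces("application/json;qs=1")

        // Client has no preference (Accept: */*): the server's qs breaks the tie.
        System.out.println(pick(Map.of("*/*", 1.0), serverQs));

        // Client prefers JSON: 1.0 * 1.0 beats 0.1 * 2.0.
        System.out.println(pick(Map.of("application/json", 1.0, "*/*", 0.1), serverQs));
    }
}
```

Under this rule, "qs" only matters on the server side; as discussed below, it carries no defined meaning to clients when echoed in a response.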
On Fri, Jun 18, 2010 at 1:23 PM, Eric J. Bowman <eric@...> wrote:
> Peter Williams wrote:
>>
>> A variant is a resource if, and only if, the server decides it is.
>> If the server decides that a variant is a resource it will assign
>> the variant a resource identifier.
>
> This view makes it impossible to violate the identification of
> resources constraint. A variant is a resource in its own right,
> whether the server assigns it a resource identifier or not. Failure
> to assign URIs to resources is not only possible, it violates that
> constraint. If resources are only what the server says they are, the
> constraint is inviolable by definition, but that's not actually the
> case.

This of course assumes that you haven't fabricated the "existing" possibility out of thin air... how is this any different as you stand now? Are you sure you're applying that constraint in the correct context? It makes sense that you can't violate the identification of resources constraint within a specific system, like the web, that implements Uniform Resource Identifiers.

>> Why do you think that all variants must also be resources in their
>> own right?
>
> If a resource varies between HTML and Atom representations, then
> those two variants, taken separately, must have some abstract
> definition that is different from one another, and is also different
> from the abstract definition of the negotiated resource whose URI
> they share.

No, they have the *same* abstract definition -- which is the reason they are identified as representations of a resource that can be negotiated, and not as completely independent resources. The difference between an Atom and an HTML representation is *not* abstract at all, Eric; that is why you are able to transform via XSLT from Atom to HTML.

Cheers,
Mike
I may be wrong here, but as far as I understand, the quality parameter only makes
sense in requests (for content negotiation), not in responses.
_________________________________________________
Melhores cumprimentos / Beir beannacht / Best regards
António Manuel dos Santos Mota
http://card.ly/amsmota
_________________________________________________
On 18 June 2010 15:02, Tim Williams <williamstw@...> wrote:
>
>
> Are arbitrary parameters allowed for all Media Types? Are the
> allowable one's defined in the Media Type documentation itself or
> where? For some context, Jersey's mechanism for allowing the server
> to have a preferred media type is to use a "quality of source" or "qs"
> parameter in its Produces annotation[1]. Something like:
>
> @Produces("application/xml;qs=2")
> public Object getAsXml()
>
> In the response, the "qs" then parameter comes along in the
> ContentType header. I think, as Paul suggested in that mail, clients
> will ignore it, but it got me curious about the more general case.
> Does section 3.7[2] mean that arbitrary parameters are allowed?
>
> Thanks,
> --tim
>
> [1] -
> https://jersey.dev.java.net/servlets/BrowseList?list=users&by=thread&from=2968827
> [2] - http://www.w3.org/Protocols/rfc2616/rfc2616-sec3.html#sec3.7
>
>
2010/6/18 António Mota <amsmota@...>:
>
> I may be wrong here, but as far as I understand the quality only make sense in requests (for content-negotiation), not in responses.

Hey Antonio, you're right -- this isn't the "q" parameter that's in requests, though; it's Jersey's "qs" parameter, which allows the server to indicate its preferred representation when the client has no obvious preference. For example, when the only Accept that matches is */*, the "qs" allows the developer to indicate which representation is preferred.

--tim
You're right of course. I misread the question -- especially since I also use Jersey and didn't know it supported that...

_________________________________________________
Melhores cumprimentos / Beir beannacht / Best regards
António Manuel dos Santos Mota
http://card.ly/amsmota
_________________________________________________

2010/6/18 Tim Williams <williamstw@gmail.com>:
> 2010/6/18 António Mota <amsmota@...>
>>
>> I may be wrong here, but as far as I understand the quality only make sense in requests (for content-negotiation), not in responses.
>
> Hey Antonio, you're right, this isn't the "q" parameter that's in
> requests though, it's Jersey's "qs" parameter that allows the server
> to indicate its preferred representation when the client has no
> obvious preference. For example, when the only Accept that matches is
> */*, the "qs" allows the developer to indicate which representation
> is preferred.
>
> --tim
Tim Williams wrote:
>
> Classifying glass as a liquid or solid is not simple. And either
> classification wouldn't be fact even if many physicists, our
> commonsense, and wikipedia conclude it's a solid.

Well, Wikipedia is hardly a normative reference for anything... but even Wikipedia recognizes that a true solid has crystalline structure, classifies glass as an 'amorphous solid', and recognizes that "glass crystal" is a common misnomer. Wikipedia also states that glass is made by heating SiO2 until it melts -- it's hard to conceive of a solid substance melting into another solid substance.

Whether you want to call glass a liquid or an amorphous solid, the fact remains that it has no crystalline structure, doesn't cool back into crystalline SiO2, and therefore cannot be classified as a true solid -- which is why it must be qualified with the term "amorphous" when it's called a solid. The definition of amorphous means that it can't be definitively classified as a solid.

If there's a flaw in my analogy, it's that I fudged and claimed glass to be a liquid, when I should have asserted simply that it isn't a true solid. But in lay terms, non-solids are liquids or gases, and no distinction is made between amorphous and true solids. So the argument is one of precise terminology.

Which leads us back to REST. The claim that a representation is a resource is wrong, of course. The common advice to treat variants as resources in their own right should really be "treat variants also as representations of another resource", but that's more confusing to say.

-Eric
On Fri, Jun 18, 2010 at 9:00 AM, Eric J. Bowman <eric@...> wrote:
>
> Which leads us back to REST. The claim that a representation is a
> resource is wrong, of course. The common advice to treat variants as
> resources in their own right, should really be "treat variants also as
> representations of another resource" but that's more confusing to say.
I think what you are saying is something like the following: for a
resource R1 with more than one member, there must exist some other
resources whose memberships are {e in R1, e acceptable to a specific
subset of clients}, such that every member of R1 is the sole member of
one of these other resources.
This proposition does not follow from any of the constraints. Nor is
it covered explicitly in the dissertation. It is also not required,
at least in principle, to achieve any of the desired qualities of a
REST style architecture.
Given a resource R={E1,E2}, if there were no resource R2 = {one e in R,
e acceptable to clients to which E1 is acceptable}, no REST constraints
would be violated unless you have a preexisting notion that R2 must
exist. The constraint violation you keep bringing up is dependent on
the assertion that the resource in question exists. IOW, it seems your
argument is: the resource R2 must exist because, if it does not, then
you have failed to identify the resource R2.
The above seems complicated to the point of absurdity. On the other
hand, when approaching the question of when variants should be
resources from an application standpoint, it becomes quite simple: if
a variant is individually useful in the application or implementation
domain, it should be modeled as a resource in its own right.
Circling back to content-location in the conneg context. I read the
RFC as allowing variants to be identified in the content-location
header if the variants have separate locations. I do not read it to
mean that all variants *should* have locations.
Handling conneg by redirecting to the appropriate resource would have
significant caching benefits. However, I think that using etags would
result in better cache behavior than Content-Location for resources
with multiple representations. A cache can form more expressive
conditional requests using If-None-Match than it can with
If-Modified-Since when multiple representations are in play. There
would certainly be no problem with providing both if you wanted to,
though.
Peter Williams
http://barelyenough.org
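Peter's point about If-None-Match being more expressive can be sketched in a few lines of Java. This is a minimal illustration under assumed entity-tag values; `ifNoneMatchHeader` and `canSend304` are hypothetical helpers, not any library's API. The idea: a cache holding several variants of one URI can present *all* their entity-tags at once, something If-Modified-Since cannot express.

```java
import java.util.List;

public class CacheValidationSketch {
    // A cache holding multiple variants of one URI builds a single
    // If-None-Match header listing every entity-tag it has stored.
    static String ifNoneMatchHeader(List<String> storedEtags) {
        return String.join(", ", storedEtags);
    }

    // Hypothetical origin-side check: does the etag of the variant
    // selected by conneg appear among those the cache presented?
    // If so, the origin can answer 304 and name that etag.
    static boolean canSend304(String currentEtag, String ifNoneMatch) {
        for (String tag : ifNoneMatch.split(",\\s*")) {
            if (tag.equals(currentEtag)) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        // The cache holds both the HTML and the Atom variant of one URI.
        List<String> stored = List.of("\"v1-html\"", "\"v1-atom\"");
        String header = ifNoneMatchHeader(stored);
        System.out.println("If-None-Match: " + header);

        // The origin negotiates the Atom variant; its etag is unchanged,
        // so it may reply 304 Not Modified with ETag: "v1-atom".
        System.out.println(canSend304("\"v1-atom\"", header));
    }
}
```

With If-Modified-Since, by contrast, the cache can only offer a single date, which says nothing about *which* stored variant is being revalidated.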
On Jun 18, 2010, at 11:47 AM, Peter Williams wrote:
> Circling back to content-location in the conneg context. I read the
> RFC as allowing variants to be identified in the content-location
> header if the variants have separate locations. I do not read it to
> mean that all variants *should* have locations.

That's right. Content-Location is similar to the other Content-XXX headers, and has at least two usages:

- when a variant has a URI, as discussed in this thread;
- when the representation in the response does not correspond to the request URI.

The latter is more useful for POST (for 201) and PUT cases. When the server decides to include the representation of the created/updated resource, it ought to tell the client via Content-Location.

Subbu
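The second usage can be sketched as an HTTP exchange (the URIs are hypothetical): after a POST creates a resource, the 201 response carries the new resource's representation and names it with Content-Location.

```
POST /articles HTTP/1.1
Host: example.org
Content-Type: application/atom+xml;type=entry

HTTP/1.1 201 Created
Location: http://example.org/articles/42
Content-Location: http://example.org/articles/42
Content-Type: application/atom+xml;type=entry
```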
As far as HTTP is concerned, yes, arbitrary params are allowed. But their semantics are up to the media type specifications (for the sake of interop).
Subbu
On Jun 18, 2010, at 7:02 AM, Tim Williams wrote:
> Are arbitrary parameters allowed for all Media Types? Are the
> allowable one's defined in the Media Type documentation itself or
> where? For some context, Jersey's mechanism for allowing the server
> to have a preferred media type is to use a "quality of source" or "qs"
> parameter in its Produces annotation[1]. Something like:
>
> @Produces("application/xml;qs=2")
> public Object getAsXml()
>
> In the response, the "qs" then parameter comes along in the
> ContentType header. I think, as Paul suggested in that mail, clients
> will ignore it, but it got me curious about the more general case.
> Does section 3.7[2] mean that arbitrary parameters are allowed?
>
> Thanks,
> --tim
>
> [1] - https://jersey.dev.java.net/servlets/BrowseList?list=users&by=thread&from=2968827
> [2] - http://www.w3.org/Protocols/rfc2616/rfc2616-sec3.html#sec3.7
>
>
OK, I digested it now :-) (back from vacation)

I agree pretty much with everything you said. So far my focus since launching this thread has been on understanding what the characteristics of a RESTful system are. In doing that, I have also been looking at existing implementations -- OpenRasta, RestSharp, Restfulie and Jersey amongst them.

Rather than looking at it from the 'how does this fit in WCF's existing frame?' angle, I have been taking the approach of: what would the right API look like? What does the developer need? etc.

My question is in that context, asking those who have far more experience than I do what works and what doesn't from an API / authoring-experience perspective.

Regards

On Saturday, June 12, 2010, William Martinez Pomares <wmartinez@...> wrote:
>
> Hello Glen.
> As I just posted a comment with some hints about how to evaluate a toolchest/framework, I may want to open another lead here.
>
> One way is to actually look at how other frameworks (mostly Java, I hear) deal with the idea. The other way is to actually work on understanding the REST style, why the interaction is how it is, and what happens in the network.
>
> Why do I say so? Because the successful API definition depends on that understanding and on the goal you are trying to achieve. Bear with me please:
>
> 1. REST explains the constraints you impose on your architecture to allow better performance, reliability, visibility, and scalability for large-grain hypermedia transfer applications in networked solutions.
>
> 2. A RESTful service is an idea of using the guidelines of REST to expose services on the web. Some ways work better and attach to more constraints than others, but in general the idea is to have one initial URI, a starting point to a set of states, whose transitions are governed by hyperlinks, and whose actions are focused on manipulating resources using a standard interface (HTTP methods).
>
> 3.
The constraints include self-description of messages, cache support, client/server separation of concerns and possible code-on-demand support. And, of course, use of hypermedia to define and control application state.
>
> 4. In WCF, one important thing is the generic interface. That is, the service Endpoint (Address, binding and contract), down to the binding elements. Behind all that is a description of interaction, an API definition. How to match that to the REST service? Interesting question.
>
> 5. Let's start on the server side.
> a. One first thing is the possibility to define a serviceEndpoint as unique, meaning just one URI to start it all.
> b. The other need is of course the ability to produce different kinds of media types to serve representations. If the idea is to avoid a bare-bones implementation for developers, we may want to abstract the content negotiation so it is somehow automatic. The client will request what can be served, or request some particular representation. On the server side we only need to define the representation source and transformation. (Say, Mr. WCF, this record in the database you can publish as JSON, XML or URL-encoded; here is the mapping, take care of it when requested, thanks.)
> c. What about defining the resources and a possible state engine? Even setting a URI generation template. All that to add the generated URIs into the representations. Of course, for each resource type, define the HTTP operations. See next point.
> d. We only support HTTP methods. So, no [OperationContract] String sayHello(String name); things. sayHello is a method you can use internally, but that RPC stuff may be heresy in the REST world (not for some, I know). But it may be [OperationContract(HTTPMethod=POST)] String sayHello(String body); where the String argument is the body sent in the POST and the returned String the response. All the other POST metadata and control may be defined with other artifacts. If you want to excel, you can design that to use any protocol, not just HTTP.
> e. All that means all the HTTP plumbing is hidden, plus an easier way to provide automatic representation control, metadata control and response.
>
> 6. What about the client? Almost the same thing. In the ideal world clients are given just one URI, and build their path dynamically from there on. In the real world, they usually know what the steps are, and the dynamic URIs (hypermedia usage) are there to identify specific instances of already-known expected types. Any ideas to improve that? A nice client that runs by itself and starts following links and things, just stopping to ask me about data or decide on options/paths here and there, would be nice. Not surprisingly, that describes a browser.
>
> 7. But all I have said is too nailed to the ground. On the high side, the idea is to allow client and server independent evolution, since no coupling is done at interface level. That calls for an automatic interaction thing that allows me to focus on resource and representation definition, plus the state map, and on the client side to worry about a goal definition. Caching, gateways, all that is invisible.
>
> 8. There is something else, one last thing: the layered constraint in REST. The idea is that you can have interim nodes, which may even parse and partially process the payload of the message. In this case, the client only sees the next layer, and that one may see the next one.
>
> Again, I suggest you start by taking a look at the real implementations that are done bare-bones, understanding the idea of the style, and see if that fits into WCF's generic definition or not.
>
> Cheers.
>
> William Martinez Pomares
>
> --- In rest-discuss@yahoogroups.com, Glenn Block <glenn.block@...> wrote:
>>
>> Hi guys
>>
>> I've been trolling for a few weeks :-) I work on the WCF team at Microsoft.
>> We're currently in the very early stages of planning for new APIs for
>> supporting pure REST and HTTP style development. Our goal is to create
>> something simple, lightweight and true to form. We are looking to provide a
>> natural API both for the service author and for the consumer. This is not an
>> attempt to simply retrofit onto a SOAP based model.
>>
>> It would be great to hear thou
I have not looked at NetKernel for a long time. It is based on Jetty for the HTTP part; I tried to migrate the Jetty in NetKernel to a newer version a couple of years ago... I would recommend node.js for similar features instead, if that is not comparing apples to oranges.

Cheers,
Dong

On Tue, Jun 15, 2010 at 2:11 PM, Eric J. Bowman <eric@...> wrote:
>
> Jan Vincent wrote:
>>
>> Are there well-known alternatives to HTTP for building REST services?
>
> No. Well, no to the "well-known" part, anyway. If you want to take
> advantage of REST architectural strengths like caching or conneg, then
> there's only one protocol which supports it worth speaking of, and
> that is, of course, HTTP 1.1 -- no other well-known protocol has these
> features. You can build a REST system using FTP if you want, but it
> won't scale like HTTP, even with less overhead.
>
>> When doing small-scale internal services, I still find a RESTful
>> architecture useful; however, the overhead of HTTP seems to be
>> noticeable. I was wondering if there are widely used alternatives
>> that focus on performance in the same manner that some RPC tools do
>> (Protocol Buffers, Thrift).
>
> I've only very recently been pointed to NetKernel and NetKernel
> Protocol. Perhaps this is what you're after?
>
>> Also, on media types, are there well-known media types that are
>> relatively cheap to parse? For one, I'm keeping my eye on
>> BSON: http://bsonspec.org/ as an alternative to JSON.
>
> Not that I know of; however, parse efficiency isn't a REST concern.
>
> -Eric
As I'm sure you are well aware, the best API is the one that best matches the use case. I know you took this approach with both Prism and MEF; I think taking the same approach here will once again lead you to your desired location.

You mentioned that you are looking at APIs for both client and server. I think tools like Restfulie, RestClient, etc. do a great job of solving the client issue. You need something simple and flexible that can follow directions provided by another party (the server). REST clients need to act very similarly to human users. I think we have some fairly good solutions here. No doubt they could and will be improved.

On the server side, however, we have a large hole, imho. I really like that Restfulie adds a fix for that hole in the form of hooking up a state machine (of sorts) to help drive the interaction by sending the next steps to the client. I may be wrong, but I think Restfulie is the only one that does this now. If you can help solve this problem, even if it is not required to use the complete solution, you will have solved a major hurdle in applying REST. Again, that is just my opinion.

Aside from the state machine, you need a simple way of applying the other constraints. One solution seems to be making enormous headway: Sinatra <http://www.sinatrarb.com>. We now have Denied <http://denied.immersedcode.org/> (Python), Flask <http://flask.pocoo.org/> (Python), Express <http://github.com/visionmedia/express/> (JavaScript), Jack <http://jackjs.org/> (JavaScript), and Padrino <http://padrinorb.com> (Ruby, built on top of Sinatra). In other words, this *simple* DSL for writing web apps appears to be effective and is inspiring copycats. (We've discussed Frank previously, which is my own attempt at doing this in F# for .NET.) In turn these are all built upon similar, *simple*, to-the-metal server interfaces: Rack <http://rack.rubyforge.org/> (Ruby), WSGI <http://wsgi.org/> (Python), and JSGI <http://jackjs.org/jsgi-spec.html> (JavaScript). (I'm in the middle of something like this <http://github.com/panesofglass/frack> for .NET.)

You'll note I highlighted *simple*. The biggest problem I've had with WCF to date is all the config. Sinatra and its kith put the config up front in the code, which allows you to explicitly declare your intent. The server interface handles the rest. (We could go further into the middlewares thing, but this is already long.) I'm not saying WCF has to go this way, but the number of players indicates this is a useful approach that is easy to pick up and use quickly. It's also extensible. Look at Padrino: you have Rack -> Sinatra -> Padrino. They stack nicely and keep a consistent interface no matter what level of complexity you need for your app. The latest Changelog show <http://thechangelog.com/post/708173099/episode-0-2-7-padrino-ruby-web-framework> indicated that Padrino is fine handling large apps. That means that even full enterprise environments can run on this stuff. (Of course, hard evidence would be nice. :) )

Try starting there. Work outwards, but try not to lose the simplicity. Find what works by building real applications that follow the REST constraints.

Cheers,
Ryan Riley

Email: ryan.riley@...
LinkedIn: http://www.linkedin.com/in/ryanriley
Twitter: @panesofglass
Blog: http://wizardsofsmart.net/
Website: http://panesofglass.org/

On Mon, Jun 21, 2010 at 3:58 AM, Glenn Block <glenn.block@...> wrote:
>
> OK, I digested it now :-) (back from vacation)
>
> I agree pretty much with everything you said. So far my focus since
> launching this thread has bend on understanding what the
> characteristics of a RESTful system are. In doing that, I have also
> been looking at existing implementations, Ie OpenRasta, RestSharp,
> Restfulie and Jersey amongst them.
>
> Rather than looking at it from the 'how does this fit in WCF's
> existing frame?' angle, I have been taking the approach of what would
> the right API look like? what does the developer need?, etc.
> > My question is in that context and asking those who have far more > experience than & do, what works and what doesn't from an api / > authoring experience perspective. > >
Hello All,
I am experiencing a memory/thread leak with Restlet-2.0-RC4 and
Restlet-2.0-SNAPSHOT when using ClientResource. Basically,
ClientResource doesn't close the thread it spawns, and this results in
a number of inactive threads and a severe memory leak.

Here is some very simple code to illustrate this behaviour. The same
code runs fine in Restlet-2.0-M6 (which doesn't spawn a new thread in
ClientResource).
public void run(int instances) throws Exception {
    for (int i = 0; i < instances; i++) {
        ClientResource clientResource = null;
        Representation r = null;
        try {
            clientResource = new ClientResource("http://restlet.org");
            r = clientResource.get();
        } finally {
            try { r.release(); } catch (Exception x) {}
            try { clientResource.release(); } catch (Exception x) {}
        }
    }
}

public static void main(String[] args) throws Exception {
    ThreadTest test = new ThreadTest();
    test.run(1000);
}
I guess there might be something missing in the code to explicitly close
threads, but since the same code runs fine in M6, it is quite confusing
to experience leaks after the upgrade.
Best regards,
Nina Jeliazkova
P.S. Inactive threads while executing the example above.
Shouldn't you be asking about this kind of thing on the Restlet mailing
lists?
Craig
On Mon, Jun 21, 2010 at 11:20 PM, Nina Jeliazkova <nina@...> wrote:
> Hello All,
>
> I am experiencing memory/thread leak ,with Restlet-2.0-RC4 and
> Restlet-2.0-SNAPSHOT , when using ClientResource . Basically, ClientResource
> doesn't close the thread it spawns and this result in number of inactive
> threads and severe memory leak.
>
> Here is some very simple code to illustrate this behaviour. The same code
> runs fine in Restlet-2.0-M6 (which doesn't span new thread in
> ClientResource).
>
> public void run(int instances) throws Exception {
>
> for (int i=0; i < instances;i++) {
> ClientResource clientResource = null;
> Representation r = null;
> try {
> clientResource = new ClientResource("http://restlet.org"<http://restlet.org>
> );
> r = clientResource.get();
> } finally {
> try { r.release(); } catch (Exception x) {}
> try { clientResource.release(); } catch (Exception x) {}
> }
> }
> }
>
> public static void main(String[] args) throws Exception {
> ThreadTest test = new ThreadTest();
> test.run(1000);
> }
>
>
> I guess there might be something missing in the code to explicitly close
> threads, but since the same code runs fine in M6, it is quite confusing to
> experience leaks after upgrade.
>
> Best regards,
> Nina Jeliazkova
>
> P.S. Inactive threads while executing the example above.
>
>
>
Sorry, wrong list, my mistake!
nina
Craig McClanahan wrote:
>
>
> Shouldn't you be asking about this kind of thing on the Restlet
> mailing lists?
>
> Craig
>
> On Mon, Jun 21, 2010 at 11:20 PM, Nina Jeliazkova <nina@...
> <mailto:nina@...>> wrote:
>
> Hello All,
>
> I am experiencing memory/thread leak ,with Restlet-2.0-RC4 and
> Restlet-2.0-SNAPSHOT , when using ClientResource . Basically,
> ClientResource doesn't close the thread it spawns and this result
> in number of inactive threads and severe memory leak.
>
> Here is some very simple code to illustrate this behaviour. The
> same code runs fine in Restlet-2.0-M6 (which doesn't span new
> thread in ClientResource).
>
> public void run(int instances) throws Exception {
>
> for (int i=0; i < instances;i++) {
> ClientResource clientResource = null;
> Representation r = null;
> try {
> clientResource = new
> ClientResource("http://restlet.org" <http://restlet.org>);
> r = clientResource.get();
> } finally {
> try { r.release(); } catch (Exception x) {}
> try { clientResource.release(); } catch (Exception
> x) {}
> }
> }
> }
>
> public static void main(String[] args) throws Exception {
> ThreadTest test = new ThreadTest();
> test.run(1000);
> }
>
>
> I guess there might be something missing in the code to explicitly
> close threads, but since the same code runs fine in M6, it is
> quite confusing to experience leaks after upgrade.
>
> Best regards,
> Nina Jeliazkova
>
> P.S. Inactive threads while executing the example above.
>
>
>
>
>
>
Mike Kelly wrote: > > >> > >> A variant is a resource > >> if, and only if, the server decides it is. If the server decides > >> that a variant is a resource it will assign the variant a resource > >> identifier. > >> > > > > This view makes it impossible to violate the identification of > > resources constraint. A variant is a resource in its own right, > > whether the server assigns it a resource identifier or not. > > Failure to assign URIs to resources is not only possible, it > > violates that constraint. If resources are only what the server > > says they are, the constraint is inviolable by definition, but > > that's not actually the case. > > This of course assumes that you haven't fabricated the "existing" > possibility out of thin air.. how is this any different as you stand > now? > I can't make heads or tails out of that question. > > Are you sure you're applying that constraint in the correct context? > Absolutely. > > It makes sense that you can't violate the identification of resources > constraint within a specific system, like the web, that implements > Uniform Resource Identifiers. > I assure you, the identification of resources constraint is not met by choosing one resource identification scheme over another. As I mentioned earlier in this thread, it would be interesting to discuss the merits of using URNs in Content-Location -- a discussion which presupposes that variants need identifiers (REST), instead focusing on implementation (HTTP). Failure to use URIs to identify resources, is not a violation of the identification of resources constraint. Failure to identify resources within your system by assigning them identifiers, violates the constraint. It only "makes sense that you can't violate the identification of resources constraint within a specific system" if you are trying to define REST in a way which fits your agenda, or if you are misunderstanding what a resource *is*. 
> > >> > >> Why you think that all variants must also be resources in their own > >> right? > >> > > > > If a resource varies between HTML and Atom representations, then > > those two variants, taken separately, must have some abstract > > definition that is different from one another, and is also > > different from the abstract definition of the negotiated resource > > whose URI they share. > > No, they have the *same* abstract definition - which is the reason > they are identified as representations of a resource that can be > negotiated, and not completely independent resources. > Two variants have the same abstract definition, if they share a URI. This does not restrict those variants from each having an infinite number of other possible abstractions. Applying the identification of resources constraint, consists of assigning identifiers to any other abstractions important to the system. When you are varying media types (or anything besides compression), the abstractions of each variant of /concept as a specific media type (i.e. /concept.atom and /concept.html as 'concept as Atom and only Atom' and 'concept as HTML and only HTML') are important to the system, and are different from 'concept as varying media type'. This holds true whether you assign URIs to identify these additional resources, or not. Your counter to this, is to keep insisting that once you've declared a negotiated resource, its variants are not allowed to be representations of any other resource, regardless of how many times it's pointed out to you that REST explicitly endorses this very behavior (author's preferred version). The only thing you can accomplish by dragging this debate out, is exposing flaws in one explanation or another that I've posited, which does not change the best practice of assigning URIs to variants like RFC 2616 says you SHOULD. IOW, I'm not arguing that assigning URIs to variants *should* be best practice. I'm explaining *why* it *is* best practice. 
Rejecting my explanations will not change established best practice.
Nor will failing to explain how this best practice violates REST --
insisting that a variant can only have one URI, that of the negotiated
resource, goes against REST completely.

>
> The difference between an Atom and HTML representation is *not*
> abstract at all Eric, that is why you are able to transform via XSLT
> from Atom to HTML.
>
I can't make heads or tails out of that statement. How do you get from
my statement that variants have multiple abstractions, to a claim that
the differences between variants are abstract? If anything, I've
pointed out that the differences between "concept as HTML and only
HTML" and "concept as Atom and only Atom" are anything but abstract.
These concrete differences are exactly why these variants require
different abstractions to be identified, i.e. why variants must be
treated as resources in their own right. No such concrete differences
exist with compression variants, making them unimportant to the system,
and therefore not requiring URIs.

Granted, XSLT may be used to transform Atom into HTML, but the
semantics of Atom are lost -- the user agent must be informed of, and
understand, some domain-specific vocabulary to map the semantics of
HTML to the semantics of Atom -- you can't tell just from the media
type what hypertext relates to "author" and what hypertext relates to
"updated" or anything else.

Atom and HTML variants may represent the same concept when taken
together, but when taken separately they also represent other concepts.
A failure to recognize that these variants also represent other
resources, by identifying those resources, is a failure to apply the
identification of resources constraint. Period.

-Eric
Peter Williams wrote:
>
> > Which leads us back to REST. The claim that a representation is a
> > resource is wrong, of course. The common advice to treat variants
> > as resources in their own right, should really be "treat variants
> > also as representations of another resource" but that's more
> > confusing to say.
>
> I think what you are saying is something like the following. For a
> resource R1 with more than one member there must exist some other
> resources whose memberships are {e in R1,e acceptable to a specific
> subset of clients} such that every member of R1 is the sole member of
> one of these other resources.
>
No, I'm saying that given negotiated (except for compression) resource
/R, each variant n must also be a representation of some other
resource Rn. I say nothing about not negotiating Rn, by Transfer-
Encoding or any other selection headers.
Content negotiation allows multiple representations to share the same
URI. Maybe it would help to think of this, except for compression, as
allowing representations of multiple resources to share a negotiated
URI.
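As a concrete illustration, this arrangement can be sketched in a few
lines, assuming a negotiated URI /concept with Atom and HTML variants
(the URIs, the toy Accept parsing, and the negotiate helper are all
illustrative, not from RFC 2616 verbatim):

```python
# Toy server-side negotiation for GET /concept: two variants share the
# negotiated URI, while each also has a URI of its own, which is sent
# back in Content-Location. All URIs here are illustrative.
VARIANTS = {
    "application/atom+xml": "/concept.atom",
    "text/html": "/concept.html",
}

def negotiate(accept_header):
    """Return response headers for GET /concept, or None for 406."""
    for item in accept_header.split(","):
        media_type = item.split(";")[0].strip()  # drop q-values
        if media_type in VARIANTS:
            return {
                "Content-Type": media_type,
                "Content-Location": VARIANTS[media_type],
                "Vary": "Accept",
            }
    return None  # no acceptable variant: 406 Not Acceptable

headers = negotiate("text/html,application/xhtml+xml;q=0.9")
```

A real implementation would honor q-values and the full Accept grammar;
the point is only that the selected variant's own URI is what
Content-Location carries back to the client.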
>
> This proposition does not follow from any of the constraints. Nor is
> it covered explicitly in the dissertation. It is also not required,
> at least in principle, to achieve any of the desired qualities of a
> REST style architecture.
>
I think a dissertation which covered all use cases would need to be
thousands of pages long, at a minimum. The desirable qualities which
result from assigning URIs to variants, are those of the self-
descriptive messaging constraint. A failure to distinguish one variant
from another (IOW, a failure to apply the identification of resources
constraint) results in a failure to apply the self-descriptive messaging
constraint, with all the associated consequences.
I mentioned this before, that we could talk about failing to assign
URIs to variants as a violation of self-descriptive messaging, but that
it's really a violation of the identification of resources constraint
that's prerequisite to self-descriptive messaging.
>
> Given a resource R={E1,E2}, if there was no resource R2 = {one e in R,
> e acceptable clients to which E1 is acceptable} no REST constraints
> are violated unless you have a preexisting notion that R2 must exist.
>
Somewhat true, except I'd call it certain knowledge that R2 does indeed
exist.
>
> The constraint violation you keep bringing up is dependent on the
> assertion that the resource in question exists. IOW, it seems your
> argument is, the resource R2 must exist because if it does not then
> you have failed to identify the resource R2.
>
No. The consequences I've pointed out result from diminished
visibility. This diminished visibility proves a REST mismatch. The
consequences of the diminished visibility are solved completely by
assigning URIs to variants to increase visibility. This improved
visibility may be attributed to meeting the self-descriptive messaging
constraint. But this constraint was met by identifying more resources
important to the system, so the REST mismatch must have been the
identification of resources constraint.
So I'm not saying that R2 must exist because it can be assigned a URI
(although the fact that it can be assigned a URI proves that the
resource does exist). I'm saying that a failure to recognize that other
abstractions are important to a conneg system, results in easily-proven
visibility reductions which indicate a REST mismatch. The fact that
this mismatch is cleared by assigning URIs to variants, proves one of
two things -- either HTTP conneg is itself a REST mismatch, or that the
mismatch was the direct result of failing to apply the identification
of resources constraint.
>
> The above seems complicated to the point of absurdity. On the other
> hand, when approaching the question of when a variant should be a
> resource from an application standpoint, it becomes quite simple. If a
> variant is individually useful in the application or implementation
> domain, it should be modeled as a resource in its own right.
>
The identification of resources constraint requires that all resources
important to the system be assigned some sort of identifiers. When a
resource varies by media type, the media type of a specific response is
important to the system -- so important that it requires its own URI.
All discussions of caching aside, the basic tenet of REST's late
binding of representation to resource is to send the most optimal
response *and* a list of alternate representations the user-agent may
choose from if the initial response is inadequate.
How can you present a user-agent with a list of alternates, if the
alternates all have the same URI? What if multiple alternates have the
same media type, for those who answer that question by borking @type in
violation of the layered-system constraint? How do you construct this
list if all you have to distinguish one variant from another is Etag?
Really, the only answer here is to assign URIs to variants -- thus,
best practice...
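The list of alternates being argued for here can be sketched as
follows; the formatter, URIs, and media types are illustrative, and the
header syntax follows the Web Linking style (RFC 5988):

```python
# Sketch: advertising each variant by its own URI, so a user agent can
# recover when negotiation guesses wrong. Impossible to build if all
# variants share the single negotiated URI.
ALTERNATES = [
    ("/concept.html", "text/html"),
    ("/concept.atom", "application/atom+xml"),
]

def link_header(alternates):
    """Format an RFC 5988-style Link header value."""
    return ", ".join(
        '<%s>; rel="alternate"; type="%s"' % (uri, media_type)
        for uri, media_type in alternates
    )

header_value = link_header(ALTERNATES)
```

The same list could equally be serialized as <link rel='alternate'/>
elements in an HTML or Atom body; either way, it cannot be constructed
at all unless each variant has a URI of its own.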
>
> Circling back to content-location in the conneg context. I read the
> RFC as allowing variants to be identified in the content-location
> header if the variants have separate locations. I do not read it to
> mean that all variants *should* have locations.
>
We already discussed this. That you MAY use Content-Location *if* a
representation is available at another URI, is a separate consideration
from the conneg case, where RFC 2616 is clear that you SHOULD assign
URIs to variants and use these URIs in Content-Location. The fact that
RFC 2616 says "especially" if URIs for variants already exist, does not
override the fact that it says they SHOULD.
>
> However, I think that using etags would
> result in better cache behavior than content location for resources
> with multiple representations.
>
No. Visibility is achieved when one variant may be assigned to
multiple selection-header matches. Etag does not do this. Conditional
requests which result in a new representation from the origin server,
are only valid for the selection-header combination of the request in
question, without Content-Location. A subsequent conditional request
for the same variant, with a different selection-header combination,
would still require a round-trip to the origin server.
These additional round-trips may be avoided by using Content-Location.
IOW this consequence of reduced visibility, and its resolution by
increasing visibility by assigning URIs to variants, is proof positive
that a REST mismatch exists, and that the mismatch results from
violating the identification of resources constraint.
Using Content-Location (with or without Etag) is the means by which
HTTP allows multiple combinations of selection headers to map to the
same variant, allowing better cache behavior. Etag is no substitute,
as it cannot associate multiple selection header combinations with any
more than one entity, rather than a set of entities varying by Etag.
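A toy model of the cache behavior being described, assuming the origin
always selects the same variant and reports its URI the way a
Content-Location header would (the dictionaries and status strings are
illustrative assumptions, not how any real cache is implemented):

```python
# With a variant URI from Content-Location, two different Accept
# values can map onto one stored entity, so repeated requests need no
# further origin round-trips.
cache = {}          # variant URI -> entity body
variant_map = {}    # (negotiated URI, Accept value) -> variant URI

def origin_fetch(uri, accept):
    """Stand-in origin server: always selects the HTML variant and
    reports its URI, as Content-Location would."""
    return "/concept.html", "<html>...</html>"

def cached_get(uri, accept):
    key = (uri, accept)
    if key in variant_map:                 # mapping known: pure hit
        return cache[variant_map[key]], "hit"
    variant_uri, body = origin_fetch(uri, accept)  # one round-trip
    variant_map[key] = variant_uri
    if variant_uri in cache:               # entity already stored
        return cache[variant_uri], "hit-after-mapping"
    cache[variant_uri] = body
    return body, "miss"

cached_get("/concept", "text/html")            # first request: miss
_, second = cached_get("/concept", "text/html;q=1.0")
_, third = cached_get("/concept", "text/html")
```

With only an ETag, the cache has no URI under which to coalesce the two
selection-header mappings, so, per the argument above, the second
Accept value would still cost a trip to the origin server.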
>
> A cache can form more expressive
> conditional requests using If-None-Match than it can with
> If-Modified-Since when multiple representations are in play. There
> would certainly be no problem with providing both if you wanted to,
> though.
>
And certainly, no drawback to assigning URIs to variants. Not only
would I not want to touch a conneg system without Content-Location with
a ten-foot pole, but such a system cannot possibly be RESTful since
there is no mechanism by which the user-agent may be informed as to the
existence of other variants -- i.e. assuming conneg to be 100% reliable,
despite all evidence to the contrary. Without assigning URIs to
variants, user agents cannot recover from conneg errors -- errors REST
presumes will occur.
-Eric
Apologies in advance for subjecting everyone to this again. Comments
inline.

On Tue, Jun 22, 2010 at 7:25 PM, Eric J. Bowman <eric@...> wrote:
> Mike Kelly wrote:
>>
>> It makes sense that you can't violate the identification of resources
>> constraint within a specific system, like the web, that implements
>> Uniform Resource Identifiers.
>>
>
> Failure to use URIs to identify resources, is not a violation of the
> identification of resources constraint. Failure to identify resources
> within your system by assigning them identifiers, violates the
> constraint.

Cool, so /representations/ without identifiers don't violate the
constraint?

I think you've already agreed that the choice to expose the
representations as resources in their own right is a matter of
judgement/pragmatism/"best-practice", so my assumption was that
choosing one way or another is beyond REST - it seems to me that it's
even beyond HTTP, since all the control data for describing
discrepancies in server-side 'selection' of variants already exists in
2616.

>> >>
>> >> Why you think that all variants must also be resources in their
>> >> own right?
>> >>
>> >
>> > If a resource varies between HTML and Atom representations, then
>> > those two variants, taken separately, must have some abstract
>> > definition that is different from one another, and is also
>> > different from the abstract definition of the negotiated resource
>> > whose URI they share.
>>
>> No, they have the *same* abstract definition - which is the reason
>> they are identified as representations of a resource that can be
>> negotiated, and not completely independent resources.
>>
>
> Two variants have the same abstract definition, if they share a URI.
> This does not restrict those variants from each having an infinite
> number of other possible abstractions. Applying the identification of
> resources constraint, consists of assigning identifiers to any other
> abstractions important to the system.
Presumably the answer to "What constitutes an important abstraction"
is arbitrary? Particularly from a REST pov.

> When you are varying media types (or anything besides compression),
> the abstractions of each variant of /concept as a specific media type
> (i.e. /concept.atom and /concept.html as 'concept as Atom and only
> Atom' and 'concept as HTML and only HTML') are important to the
> system, and are different from 'concept as varying media type'.
>
> This holds true whether you assign URIs to identify these additional
> resources, or not. Your counter to this, is to keep insisting that
> once you've declared a negotiated resource, its variants are not
> allowed to be representations of any other resource, regardless of
> how many times it's pointed out to you that REST explicitly endorses
> this very behavior (author's preferred version).

No, the only thing I've insisted is that identifying representations
as resources is a *cost* to visibility - i.e. it should be treated as
such and either afforded or overcome where /necessary/.

> The only thing you can accomplish by dragging this debate out, is
> exposing flaws in one explanation or another that I've posited, which
> does not change the best practice of assigning URIs to variants like
> RFC 2616 says you SHOULD.

In which case let's continue, since your explanation appears flawed,
and this isn't HTTP-discuss.

> IOW, I'm not arguing that assigning URIs to variants *should* be best
> practice. I'm explaining *why* it *is* best practice. Rejecting my
> explanations will not change established best practice. Nor will
> failing to explain how this best practice violates REST -- insisting
> that a variant can only have one URI, that of the negotiated resource,
> goes against REST completely.

Hopefully I haven't said 'best practice' violates REST. One of my main
issues in this discussion is that it's basically subjective and
dependent on the system in question; i.e. it's nothing to do with
REST.
I've also said that not assigning URIs to representations doesn't
violate any REST constraints. Primarily because it doesn't.

>
>> The difference between an Atom and HTML representation is *not*
>> abstract at all Eric, that is why you are able to transform via XSLT
>> from Atom to HTML.
>>
>
> I can't make heads or tails out of that statement. How do you get
> from my statement that variants have multiple abstractions, to a
> claim that the differences between variants are abstract? If
> anything, I've pointed out that the differences between "concept as
> HTML and only HTML" and "concept as Atom and only Atom" are anything
> but abstract.

When I read "If a resource varies between HTML and Atom
representations, then those two variants, taken separately, must have
some abstract definition that is different from one another" I
concluded that the definitions are abstract, the definitions are
different, therefore the difference is abstract.

> Atom and HTML variants may represent the same concept when taken
> together, but when taken separately they also represent other
> concepts.

Taking them separately is a subjective judgement. All I'm saying is
doing that reduces visibility in your system and is therefore a cost
that is worth being aware of.

> A failure to recognize that these variants also represent other
> resources, by identifying those resources, is a failure to apply the
> identification of resources constraint. Period.

Period? Well that settles it then.

Cheers,
Mike
Mike Kelly wrote:
>
> >
> > Failure to use URIs to identify resources, is not a violation of the
> > identification of resources constraint. Failure to identify
> > resources within your system by assigning them identifiers,
> > violates the constraint.
>
> Cool, so /representations/ without identifiers don't violate the
> constraint?
>
I have done my best to point out throughout the course of this thread,
that there are no hard-and-fast rules, which is why I keep saying
"except compression" or making the comparison to the glass debate. So
it defeats the purpose of learning, when you re-state my position as
some hard-and-fast rule, instead of reconsidering your understanding of
terms like "resource" and "representation" to fit with how others have
come to understand them (and give the same advice about assigning URIs
to variants, hardly a short list).

To have a representation, one must first have a resource. There is no
such thing as a representation without an identifier. You can have
steady-states without identifiers, and it is entirely possible to have
resources without identifiers. A resource without an identifier only
violates the constraint if the resource is important to the system.

Compressed/uncompressed representations share an identifier. These
variants are also resources in their own right, and it is of course
possible to assign them URIs (resource.html / resource.html.zip). But,
these resources are unimportant to the system, so it does not violate
the identification of resources constraint to omit their URIs.

In my example of a resource that negotiates between Atom and HTML, I
also enable compression, so the set of members includes four variants.
I only assign URIs to the HTML and Atom variants, because only those
two variants are important enough as resources to warrant identifiers.
Note that each compressed/uncompressed variant of the Atom/HTML
negotiated resource, also receives a second URI -- proving they're also
resources in their own right despite not having their own URIs.

>
> I think you've already agreed that the choice to expose the
> representations as resources in their own right is a matter of
> judgement/pragmatism/"best-practice"
>
Where, in all the times I've stated that assigning URIs to variants is
to apply the identification of resources constraint, could you possibly
have gotten the notion that I consider this a judgement call? I have
repeatedly stated that this is a best RESTful practice, and that except
for compression, failure to assign URIs to variants violates REST as
surely as it violates RFC 2616's SHOULD.

Just because something is a best practice outside the realm of REST,
does not imply that it isn't a REST constraint -- that's just par for
the course. As always, I reject out-of-hand the notion that REST is so
vague and subjective that there exist any "alternate interpretations"
or that REST is so ivory-tower that it isn't pragmatic. The fact that
applying objective REST constraints in a pragmatic fashion results in
tangible benefits is not some quirk of fate, accident or coincidence --
it's by design.

>
> it seems to me that it's even beyond HTTP, since all the control data
> for describing discrepancies in server side 'selection' of variants
> already exist in 2616.
>
The only way to resolve conneg discrepancies is to follow REST, by
sending the optimal variant and a list of alternates. This may take
the form of the Link: or Alternates: headers, which I will grant
already exist in HTTP (not so much in practice), or a list of <link/>
elements (external to HTTP), it doesn't matter which. How are any of
these possible without assigning URIs to variants?
The common practice of assigning URIs to variants and listing them with
rel='alternate' is not tied to any one markup language, or any
particular protocol for that matter. This has everything to do with
REST, and as I keep saying, this REST constraint is reflected in RFC
2616's SHOULD. It is a matter of HTTP protocol *because* it's a matter
of REST (and vice-versa).

The REST style is derived from the early Web. REST is an effort to
describe an architecture to fit the reality of HTTP as a scalable
distributed application protocol with content negotiation. REST
analyzes *why* HTTP allows even conneg to scale, then applies this
analysis to inform the evolution of the Web, i.e. RFC 2616.

Any dissertation is a philosophical evaluation of some aspect of the
natural world. Roy's thesis is a philosophical evaluation of how and
why the early Web succeeded. Alternate dissertations are possible
using different terminology, but if such alternatives have the same
goal of explaining how and why the Web allows scalable conneg, it's
hard to imagine that they wouldn't somehow reflect the need to send
URIs in Content-Location, or the need to inform clients of alternates
by sending a list of URIs.

HTTP isn't the only protocol that REST could be used to inform the
design of. But it's likewise hard to imagine that some alternative
protocol will evolve that retains the anarchic scalability of HTTP 1.1,
while negating all the reasons for the imprecision of conneg. Some new
protocol would still need the guidance of REST, which is to send the
optimal variant first, with a list of alternates. Presumably I can use
this new protocol to send an HTML representation with a bunch of link
rel='alternate' elements with non-overlapping URIs. It is therefore
hard to imagine some new RESTful protocol where those URIs wouldn't be
used to distinguish one variant from another a la Content-Location, in
favor of some sort of Etag-like system.
So I still don't see the daylight between REST and HTTP on the issue of
assigning URIs to variants. It would be difficult to look at existing
conneg systems which scale to the Web (like the BBC website), see that
they're indeed assigning URIs to variants using Content-Location, and
conclude that doing so is somehow not relevant.

I do not find it in any way shocking that my understanding of REST
leads me to conclude that following RFC 2616's SHOULD is RESTful.
Correctly understood, REST is the reason why that SHOULD is in RFC
2616. We do it that way because it's known to work in practice -- REST
is a philosophical explanation of why it works; why would REST lead us
to any other conclusion, except where REST itself identifies mismatches
in HTTP?

>
> > Two variants have the same abstract definition, if they share a URI.
> > This does not restrict those variants from each having an infinite
> > number of other possible abstractions. Applying the identification
> > of resources constraint, consists of assigning identifiers to any
> > other abstractions important to the system.
>
> Presumably the answer to "What constitutes an important abstraction"
> is arbitrary? Particularly from a REST pov
>
Absolutely not. If there's any ambiguity as to the importance of a
resource, then it can't be an important resource. When using conneg
(except for compression), the REST style is to send the optimal
response first, along with a list of alternates. This means one of two
things: either REST requires you to assign URIs to variants; or, REST
is self-contradicting.

>
> > When you are varying media types (or anything besides compression),
> > the abstractions of each variant of /concept as a specific media
> > type (i.e. /concept.atom and /concept.html as 'concept as Atom and
> > only Atom' and 'concept as HTML and only HTML') are important to
> > the system, and are different from 'concept as varying media type'.
> >
> > This holds true whether you assign URIs to identify these additional
> > resources, or not. Your counter to this, is to keep insisting that
> > once you've declared a negotiated resource, its variants are not
> > allowed to be representations of any other resource, regardless of
> > how many times it's pointed out to you that REST explicitly
> > endorses this very behavior (author's preferred version).
>
> No, the only thing I've insisted is that identifying representations
> as resources is a *cost* to visibility - i.e. it should be treated as
> such and either afforded or overcome where /necessary/.
>
Insisted, yes; shown, no. OTOH, I've explained that to omit
Content-Location is to preclude associating multiple selection-header
values with a single variant. Since adding the Content-Location header
makes this possible, and this ability to map multiple selection-header
values to a single variant is an example of what increased visibility
*means*, it is not logical to keep insisting that Content-Location
decreases visibility. Continuing to insist the opposite only proves
that you do not understand the meaning of "visibility" in REST.

>
> > The only thing you can accomplish by dragging this debate out, is
> > exposing flaws in one explanation or another that I've posited,
> > which does not change the best practice of assigning URIs to
> > variants like RFC 2616 says you SHOULD.
>
> In which case let's continue since your explanation appears flawed,
> and this isn't HTTP-discuss.
>
I didn't say there's any flaw in my explanations, I'm merely allowing
for that possibility. Your failure to understand REST, as I've
mentioned before, is not a reflection of my ability to teach REST -- I
expect my results will resemble a bell curve.
I cannot hold myself to blame that your response to explanations of how
and why the Web works the way it works, is to insist the Web is broken
-- sticking to that position implies that you'll reject outright any
notion that assigning URIs to variants obeys a REST constraint, no
matter who says it or how it's worded.

>
> > IOW, I'm not arguing that assigning URIs to variants *should* be
> > best practice. I'm explaining *why* it *is* best practice.
> > Rejecting my explanations will not change established best
> > practice. Nor will failing to explain how this best practice
> > violates REST -- insisting that a variant can only have one URI,
> > that of the negotiated resource, goes against REST completely.
>
> Hopefully I haven't said 'best practice' violates REST. One of my main
> issues in this discussion is that it's basically subjective and
> dependent on the system in question; i.e. it's nothing to do with
> REST.
>
> I've also said that not assigning URIs to representations doesn't
> violate any REST constraints. Primarily because it doesn't.
>
But it does, and I've not resorted to any subjective explanations for
it. Objectively speaking, you can't send a list of alternates to a
user agent if you only have one URI, unless we bork @type, which
violates REST. Assigning URIs to variants makes this possible, without
violating any constraints and certainly without reducing visibility.

It may be objectively shown that anarchic scalability of conneg only
results from sending the URIs of variants in Content-Location. REST
itself is all about describing how and why some HTTP systems achieve
this anarchic scalability of negotiated resources, in terms of applied
distributed software architecture. This is why, when Content-Location
is omitted, a system will display exactly the problems REST anticipates
(like reduced visibility), and exactly the benefits REST anticipates
when URIs are assigned to variants using Content-Location.
The one proves the other -- REST is a pragmatic, objective tool for
analyzing the scalability of distributed software systems. REST
mismatches result in real-world problems, which may be analyzed and
solved in terms of REST. In this case, real problems result from
omitting Content-Location, these problems are explained in the thesis,
so we can follow the logic that conneg messaging isn't really
self-descriptive unless variants are identified as resources. The URIs
to send in Content-Location are also required to inform user agents of
alternates, proving the importance of identifying those resources.

OK, it's not as neat as Geometry, but simply stated, when I see the
very consequences REST predicts in an HTTP conneg system which omits
Content-Location and fails to list alternates, and I see the very
benefits REST predicts when these conditions are met, I see it as a
"proof" of REST's identification of resources "theorem". There is no
subjective wiggle-room here, only objective fact.

>
> >>
> >> The difference between an Atom and HTML representation is *not*
> >> abstract at all Eric, that is why you are able to transform via
> >> XSLT from Atom to HTML.
> >>
> >
> > I can't make heads or tails out of that statement. How do you get
> > from my statement that variants have multiple abstractions, to a
> > claim that the differences between variants are abstract? If
> > anything, I've pointed out that the differences between "concept as
> > HTML and only HTML" and "concept as Atom and only Atom" are
> > anything but abstract.
>
> When I read "If a resource varies between HTML and Atom
> representations, then those two variants, taken separately, must have
> some abstract definition that is different from one another" I
> concluded that the definitions are abstract, the definitions are
> different, therefore the difference is abstract.
>
Trying to create semantic arguments won't change the fact that
assigning URIs to variants *makes* those variants unique resources in
their own right. You're claiming that because those variants share a
URI, they cannot have any other abstraction, therefore it's wrong to
assign them URIs. I am pointing out the obvious -- they're resources
in their own right because once you assign *.atom and *.html URIs to
them, you've given them different, equally-valid, abstractions.

URIs are opaque. A variant with one URI, and a variant with another
URI, must have different conceptual meaning -- this is what "resource"
means, by definition. That each variant also shares some other
conceptual meaning, does not restrict those variants to that meaning,
nor does it change the fact that, taken separately, they're different
resources because they have different URIs. This is still a whole
order of magnitude simpler to grasp than some people insist on making
it. REST's definition of "resource" is entirely based on the need to
explain this reality.

>
> > Atom and HTML variants may represent the same concept when taken
> > together, but when taken separately they also represent other
> > concepts.
>
> Taking them separately is a subjective judgement. All I'm saying is
> doing that reduces visibility in your system and is therefore a cost
> that is worth being aware of.
>
No, that URIs are opaque is an objective fact. It is an objective fact
that URIs must be assigned to variants in order to generate a list of
alternates. It is an objective fact that negotiated resources will
only scale when variants may be distinguished from one another by their
URIs. The identification of resources constraint is a philosophical
theorem derived from these facts. That the pragmatic application of
this constraint exactly matches the case the constraint was derived
from, is by design. This would not occur if this were a subjective
matter.
>
> > A failure to recognize that these variants also represent other
> > resources, by identifying those resources, is a failure to apply the
> > identification of resources constraint. Period.
>
> Period? Well that settles it then.
>
Accepting that assigning URIs to variants meets the identification of
resources constraint, as has been said hundreds of times by dozens of
people over the years, is surely easier than proving said best practice
somehow violates REST or is not important to REST, or is vague and
subjective enough to be open to interpretation. It really is a fact,
kinda like 2+2=4.

-Eric
I have one resource which is an algorithm that can receive a potentially
large input, say, a thousand arguments.
GET my-algorithm?x1=1&x2=2....
I encountered some problems due to URI length limits imposed by Web servers,
even though the HTTP protocol does not set any limits on GET request length.
The easiest solution is to expose the resource using POST instead of GET but
this violates REST principles, and the semantics of POST (I am using POST
when I mean GET).
The other alternative is to divide the interaction in 2 steps:
1 - POST my-algorithm/args x1=1&x2=2....
This creates a resource that represents the algorithm's arguments we
want to pass:
201 Created
location: /my-algorithm/args/args-resource-1
2 - GET /my-algorithm/args/args-resource-1
This solution seems to be more RESTful but I fear it will complicate the
interaction and potentially break statelessness (how long should the server
keep the arguments just posted?).
I am really tempted to follow the first solution even though it violates
REST principles because I don't see any drawbacks and it simplifies
interaction.
I would appreciate some comments and suggestions.
Thank you.
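The two-step alternative described above can be sketched as a minimal
in-memory flow; the store dict and the summing "algorithm" are
illustrative stand-ins, not a prescription:

```python
# Step 1: POST the argument set, receive a URI for it (Location).
# Step 2: GET that resource to run the algorithm over the stored args.
import itertools
import urllib.parse

store = {}                  # args-id -> parsed argument dict
ids = itertools.count(1)

def post_args(body):
    """POST /my-algorithm/args: create an argument-set resource and
    return (status, Location) as the server would."""
    args = dict(urllib.parse.parse_qsl(body))
    args_id = "args-resource-%d" % next(ids)
    store[args_id] = args
    return 201, "/my-algorithm/args/" + args_id

def get_result(location):
    """GET the created resource: apply the algorithm to stored args."""
    args = store[location.rsplit("/", 1)[1]]
    return sum(int(v) for v in args.values())  # stand-in algorithm

status, location = post_args("x1=1&x2=2&x3=3")
result = get_result(location)
```

The open question from the post remains: the server must decide how
long to keep each posted argument set (an expiry policy, or letting
clients DELETE the resource, are common answers).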
Isn't this the same problem as, for example, the W3C's HTML validator?

You've got some data over "here", which needs to be analyzed by some
engine over "there".

So it seems like, if you cannot cram all the data into the URL, you
could instead put a URL to where the data is stored into a URL. If
that happens to be on the same server, that's perhaps useful for
security, authentication, and caching purposes, but not an essential
part of a RESTful design.

-Eric.

On 06/23/2010 08:35 AM, Dário Abdulrehman wrote:
>
> I have one resource which is an algorithm that can receive a
> potentially large input, say, a thousand arguments.
> GET my-algorithm?x1=1&x2=2....
>
> I encountered some problems due to URI length limits imposed by Web
> servers, even though the HTTP protocol does not set any limits on GET
> request length.
>
> The easiest solution is to expose the resource using POST instead of
> GET but this violates REST principles, and the semantics of POST (I am
> using POST when I mean GET).
>
> The other alternative is to divide the interaction in 2 steps:
>
> 1 - POST my-algorithm/args x1=1&x2=2....
>     This creates a resource that represents the algorithm's arguments
>     we want to pass:
>     201 Created
>     location: /my-algorithm/args/args-resource-1
>
> 2 - GET /my-algorithm/args/args-resource-1
>
> This solution seems to be more RESTful but I fear it will complicate
> the interaction and potentially break statelessness (how long should
> the server keep the arguments just posted?).
> I am really tempted to follow the first solution even though it
> violates REST principles because I don't see any drawbacks and it
> simplifies interaction.
>
> I would appreciate some comments and suggestions.
>
> Thank you.
I get the point, but what if most of the time the user can cram the data into the URL, and only in rare instances needs to pass a lot of data?

Perhaps it would be nice to provide the ability to pass the data in the URL using GET and, additionally, a parameter, say 'data-url', that contains the URL of the data in case it cannot be crammed into the URL.

The URI could be something like:
e.g. GET my-algorithm?<param1>&<param2>....&data-url

Of course the user either passes the data explicitly, or provides the data-url, not both at the same time.

This way it saves users the burden of creating a file with the data, making it accessible, etc.

On Wed, Jun 23, 2010 at 5:35 PM, Eric Johnson <eric@...> wrote:
>
> Isn't this the same problem as, for example the W3C's HTML validator?
>
> You've got some data over "here", which needs to be analyzed by some engine over "there".
>
> So it seems like, if you cannot cram all the data into the URL, you could instead put a URL to where the data is stored into a URL. If that happens to be on the same server, that's perhaps useful for security, authentication, and caching purposes, but not an essential part of a RESTful design.
>
> -Eric.
>
> On 06/23/2010 08:35 AM, Dário Abdulrehman wrote:
>
> I have one resource which is an algorithm that can receive a potentially large input, say, a thousand arguments.
> GET my-algorithm?x1=1&x2=2....
>
> I encountered some problems due to URI length limits imposed by Web servers, even though the HTTP protocol does not set any limits on GET request length.
>
> The easiest solution is to expose the resource using POST instead of GET but this violates REST principles, and the semantics of POST (I am using POST when I mean GET).
>
> The other alternative is to divide the interaction in 2 steps:
>
> 1 - POST my-algorithm/args x1=1&x2=2....
>     This creates a resource that represents the algorithm's arguments we want to pass:
>     201 Created
>     location: /my-algorithm/args/args-resource-1
>
> 2 - GET /my-algorithm/args/args-resource-1
>
> This solution seems to be more RESTful but I fear it will complicate the interaction and potentially break statelessness (how long should the server keep the arguments just posted?).
> I am really tempted to follow the first solution even though it violates REST principles because I don't see any drawbacks and it simplifies interaction.
>
> I would appreciate some comments and suggestions.
>
> Thank you.
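[Editor's note] The hybrid scheme described above (inline arguments when they fit, a 'data-url' fallback when they don't) can be sketched roughly like this in Python. The endpoint name, the length limit, and the helper function are illustrative assumptions, not part of any proposal in the thread:

```python
from urllib.parse import urlencode

# Conservative URI length many servers/proxies accept (illustrative value;
# HTTP itself imposes no limit, as noted in the thread).
MAX_URI_LENGTH = 2000

def build_request_uri(args, data_url=None):
    """Return a GET URI for the hypothetical my-algorithm resource.

    Arguments go inline in the query string when the URI stays short
    enough; otherwise the caller stores them elsewhere and passes a
    'data-url' pointing at the stored argument set.
    """
    if data_url is not None:
        return "/my-algorithm?" + urlencode({"data-url": data_url})
    uri = "/my-algorithm?" + urlencode(args)
    if len(uri) > MAX_URI_LENGTH:
        raise ValueError("arguments too large; store them and pass data-url")
    return uri

# Common case: few arguments, crammed into the URL directly.
small = build_request_uri({"x1": "1", "x2": "2"})
# Rare case: a thousand arguments, referenced indirectly.
stored = build_request_uri({}, data_url="http://example.org/args/args-resource-1")
```

The client uses one mechanism or the other, never both, matching the "not both at the same time" constraint above.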
Hi Dário,

On 06/23/2010 10:23 AM, Dário Abdulrehman wrote:
> I get the point but what if most of the time the user can cram the data into the URL, and only in rare instances he needs to pass a lot of data?
>
> Perhaps it will be nice to provide the ability to pass the data in the URL using GET and additionally a parameter, say 'data-url', that contains the URL of the data in case it cannot be crammed in the URL.
>
> The URI could be something like:
> e.g. GET my-algorithm?<param1>&<param2>....&data-url
> Of course the user either passes the data explicitly, or provides the data-url, not both at the same time.
>
> This way it will save the users the burden of creating a file with the data, making it accessible, etc.

Seems perfectly sensible to me.

-Eric.

> On Wed, Jun 23, 2010 at 5:35 PM, Eric Johnson <eric@... <mailto:eric@...>> wrote:
>
> Isn't this the same problem as, for example the W3C's HTML validator?
>
> You've got some data over "here", which needs to be analyzed by some engine over "there".
>
> So it seems like, if you cannot cram all the data into the URL, you could instead put a URL to where the data is stored into a URL. If that happens to be on the same server, that's perhaps useful for security, authentication, and caching purposes, but not an essential part of a RESTful design.
>
> -Eric.
>
> On 06/23/2010 08:35 AM, Dário Abdulrehman wrote:
>> I have one resource which is an algorithm that can receive a potentially large input, say, a thousand arguments.
>> GET my-algorithm?x1=1&x2=2....
>>
>> I encountered some problems due to URI length limits imposed by Web servers, even though the HTTP protocol does not set any limits on GET request length.
>>
>> The easiest solution is to expose the resource using POST instead of GET but this violates REST principles, and the semantics of POST (I am using POST when I mean GET).
>>
>> The other alternative is to divide the interaction in 2 steps:
>>
>> 1 - POST my-algorithm/args x1=1&x2=2....
>>     This creates a resource that represents the algorithm's arguments we want to pass:
>>     201 Created
>>     location: /my-algorithm/args/args-resource-1
>>
>> 2 - GET /my-algorithm/args/args-resource-1
>>
>> This solution seems to be more RESTful but I fear it will complicate the interaction and potentially break statelessness (how long should the server keep the arguments just posted?).
>> I am really tempted to follow the first solution even though it violates REST principles because I don't see any drawbacks and it simplifies interaction.
>>
>> I would appreciate some comments and suggestions.
>>
>> Thank you.
If the query is not supposed to be permanent, use POST. When the query is not permanent and not repeated often, cacheability is not going to be an issue. If indeed cacheability is desired, create a stored query (as illustrated in http://my.safaribooksonline.com/9780596809140/142).

Subbu

On Jun 23, 2010, at 8:35 AM, Dário Abdulrehman wrote:
>
> I have one resource which is an algorithm that can receive a potentially large input, say, a thousand arguments.
> GET my-algorithm?x1=1&x2=2....
>
> I encountered some problems due to URI length limits imposed by Web servers, even though the HTTP protocol does not set any limits on GET request length.
>
> The easiest solution is to expose the resource using POST instead of GET but this violates REST principles, and the semantics of POST (I am using POST when I mean GET).
>
> The other alternative is to divide the interaction in 2 steps:
>
> 1 - POST my-algorithm/args x1=1&x2=2....
>     This creates a resource that represents the algorithm's arguments we want to pass:
>     201 Created
>     location: /my-algorithm/args/args-resource-1
>
> 2 - GET /my-algorithm/args/args-resource-1
>
> This solution seems to be more RESTful but I fear it will complicate the interaction and potentially break statelessness (how long should the server keep the arguments just posted?).
> I am really tempted to follow the first solution even though it violates REST principles because I don't see any drawbacks and it simplifies interaction.
>
> I would appreciate some comments and suggestions.
>
> Thank you.
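[Editor's note] The stored-query idea mentioned above could look roughly like this — a minimal in-memory sketch with invented names (the dict stands in for server state, and summing the arguments stands in for the real algorithm):

```python
import uuid

# Server-side storage for posted argument sets (illustrative stand-in
# for whatever persistence the real service would use).
_queries = {}

def post_query(args):
    """Simulate 'POST /my-algorithm/queries' -> 201 Created + Location.

    The large argument set is posted once, creating an addressable
    query resource whose results can then be fetched (and cached) via GET.
    """
    query_id = uuid.uuid4().hex
    _queries[query_id] = dict(args)
    return 201, "/my-algorithm/queries/" + query_id

def get_query_result(location):
    """Simulate 'GET {location}' -> 200 OK with the computed result."""
    query_id = location.rsplit("/", 1)[-1]
    args = _queries.get(query_id)
    if args is None:
        return 404, None
    # Stand-in for the real algorithm: sum the numeric arguments.
    return 200, sum(float(v) for v in args.values())

status, location = post_query({"x1": "1", "x2": "2"})
result_status, result = get_query_result(location)
```

Because the query now has its own URI, repeated GETs of the result are cacheable, which is the tradeoff the stored-query pattern buys.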
Dário,

On Jun 23, 2010, at 5:35 PM, Dário Abdulrehman wrote:
>
> I have one resource which is an algorithm that can receive a potentially large input, say, a thousand arguments.
> GET my-algorithm?x1=1&x2=2....

If you are building a service that is consumed by a machine client, it is also worth considering that the client would need to have knowledge of all the parameters that make sense for constructing the appropriate request. If the number of arguments is that big, you could end up in a maintenance nightmare.

Maybe you can find domain concepts behind certain parameter combinations that you can then represent as distinct resources. This would reduce the knowledge that needs to be shared and also the number of parameters.

Example: instead of

/items?type=customer&status=100&potential=5

you might have

<link href="/highPotCust" rel="http://your.org/linkrels/high-potential-customers"/>

Jan

> I encountered some problems due to URI length limits imposed by Web servers, even though the HTTP protocol does not set any limits on GET request length.
>
> The easiest solution is to expose the resource using POST instead of GET but this violates REST principles, and the semantics of POST (I am using POST when I mean GET).
>
> The other alternative is to divide the interaction in 2 steps:
>
> 1 - POST my-algorithm/args x1=1&x2=2....
>     This creates a resource that represents the algorithm's arguments we want to pass:
>     201 Created
>     location: /my-algorithm/args/args-resource-1
>
> 2 - GET /my-algorithm/args/args-resource-1
>
> This solution seems to be more RESTful but I fear it will complicate the interaction and potentially break statelessness (how long should the server keep the arguments just posted?).
> I am really tempted to follow the first solution even though it violates REST principles because I don't see any drawbacks and it simplifies interaction.
>
> I would appreciate some comments and suggestions.
>
> Thank you.

-----------------------------------
Jan Algermissen, Consultant
NORD Software Consulting

Mail: algermissen@...
Blog: http://www.nordsc.com/blog/
Work: http://www.nordsc.com/
-----------------------------------
On Jun 24, 2010, at 9:11 AM, Jan Algermissen wrote: > > Maybe you find domain concepts behind certain parameter combinations that you can then represent as distinct resources. This would reduce the knowledge that needs to be shared and also the number of parameters. Forgot to mention that there is the Specification pattern <http://domaindrivendesign.org/node/87> in DDD that does something similar. Jan > > Example: > > Instead of > > /items?type=customer&status=100&potential=5 > > you might have > > <link href="/highPotCust" rel="http://your.org/linkrels/high-potential-customers"/> > > Jan > > > > > > >> >> I encountered some problems due to URI length limits imposed by Web servers, even though the HTTP protocol does not set any limits on GET request length. >> >> The easiest solution is to expose the resource using POST instead of GET but this violates REST principles, and the semantics of POST (I am using POST when I mean GET). >> >> The other alternative is to divide the interaction in 2 steps: >> >> 1 - POST my-algorithm/args x1=1&x2=2.... >> This creates a resource that represents the algorithm's arguments we want to pass: >> 201 Created >> location: /my-algorithm/args/args-resource-1 >> >> 2 - GET /my-algorithm/args/args-resource-1 >> >> This solution seems to be more RESTful but I fear it will complicate the interaction and potentially break statelessness (how long should the server keep the arguments just posted?). >> I am really tempted to follow the first solution even though it violates REST principles because I don't see any drawbacks and it simplifies interaction. >> >> I would appreciate some comments and suggestions. >> >> Thank you. >> >> >> > > ----------------------------------- > Jan Algermissen, Consultant > NORD Software Consulting > > Mail: algermissen@... > Blog: http://www.nordsc.com/blog/ > Work: http://www.nordsc.com/ > ----------------------------------- > > > > > > > ------------------------------------ > > Yahoo! 
Groups Links > > > ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
Hi Jan, The number of parameters is big but they are repetitive in nature. In this specific case they are "tabular" - just pairs of related values. Dário On Thu, Jun 24, 2010 at 8:11 AM, Jan Algermissen <algermissen1971@...>wrote: > Dário, > > On Jun 23, 2010, at 5:35 PM, Dário Abdulrehman wrote: > > > > > > > I have one resource which is an algorithm that can receive a potentially > large input, say, a thousand arguments. > > GET my-algorithm?x1=1&x2=2.... > > If you are building a service that is consumed by a machine client it is > also worth to consider that the client would need to have knowledge of all > the parameters that make sense to be used to construct the appropriate > request. If the number of arguments is that big you could end up in > maintenance nightmare. > > Maybe you find domain concepts behind certain parameter combinations that > you can then represent as distinct resources. This would reduce the > knowledge that needs to be shared and also the number of parameters. > > Example: > > Instead of > > /items?type=customer&status=100&potential=5 > > you might have > > <link href="/highPotCust" rel=" > http://your.org/linkrels/high-potential-customers"/> > > Jan > > > > > > > > > > I encountered some problems due to URI length limits imposed by Web > servers, even though the HTTP protocol does not set any limits on GET > request length. > > > > The easiest solution is to expose the resource using POST instead of GET > but this violates REST principles, and the semantics of POST (I am using > POST when I mean GET). > > > > The other alternative is to divide the interaction in 2 steps: > > > > 1 - POST my-algorithm/args x1=1&x2=2.... 
> > This creates a resource that represents the algorithm's arguments we > want to pass: > > 201 Created > > location: /my-algorithm/args/args-resource-1 > > > > 2 - GET /my-algorithm/args/args-resource-1 > > > > This solution seems to be more RESTful but I fear it will complicate the > interaction and potentially break statelessness (how long should the server > keep the arguments just posted?). > > I am really tempted to follow the first solution even though it violates > REST principles because I don't see any drawbacks and it simplifies > interaction. > > > > I would appreciate some comments and suggestions. > > > > Thank you. > > > > > > > > ----------------------------------- > Jan Algermissen, Consultant > NORD Software Consulting > > Mail: algermissen@... > Blog: http://www.nordsc.com/blog/ > Work: http://www.nordsc.com/ > ----------------------------------- > > > > >
Hi Subbu,

As I initially said, I can't find any reason for not using POST, except that it violates its semantics. If I understood it correctly, POST should be used to create resources. Could you explain why you recommend using POST in this case?

Thank you,
Dário

On Wed, Jun 23, 2010 at 8:17 PM, Subbu Allamaraju <subbu@...> wrote:
> If the query is not supposed to be permanent, use POST. When the query is not permanent and not repeated often, cacheability is not going to be an issue. If indeed cacheability is desired, create a stored query (as illustrated in http://my.safaribooksonline.com/9780596809140/142).
>
> Subbu
>
> On Jun 23, 2010, at 8:35 AM, Dário Abdulrehman wrote:
> >
> > I have one resource which is an algorithm that can receive a potentially large input, say, a thousand arguments.
> > GET my-algorithm?x1=1&x2=2....
> >
> > I encountered some problems due to URI length limits imposed by Web servers, even though the HTTP protocol does not set any limits on GET request length.
> >
> > The easiest solution is to expose the resource using POST instead of GET but this violates REST principles, and the semantics of POST (I am using POST when I mean GET).
> >
> > The other alternative is to divide the interaction in 2 steps:
> >
> > 1 - POST my-algorithm/args x1=1&x2=2....
> >     This creates a resource that represents the algorithm's arguments we want to pass:
> >     201 Created
> >     location: /my-algorithm/args/args-resource-1
> >
> > 2 - GET /my-algorithm/args/args-resource-1
> >
> > This solution seems to be more RESTful but I fear it will complicate the interaction and potentially break statelessness (how long should the server keep the arguments just posted?).
> > I am really tempted to follow the first solution even though it violates REST principles because I don't see any drawbacks and it simplifies interaction.
> >
> > I would appreciate some comments and suggestions.
> >
> > Thank you.
There is no such exclusive requirement to use POST to create resources. POST is meant for non-idempotent and unsafe changes, but that does not preclude you from using it for safe and/or idempotent operations. There are disadvantages, but it is a matter of tradeoffs.

Subbu

On Jun 24, 2010, at 3:27 AM, Dário Abdulrehman wrote:
> Hi Subbu,
>
> As I initially said I can't find any reason for not using POST, except that it violates its semantics. If I understood it correctly, POST should be used to create resources. Could you explain why in this case do you recommend using POST?
>
> Thank you,
> Dário
>
> On Wed, Jun 23, 2010 at 8:17 PM, Subbu Allamaraju <subbu@subbu.org> wrote:
> If the query is not supposed to be permanent, use POST. When the query is not permanent and not repeated often, cacheability is not going to be an issue. If indeed cacheability is desired, create a stored query (as illustrated in http://my.safaribooksonline.com/9780596809140/142).
>
> Subbu
>
> On Jun 23, 2010, at 8:35 AM, Dário Abdulrehman wrote:
> >
> > I have one resource which is an algorithm that can receive a potentially large input, say, a thousand arguments.
> > GET my-algorithm?x1=1&x2=2....
> >
> > I encountered some problems due to URI length limits imposed by Web servers, even though the HTTP protocol does not set any limits on GET request length.
> >
> > The easiest solution is to expose the resource using POST instead of GET but this violates REST principles, and the semantics of POST (I am using POST when I mean GET).
> >
> > The other alternative is to divide the interaction in 2 steps:
> >
> > 1 - POST my-algorithm/args x1=1&x2=2....
> >     This creates a resource that represents the algorithm's arguments we want to pass:
> >     201 Created
> >     location: /my-algorithm/args/args-resource-1
> >
> > 2 - GET /my-algorithm/args/args-resource-1
> >
> > This solution seems to be more RESTful but I fear it will complicate the interaction and potentially break statelessness (how long should the server keep the arguments just posted?).
> > I am really tempted to follow the first solution even though it violates REST principles because I don't see any drawbacks and it simplifies interaction.
> >
> > I would appreciate some comments and suggestions.
> >
> > Thank you.
I'm sure this has been discussed before, but I wasn't able to dig up the thread: has anybody re-drawn Roy's excellent but ugly diagrams (no offence) from chapter 5 of the dissertation?

Thanks,
Stefan

--
Stefan Tilkov, http://www.innoq.com/blog/st/
Hi Stefan,
Haven't been able to find it either. Just on that topic, I find Figure 5-7 (http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm#fig_5_7) a little hard to understand with regard to the text from section 5.1.6 (http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm#sec_5_1), where Roy states "Layers can be used to encapsulate legacy services and to protect new services from legacy clients, simplifying components by moving infrequently used functionality to a shared intermediary". Does Fig. 5-7 demonstrate this, and if so, how?
I can see how layering helps with scalability (just add in servers) and security (communicate via a proxy/firewall) but the above in blue....
Sean.
--- On Thu, 24/6/10, Stefan Tilkov <stefan.tilkov@...> wrote:
From: Stefan Tilkov <stefan.tilkov@...>
Subject: [rest-discuss] Diagrams from Dissertation
To: "Rest Group Discussion" <rest-discuss@yahoogroups.com>
Date: Thursday, 24 June, 2010, 14:57
I'm sure this has been discussed before, but wasn't able to dig up the thread: Has anybody re-drawn Roy's excellent but ugly diagrams (no offence) from chapter 5 of the dissertation?
Thanks,
Stefan
--
Stefan Tilkov, http://www.innoq.com/blog/st/
What, you couldn't tell from the bold grey line that the proxy is using a different protocol to talk to the rhombus server, which is clearly an alien component? I am shocked and dismayed. ;-)

Don't try to read too much into the box and line diagrams. Yes, everything is there for a reason, but the reason would only be clear if you had asked me that question during my final defense. It is a form of bait.

In any case, the sentence is about the ability to move rarely used services like wais and gopher out to an intermediary rather than embedding the entire multiple-MB libraries for those protocols into every client executable. This ability can be applied in general for layered systems with a uniform interface.

Cheers,

....Roy

On Jun 28, 2010, at 7:18 AM, Sean Kennedy <seandkennedy@...> wrote:
> Hi Stefan,
> Haven't been able to find it either. Just on that topic, I find Figure 5-7 http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm#fig_5_7 a little hard to understand with regard to the text from section 5.1.6 http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm#sec_5_1 where Roy states "Layers can be used to encapsulate legacy services and to protect new services from legacy clients, simplifying components by moving infrequently used functionality to a shared intermediary". Does Fig. 5-7 demonstrate this and if so how?
>
> I can see how layering helps with scalability (just add in servers) and security (communicate via a proxy/firewall) but the above in blue....
>
> Sean.
>
> --- On Thu, 24/6/10, Stefan Tilkov <stefan.tilkov@...> wrote:
>
> From: Stefan Tilkov <stefan.tilkov@...>
> Subject: [rest-discuss] Diagrams from Dissertation
> To: "Rest Group Discussion" <rest-discuss@yahoogroups.com>
> Date: Thursday, 24 June, 2010, 14:57
>
> I'm sure this has been discussed before, but wasn't able to dig up the thread: Has anybody re-drawn Roy's excellent but ugly diagrams (no offence) from chapter 5 of the dissertation?
>
> Thanks,
> Stefan
>
> --
> Stefan Tilkov, http://www.innoq.com/blog/st/
I finally got around to blogging about the new work we are planning: http://codebetter.com/blogs/glenn.block/archive/2010/06/24/resting-from-mef-or-the-mef-dealer-is-at-rest.aspx

Feedback/comments on the post appreciated.

Thanks,
Glenn
I am wondering whether 'things' like Apache request handlers (e.g. what is happening in the translation phase of Apache) or filters (e.g. a logging filter one adds to a Client object in Jersey) are considered part of the connector or part of the component?

I think 'connector', but would like a second opinion.

Thanks,

Jan
My company is examining adopting a RESTful model to its enterprise architecture. Part of the discussion comes down to finding RESTful idioms, standards, and/or tools to apply to certain recurring enterprise integration problems.
Specifically, we are trying to find RESTful solutions to:
1) Guaranteed Delivery - we need a paradigm to follow so that one service can transfer a sequence of resource representations to another reliably even though both services and the network suffer temporary unreliability
2) Distributed Transactions - we need a paradigm to allow state changes on multiple services to happen so that the changes succeed or fail as a unit
3) Long running operations - we need asynchronous invocations between services and a mechanism for the invoking service to find out when the invoked service is done given that this work may take indefinitely long
4) Workflow Orchestration - we would like to have orchestration services that define business processes via standardized representations (e.g. BPMN), then execute instances of those processes and build up a process instance execution data resource by interacting with other RESTful resources using message exchange patterns that could specify the above behaviors.
I'm sure that some of these topics have been discussed to death. I'm not looking to repeat the details in one thread, but just wondering if people can give me quick dump of the conventional wisdom as to how to approach such problems, and/or point me to solutions (or alternatives) that they consider consistent with RESTful approaches.
I found the Rest-* effort at http://www.jboss.org/reststar . The name of this project tweaks me, but some of the specs under it seem relevant. Are there others? Are these problems that the community sees value in solving through standards and tooling?
Bryan,
On Jun 30, 2010, at 6:41 PM, Bryan Taylor wrote:
> My company is examining adopting a RESTful model to its enterprise architecture. Part of the discussion comes down to finding RESTful idioms, standards, and/or tools to apply to certain recurring enterprise integration problems.
>
> Specifically, we are trying to find RESTful solutions to:
>
> 1) Guaranteed Delivery - we need a paradigm to follow so that one service can transfer a sequence of resource representations to another reliably even though both services and the network suffer temporary unreliability
HTTP solves this problem by way of the concept of idempotent methods. You can call a GET, PUT or DELETE any number of times until the server responds. IOW: keep trying until you have an answer from the server.
POST is non-idempotent[1], but there are ways to work around this, for example by including request IDs in the POST. The server needs to keep track of the IDs it has seen for some time. This allows the server to detect re-postings.
[1] That is why your browser asks you for confirmation upon re-POSTing some request.
>
> 2) Distributed Transactions - we need a paradigm to allow state changes on multiple services to happen so that the changes succeed or fail as a unit
Why do you need distributed transactions? The usual answer (orthogonal to REST-or-not) is that you would rather do those things with compensations anyway. 2PC is an illusion.
>
> 3) Long running operations - we need asynchronous invocations between services and a mechanism for the invoking service to find out when the invoked service is done given that this work may take indefinitely long
Use polling. The HTTP response code for this kind of stuff is 202 Accepted.
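[Editor's note] The polling pattern with 202 Accepted could be sketched roughly as follows; the fake status endpoint and poll counts are invented for illustration:

```python
def make_fake_status_endpoint(polls_until_done=3):
    """Build a fake 'GET /status' that finishes after a few polls."""
    state = {"remaining": polls_until_done}
    def get_status():
        if state["remaining"] > 0:
            state["remaining"] -= 1
            return 202, None              # 202 Accepted: still working
        return 200, {"result": 42}        # 200 OK: operation complete
    return get_status

def poll_until_complete(get_status, max_polls=10):
    """Client side: GET the status URI until the server reports 200 OK.

    A real client would sleep between polls (ideally for the interval the
    server suggests) instead of looping tightly.
    """
    for _ in range(max_polls):
        status, body = get_status()
        if status == 200:
            return body
    raise TimeoutError("operation did not complete in time")

body = poll_until_complete(make_fake_status_endpoint())
```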
>
> 4) Workflow Orchestration - we would like to have orchestration services that define business processes via standardized representations (eg BPMN), then execute instances of those processes and build up an process instance execution data resource by interacting with other RESTful resources using message exchange patterns that could specify the above behaviors.
One way to address this is to have a coordinating service (see the Process Manager pattern in Hohpe's Enterprise Integration Patterns). Consider, for example, how a trouble ticketing system coordinates the various human or machine clients. The trouble ticket itself is the instance execution data resource. My experience is that you usually have these data resources in legacy applications anyway (contracts in a contract management system, orders in the order management systems, etc.). I suggest you use those.
If you want to work with something like BPMN and generation, my idea would be to generate client-side code from the model, because in a RESTful system it is really the *client component* (aka user agent) that determines the application.
>
> I'm sure that some of these topics have been discussed to death. I'm not looking to repeat the details in one thread, but just wondering if people can give me quick dump of the conventional wisdom
I am afraid you won't be able to skip the learning curve :-) The quick dump would be quite a large dump :-)
> as to how to approach such problems, and/or point me to solutions (or alternatives) that they consider consistent with RESTful approaches.
>
> I found the Rest-* effort at http://www.jboss.org/reststar . The name of this project tweaks me, but some of the specs under it seem relevant.
Roy on REST-*: http://tech.groups.yahoo.com/group/rest-discuss/message/13266 ('nuf said :-)
> Are there others? Are these problems that the community sees value in solving through standards and tooling?
All the standards are there (HTTP 1.1 and friends) except for the media types. These are where your modeling effort should (erm... must) go.
Jan
P.S. I'll leave it at this and await your follow-up questions
-----------------------------------
Jan Algermissen, Consultant
NORD Software Consulting
Mail: algermissen@...
Blog: http://www.nordsc.com/blog/
Work: http://www.nordsc.com/
-----------------------------------
Bryan,

These answers are scoped to REST-over-HTTP:

1) Guaranteed Delivery
Your client must PUT (or POST) with an If-None-Match: * header [i] to a unique URI. If the URI responds with 412 Precondition Failed then the representation has been delivered.

2) Distributed Transactions
Use compensating transactions [ii] (there is further discussion of this on pages 213-215 of RESTful Web Services Cookbook [iii]).

3) Long running operations
The server creates a status URI for the invoked operation. If this is as a result of a client request, a redirect should suffice for URI discovery. The client GETs the status. If the response is 200 OK then the operation is complete. If the operation is incomplete, the server responds with 202 Accepted [iv], and an estimate of when the operation is expected to complete can be communicated via the Expires header [v].

4) Workflow Orchestration
I don't have a canned answer to this. I'm not aware of a RESTful business process protocol (though I certainly think that we need one) and whilst I am not familiar with BPMN, I note that its Wikipedia entry lists "ambiguity and confusion in sharing BPMN models" as a weakness [vi], which doesn't bode well for its RESTfulness, I suspect (alongside "converting BPMN models to executable environments").

[i] http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.26
[ii] http://wikipedia.org/wiki/Compensating_transaction
[iii] http://www.amazon.com/dp/0596801688/
[iv] http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.3
[v] http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.21
[vi] http://en.wikipedia.org/wiki/Business_Process_Modeling_Notation#Weaknesses_of_BPMN

Regards,
Alan Dean

On Wed, Jun 30, 2010 at 17:41, Bryan Taylor <bryan_w_taylor@...> wrote:
> My company is examining adopting a RESTful model to its enterprise architecture. Part of the discussion comes down to finding RESTful idioms, standards, and/or tools to apply to certain recurring enterprise integration problems.
>
> Specifically, we are trying to find RESTful solutions to:
>
> 1) Guaranteed Delivery - we need a paradigm to follow so that one service can transfer a sequence of resource representations to another reliably even though both services and the network suffer temporary unreliability
>
> 2) Distributed Transactions - we need a paradigm to allow state changes on multiple services to happen so that the changes succeed or fail as a unit
>
> 3) Long running operations - we need asynchronous invocations between services and a mechanism for the invoking service to find out when the invoked service is done given that this work may take indefinitely long
>
> 4) Workflow Orchestration - we would like to have orchestration services that define business processes via standardized representations (eg BPMN), then execute instances of those processes and build up a process instance execution data resource by interacting with other RESTful resources using message exchange patterns that could specify the above behaviors.
>
> I'm sure that some of these topics have been discussed to death. I'm not looking to repeat the details in one thread, but just wondering if people can give me a quick dump of the conventional wisdom as to how to approach such problems, and/or point me to solutions (or alternatives) that they consider consistent with RESTful approaches.
>
> I found the Rest-* effort at http://www.jboss.org/reststar . The name of this project tweaks me, but some of the specs under it seem relevant. Are there others? Are these problems that the community sees value in solving through standards and tooling?
Bryan: On Wed, Jun 30, 2010 at 12:41, Bryan Taylor <bryan_w_taylor@yahoo.com> wrote: > My company is examining adopting a RESTful model to its enterprise architecture. Part of the discussion comes down to finding RESTful idioms, standards, and/or tools to apply to certain recurring enterprise integration problems. > > Specifically, we are trying to find RESTful solutions to: > > 1) Guaranteed Delivery - we need a paradigm to follow so that one service can transfer a sequence of resource representations to another reliably even though both services and the network suffer temporary unreliability HTTP does not offer a Guaranteed Delivery model. However you can achieve the same results using Idempotency. For example, HTTP PUT is defined as an Idempotent write operation. It can be safely repeated by the client until the server sends an acknowledgement. You can also use more complicated patterns such as using HTTP POST against a container URI after the client has first acquired a concurrency token from the server (a "ticket") and using this token for each attempt until the client gets an acknowledgement. > > 2) Distributed Transactions - we need a paradigm to allow state changes on multiple services to happen so that the changes succeed or fail as a unit > > 3) Long running operations - we need asynchronous invocations between services and a mechanism for the invoking service to find out when the invoked service is done given that this work may take indefinitely long In both DT and LRT, HTTP offers the 202 Accept response to requests. The spec includes information on response bodies that can include pointers to resources where progress indicators can be displayed throughout the life for the activity. Using this pattern it is not necessary to expose transaction token details (commits, rollbacks, etc.) to the initiating client. Instead, clients can be given a pointer to the progress resource and monitor the ultimate success/failure. 
Since HTTP allows any party to act as client or server, distributed async/long-running operations can be used to effectively mimic DTs.

> 4) Workflow Orchestration - we would like to have orchestration services that define business processes via standardized representations (e.g. BPMN), then execute instances of those processes and build up a process instance execution data resource by interacting with other RESTful resources using message exchange patterns that could specify the above behaviors.

Sounds like you need to define a media type that encapsulates your specific use cases and allows available steps to be expressed as hypermedia elements within the response representations.

> I'm sure that some of these topics have been discussed to death. I'm not looking to repeat the details in one thread, but just wondering if people can give me a quick dump of the conventional wisdom as to how to approach such problems, and/or point me to solutions (or alternatives) that they consider consistent with RESTful approaches.
>
> I found the Rest-* effort at http://www.jboss.org/reststar . The name of this project tweaks me, but some of the specs under it seem relevant. Are there others? Are these problems that the community sees value in solving through standards and tooling?
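[Editor's sketch] The 202 Accepted / progress-resource pattern, seen from the invoking service's side, might be structured like this. Assumptions are mine: `post` and `get` are injected stand-ins for the HTTP calls, and the `{"status": ...}` progress representation is an invented shape, not anything specified in the thread.

```python
def start_and_monitor(post, get, poll_limit=100):
    """Kick off a long-running operation and poll its progress resource.
    `post()` returns (status_code, headers); `get(uri)` returns a parsed
    progress representation such as {"status": "running"}."""
    status_code, headers = post()
    if status_code != 202:                   # server must accept the async work
        raise RuntimeError(f"expected 202 Accepted, got {status_code}")
    progress_uri = headers["Location"]       # pointer to the progress resource
    for _ in range(poll_limit):
        progress = get(progress_uri)         # client only ever sees progress,
        if progress["status"] in ("done", "failed"):  # never commit/rollback tokens
            return progress
    raise TimeoutError("operation still running after poll limit")
```

Note how the transaction internals stay hidden: the client's whole contract is "POST, receive a monitor URI, watch it", which is what makes the pattern usable for both long-running operations and DT-like coordination.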
--- In rest-discuss@yahoogroups.com, Jan Algermissen <algermissen1971@...> wrote:
> I am wondering whether 'things' like Apache request handlers (e.g. what is happening in the translation phase of Apache) or filters (e.g. a logging filter) one adds to a Client object in Jersey are considered part of the connector or part of the component?
>
> I think 'connector' but would like a second opinion.
>
> Thanks,
> Jan

Ya, I've puzzled over similar entities wondering what bucket they should fall into. I think there are a lot of grey areas. For example, something that is pure "routing" (message in, message out with little change other than the destination address/URI) seems like a connector to me. Except what if it's a customer service app that routes incoming emails to the most appropriate employee by performing sophisticated analysis of the email body? Still a connector? Does the dependence on application state/logic make it a component? So does a connector need to be independent of the application domain and reusable across contexts? How reusable does it need to be? What if my router only looked for a keyword in the subject line? What if it just did a round robin across all customer service employees? Where's the line between connector and component?

Curious what others think as well.

Regards,
Andrew
On Wed, Jun 30, 2010 at 12:38 PM, Jan Algermissen <algermissen1971@...> wrote:
>> 2) Distributed Transactions - we need a paradigm to allow state changes on multiple services to happen so that the changes succeed or fail as a unit
>
> Why do you need distributed transactions? The usual (and orthogonal to REST-or-not) answer is that you rather do those things with compensations anyway. 2PC is an illusion.

Compensation is 2PC. And often does not work.

There is no solution for distributed agreement between more than 2 participants that does not take at least 2 phases, altho they may be disguised by clever naming (e.g. "compensation").

I quit working on solutions for this problem on this list when Roy ruled out RESTful transactions, and so will not belabor the issue more now.
From: Jan Algermissen <algermissen1971@...>
> > 2) Distributed Transactions - we need a paradigm to allow state changes on multiple services to happen so that the changes succeed or fail as a unit
> Why do you need distributed transactions? The usual (and orthogonal to REST-or-not) answer is that you rather do those things with compensations anyway. 2PC is an illusion.
I don't "need" distributed transactions per se, but I don't know what else to call the problem they solve.
I need a way for multiple resources in different "services" (an
autonomous group of servers) to change in sync. I'll give an example.
If we receive an order, I need to have the billing service process the
billing request and link back to the order, the operations service to
create a product delivery resource and link it to the order, and the
order itself be updated to reflect that these things occurred.
I don't necessarily need a two phase commit approach to "distributed
transactions", I just need a way to guarantee that we don't end up in a
state where billing succeeded but no product delivery document was
created or vice versa. There are three changes to be made and we need
to assure that either all three or zero of them occur.
What do you mean by 2PC is an illusion? I think it violates principles
of service orientation (autonomy, loose coupling, and conversational
state) and the CAP theorem tells me that if I try to get global
consistency I cannot have both availability and tolerance for
unreliable networks, but 2PC certainly can achieve the goal I'm after
if these were not important. I don't know whether or not it violates
RESTful principles. If it does, then there must be some way to fulfill
the same functional goals (eg: don't ship the product and fail to bill
or vice versa).
Compensations seems like a reasonable approach: I decompose the
interaction with each service into state changes that can be undone and
I invoke them all (using a solution for guaranteed delivery) until I
know whether they resolved successfully or not, and if not, I use
guaranteed delivery to request the undo.
On Wed, Jun 30, 2010 at 3:03 PM, Bryan Taylor <bryan_w_taylor@...> wrote:
> Compensations seems like a reasonable approach: I decompose the
> interaction with each service into state changes that can be undone and
> I invoke them all (using a solution for guaranteed delivery) until I
> know whether they resolved successfully or not, and if not, I use
> guaranteed delivery to request the undo.

Some problems with compensation in order-fulfillment situations:
* Who knows if all the state changes have resolved successfully or not?
* How do you know the state changes will not be undone?
* How long do you wait before you cut the product delivery doc, process the billing request, etc.?
* How do you compensate if the product has gone out the door?

An alternative approach is called provisional-final (among a lot of other names), which is similar to a quote preceding an order: the quote is provisional, the order is final. If the order never arrives, the quote is not actionable.

The first versions of all of the updates are provisional, and then positive state-change messages are PUT or POSTed to make them final or cancel them.

Can all be done RESTfully (I claim, altho Roy may disagree).
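[Editor's sketch] Bob's provisional-final idea is essentially a small state machine per resource. The class and method names below are mine; "PUT or POST the positive state-change message" is modeled here as a plain method call.

```python
class ProvisionalResource:
    """Quote-then-order style lifecycle: the first write creates a
    provisional record; a later positive state-change message (a PUT or
    POST in HTTP terms) either makes it final or cancels it."""

    def __init__(self):
        self.state = "provisional"

    def finalize(self):
        if self.state != "provisional":
            raise ValueError(f"cannot finalize from state {self.state!r}")
        self.state = "final"

    def cancel(self):
        if self.state == "final":
            raise ValueError("already final; compensate at the business level")
        self.state = "cancelled"

    @property
    def actionable(self):
        # A quote that never became an order is simply never acted upon.
        return self.state == "final"
```

The safety property is that nothing provisional is ever acted on, so a participant that crashes before finalizing leaves no half-done work behind.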
Thanks for your answers. They seem to be similar to Jan's, so I'll watch to see whether they emerge as the consensus. They certainly seem reasonable.
I'm not attached to BPMN in particular, but then the question is what else is there? This standard is well tooled in terms of GUIs for building the actual documents. I don't have personal knowledge of the difficulties of consuming this (or other standards) in runtime environments, but there are workflow engines, both COTS and open source, that try to do it. However, most have a SOAP bent (sometimes BPEL) for orchestrating service calls once you get to the runtime. I don't think the hard part here is SOAP vs REST, but the inherent difficulty in trying to express how to map data elements from the process instance representation to and from what the orchestrated endpoints use. One thought is to code these mappings with either XSLT or JavaScript that can be delivered as RESTful resources.
________________________________
From: Alan Dean <alan.dean@...>
4) Workflow Orchestration
I don't have a canned answer to this. I'm not aware of a RESTful business process protocol (though I certainly think that we need one) and whilst I am not familiar with BPMN, I note that its Wikipedia entry lists "ambiguity and confusion in sharing BPMN models" as a weakness [vi], which doesn't bode well for its RESTfulness, I suspect (alongside "converting BPMN models to executable environments").
<snip> I just need a way to guarantee that we don't end up in a state where billing succeeded but no product delivery document was created or vice versa. There are three changes to be made and we need to assure that either all three or zero of them occur. </snip>

Think about Amazon.com. They don't expose DT, yet solve the problem thousands of times each day. Even w/ third party sellers.

Most of the time I need this kind of behavior I implement a "Saga" (search Garcia-Molina). Example:

client POSTs an order to server1
server1 replies 202 Accepted w/ Location /orders/1
server1 POSTs a billing request to server2
server2 replies 201 Created w/ Location of completed billing
server1 updates /orders/1 w/ progress indicating billing was successful
server1 POSTs a shipping request to server3
server3 replies 400 "we don't ship to that person"
server1 updates /orders/1 w/ progress indicating shipping failed
server1 POSTs refund request to server2
server2 replies 200 OK
server1 updates /orders/1 w/ progress indicating refund was processed and job is done

The limitation: each step must be reversible. And none of this needs to bleed out to the client.

mca
http://amundsen.com/blog/
http://mamund.com/foaf.rdf#me

On Wed, Jun 30, 2010 at 16:03, Bryan Taylor <bryan_w_taylor@...> wrote:
>
> From: Jan Algermissen <algermissen1971@...>
>> > 2) Distributed Transactions - we need a paradigm to allow state changes on multiple services to happen so that the changes succeed or fail as a unit
>
>> Why do you need distributed transactions? The usual (and orthogonal to REST-or-not) answer is that you rather do those things with compensations anyway. 2PC is an illusion.
>
> I don't "need" distributed transactions per se, but I don't know what else to call the problem they solve.
>
> I need a way for multiple resources in different "services" (an
> autonomous group of servers) to change in sync. I'll give an example.
> If we receive an order, I need to have the billing service process the billing request and link back to the order, the operations service to create a product delivery resource and link it to the order, and the order itself be updated to reflect that these things occurred.
>
> I don't necessarily need a two phase commit approach to "distributed transactions", I just need a way to guarantee that we don't end up in a state where billing succeeded but no product delivery document was created or vice versa. There are three changes to be made and we need to assure that either all three or zero of them occur.
>
> What do you mean by 2PC is an illusion? I think it violates principles of service orientation (autonomy, loose coupling, and conversational state) and the CAP theorem tells me that if I try to get global consistency I cannot have both availability and tolerance for unreliable networks, but 2PC certainly can achieve the goal I'm after if these were not important. I don't know whether or not it violates RESTful principles. If it does, then there must be some way to fulfill the same functional goals (eg: don't ship the product and fail to bill or vice versa).
>
> Compensations seems like a reasonable approach: I decompose the interaction with each service into state changes that can be undone and I invoke them all (using a solution for guaranteed delivery) until I know whether they resolved successfully or not, and if not, I use guaranteed delivery to request the undo.
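[Editor's sketch] mike's Saga walkthrough boils down to: run each step in order, remember what succeeded, and on the first failure run the completed steps' reversals in the opposite order. A minimal orchestrator under assumed shapes (the `(name, action, compensation)` triples and the log are mine, for illustration):

```python
def run_saga(steps, log):
    """steps: list of (name, action, compensation) triples; `action` raises
    on failure. Returns True if every step committed, False if the saga was
    rolled back via compensations. Each step must be reversible."""
    completed = []
    for name, action, compensate in steps:
        try:
            action()
        except Exception as exc:
            log.append(f"{name}: failed ({exc})")
            for done_name, undo in reversed(completed):
                undo()                           # e.g. POST a refund request
                log.append(f"{done_name}: compensated")
            return False
        log.append(f"{name}: ok")
        completed.append((name, compensate))
    return True
```

In the order example, billing is a step whose compensation is the refund; when shipping fails with "we don't ship to that person", the orchestrator walks back through billing and issues the refund.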
On Jun 30, 2010, at 9:00 PM, Bob Haugen wrote:
> On Wed, Jun 30, 2010 at 12:38 PM, Jan Algermissen
> <algermissen1971@...> wrote:
>>> 2) Distributed Transactions - we need a paradigm to allow state changes on multiple services to happen so that the changes succeed or fail as a unit
>>
>> Why do you need distributed transactions? The usual (and orthogonal to REST-or-not) answer is that you rather do those things with compensations anyway. 2PC is an illusion.
>
> Compensation is 2PC. And often does not work.

I meant compensations as in paying customers to take another flight if theirs is overbooked, or like sending a credit note.

> There is no solution for distributed agreement between more than 2
> participants that does not take at least 2 phases, altho they may be
> disguised by clever naming (e.g. "compensation").
>
> I quit working on solutions for this problem on this list when Roy
> ruled out RESTful transactions, and so will not belabor the issue more
> now.

Jan

-----------------------------------
Jan Algermissen, Consultant
NORD Software Consulting

Mail: algermissen@...
Blog: http://www.nordsc.com/blog/
Work: http://www.nordsc.com/
-----------------------------------
On Wed, Jun 30, 2010 at 3:59 PM, mike amundsen <mamund@...> wrote:
> I just need a way to guarantee that we don't end up in a
> state where billing succeeded but no product delivery document was
> created or vice versa. There are three changes to be made and we need
> to assure that either all three or zero of them occur.
> </snip>
>
> Think about Amazon.com. they don't expose DT, yet solve the problem
> thousands of times each day. even w/ third party sellers.
>
> Most of the time I need this kind of behavior I implement a "Saga"
> (search Garcia-Molina)

Last time I talked to Amazon about how they do this (which was admittedly almost 10 years ago) they did a version of provisional-final interactions. The provisional state was called a request-for-commitment, and then they either cancelled it or sent a commit message. Don't know if they still do it that way.
On Jun 30, 2010, at 10:19 PM, Bob Haugen wrote: > On Wed, Jun 30, 2010 at 3:03 PM, Bryan Taylor <bryan_w_taylor@...> wrote: >> Compensations seems like a reasonable approach: I decompose the >> interaction with each service into state changes that can be undone and >> I invoke them all (using a solution for guaranteed delivery) until I >> know whether they resolved successfully or not, and if not, I use >> guaranteed delivery to request the undo. > > Some problems with compensation in order-fulfillment situations: > * Who knows if all the state changes have resolved successfully or not? > * How do you know the state changes will not be undone? > * How long do you wait before you cut the product delivery doc, > process the billing request, etc? > * How do you compensate if the product has gone out the door? Yes. That is why compensation can only happen at the business level. E.g. when my book has gone out the door I cannot cancel the order. I have to wait and then send it back. That is why there is no undo in accounting. You book something on an account and that's that. All you can do to fix it is to do a compensating booking the other way round. > > An alternative approach is called provisional-final (among a lot of > other names), which is similar to a quote preceding an order: the > quote is provisional, the order is final. If the order never arrives, > the quote is not actionable. > > The first versions of all of the updates are provisional, and then > positive state changes messages are PUT or POSTed to make them final > or cancel them. > > Can all be done RESTfully (I claim, altho Roy may disagree). All that is IMHO beyond REST. It is application semantics. Jan ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
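[Editor's sketch] Jan's accounting point ("no undo in accounting, only a compensating booking the other way round") is easy to make concrete: the ledger is append-only, and a correction is a new entry with the opposite sign. A toy sketch, names mine:

```python
def book(ledger, account, amount):
    """Append-only: entries are never edited or deleted."""
    ledger.append((account, amount))

def compensate(ledger, account, amount):
    """Not an undo: a second booking the other way round."""
    book(ledger, account, -amount)

def balance(ledger, account):
    return sum(amt for acct, amt in ledger if acct == account)
```

The balance nets to zero after compensation, but both entries remain: the history of what happened, including the mistake, is preserved, which is exactly why compensation is a business-level notion rather than a transactional rollback.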
Bob: whatever it is that Amazon does (or did, or will do); it's not exposed to the client application (browser) or my inbox other than a "thanks" or "oops". mca http://amundsen.com/blog/ http://mamund.com/foaf.rdf#me On Wed, Jun 30, 2010 at 17:08, Bob Haugen <bob.haugen@...> wrote: > On Wed, Jun 30, 2010 at 3:59 PM, mike amundsen <mamund@...> wrote: >> I just need a way to guarantee that we don't end up in a >> state where billing suceeded but no product delivery document was >> created or vice versa. There are three changes to be made and we need >> to assure that either all three or zero of them occur. >> </snip> >> >> Think about Amazon.com. they don't expose DT, yet solve the problem >> thousands of times each day. even w/ third party sellers. >> >> Most of the time I need this kind of behavior I implement a "Saga" >> (search Garcia-Molina) > > Last time I talked to Amazon about how they do this (which admittedly > almost 10 years ago) they did a version of provisional-final > interactions. The provisional state was called a > request-for-commitment, and then they either cancelled it or sent a > commit message. Don't know if they still do it that way. >
On Jun 30, 2010, at 11:22 PM, mike amundsen wrote: > Bob: > > whatever it is that Amazon does (or did, or will do); it's not exposed > to the client application (browser) or my inbox other than a "thanks" > or "oops". Yeah - that is why they call it the "Thanks-or-Oops" transaction model :-) Jan (should probably go to bed :-) > > mca > http://amundsen.com/blog/ > http://mamund.com/foaf.rdf#me > > > > > On Wed, Jun 30, 2010 at 17:08, Bob Haugen <bob.haugen@...> wrote: >> On Wed, Jun 30, 2010 at 3:59 PM, mike amundsen <mamund@...> wrote: >>> I just need a way to guarantee that we don't end up in a >>> state where billing suceeded but no product delivery document was >>> created or vice versa. There are three changes to be made and we need >>> to assure that either all three or zero of them occur. >>> </snip> >>> >>> Think about Amazon.com. they don't expose DT, yet solve the problem >>> thousands of times each day. even w/ third party sellers. >>> >>> Most of the time I need this kind of behavior I implement a "Saga" >>> (search Garcia-Molina) >> >> Last time I talked to Amazon about how they do this (which admittedly >> almost 10 years ago) they did a version of provisional-final >> interactions. The provisional state was called a >> request-for-commitment, and then they either cancelled it or sent a >> commit message. Don't know if they still do it that way. >> > > > ------------------------------------ > > Yahoo! Groups Links > > > ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
yep. good ol' '"Thanks-Or-Oops" (rushes to ACM portal to search for papers...) mca http://amundsen.com/blog/ http://mamund.com/foaf.rdf#me On Wed, Jun 30, 2010 at 17:34, Jan Algermissen <algermissen1971@...> wrote: > > On Jun 30, 2010, at 11:22 PM, mike amundsen wrote: > >> Bob: >> >> whatever it is that Amazon does (or did, or will do); it's not exposed >> to the client application (browser) or my inbox other than a "thanks" >> or "oops". > > Yeah - that is why they call it the "Thanks-or-Oops" transaction model :-) > > Jan > > (should probably go to bed :-) > > >> >> mca >> http://amundsen.com/blog/ >> http://mamund.com/foaf.rdf#me >> >> >> >> >> On Wed, Jun 30, 2010 at 17:08, Bob Haugen <bob.haugen@...> wrote: >>> On Wed, Jun 30, 2010 at 3:59 PM, mike amundsen <mamund@...> wrote: >>>> I just need a way to guarantee that we don't end up in a >>>> state where billing suceeded but no product delivery document was >>>> created or vice versa. There are three changes to be made and we need >>>> to assure that either all three or zero of them occur. >>>> </snip> >>>> >>>> Think about Amazon.com. they don't expose DT, yet solve the problem >>>> thousands of times each day. even w/ third party sellers. >>>> >>>> Most of the time I need this kind of behavior I implement a "Saga" >>>> (search Garcia-Molina) >>> >>> Last time I talked to Amazon about how they do this (which admittedly >>> almost 10 years ago) they did a version of provisional-final >>> interactions. The provisional state was called a >>> request-for-commitment, and then they either cancelled it or sent a >>> commit message. Don't know if they still do it that way. >>> >> >> >> ------------------------------------ >> >> Yahoo! Groups Links >> >> >> > > ----------------------------------- > Jan Algermissen, Consultant > NORD Software Consulting > > Mail: algermissen@... > Blog: http://www.nordsc.com/blog/ > Work: http://www.nordsc.com/ > ----------------------------------- > > > > >
Jan wrote:
> I found the Rest-* effort at http://www.jboss.org/reststar . The name of this project tweaks me, but some of the specs under it seem relevant.
Roy on REST-*: http://tech.groups.yahoo.com/group/rest-discuss/message/13266 ('nuf said :-)
I agree these have nothing to do with REST exactly, and I hate the REST-* name. The individual specs seem to try to create RESTful implementations of integration patterns in the same way as Atom and AtomPub attempt to solve pub/sub in a RESTful way. That said, Roy's statement "this is the single dumbest attempt at one-sided 'standardization' of anti-REST architecture" seems mysterious to me and is quite conclusory. How did he leap from these being about something other than REST to them being "anti-REST"? Would AtomPub become anti-REST if it had been wrongly renamed RestPub?
He makes two assertions that follow:
- Distributed transactions are an architectural component of non-REST interaction.
- Message queues are a common integration technique for non-REST architectures.
I could also make the statement that pub/sub syndication models are a common integration technique for non-REST architectures, which would not prove that Atom and AtomPub are non-RESTful. There are patterns of integration, and I expect to find them in every architectural style. If there is some impossibility conjecture here, I'd like to see it stated in a more analytic way, without the bashing. It may be that a distributed transaction pattern must violate one of the RESTful architecture principles, but this is far from obvious. Maybe this is some deep corollary of the CAP theorem or something. Or perhaps not.
The corresponding statement about message queues seems baffling. These solve a harder problem than guaranteed delivery. Queues solve guaranteed delivery to exactly one consumer among competing consumers with some fairness guarantees.
How would I implement a RESTful way to have airport passengers acquire taxi transportation at the airport in a fair way?
On Wed, Jun 30, 2010 at 4:38 PM, mike amundsen <mamund@...> wrote: > yep. good ol' '"Thanks-Or-Oops" > > (rushes to ACM portal to search for papers...) Love it!
On Jun 30, 2010, at 11:42 PM, Bob Haugen wrote:
> On Wed, Jun 30, 2010 at 4:38 PM, mike amundsen <mamund@...> wrote:
>> yep. good ol' "Thanks-Or-Oops"
>>
>> (rushes to ACM portal to search for papers...)

Bet you'll find some WS-Thanks-or-Oops re-invention of the good ol' one :-)

Jan

> Love it!

-----------------------------------
Jan Algermissen, Consultant
NORD Software Consulting

Mail: algermissen@...
Blog: http://www.nordsc.com/blog/
Work: http://www.nordsc.com/
-----------------------------------
+1

On Wed, Jun 30, 2010 at 1:39 PM, Alan Dean <alan.dean@...> wrote:
>
> Bryan,
>
> These answers are scoped to REST-over-HTTP:
>
> 1) Guaranteed Delivery
>
> Your client must PUT (or POST) with an If-None-Match: * header [i] to a unique URI. If the URI responds with 412 Precondition Failed then the representation has been delivered.
>
> 2) Distributed Transactions
>
> Use compensating transactions [ii] (there is further discussion of this on pages 213-215 of RESTful Web Services Cookbook [iii]).
>
> 3) Long running operations
>
> Server creates a status URI for the invoked operation. If this is as a result of a client request, a redirect should suffice for URI discovery. Client GETs status. If response is 200 OK then the operation is complete. If the operation is incomplete, server responds with 202 Accepted [iv] and an estimate of when the operation is expected to complete can be communicated via the Expires header [v].
>
> 4) Workflow Orchestration
>
> I don't have a canned answer to this. I'm not aware of a RESTful business process protocol (though I certainly think that we need one) and whilst I am not familiar with BPMN, I note that its Wikipedia entry lists "ambiguity and confusion in sharing BPMN models" as a weakness [vi], which doesn't bode well for its RESTfulness, I suspect (alongside "converting BPMN models to executable environments").
> [i] http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.26
> [ii] http://wikipedia.org/wiki/Compensating_transaction
> [iii] http://www.amazon.com/dp/0596801688/
> [iv] http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.3
> [v] http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.21
> [vi] http://en.wikipedia.org/wiki/Business_Process_Modeling_Notation#Weaknesses_of_BPMN
>
> Regards,
> Alan Dean
>
> On Wed, Jun 30, 2010 at 17:41, Bryan Taylor <bryan_w_taylor@...> wrote:
>>
>> My company is examining adopting a RESTful model to its enterprise architecture. Part of the discussion comes down to finding RESTful idioms, standards, and/or tools to apply to certain recurring enterprise integration problems.
>>
>> Specifically, we are trying to find RESTful solutions to:
>>
>> 1) Guaranteed Delivery - we need a paradigm to follow so that one service can transfer a sequence of resource representations to another reliably even though both services and the network suffer temporary unreliability
>>
>> 2) Distributed Transactions - we need a paradigm to allow state changes on multiple services to happen so that the changes succeed or fail as a unit
>>
>> 3) Long running operations - we need asynchronous invocations between services and a mechanism for the invoking service to find out when the invoked service is done given that this work may take indefinitely long
>>
>> 4) Workflow Orchestration - we would like to have orchestration services that define business processes via standardized representations (eg BPMN), then execute instances of those processes and build up a process instance execution data resource by interacting with other RESTful resources using message exchange patterns that could specify the above behaviors.
>>
>> I'm sure that some of these topics have been discussed to death.
I'm not >> looking to repeat the details in one thread, but just wondering if people >> can give me quick dump of the conventional wisdom as to how to approach such >> problems, and/or point me to solutions (or alternatives) that they consider >> consistent with RESTful approaches. >> >> I found the Rest-* effort at http://www.jboss.org/reststar . The name of >> this project tweaks me, but some of the specs under it seem relevant. Are >> there others? Are these problems that the community sees value in solving >> through standards and tooling? >> >> > > -- Bediako George Partner - Lucid Technics, LLC Think Clearly, Think Lucid www.lucidtechnics.com (p) 202.683.7486 (f) 703.563.6279
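[Editor's sketch] Alan's If-None-Match: * answer, seen from the receiving server's side: the first PUT to the unique URI stores the representation; any retry hits the precondition and gets 412, which the sender can read as "already delivered". The dict-backed store and handler signature below are illustrative, not any real framework's API.

```python
def handle_put(store, uri, body, headers):
    """Server side of exactly-once delivery over at-least-once retries.
    With If-None-Match: *, the write only succeeds if the URI is unbound."""
    if headers.get("If-None-Match") == "*" and uri in store:
        return 412                        # Precondition Failed: already delivered
    store[uri] = body
    return 201                            # Created: first (and only) delivery
```

Combined with a client that retries until it sees either 201 or 412, this gives guaranteed delivery without the receiver ever storing a duplicate.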
On Jun 30, 2010, at 11:40 PM, Bryan Taylor wrote:
>
> Jan wrote:
>
>> I found the Rest-* effort at http://www.jboss.org/reststar . The name of this project tweaks me, but some of the specs under it seem relevant.
> Roy on REST-*: http://tech.groups.yahoo.com/group/rest-discuss/message/13266 ('nuf said :-)
>
> I agree these have nothing to do with REST exactly, and I hate the REST-* name. The individual specs seem to try to create RESTful implementations of integration patterns in the same way as Atom and AtomPub attempt to solve pub/sub in a RESTful way. That said, Roy's statement "this is the single dumbest attempt at one-sided 'standardization' of anti-REST architecture" seems mysterious to me
There was discussion of REST-* on this list when it came out and IIRC the criticism was quite well explained in those posts. Check out the postings around the date of the quoted posting by Roy.
Basically the problem is that REST-* attempts to bend REST to match the usual enterprisey mindset, claiming that this and that would be a must-have. What should really happen is that "enterprise people" learn from the Web and adjust their mindset to produce systems that are less complex and more easily evolvable.
It just makes no sense to claim complexity is necessary just because one fails to understand how to make things simpler. REST-* originates from this kind of thinking and hence claims that there are lots of things that need to be addressed.
The truth is that all we need is proper media types and a little more guidance on how the Web's way of doing things can be applied in enterprise contexts (e.g. matching the open, 'Darwinistic' way to an environment that simply needs a little more planning and budgeting).
> and is quite conclusory. How did he leap from these being about something other than REST to them being "anti-REST"? Would AtomPub become anti-REST if it had been wrongly renamed RestPub?
>
> He makes two assertions that follow:
> - Distributed transactions are an architectural component of non-REST interaction.
which is true because they violate REST's stateless server constraint (among others I guess).
> - Message queues are a common integration technique for non-REST architectures.
>
Yes, because they violate the hypermedia constraint (I would need to check which else).
> I could also make the statement that pub/sub syndication models are a common integration technique for non-REST architectures, which would not prove that Atom and AtomPub are non-RESTful.
Note that AtomPub is essentially a media type specification that defines the semantics of several hypermedia controls (e.g. the edit-link relation). AtomPub uses straightforward HTTP for communication between client and server. Actually, it would not need to say anything about that, but it provides the HTTP interaction examples as developer hints.
The HTTP-based use of the formats specified by AtomPub is not PubSub but polling. An AtomPub client polls feeds via GET to check whether the feed has changed.
Also note that PubSub is not forbidden by REST. There is simply no appropriate method in HTTP for doing pubsub, but you can always add one in if PubSub is the right model for you (see [1],[2]).
PubSub with HTTP works like this:
WATCH /some/feed
Reply-To: http://www.my.org/notification-processor
The server could then POST to the Reply-To URI.
(But I doubt that it is ever really of practical relevance. I'd go with polling due to the much greater simplicity).
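[Editor's sketch] The server side of the WATCH / Reply-To idea sketched above could look like this: remember each watcher's Reply-To URI, and notify by POSTing to it on every change. `post` is an injected stand-in for the outbound HTTP POST; the class shape is mine, not part of any proposal in the thread.

```python
class WatchableResource:
    """Server-side half of the WATCH idea: each WATCH registers a
    Reply-To URI; every update is POSTed to all registered URIs."""

    def __init__(self, post):
        self._post = post            # callable(uri, representation)
        self._watchers = []

    def watch(self, reply_to):       # WATCH /some/feed + Reply-To header
        self._watchers.append(reply_to)

    def update(self, representation):
        for uri in self._watchers:   # server POSTs to each Reply-To URI
            self._post(uri, representation)
```

Compared with Atom-style polling, the trade-off is visible right in the sketch: the server now holds per-client state (the watcher list), which is part of why polling is the simpler default.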
> There are patterns of integration, and I expect to find them in every architectural style.
This is an interesting topic, because you can start this train of thought one level up: if your problem space requires integration (the Web does, and enterprise IT clearly also does) then it is much wiser to pick an architectural style that is tailored towards dealing with integration. Such a style must constrain the connectors (think component API) to be uniform, because only then can you avoid having to do point-to-point integration every time two components talk to each other.
REST is such a style (surprise, surprise :-) through and through. REST has been designed to deal with integration problems (complexity and change).
Interestingly, no OO-based pattern (especially not the service pattern) out there attempts to constrain the connectors of components. That is why all these attempts (e.g. SOA) are essentially meaningless when it comes to reducing integration complexity. IOW, they cannot guarantee reduced complexity and good evolvability - REST does, because it limits the variation of the component interface (to be uniform).
> If there is some impossibility conjecture here, I'd like to see it stated in a more analytic way, without the bashing.
Maybe - but then... he has said it all before, and REST really is one of those things on earth that are *not* up to interpretation.
Additionally - that is his style and I personally find it very refreshing. I only lack the competence to adopt it :-)
> It may be that a distributed transaction pattern must violate one of the RESTful architecture principles, but this is far from obvious.
It requires understanding of REST, yes. OTOH, most of the problems people have (at least I did) are due to the fact that 'they' lack proper knowledge of software architecture in general (see Perry&Wolf, Garlan&Shaw, Taylor&Medvidovic and the first half of Roy's dissertation). If you started with that knowledge already, understanding REST would be pretty quick, as would be understanding the rationale behind the mentioned constraint violation. (Personally the journey took me about 8 years and I keep having epiphanies :-)
> Maybe this is some deep corollary of the CAP theory or something. Or perhaps not.
>
> The corresponding statement about message queues seems baffling. These solve a harder problem than guaranteed delivery. Queues solve guaranteed delivery to exactly one consumer among competing consumers with some fairness guarantees.
Well, that is a layer 4 issue (transport). What do MQs do in terms of reducing integration complexity or making a system more easily changeable? All the latter is a layer 7 issue.
>
> How would I implement a RESTful way to have airport passengers acquire taxi transportation at the airport in a fair way?
>
POST /taxi-requests
Content-Type: application/procurement+xml
<transport-request from="airport" goods="passenger"/>
201 Created
Location: /taxi-requests/778 ----------<< Your process instance resource
Content-Location: /taxi-requests/778
Cache-Control: no-cache
Content-Type: application/procurement+xml
<transportation> --------------------<< Your process instance data
<status>on its way</status>
<estimated-arrival>07:03 PM</estimated-arrival>
<receipt href="./receipt"/>
</transportation>
Check status:
GET /taxi-requests/778
Obtain receipt:
GET /taxi-requests/778/receipt
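A client for the interaction above might look roughly like this. The paths and media type come from the example; `fake_transport` stands in for a real HTTP call, and the two-polls-until-arrival behaviour is invented for the demonstration:

```python
# Sketch of the taxi-request flow: POST creates a process instance resource,
# then the client polls that resource (no-cache, per the example) until done.

def request_taxi(transport):
    # POST /taxi-requests -> 201 Created with Location: /taxi-requests/<n>
    status, headers, _ = transport(
        "POST", "/taxi-requests",
        "<transport-request from='airport' goods='passenger'/>")
    assert status == 201
    return headers["Location"]

def poll_until_arrived(transport, instance_uri, max_polls=10):
    # GET the process instance resource until its status says "arrived".
    for _ in range(max_polls):
        _, _, body = transport("GET", instance_uri, None)
        if "arrived" in body:
            return body
    return None

# Fake server: the taxi "arrives" on the second status poll.
polls = {"n": 0}
def fake_transport(method, path, body):
    if method == "POST":
        return 201, {"Location": "/taxi-requests/778"}, None
    polls["n"] += 1
    status_text = "arrived" if polls["n"] >= 2 else "on its way"
    return 200, {}, f"<transportation><status>{status_text}</status></transportation>"

uri = request_taxi(fake_transport)
result = poll_until_arrived(fake_transport, uri)
```

The point is that the "process instance" is just another resource: the client drives the whole procurement by GETting state and following links (like the receipt link), not by holding a connection open.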
HTH,
Jan
[1] http://roy.gbiv.com/untangled/2008/paper-tigers-and-hidden-dragons
[2] http://roy.gbiv.com/untangled/2008/economies-of-scale
-----------------------------------
Jan Algermissen, Consultant
NORD Software Consulting
Mail: algermissen@...
Blog: http://www.nordsc.com/blog/
Work: http://www.nordsc.com/
-----------------------------------
Hello Bryan, For the workflow orchestration maybe this could be helpful: http://www.jopera.org/files/bpm08-bpel4rest.pdf in the paper, section 5.2: Publishing processes as RESTful Web services. I haven't thoroughly read the paper, so I am not sure how "RESTful" it is. But it might be of help. Regards, Areeb --- In rest-discuss@yahoogroups.com, Bryan Taylor <bryan_w_taylor@...> wrote: > > My company is examining adopting a RESTful model for its enterprise architecture. Part of the discussion comes down to finding RESTful idioms, standards, and/or tools to apply to certain recurring enterprise integration problems. > > Specifically, we are trying to find RESTful solutions to: > > 1) Guaranteed Delivery - we need a paradigm to follow so that one service can transfer a sequence of resource representations to another reliably even though both services and the network suffer temporary unreliability > > 2) Distributed Transactions - we need a paradigm to allow state changes on multiple services to happen so that the changes succeed or fail as a unit > > 3) Long running operations - we need asynchronous invocations between services and a mechanism for the invoking service to find out when the invoked service is done, given that this work may take indefinitely long > > 4) Workflow Orchestration - we would like to have orchestration services that define business processes via standardized representations (e.g. BPMN), then execute instances of those processes and build up a process instance execution data resource by interacting with other RESTful resources using message exchange patterns that could specify the above behaviors. > > I'm sure that some of these topics have been discussed to death. I'm not looking to repeat the details in one thread, but just wondering if people can give me a quick dump of the conventional wisdom as to how to approach such problems, and/or point me to solutions (or alternatives) that they consider consistent with RESTful approaches. 
> > I found the Rest-* effort at http://www.jboss.org/reststar . The name of this project tweaks me, but some of the specs under it seem relevant. Are there others? Are these problems that the community sees value in solving through standards and tooling? >
--- In rest-discuss@yahoogroups.com, Jan Algermissen <algermissen1971@...> wrote: > There was discussion of REST-* on this list when it came out and IIRC the criticism was quite well explained in those posts. Check out the postings around the date of the quoted posting by Roy. OK, I'll look for this, or just accept that there was some additional context not in Roy's conclusion. > Basically the problem is that REST-* attempts to bend REST to match the usual enterprisey mindset, claiming that this and that would be a must-have. What should really happen is that "enterprise people" learn from the Web and adjust their mindset to produce systems that are less complex and more easily evolvable. I have trouble seeing this. On its face, the JBoss effort looks like an attempt to solve several problems in what appears to be a sincere attempt to be RESTful. It appears many believe it is a failed attempt, so I guess I want to know if there are other competing attempts that get it right. > It just makes no sense to claim complexity is necessary just because one fails to understand how to make things simpler. REST-* originates from this kind of thinking and hence claims that there are lots of things that need to be addressed. >> He makes two assertions that follow: >> - Distributed transactions are an architectural component of non-REST interaction. > > which is true because they violate REST's stateless server constraint (among others I guess). I can see this: isolation, the I in ACID, is inherently a form of conversational state. This raises the question, can I have ACD and be RESTful? If I relax C to eventual consistency (E), I think I see how I can have AED in a RESTful setting by submitting batch requests, persisting them on arrival, and using guaranteed delivery to all affected servers. >> - Message queues are a common integration technique for non-REST architectures. >> > > Yes, because they violate the hypermedia constraint (would need to check which else). 
This isn't obvious to me, so I'm going to ask you to elaborate if you don't mind. It is, however, a little beyond what I've asked for in this thread: a queue is sufficient but not necessary for guaranteed delivery. I'll move this discussion to another thread. [...] >> There are patterns of integration, and I expect to find them in every architectural style. > > This is an interesting topic because you can start this train of thought one level up: If your problem space requires integration (the Web does, and enterprise IT clearly also does) then it is much wiser to pick an architectural style that is tailored towards dealing with integration. Such a style must constrain the connectors (think component API) to be uniform because only then can you avoid having to do point-to-point integration every time two components talk to each other. > > REST is such a style (surprise, surprise :-) through and through. REST has been designed to deal with integration problems (complexity and change). I don't follow this argument. Every architectural style used to solve integration problems makes a claim to have been designed to deal with integration problems. Certainly MQ protocols do this. > Interestingly, no OO-based pattern (especially not the service pattern) out there attempts to constrain the connectors of components. That is why all these attempts (e.g. SOA) are essentially meaningless when it comes to reducing integration complexity. IOW, they cannot guarantee reduced complexity and good evolvability - REST does because it limits the variation of the component interface (to be uniform). SOA and REST are orthogonal architectural styles. SOA mandates loose coupling, abstraction, reusability, autonomy, statelessness, discoverability, and composability, and does so in a way that relies on service contracts to enforce governance. Each of these principles has benefits for integration in an enterprise setting, which is why nearly every enterprise adopts them. 
REST adds other constraints (uniform interface, HATEOAS) and overlaps with a few of the above (statelessness). These also provide benefits for integration. My view is that if we adopt an architecture within the intersection of the two styles, the result will be good. >> It may be that a distributed transaction pattern must violate one of the RESTful architecture principles, but this is far from obvious. > > It requires understanding of REST, yes. OTOH, most of the problems people have (at least I did) are due to the fact that 'they' lack proper knowledge of software architecture in general (see Perry&Wolf, Garlan&Shaw, Taylor&Medvidovic and the first half of Roy's dissertation). If you started with that knowledge already, understanding REST would be pretty quick, as would be understanding the rationale behind the mentioned constraint violation. (Personally the journey took me about 8 years and I keep having epiphanies :-) If the goal is to reduce integration complexity, solutions that require 8 years to understand before they are useful fail. Hopefully, 80%, 95%, or 99% understanding is enough to solve most problems, and I hope these levels of understanding can be obtained in shorter timeframes. >> Maybe this is some deep corollary of the CAP theory or something. Or perhaps not. >> >> The corresponding statement about message queues seems baffling. These solve a harder problem than guaranteed delivery. Queues solve guaranteed delivery to exactly one consumer among competing consumers with some fairness guarantees. > > Well, that is a layer 4 issue (transport). What do MQs do in terms of reducing integration complexity or making a system more easily changeable? All the latter is a layer 7 issue. Let me explain my language. As I use the term, delivery is a layer 7 issue: a message is delivered when the receiving application says it is delivered subject to any desired application-level conditions. 
EG: this could involve the successful completion of any long running operation. Message arrival is the corresponding layer 4 concept. Clearly arrival is a prerequisite for delivery. For example, the taxi must arrive, but the customer is "delivered" to the taxi when a contract is formed that may involve an out-of-band negotiation and acceptance by both parties. As to the benefits of messaging generally, this is all explained well in Hohpe and Woolf's book Enterprise Integration Patterns. >> How would I implement a RESTful way to have airport passengers acquire taxi transportation at the airport in a fair way? I shouldn't have asked this question in this thread, as the merits of queues are off topic to what I asked. I'll open another thread and re-ask my question.
Hi Bryan, On Jul 2, 2010, at 11:37 PM, bryan_w_taylor wrote: > > I have trouble seeing this. On its face, the JBoss effort looks like an attempt to solve several problems in what appears to be a sincere attempt to be RESTful. The issue is that REST-* suggests that there are problems that need a solution. The problems REST-* addresses can be dealt with by changing the mind set. Enterprise IT need not be any more complex than the Web is. Unfortunately the thinking seems to persist that enterprise IT is somehow for the tough guys while the Web is for the 'HTML developer'. Tough guys want tough problems :-) > It appears many believe it is a failed attempt, so I guess I want to know if there are other competing attempts that get it right. Again: nothing to get right. HTTP already covers everything. All you need is media types for your problem at hand. >> > > I can see this: isolation, the I in ACID, is inherently a form of conversational state. This raises the question, can I have ACD and be RESTful? If I relax C to eventual consistency (E), I think I see how I can have AED in a RESTful setting by submitting batch requests, persisting them on arrival, and using guaranteed delivery to all affected servers. Hmm - do you have an example of what you are trying to solve? > >>> - Message queues are a common integration technique for non-REST architectures. >>> >> >> Yes, because they violate the hypermedia constraint (would need to check which else). > > This isn't obvious to me, so I'm going to ask you to elaborate if you don't mind. The client must learn at runtime what to do next from received representations. With MQs the client needs to know which queue to send stuff to and when. That is design-time coupling. (I'll try to refer to the dissertation in the next days - need to think about it a little). >> REST is such a style (surprise, surprise :-) through and through. REST has been designed to deal with integration problems (complexity and change). > > I don't follow this argument. 
Every architectural style used to solve integration problems makes a claim to have been designed to deal with integration problems. Certainly MQ protocols do this. I am not sure that MQs are an architectural style in the Perry/Wolf, Garlan/Shaw, Fielding sense. For me it is just a transport mechanism. To get on the same page: which of the styles Roy mentions is similar to MQs? Or else: what are the constraints MQs impose on components, connectors and data elements? And what are the system properties induced? (I am mostly trying to make you think along the lines of Roy's dissertation here, not trying to be difficult). > >> Interestingly, no OO-based pattern (especially not the service pattern) out there attempts to constrain the connectors of components. That is why all these attempts (e.g. SOA) are essentially meaningless when it comes to reducing integration complexity. IOW, they cannot guarantee reduced complexity and good evolvability - REST does because it limits the variation of the component interface (to be uniform). > > SOA and REST are orthogonal architectural styles. SOA is not an architectural style...because it imposes no constraints on components, connectors and data elements. IOW: SOA does not induce any properties into an architecture. For example, applying SOA does not guarantee scalability because SOA does not constrain servers to be stateless. > SOA mandates loose coupling, abstraction, reusability, autonomy, statelessness, discoverability, and composability, and does so in a way that relies on service contracts to enforce governance. None of those are constraints in the software architecture sense. The problem is that they are not testable either. (No insult intended) Example: SOA mandates loose coupling. So what? When do I know I achieved it? And, BTW, what does it mean, exactly? > Each of these principles has benefits for integration in an enterprise setting, which is why nearly every enterprise adopts them. Oh? 
Most of the enterprises I have seen did almost everything to violate any of the above :-) Partly because people did not have the skills and partly because they all relate to long term benefits that often the next CIO will harvest :-) > REST adds other constraints (uniform interface, HATEOAS) and overlaps with a few of the above (statelessness). By 'statelessness' you mean 'stateless server'? > These also provide benefits for integration. My view is that if we adopt an architecture within the intersection of the two styles, the result will be good. Some questions: - why do we need anything other than REST? What is the need for intersecting? - why would the result be good? - If you intersect REST with another style you presumably remove constraints of REST (those that are not in the intersection). If you do this, you'll have to analyse which of the properties induced by REST you lose. What I am trying to emphasize is that the set of constraints is coordinated and that it is this set that forms the style. You cannot arbitrarily remove constraints or juggle them around as you see fit. > >>> It may be that a distributed transaction pattern must violate one of the RESTful architecture principles, but this is far from obvious. >> >> It requires understanding of REST, yes. OTOH, most of the problems people have (at least I did) are due to the fact that 'they' lack proper knowledge of software architecture in general (see Perry&Wolf, Garlan&Shaw, Taylor&Medvidovic and the first half of Roy's dissertation). If you started with that knowledge already, understanding REST would be pretty quick, as would be understanding the rationale behind the mentioned constraint violation. (Personally the journey took me about 8 years and I keep having epiphanies :-) > > If the goal is to reduce integration complexity, solutions that require 8 years to understand before they are useful fail. See my comment on solid knowledge of the discipline of software architecture. 
*That* took so long but should actually be required knowledge of any software architect. Interestingly, very few people even know that it makes sense to design on the basis of principles. > Hopefully, 80%, 95%, or 99% understanding is enough to solve most problems, and I hope these levels of understanding can be obtained in shorter timeframes. Danger is: you get half-baked solutions. > >>> Maybe this is some deep corollary of the CAP theory or something. Or perhaps not. >>> >>> The corresponding statement about message queues seems baffling. These solve a harder problem than guaranteed delivery. Queues solve guaranteed delivery to exactly one consumer among competing consumers with some fairness guarantees. >> >> Well, that is a layer 4 issue (transport). What do MQs do in terms of reducing integration complexity or making a system more easily changeable? All the latter is a layer 7 issue. > > Let me explain my language. As I use the term, delivery is a layer 7 issue: a message is delivered when the receiving application says it is delivered subject to any desired application-level conditions. Layer 7 is the application layer. placeOrder(), getStockQuote(), fileComplaint(), startEngine() are all layer 7 semantics. Delivery is *transport* (layer 4). One of the keys to understanding REST is to understand that REST constrains layer 7 to be uniform. HTTP's GET, POST, PUT, DELETE are at the same level as getStockQuote() or placeOrder(). HTTP is an application layer protocol. > EG: this could involve the successful completion of any long running operation. Message arrival is the corresponding layer 4 concept. Clearly arrival is a prerequisite for delivery. For example, the taxi must arrive, but the customer is "delivered" to the taxi when a contract is formed that may involve an out of band negotiation and acceptance by both parties. > > As to the benefits of messaging generally, this is all explained well in Hohpe and Woolf's book Enterprise Integration Patterns. 
Can you point me to the section that relates to your paragraph above? I am a bit troubled understanding it (for a lack of context on my part). > >>> How would I implement a RESTful way to have airport passengers acquire taxi transportation at the airport in a fair way? > > I shouldn't have asked this question in this thread, as the merits of queues are off topic to what I asked. I'll open another thread and re-ask my question. Ok. Jan
I posed a problem to Jan Algermissen in another thread where the typical "enterprisey" solution might use a message queue type of solution. I am seeking a RESTful alternative. I will act as the business analyst here, stating the problem in a language that makes sense to the business. Here's the problem: Many of us have had to catch a taxi at the airport (or some similar setting) where a lot of people are doing the same. Generally, we stand in line, the taxis arrive, waiting if necessary, and the party at the front of the line gets in the taxi, works out an agreement with the cab driver, and goes. Sometimes, the cab driver and passengers won't reach an agreement and the passengers will go back to the front of the line and the taxis take the next passenger. Details follow: There are four types of actors involved: Passengers, an airport, the taxi companies, and taxis. Here's what we know of the basic process: 1) passengers and taxis arrive at the airport with random inter-arrival times 2) on arrival, passengers ask the airport for a taxi 3) on arrival, taxis ask the airport for a passenger. 4) the airport makes passenger/taxi pairing by an "oldest first" rule for both taxis and passengers, among those unpaired 5) unpaired taxis and passengers wait for the airport to pair them 6) after a taxi and passenger are paired, they try to agree on terms of service 7) Sometimes taxis and groups can't agree and then the passenger goes back in line. The taxi should be paired with the next available passenger. 8) the airport breaks a pairing when either party tells the airport negotiations failed, or if 1-2 minutes pass. 
9) failed pairings will not be re-paired 10) the probability of negotiation failures is small for all taxis and all passengers, and eventually every party finds a pair they can deal with 11) the airport can and should make concurrent pairings 12) a taxi departs when it and its passenger reach agreement and it tells the airport of the agreement, whereupon the airport's duties are done 13) taxi companies like to watch the passenger line depth (excluding paired passengers) to make dispatching decisions (the decision mechanism is out of scope) In typical business fashion, these represent a "best effort" attempt to capture the "current" requirements and may be clarified or revised somewhat for arbitrary and capricious reasons, because management is full of evil bastards. Complaining about this, or imprecise language, or other "IT mumbo jumbo" will result in you being reassigned to work on the TPS reports.
Tyler, On Jul 3, 2010, at 12:45 PM, bryan_w_taylor wrote: > I posed a problem to Jan Algermissen in another thread where the typical "enterprisey" solution might use a message queue type of solution. I am seeking a RESTful alternative. I will act as the business analyst here, stating the problem in a language that makes sense to the business. Here's the problem: > > Many of us have had to catch a taxi at the airport (or some similar setting) where a lot of people are doing the same. Generally, we stand in line, the taxis arrive, waiting if necessary, and the party at the front of the line gets in the taxi, works out an agreement with the cab driver, and goes. Sometimes, the cab driver and passengers won't reach an agreement and the passengers will go back to the front of the line and the taxis take the next passenger. Details follow: > > There are four types of actors involved: Passengers, an airport, the taxi companies, and taxis. > > Here's what we know of the basic process: > 1) passengers and taxis arrive at the airport with random inter-arrival times > 2) on arrival, passengers ask the airport for a taxi > 3) on arrival, taxis ask the airport for a passenger. > 4) the airport makes passenger/taxi pairing by an "oldest first" rule for both taxis and passengers, among those unpaired > 5) unpaired taxis and passengers wait for the airport to pair them > 6) after a taxi and passenger are paired, they try to agree on terms of service > 7) Sometimes taxis and groups can't agree and then the passenger goes back in line. The taxi should be paired with the next available passenger. > 8) the airport breaks a pairing when either party tells the airport negotiations failed, or if 1-2 minutes pass. 
> 9) failed pairings will not be re-paired > 10) the probability of negotiation failures is small for all taxis and all passengers, and eventually every party finds a pair they can deal with > 11) the airport can and should make concurrent pairings > 12) a taxi departs when it and its passenger reach agreement and it tells the airport of the agreement, whereupon the airport's duties are done > 13) taxi companies like to watch the passenger line depth (excluding paired passengers) to make dispatching decisions (the decision mechanism is out of scope) > I am not sure what you are up to with this. Do you want to develop a system that simulates the above actors? Or are the actors actors in use cases? What are those use cases and where is the software system that is to realize them? I guess what makes most sense is that the airport is the system, but then you mentioned it as an actor, too. Can you clarify? Jan > > In typical business fashion, these represent a "best effort" attempt to capture the "current" requirements and may be clarified or revised somewhat for arbitrary and capricious reasons, because management is full of evil bastards. Complaining about this, or imprecise language, or other "IT mumbo jumbo" will result in you being reassigned to work on the TPS reports.
On Sat, Jul 3, 2010 at 4:12 AM, bryan_w_taylor <bryan_w_taylor@...> wrote: > >> He makes two assertions that follow: > >> - Distributed transactions are an architectural component of non-REST interaction. > > > > which is true because they violate REST's stateless server constraint (among others I guess). > > I can see this: isolation, the I in ACID, is inherently a form of conversational state. This raises the question, can I have ACD and be RESTful? If I relax C to eventual consistency (E), I think I see how I can have AED in a RESTful setting by submitting batch requests, persisting them on arrival, and using guaranteed delivery to all affected servers. > In this case, as with a lot of your other concerns, I think the best way to go is to understand the REST constraints, what benefits they provide, and what you lose for each REST constraint that you relax or violate. It's not like you'll go to jail, you just might not be able to call it RESTful - and you still may benefit from the constraints you uphold. However, some facts of life on the Web hold regardless of REST. For example, you won't get Isolation across independent agents because no smart counterparty will hold locks while your agent can die and never come back. And Atomicity is likely to be compromised, too, so you might as well adapt to it.
Airport management has issued this as a request for proposal. They are willing to implement the systems suggested by the proposal and pay for development of the entire system. I'm hoping someone will sketch a proposal based on a RESTful solution. The airport will offer some kiosks passengers can use, and taxis also have a client computer.
We also have word that a rival contractor, Fuddy Duddy Enterprise Solutions, has bid on the contract.
The Fuddy proposal uses a message broker at the airport, creating a queue in it where messages represent passenger taxi requests. These messages are enqueued when passengers request a taxi in step 2. They plan to create a standalone fat client application for taxis. In step 3 taxis make a connection to the airport system and make a blocking dequeue request, which is answered in step 4, and the taxi is told how to meet up with the passenger. The taxi app then presents the driver with a screen where they enter Y/N on whether the agreement was reached per steps 8 or 12. The customer can also tell the airport this per step 12, using the same kiosk they used to request the taxi service. Furthermore, the taxi app enforces the timeout per step 8. The customer's message will be acknowledged and removed from the queue by the taxi app's "Y" response. If either party backs out or the time expires, the airport passenger's request will be dispatched again to a new
cab, and the original cab will also get the next available passenger. The taxi client will automatically throw an error back if an already rejected passenger is redispatched to it, but it will reissue another dequeue request first, so that it gets a new passenger assignment if one is to be had. If not, and there is no other cab to take the passenger, then the cab will repeat this pattern every few seconds. The same client app can be used to query the queue size, per step 13.
----- Original Message ----
From: Jan Algermissen <algermissen1971@...>
On Jul 3, 2010, at 12:45 PM, bryan_w_taylor wrote:
I am not sure what you are up to with this. Do you want to develop a system that simulates the above actors?
Or are the actors actors in use cases? What are those use cases and where is the software system that is to realize them?
I guess what makes most sense is that the airport is the system, but then you mentioned it as an actor, too.
Can you clarify?
Jan
Can't I get atomicity by solving it with batch operations?
For example, I can process
<batch>
<representationA/>
<representationB/>
<representationC/>
</batch>
and let the server assure that A, B, and C all happen or don't happen.
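A server could realize that all-or-nothing guarantee by applying the batch inside one local database transaction. Here is a minimal sketch using SQLite; the table layout and the "invalid representation" failure condition are invented for illustration:

```python
# Apply a <batch> of representations atomically: either every representation
# is stored or, if any step fails, none of them is (the transaction rolls back).

import sqlite3

def apply_batch(conn, representations):
    try:
        with conn:  # sqlite3: commits on success, rolls back on any exception
            for name, payload in representations:
                if payload is None:
                    raise ValueError(f"invalid representation: {name}")
                conn.execute(
                    "INSERT INTO resources (name, payload) VALUES (?, ?)",
                    (name, payload))
        return True
    except Exception:
        return False

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE resources (name TEXT, payload TEXT)")

ok = apply_batch(conn, [("A", "<representationA/>"),
                        ("B", "<representationB/>"),
                        ("C", "<representationC/>")])
bad = apply_batch(conn, [("D", "<representationD/>"),
                         ("E", None)])   # E fails, so D is rolled back too
count = conn.execute("SELECT COUNT(*) FROM resources").fetchone()[0]
```

The second batch leaves no trace: D was inserted and then rolled back along with the failing E, which is exactly the local atomicity being discussed.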
----- Original Message ----
From: Bob Haugen <bob.haugen@...>
In this case, as with a lot of your other concerns, I think the best
way to go is to understand the REST constraints, what benefits they
provide, and what you lose for each REST constraint that you relax or
violate. It's not like you'll go to jail, you just might not be able
to call it RESTful - and you still may benefit from the constraints
you uphold.
However, some facts of life on the Web hold regardless of REST. For
example, you won't get Isolation across independent agents because no
smart counterparty will hold locks while your agent can die and never
come back. And Atomicity is likely to be compromised, too, so you
might as well adapt to it.
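The usual Web substitute for isolation-by-locking is optimistic concurrency: each representation carries a version tag (an ETag) and updates are made conditional with If-Match, so a stale update fails with 412 Precondition Failed instead of being lost. A minimal in-memory sketch of that idea (the resource bodies are invented):

```python
# Optimistic concurrency instead of locks: GET returns the current ETag,
# PUT succeeds only if the caller's If-Match still matches it.

class Resource:
    def __init__(self, body):
        self.body, self.version = body, 1

    def get(self):
        # GET -> (body, ETag)
        return self.body, str(self.version)

    def put(self, body, if_match):
        # PUT with If-Match: update only if nobody changed it meanwhile
        if if_match != str(self.version):
            return 412                  # Precondition Failed
        self.body, self.version = body, self.version + 1
        return 200

r = Resource("balance=100")
_, etag = r.get()
other = r.put("balance=90", etag)       # another agent updates first -> 200
stale = r.put("balance=50", etag)       # our now-stale ETag -> 412, no lost update
```

The losing agent simply re-GETs and retries; no counterparty ever has to hold a lock on behalf of an agent that might never come back.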
On Sun, Jul 4, 2010 at 2:26 PM, Bryan Taylor <bryan_w_taylor@...> wrote: > Can't I get atomicity by solving it with batch operations? > > For example, I can process > <batch> > <representationA/> > <representationB/> > <representationC/> > </batch> > > and let the server assure that A, B, and C all happen or don't happen. Well, yeah, but that's local atomicity, not distributed over independent agents.
If I need to distribute it, I can use guaranteed delivery. --- In rest-discuss@yahoogroups.com, Bob Haugen <bob.haugen@...> wrote: > Well, yeah, but that's local atomicity, not distributed over > independent agents. >
>> For example, I can process >> <batch> >> <representationA/> >> <representationB/> >> <representationC/> >> </batch> >> >> and let the server assure that A, B, and C all happen or don't happen. > > Well, yeah, but that's local atomicity, not distributed over > independent agents. And <batch> is really just a resource bag holding three resources, right? Just working out the vernacular here... Mark W.
On Jul 4, 2010, at 10:10 PM, bryan_w_taylor wrote: > If I need to distribute it, I can use guaranteed delivery. Are you saying that this solves the distributed transaction problem? Jan > > --- In rest-discuss@yahoogroups.com, Bob Haugen <bob.haugen@...> wrote: >> Well., yeah. but that's local atomicity, not distributed over >> independent agents.
Hi Bryan,
I wrote some stuff on distributed transactions and their
properties - http://betathoughts.blogspot.com/2007/06/brief-history-of-consensus-2pc-and.html
At the bottom there is a reference to some work I did creating a system
that uses distributed transactions and HTTP. However, in my mind
distributed transactions are anti-REST....
cheers
Mark
On Sun, Jul 4, 2010 at 9:10 PM, bryan_w_taylor <bryan_w_taylor@...> wrote:
> If I need to distribute it, I can use guaranteed delivery.
>
> --- In rest-discuss@yahoogroups.com, Bob Haugen <bob.haugen@...> wrote:
>> Well., yeah. but that's local atomicity, not distributed over
>> independent agents.
>>
Sagas sound like a reasonable solution, no? On 7/4/10, Mark Mc Keown <zzcgumk@...> wrote: > Hi Bryan, > I wrote some stuff on distributed transactions and their > properties - > http://betathoughts.blogspot.com/2007/06/brief-history-of-consensus-2pc-and.html > > At the bottom there is a reference to some work I did creating a system > that uses distributed transactions and HTTP. However, in my mind > distributed transactions are anti-REST.... > > cheers > Mark > > > On Sun, Jul 4, 2010 at 9:10 PM, bryan_w_taylor <bryan_w_taylor@...> > wrote: >> If I need to distribute it, I can use guaranteed delivery. >> >> --- In rest-discuss@yahoogroups.com, Bob Haugen <bob.haugen@...> wrote: >>> Well., yeah. but that's local atomicity, not distributed over >>> independent agents. >>> >> -- Sent from my mobile device
In one sense, yes. I think it can provide "AED"-level "transactions": atomic, eventually consistent, and durable. The individual operations may take a long time or might fail, so I can add a long-running-operation mechanism, and if one of the remote operations fails, I implement "rollback" by invoking the compensating transactions for the original operations.
What would be nice is a standard media type that the endpoints offering the operations could advertise, telling the batch-operation processor how to map the main operations to their corresponding compensations. One simple way to do this would be to ensure we always invoke such operations via PUT, and then simply DELETE the same URL to "undo".
----- Original Message ----
From: Jan Algermissen <algermissen1971@...>
On Jul 4, 2010, at 10:10 PM, bryan_w_taylor wrote:
> If I need to distribute it, I can use guaranteed delivery.
Are you saying that this solves the distributed transaction problem?
Jan
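Bryan's compensation idea above (PUT to do, DELETE the same URL to undo) amounts to a saga: run each forward operation, and on failure invoke the recorded compensations in reverse order. A sketch with stand-in callables instead of real HTTP requests; every name and "URL" here is invented for illustration:

```python
def run_saga(steps):
    """steps: list of (do, undo) callables. Run each do(); if one raises,
    invoke undo() for every completed step in reverse (compensation)."""
    done = []
    for do, undo in steps:
        try:
            do()
        except Exception:
            for _, u in reversed(done):
                u()  # best-effort compensation; a real system would retry these
            return False
        done.append((do, undo))
    return True

log = []

def fail():
    raise RuntimeError("PUT /c failed")

steps = [
    (lambda: log.append("PUT /a"), lambda: log.append("DELETE /a")),
    (lambda: log.append("PUT /b"), lambda: log.append("DELETE /b")),
    (fail, lambda: log.append("DELETE /c")),
]
ok = run_saga(steps)
```

This gives the "AED" behavior Bryan describes rather than ACID: between the failure and the last compensation, other agents can observe the intermediate state.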
On Jun 30, 2010, at 9:41 AM, Bryan Taylor wrote: > My company is examining adopting a RESTful model to its enterprise architecture. Part of the discussion comes down to finding RESTful idioms, standards, and/or tools to apply to certain recurring enterprise integration problems. Umm, the integration problems you describe are mostly architectural properties of very specific architectures. Generally speaking, you don't want to replicate the same architecture when redesigning a system to be more RESTful -- that would be counterproductive. You should instead be looking for ways to design the system such that these are no longer problems that need to be solved. Think of it like locking/concurrency issues. One can design a system such that every concurrent access is protected by some deadlock-proof locking scheme, or one can design a system that isolates all concurrent processing in a shared-nothing architecture that doesn't need any locking whatsoever. You are essentially asking how to avoid deadlocks in a shared nothing architecture. The appropriate answer, therefore, is ... huh? > Specifically, we are trying to find RESTful solutions to: > > 1) Guaranteed Delivery - we need a paradigm to follow so that one service can transfer a sequence of resource representations to another reliably even though both services and the network suffer temporary unreliability That doesn't sound like a problem encountered by RESTful architectures. Reliable upload of multiple files can be performed using a single zip file, but the assumption being made here is that the client has a shared understanding of what the server is intending to do with those files. That's coupling. Applications like that are usually accomplished via code-on-demand. The problem you will run into here is implementation issues regarding current browsers, not architectural issues and certainly not a style issue. 
Most such tools are developed as browser extensions or app-specific clients, mostly because they need unfettered access to the filesystem and because browsers (for some unknown reason) don't include integrity checks in normal file uploads. > 2) Distributed Transactions - we need a paradigm to allow state changes on multiple services to happen so that the changes succeed or fail as a unit Again, not a characteristic of RESTful architectures. If the client knows the transaction is distributed, you have failed. There are lots of ways to solve this kind of problem on the back-end of services interfaces, behind the resource abstraction, but none of those are relevant to the REST architectural style that might apply on the front-end of the service interface. > 3) Long running operations - we need asynchronous invocations between services and a mechanism for the invoking service to find out when the invoked service is done given that this work may take indefinitely long Any resource can behave as a long-running service. Just program it that way. > 4) Workflow Orchestration - we would like to have orchestration services that define business processes via standardized representations (eg BPMN), then execute instances of those processes and build up an process instance execution data resource by interacting with other RESTful resources using message exchange patterns that could specify the above behaviors. That is a system, not an integration problem. If you want to solve it, buy a full-featured WCM system like Day's CQ5. http://www.day.com/day/en/products/web_content_management.html (sorry, I don't have a way to answer that one without sounding like a sales plug -- it is, after all, why I work for a WCM vendor). ....Roy
--- In rest-discuss@yahoogroups.com, "Roy T. Fielding" <fielding@...> wrote: > > Specifically, we are trying to find RESTful solutions to: > > > > 1) Guaranteed Delivery - we need a paradigm to follow so that one service can transfer a sequence of resource representations to another reliably even though both services and the network suffer temporary unreliability > > That doesn't sound like a problem encountered by RESTful > architectures. Reliable upload of multiple files can be > performed using a single zip file, but the assumption being made > here is that the client has a shared understanding of what the > server is intending to do with those files. That's coupling. I don't follow. Several people have given good simple answers saying to use the idempotent nature of PUT (or fake it with POST) until a GET of the resource succeeds or add an HTTP header of If-None-Match: * and repeat the PUT until you get a 412 Precondition Failed response, so I thought this was a slam dunk. But that answers "how" and I think you are getting at "why". I'm imagining that we have two servers A and B, where A plays the role of the client in the interaction. Events happen on server A and server B must receive some representation related to each event or unacceptable business consequences occur. Why can't we merge the functionality of server A and B? Lots of reasons: Security, regulatory compliance, use of 3rd party systems, organizational boundaries and/or politics are a few. The way a company manages its systems engineering work is to partition business functionality into pieces, give ownership of each piece to a team, and align physical resources like servers to those teams. If this imposes constraints not found in RESTful systems, then I have no choice but to deal with those. > Applications like that are usually accomplished via code-on-demand. 
> The problem you will run into here is implementation issues > regarding current browsers, not architectural issues and > certainly not a style issue. Most such tools are developed > as browser extensions or app-specific clients, mostly because > they need unfettered access to the filesystem and because > browsers (for some unknown reason) don't include integrity checks > in normal file uploads. I expect the clients of most of our services would count as "app specific clients". These might be our other services, our app servers that host user interfaces, or sometimes we will allow external business entities (customers, partners, suppliers, etc...) to write such apps directly. I don't mind going beyond browser limitations. > > 2) Distributed Transactions - we need a paradigm to allow state changes on multiple services to happen so that the changes succeed or fail as a unit > > Again, not a characteristic of RESTful architectures. If the > client knows the transaction is distributed, you have failed. > There are lots of ways to solve this kind of problem on the > back-end of services interfaces, behind the resource abstraction, > but none of those are relevant to the REST architectural style > that might apply on the front-end of the service interface. This one I accept doesn't fit in a RESTful solution, and in other posts in this thread, we are exploring several of other ways you mention. > > 3) Long running operations - we need asynchronous invocations between services and a mechanism for the invoking service to find out when the invoked service is done given that this work may take indefinitely long > > Any resource can behave as a long-running service. Just program it that way. Right, the question is how, exactly. Good solutions have been posted in this thread for this. Subbu's RESTful Web Services Cookbook solves this in examples 1.10 and 1.11. I think this was another slam dunk. 
I'm curious what you think about using so-called "web hooks" for this kind of thing. Would you consider this a violation of the client-server constraint? > > 4) Workflow Orchestration - we would like to have orchestration services that define business processes via standardized representations (eg BPMN), then execute instances of those processes and build up an process instance execution data resource by interacting with other RESTful resources using message exchange patterns that could specify the above behaviors. > > That is a system, not an integration problem. If you want to > solve it, buy a full-featured WCM system like Day's CQ5. > > http://www.day.com/day/en/products/web_content_management.html > > (sorry, I don't have a way to answer that one without sounding > like a sales plug -- it is, after all, why I work for a WCM vendor). No need to apologize for pointing me to a product that might be useful for us. I've been in several sales presentations in the last couple weeks with different vendors who have big fancy workflow engines. They all want to talk about WS-BPEL and orchestrating our SOAP endpoints. I enjoy the look of confusion when I mention that we are considering not allowing any new services to be created using SOAP. That seems to get their attention. They say "what will you do instead?" and I say use HTTP and they say "huh?". I agree that what we are eventually looking for is a system. I just need to know how to ask for that system. It would be especially useful if there were open standards for media types that such a system might use. You threw out the term "WCM" which I will go learn about.
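The retry discipline Bryan summarizes above (repeat an idempotent PUT carrying `If-None-Match: *` until the server answers 201 Created or 412 Precondition Failed) can be sketched with the transport faked as a flaky function. The status-code logic follows his description; the helper names and the simulated server are assumptions for illustration:

```python
import random

def reliable_put(send, payload, max_attempts=10):
    """Repeat an idempotent PUT (sent with 'If-None-Match: *') until the
    server confirms it holds the representation: 201 means it was stored
    by this attempt, 412 means an earlier attempt already stored it.
    Either status proves delivery."""
    for _ in range(max_attempts):
        try:
            status = send(payload)
        except ConnectionError:
            continue  # response lost; PUT is idempotent, so retrying is safe
        if status in (201, 412):
            return True
    return False

random.seed(1)  # deterministic "flakiness" for the demo
state = {"stored": False}

def flaky_send(payload):
    """Stand-in for the network: sometimes the response is lost."""
    if random.random() < 0.5:
        raise ConnectionError("timeout")
    if state["stored"]:
        return 412  # If-None-Match: * failed: resource already exists
    state["stored"] = True
    return 201

delivered = reliable_put(flaky_send, b"event-42")
```

Note that this answers the "how" only; Roy's objection about shared understanding of what the server does with the representation stands regardless.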
Roy, On Jul 6, 2010, at 3:03 AM, Roy T. Fielding wrote: > Reliable upload of multiple files can be > performed using a single zip file, but the assumption being made > here is that the client has a shared understanding of what the > server is intending to do with those files. That's coupling. Trying to test my understanding: By 'client' you are referring to 'user agent'? My understanding is that the user agent has no shared understanding beyond how to construct the submission request upon the activation of a hypermedia control. (Web browsers know how to create a POST request from a user's submission of a form.) The user, however, does have an understanding (expectation) of what the server is intending to do with those files. This expectation is the basis for choosing to activate the hypermedia control in the first place. Is that point of view correct? Jan
My trial here. This is a typical service orchestration scenario if we consider passengers and taxis as services and the airport implements an orchestration. The basic airport interfaces will be RESERVE /taxi The passenger can use this interface to get a taxi reservation ticket. In the request the passenger needs to provide an endpoint for callback, e.g. Name/cellnumber/url ... RESERVE /passenger The taxi can get a passenger reservation ticket. In the request the taxi needs to provide an endpoint for callback, e.g. plate number/cellnumber/url ... Later the reservation ticket can be used to query the place in line and also to cancel the reservation. When a match is made by the airport, the airport will issue notifications to both the passenger and the taxi. Does it look RESTful? Cheers, Dong On Sun, Jul 4, 2010 at 1:18 PM, Bryan Taylor <bryan_w_taylor@...> wrote: > > > Airport management has issued this as a request for proposal. They are > willing to implement systems suggested by the proposal and pay for their > development of the entire system. I'm hoping someone will sketch a proposal > based on a RESTful solution. The airport will offer some kiosks passengers > can use, and taxis also have a client computer. > > We also have word that a rival contractor, Fuddy Duddy Enterprise Solutions > has bid on the contract. > > The Fuddy proposal uses a message broker at the airport and creates a > queue in it where messages represent passenger taxi requests. These messages > are enqueued when passengers request a taxi in step 2. They plan to create a > standalone fat client application for taxis. In step 3 taxis make a > connection to the airport system, and make a blocking dequeue request, which > is answered in step 4 and the taxi is told how to meet up with the > passenger. The taxi app then presents the driver with a screen where they > enter Y/N on whether the agreement was reached per steps 8 or 12. 
The > customer can also tell the airport this per step 12, using the same kiosk > they used to request the taxi service. Furthermore, the taxi app enforces > the timeout per step 8. The customer's message will be acknowledged and > removed from the queue by the taxi app's "Y" response. If either party backs > out or the time expires, the airport passenger's request will be dispatched > again to a new > cab, and the original cab will also get the next available passenger. The > taxi client will automatically throw an error back if an already rejected > passenger is redispatched to it, but it will reissue another dequeue request > first, so that it gets a new passenger assignment if one is to be had. If > not, and there is no other cab to take the passenger, then the cab will > repeat this pattern every few seconds. The same client app can be > used to query the queue size, per step 12. > > > ----- Original Message ---- > From: Jan Algermissen <algermissen1971@...> > > On Jul 3, 2010, at 12:45 PM, bryan_w_taylor wrote: > > I am not sure what you are up to with this. Do you want to develop a system > that simulates the above actors? > > Or are the actors actors in use cases? What are those use cases and where > is the software system that is to realize them? > > I guess what makes most sense is that the airport is the system, but then > you mentioned it as an actor, too. > > Can you clarify? > > Jan
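Dong's RESERVE interfaces are not standard HTTP methods; a RESTful rendering might instead POST reservations (carrying a callback endpoint) into collection resources and let the airport resource pair them up. A toy in-memory matchmaker for the core pairing logic only; every name is an invented illustration, and plain strings stand in for the callback endpoints:

```python
from collections import deque

class Airport:
    """Toy matchmaker: a posted passenger or taxi 'reservation' is either
    paired with a waiting counterpart or queued with its callback endpoint."""
    def __init__(self):
        self.waiting_passengers = deque()
        self.waiting_taxis = deque()
        self.notifications = []  # stand-in for notifying both parties

    def reserve_taxi(self, passenger_callback):
        """A passenger's POST to the (hypothetical) /taxi-reservations."""
        if self.waiting_taxis:
            taxi = self.waiting_taxis.popleft()
            self.notifications.append((passenger_callback, taxi))
        else:
            self.waiting_passengers.append(passenger_callback)

    def reserve_passenger(self, taxi_callback):
        """A taxi's POST to the (hypothetical) /passenger-reservations."""
        if self.waiting_passengers:
            passenger = self.waiting_passengers.popleft()
            self.notifications.append((passenger, taxi_callback))
        else:
            self.waiting_taxis.append(taxi_callback)

airport = Airport()
airport.reserve_taxi("tel:+1-555-0100")     # passenger waits in line
airport.reserve_passenger("plate:TX-1234")  # taxi arrives; match is made
```

The queue position and cancellation that Dong mentions would naturally become GET and DELETE on the created reservation resource.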
Hello! This is for those cases where I'm forced to work with somewhat limited clients, which might be restricted to just the GET and POST methods. Assume you have a collection resource, maybe called '/customers'. If I want to create a new customer instance, I could POST a representation of that customer to the collection resource. The server returns a "201 created" and the URI of the new customer instance, maybe '/customers/1234'. Now let's say I need to update this particular customer entity with new information. I could PUT the new description to the instance URI or I could POST it. Since '/customers/1234' is not a collection resource there is no ambiguity about what I want: Update this particular customer instance. Is there a problem with that approach? Wikipedia's helpful article about REST says that a POST in that particular case should mean "Treat the addressed member as a collection in its own right and create a new entry in it." But - at least in my case - if I would ever have to have a collection within the customer, I would have given it its own URI, such as '/customers/1234/contacts'. Therefore, it's clear from the context that the customer entity itself is not a collection and never will be. So, is there any particular danger in interpreting PUT and POST in the same way in this particular instance? Or maybe a bit broader: Is there a problem with taking the context into account when looking at the meaning of PUT and POST?
On Jul 6, 2010, at 10:49 PM, brendel.juergen wrote: > Hello! > > This is for those cases where I'm forced to work with somewhat limited clients, which might be restricted to just the GET and POST methods. > > Assume you have a collection resource, maybe called '/customers'. If I want to create a new customer instance, I could POST a representation of that customer to the collection resource. The server returns a "201 created" and the URI of the new customer instance, maybe '/customers/1234'. > > Now let's say I need to update this particular customer entity with new information. I could PUT the new description to the instance URI or I could POST it. Since '/customers/1234' is not a collection resource there is no ambiguity about what I want: Update this particular customer instance. > > Is there a problem with that approach? No. If the semantics of the resource are appropriately defined you can use POST to update the resource state. What you lose is visibility because no intermediary is able to see that there is actually an update going on. There are also issues with idempotency because the user agent (if programmed in a sane way) will reject re-POSTing despite the user knowing that the update is actually idempotent. See also: http://roy.gbiv.com/untangled/2009/it-is-okay-to-use-post Jan > Wikipedia's helpful article about REST says that a POST in that particular case should mean "Treat the addressed member as a collection in its own right and create a new entry in it." But - at least in my case - if I would ever have to have a collection within the customer, I would have given it its own URI, such as '/customers/1234/contacts'. Therefore, it's clear from the context that the customer entity itself is not a collection and never will be. > > So, is there any particular danger in interpreting PUT and POST in the same way in this particular instance? 
> > Or maybe a bit broader: Is there a problem with taking the context into account when looking at the meaning of PUT and POST? > > > > ------------------------------------ > > Yahoo! Groups Links > > > ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
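Jan's point, that POST on a member URI can legitimately carry update semantics when clients are limited to GET and POST, can be sketched with a toy dispatcher. The resource layout mirrors Juergen's '/customers' example; the handler itself and its return shape are invented for illustration:

```python
customers = {"1234": {"name": "Acme Ltd"}}

def handle(method, path, body=None):
    """Toy dispatcher. POST to the collection creates; POST to a member
    URI is treated exactly like PUT (replace), for PUT-less clients.
    Returns (status, representation-or-location)."""
    parts = path.strip("/").split("/")
    if parts[0] != "customers":
        return 404, None
    if len(parts) == 1 and method == "POST":
        new_id = str(max(map(int, customers), default=0) + 1)
        customers[new_id] = body
        return 201, "/customers/" + new_id        # Location of new member
    if len(parts) == 2 and parts[1] in customers:
        cid = parts[1]
        if method == "GET":
            return 200, customers[cid]
        if method in ("PUT", "POST"):             # POST tunnels the update
            customers[cid] = body
            return 200, customers[cid]
    return 404, None

status, loc = handle("POST", "/customers", {"name": "Initech"})
status2, rep = handle("POST", "/customers/1234", {"name": "Acme GmbH"})
```

As Jan notes, what this costs is visibility: an intermediary sees only an unsafe POST and cannot tell the member-update is idempotent.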
check here: http://roy.gbiv.com/untangled/2009/it-is-okay-to-use-post and, for one possible implementation pattern for updates and deletes using only POST for writes: http://amundsen.com/blog/archives/1063 mca http://amundsen.com/blog/ http://mamund.com/foaf.rdf#me On Tue, Jul 6, 2010 at 16:49, brendel.juergen <juergen.brendel@...> wrote: > Hello! > > This is for those cases where I'm forced to work with somewhat limited clients, which might be restricted to just the GET and POST methods. > > Assume you have a collection resource, maybe called '/customers'. If I want to create a new customer instance, I could POST a representation of that customer to the collection resource. The server returns a "201 created" and the URI of the new customer instance, maybe '/customers/1234'. > > Now let's say I need to update this particular customer entity with new information. I could PUT the new description to the instance URI or I could POST it. Since '/customers/1234' is not a collection resource there is no ambiguity about what I want: Update this particular customer instance. > > Is there a problem with that approach? Wikipedia's helpful article about REST says that a POST in that particular case should mean "Treat the addressed member as a collection in its own right and create a new entry in it." But - at least in my case - if I would ever have to have a collection within the customer, I would have given it its own URI, such as '/customers/1234/contacts'. Therefore, it's clear from the context that the customer entity itself is not a collection and never will be. > > So, is there any particular danger in interpreting PUT and POST in the same way in this particular instance? > > Or maybe a bit broader: Is there a problem with taking the context into account when looking at the meaning of PUT and POST? > > > > ------------------------------------ > > Yahoo! Groups Links > > > >
"brendel.juergen" wrote: > > Assume you have a collection resource, maybe called '/customers'. If > I want to create a new customer instance, I could POST a > representation of that customer to the collection resource. The > server returns a "201 created" and the URI of the new customer > instance, maybe '/customers/1234'. > Or, try thinking about REST this way... Assume you dereference a resource, and the representation returned instructs the user agent that it can POST to some URI. When the user chooses that state transition, the server might respond 201, in which case the created resource's URI is in the Location: header. Nothing unRESTful there. > > Now let's say I need to update this particular customer entity with > new information. I could PUT the new description to the instance URI > or I could POST it. Since '/customers/1234' is not a collection > resource there is no ambiguity about what I want: Update this > particular customer instance. > Assume you dereference a resource, and the representation returned instructs the user agent that it can PUT to some URI. When the user chooses that state transition, the server might respond 204, in which case the resource must be dereferenced to determine the results. Nothing unRESTful there, either. > > Is there a problem with that approach? Wikipedia's helpful article > about REST says that a POST in that particular case should mean > "Treat the addressed member as a collection in its own right and > create a new entry in it." > No, the problem lies with the Wikipedia article -- REST says nothing about POST. If your system is using POST to accomplish PUT in order to support user agents that don't grok PUT, REST has no problem with that. If your system is using POST to create a new member of a collection, REST says nothing about how to treat the addressed member, and certainly doesn't require that the POST target be a "collection" since REST makes no such distinction. 
That Wikipedia article doesn't have much of a fanbase here. ;-) > > But - at least in my case - if I would > ever have to have a collection within the customer, I would have > given it its own URI, such as '/customers/1234/contacts'. Therefore, > it's clear from the context that the customer entity itself is not a > collection and never will be. > URIs have no context, they are opaque. REST only requires that you assign URIs to important resources -- *what* URIs you assign has no bearing on anything. The way your system self-documents its API is to use hypertext to instruct user agents what methods to call on what resources. Hypertext then informs the user agent what the result was. The answer to your title question, is you can POST to any resource your hypertext can instruct a user agent to POST to. User agents don't need to distinguish between collection and member in a REST system, and are in fact clueless that any such relationships exist between resources. > > So, is there any particular danger in interpreting PUT and POST in > the same way in this particular instance? > REST favors selecting one meaning per method, and requires that the selected meaning stays constant across all resources. If you've assigned replacement semantics to PUT, then it must mean that for all resources -- no making it sometimes mean "create" (even though HTTP allows both). If replacement semantics have been assigned to PUT, then you can't assign that meaning to some other method. With the exception of the situation you mentioned -- REST mismatches are allowed for specific reasons, like browser compatibility. If you've assigned replacement semantics to PUT and POST, while also assigning creation semantics to POST, it's just a workaround (what's important is that you understand the visibility/idempotency tradeoffs Jan mentioned). -Eric
--- In rest-discuss@yahoogroups.com, "brendel.juergen" <juergen.brendel@...> wrote: > > Hello! > > This is for those cases where I'm forced to work with somewhat limited clients, which might be restricted to just the GET and POST methods. > > Assume you have a collection resource, maybe called '/customers'. If I want to create a new customer instance, I could POST a representation of that customer to the collection resource. The server returns a "201 created" and the URI of the new customer instance, maybe '/customers/1234'. > > Now let's say I need to update this particular customer entity with new information. I could PUT the new description to the instance URI or I could POST it. Since '/customers/1234' is not a collection resource there is no ambiguity about what I want: Update this particular customer instance. > > Is there a problem with that approach? Wikipedia's helpful article about REST says that a POST in that particular case should mean "Treat the addressed member as a collection in its own right and create a new entry in it." Note that the article doesn't provide a description HTTP methods semantics, just an example of what the author considers "a typical usage". You'd better refer to the httpbis draft or a good book on REST for comprehensive description of the HTTP methods and how you should use them. Philippe
On Tue, Jul 6, 2010 at 10:11 PM, Jan Algermissen <algermissen1971@...> wrote: > > On Jul 6, 2010, at 10:49 PM, brendel.juergen wrote: > >> Hello! >> >> This is for those cases where I'm forced to work with somewhat limited clients, which might be restricted to just the GET and POST methods. >> >> Assume you have a collection resource, maybe called '/customers'. If I want to create a new customer instance, I could POST a representation of that customer to the collection resource. The server returns a "201 created" and the URI of the new customer instance, maybe '/customers/1234'. >> >> Now let's say I need to update this particular customer entity with new information. I could PUT the new description to the instance URI or I could POST it. Since '/customers/1234' is not a collection resource there is no ambiguity about what I want: Update this particular customer instance. >> >> Is there a problem with that approach? > > No. If the semantics of the resource are appropriately defined you can use POST to update the resource state. Semantics of the resource, or semantics of the link relation? I always look at it as the latter > What you loose is visibility because no intermediary is able to see that there is actually an update going on. The loss in visibility isn't over update since both PUT and POST methods are unsafe, and so from an intermediary perspective both update/invalidate state. The actual loss in visibility is much more subtle and is over the specific nature of the updating request i.e. omitting PUT removes the opportunity for intermediaries to distinguish, for example, between an idempotent and non-idempotent update. That's often not such a big deal - what types of intermediary mechanism are you thinking of that would require that level of visibility? The web's GET/POST, View/Do-ism seems to have provided enough visibility so far Cheers, Mike
Glenn, Are we still set fair for the 15th? Regards, Alan Dean On Sat, Jun 5, 2010 at 17:57, Glenn Block <glenn.block@...> wrote: > Guys, we would like to meet in the afternoon, from 1 to 4. Does that work? > > Also I need a list of folks who would like to attend. We have limited > seating (probably 15 max). I need to know beforehand who is > interested. I think about 5 people so far said yes. Any others? > > On 6/4/10, Dave Evans <list@...> wrote: > > I wouldn't mind tagging along... > > > > Dave > > > > -----Original Message----- > > From: rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com] On > > Behalf Of Alan Dean > > Sent: 04 June 2010 18:25 > > To: Glenn Block > > Cc: rest-discuss@yahoogroups.com > > Subject: Re: [rest-discuss] Re: London Meeting Dates > > > > Suits me. > > > > Alan > > > > On Fri, Jun 4, 2010 at 17:13, Glenn Block <glenn.block@...> wrote: > > > > Yes > > > > I am thinking of something a bit different though. Instead of having > > just a lunch, what if we meet for two to three hours at the MS campus > > in London? This way we could have a brainstorming / design type > > discussion. I'll supply lunch :-) > > > > My thinking would be Thursday the week of the 12th of July. > > > > What do you guys think? > > > > Glenn > > > > On 6/4/10, Jan Algermissen <algermissen1971@...> wrote: > > > Glenn, > > > > > > sorry to be impatient - do you have news on the date for London? > > > > > > (My preferred airline has brilliant deals when booking before Sunday :-) > > > > > > Jan > > -- > > Sent from my mobile device > -- > Sent from my mobile device
Yes..I'll send out an invite to the group tonite. On Wed, Jul 7, 2010 at 12:06 PM, Alan Dean <alan.dean@...> wrote: > Glenn, > > Are we still set fair for the 15th? > > Regards, > Alan Dean
On Jul 6, 2010, at 12:22 AM, bryan_w_taylor wrote: > --- In rest-discuss@yahoogroups.com, "Roy T. Fielding" <fielding@...> wrote: > > > > Specifically, we are trying to find RESTful solutions to: > > > > > > 1) Guaranteed Delivery - we need a paradigm to follow so that one service can transfer a sequence of resource representations to another reliably even though both services and the network suffer temporary unreliability > > > > That doesn't sound like a problem encountered by RESTful > > architectures. Reliable upload of multiple files can be > > performed using a single zip file, but the assumption being made > > here is that the client has a shared understanding of what the > > server is intending to do with those files. That's coupling. > > I don't follow. Several people have given good simple answers saying to use the idempotent nature of PUT (or fake it with POST) until a GET of the resource succeeds or add an HTTP header of If-None-Match: * and repeat the PUT until you get a 412 Precondition Failed response, so I thought this was a slam dunk. I guess it depends on how you define guaranteed delivery. You can certainly do such things with HTTP, but doing CRUD ops via HTTP does not automatically make it a RESTful paradigm. > But that answers "how" and I think you are getting at "why". I'm imagining that we have two servers A and B, where A plays the role of the client in the interaction. Events happen on server A and server B must receive some representation related to each event or unacceptable business consequences occur. Ah, typical event-based integration. That's a good architectural style for some applications. Why use REST to do that? > Why can't we merge the functionality of server A and B? Lots of reasons: Security, regulatory compliance, use of 3rd party systems, organizational boundaries and/or politics are a few. 
The way a company manages its systems engineering work is to partition business functionality into pieces, give ownership of each piece to a team, and align physical resources like servers to those teams. If this imposes constraints not found in RESTful systems, then I have no choice but to deal with those. Yes, but the RESTful solution is not to pretend that REST is an event-based integration style. What you want to do with REST is re-architect the system into more isolated parts that are event-based (usually a very small communication subsystem) and the remainder as a layered information system. The reason to do this, presumably, is to expose the RESTful interface to consumers instead of exposing the much more complex (and brittle) event interface. For example, CQ5 has a content repository based on the JCR interface, which includes both observation (change event notifiers) and RESTful interaction. The observation is behind the resource interface, so the fact that it isn't RESTful itself does not interfere with the multi-organizational, long-lived applications that might only use the Web interface. > > Applications like that are usually accomplished via code-on-demand. > > The problem you will run into here is implementation issues > > regarding current browsers, not architectural issues and > > certainly not a style issue. Most such tools are developed > > as browser extensions or app-specific clients, mostly because > > they need unfettered access to the filesystem and because > > browsers (for some unknown reason) don't include integrity checks > > in normal file uploads. > > I expect the clients of most of our services would count as "app specific clients". These might be our other services, our app servers that host user interfaces, or sometimes we will allow external business entities (customers, partners, suppliers, etc...) to write such apps directly. I don't mind going beyond browser limitations. 
> > > > 2) Distributed Transactions - we need a paradigm to allow state changes on multiple services to happen so that the changes succeed or fail as a unit > > > > Again, not a characteristic of RESTful architectures. If the > > client knows the transaction is distributed, you have failed. > > There are lots of ways to solve this kind of problem on the > > back-end of services interfaces, behind the resource abstraction, > > but none of those are relevant to the REST architectural style > > that might apply on the front-end of the service interface. > > This one I accept doesn't fit in a RESTful solution, and in other posts in this thread, we are exploring several of the other ways you mention. > > > > 3) Long running operations - we need asynchronous invocations between services and a mechanism for the invoking service to find out when the invoked service is done given that this work may take indefinitely long > > > > Any resource can behave as a long-running service. Just program it that way. > > Right, the question is how, exactly. Good solutions have been posted in this thread for this. Subbu's RESTful Web Services Cookbook solves this in examples 1.10 and 1.11. I think this was another slam dunk. > > I'm curious what you think about using so called "web hooks" for this kind of thing. Would you consider this a violation of the client-server constraint? > > No, web hooks is just someone's marketing term for registering notifications. The components that act on them are still either clients or servers during the communication (i.e., they are not trying to do both at the same time and functionality is still split across components). This is not a new concept. 
E.g., http://www.xent.com/FoRK-archive/apr98/0445.html http://www.xent.com/FoRK-archive/august98/0307.html > > > 4) Workflow Orchestration - we would like to have orchestration services that define business processes via standardized representations (eg BPMN), then execute instances of those processes and build up a process instance execution data resource by interacting with other RESTful resources using message exchange patterns that could specify the above behaviors. > > > > That is a system, not an integration problem. If you want to > > solve it, buy a full-featured WCM system like Day's CQ5. > > > > http://www.day.com/day/en/products/web_content_management.html > > > > (sorry, I don't have a way to answer that one without sounding > > like a sales plug -- it is, after all, why I work for a WCM vendor). > > No need to apologize for pointing me to a product that might be useful for us. I've been in several sales presentations in the last couple weeks with different vendors who have big fancy workflow engines. They all want to talk about WS-BPEL and orchestrating our SOAP endpoints. I enjoy the look of confusion when I mention that we are considering not allowing any new services to be created using SOAP. That seems to get their attention. They say "what will you do instead?" and I say use HTTP and they say "huh?". As much as I like doing things in HTTP, there are many closed systems that are better implemented in an efficient RPC syntax or a wire protocol specifically designed for message queues. Use whatever works best for the specific architecture behind the resource interface and then apply REST as the external facade to support large-scale integration and reusability of the information produced/consumed. Note, however, that SOAP is fairly unique for being the least efficient way of doing anything. That's what happens when core protocol design is driven by marketing. ....Roy
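The retry-until-confirmed idea the thread keeps returning to (repeat an idempotent PUT, optionally with `If-None-Match: *` so a 412 Precondition Failed proves the representation already arrived) can be sketched without a real network. This is only an illustrative sketch: `FlakyServer` and `reliable_put` are invented names, and the simulated transport stands in for an HTTP client.

```python
def reliable_put(send_put, max_attempts=10):
    """Retry an idempotent PUT until the server confirms receipt.

    `send_put` is any callable that performs the PUT and returns an
    HTTP status code. Because PUT is idempotent, repeating it after a
    timeout or network failure is safe. A 2xx means the representation
    was stored; a 412 (the If-None-Match: * precondition failed) means
    it was already there -- either way, delivery is guaranteed.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            status = send_put()
        except IOError:          # network failure: just try again
            continue
        if 200 <= status < 300 or status == 412:
            return attempt       # delivered (or already delivered)
    raise RuntimeError("gave up after %d attempts" % max_attempts)

class FlakyServer:
    """Simulated server: drops the first two requests, then accepts."""
    def __init__(self):
        self.calls = 0
        self.stored = False
    def put(self):
        self.calls += 1
        if self.calls < 3:
            raise IOError("connection reset")
        if self.stored:
            return 412           # If-None-Match: * precondition failed
        self.stored = True
        return 201

server = FlakyServer()
print(reliable_put(server.put))  # -> 3 (delivered on the 3rd attempt)
print(reliable_put(server.put))  # -> 1 (repeat is safe: 412 confirms it)
```

As Roy notes above, being able to do this over HTTP does not by itself make the design RESTful; the sketch only shows why idempotence makes the retry loop harmless.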
Hi Roy Any thoughts on HTML5 Web Sockets wrt REST? Would it be viable to have REST resources that can communicate changes (events) through web sockets? Or would you say that it is orthogonal? Thanks Glenn On Wed, Jul 7, 2010 at 8:25 PM, Roy T.Fielding <fielding@...> wrote: [quoted text trimmed]
trimmed quotes for brevity --- In rest-discuss@yahoogroups.com, Roy T.Fielding <fielding@...> wrote: > > On Jul 6, 2010, at 12:22 AM, bryan_w_taylor wrote: > > --- In rest-discuss@yahoogroups.com, "Roy T. Fielding" <fielding@> wrote: > I guess it depends on how you define guaranteed delivery. You can > certainly do such things with HTTP, but doing CRUD ops via HTTP does > not automatically make it a RESTful paradigm. Fair enough. > > But that answers "how" and I think you are getting at "why". I'm imagining that we have two servers A and B, where A plays the role of the client in the interaction. Events happen on server A and server B must receive some representation related to each event or unacceptable business consequences occur. > > Ah, typical event-based integration. That's a good architectural > style for some applications. Why use REST to do that? Good question. I think using other tools for eventing makes a lot of sense in some cases. But there are sometimes disadvantages too: platform interoperability, additional infrastructure, or development and runtime complexity sometimes get in the way. So there are times where it might be nice to at least use a straightforward HTTP-based mechanism. > > Why can't we merge the functionality of server A and B? Lots of reasons: Security, regulatory compliance, use of 3rd party systems, organizational boundaries and/or politics are a few. The way a company manages its systems engineering work is to partition business functionality into pieces, give ownership of each piece to a team, and align physical resources like servers to those teams. If this imposes constraints not found in RESTful systems, then I have no choice but to deal with those. > > Yes, but the RESTful solution is not to pretend that REST is an > event-based integration style. 
What you want to do with REST is > re-architect the system into more isolated parts that are event-based > (usually a very small communication subsystem) and the remainder > as a layered information system. The reason to do this, presumably, > is to expose the RESTful interface to consumers instead of exposing > the much more complex (and brittle) event interface. Well said, and I think this is what I will take away and promote. > > > Any resource can behave as a long-running service. Just program it that way. > > > > Right, the question is how, exactly. Good solutions have been posted in this thread for this. Subbu's RESTful Web Services Cookbook solves this in examples 1.10 and 1.11. I think this was another slam dunk. > > > > I'm curious what you think about using so called "web hooks" for this kind of thing. Would you consider this a violation of the client-server constraint? > > No, web hooks is just someone's marketing term for registering > notifications. The components that act on them are still either > clients or servers during the communication (i.e., they are not > trying to do both at the same time and functionality is still > split across components). This is not a new concept. E.g., > > http://www.xent.com/FoRK-archive/apr98/0445.html > > http://www.xent.com/FoRK-archive/august98/0307.html Good to know. I like section 5.1.3 of that 2nd one from 12 years ago. > As much as I like doing things in HTTP, there are many closed systems > that are better implemented in an efficient RPC syntax or a wire > protocol specifically designed for message queues. Use whatever > works best for the specific architecture behind the resource interface > and then apply REST as the external facade to support large-scale > integration and reusability of the information produced/consumed. OK, I think this is very practical. Thanks for some good input.
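The long-running-operation recipes cited above (Cookbook 1.10/1.11) amount to: POST the work, receive 202 Accepted with a Location pointing at a status resource, then poll that resource until it reports completion. A toy in-memory sketch, with `JobServer`, `run_job`, and the `/jobs/{id}` URI scheme all invented for illustration:

```python
import itertools

class JobServer:
    """Accepts work with 202 + a status URI; finishes after N polls."""
    def __init__(self, ticks_to_finish=3):
        self.jobs = {}
        self.ids = itertools.count(1)
        self.ticks = ticks_to_finish
    def post_job(self, payload):
        job_id = next(self.ids)
        self.jobs[job_id] = {"state": "pending", "polls": 0}
        return 202, "/jobs/%d" % job_id       # status code, Location
    def get_status(self, uri):
        job = self.jobs[int(uri.rsplit("/", 1)[1])]
        job["polls"] += 1                     # simulate work progressing
        if job["polls"] >= self.ticks:
            job["state"] = "done"
        return 200, job["state"]

def run_job(server, payload):
    status, location = server.post_job(payload)
    assert status == 202                      # accepted, not yet done
    while True:                               # real clients would honor
        _, state = server.get_status(location)  # Retry-After and sleep
        if state == "done":
            return location

server = JobServer()
print(run_job(server, {"report": "Q2"}))      # -> /jobs/1
```

In a real deployment the client would sleep between polls (ideally for the server-supplied Retry-After interval) rather than spin, and the finished job would typically link to a separate result resource.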
As ReST is request-response, any eventing / pubsub you try and slap on it is something else that is not ReST. There are, however, other theses that build on top of ReST to provide architectural constraints for such systems, but I don't have the link anymore. Have a search on the mailing list; it should be in the archives somewhere. From: rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com] On Behalf Of Glenn Block Sent: 08 July 2010 06:53 To: Roy T.Fielding Cc: bryan_w_taylor; rest-discuss@yahoogroups.com Subject: Re: [rest-discuss] Re: Restful Approaches to some Enterprise Integration Problems [quoted text trimmed]
OK guys, we are set up to meet at the London MS campus on the afternoon of the 15th to discuss the new REST work. I'd like a final list of folks who want to be there. Please little 'r' me if you are interested in attending so that I can send you the invite along with the agenda. Looking forward to meeting you. Thanks Glenn
Sent from my iPhone On Jul 8, 2010, at 10:28, Sebastien Lambla <seb@serialseb.com> wrote: > > > As ReST is request-response, any eventing / pubsub you try and slap > on it is something else that is not ReST. There are however other > thesis that build on top of ReST to provide architectural > constraints for such systems, but I don’t have the link anymore. Hav > e a search > Google for arrested rohit khare Jan > on the mailing list, it should be in the archives somewhere. > > > > From: rest-discuss@yahoogroups.com [mailto:rest- > discuss@yahoogroups.com] On Behalf Of Glenn Block > Sent: 08 July 2010 06:53 > To: Roy T.Fielding > Cc: bryan_w_taylor; rest-discuss@yahoogroups.com > Subject: Re: [rest-discuss] Re: Restful Approaches to some > Enterprise Integration Problems > > > > > > > > Hi Roy > > > > Any thoughs on HTML5 Web Sockets wrt REST? Would it be viable to > have a REST resources that can communicate changes (events) through > web sockets? Or would you say that it is orthagonal? > > > > Thanks > Glenn > > On Wed, Jul 7, 2010 at 8:25 PM, Roy T.Fielding <fielding@...> > wrote: > > > > On Jul 6, 2010, at 12:22 AM, bryan_w_taylor wrote: > > --- In rest-discuss@yahoogroups.com, "Roy T. Fielding" > <fielding@...> wrote: > > > > > > Specifically, we are trying to find RESTful solutions to: > > > > > > > > 1) Guaranteed Delivery - we need a paradigm to follow so that > one service can transfer a sequence of resource representations to > another reliably even though both services and the network suffer > temporary unreliability > > > > > > That doesn't sound like a problem encountered by RESTful > > > architectures. Reliable upload of multiple files can be > > > performed using a single zip file, but the assumption being made > > > here is that the client has a shared understanding of what the > > > server is intending to do with those files. That's coupling. > > > > I don't follow. 
Several people have given good simple answers > saying to use the idempotent nature of PUT (or fake it with POST) > until a GET of the resource succeeds or add an HTTP header of If- > None-Match: * and repeat the PUT until you get a 412 Precondition > Failed response, so I thought this was a slam dunk. > > I guess it depends on how you define guaranteed delivery. You can > certainly do such things with HTTP, but doing CRUD ops via HTTP does > not automatically make it a RESTful paradigm. > > > > > But that answers "how" and I think you are getting at "why". I'm > imagining that we have two servers A and B, where A plays the role > of the client in the interaction. Events happen on server A and > server B must receive some representation related to each event or > unacceptable business consequences occur. > > Ah, typical event-based integration. That's a good architectural > style for some applications. Why use REST to do that? > > > > > Why can't we merge the functionality of server A and B? Lots of > reasons: Security, regulatory compliance, use of 3rd party systems, > organizational boundaries and/or politics are a few. The way a > company manages it's systems engineering work is to partition > business functionality into pieces, give ownership of each piece to > a team, and align physical resources like servers to those teams. If > this imposes constraints not found in RESTful systems, then I have > no choice but to deal with those. > > Yes, but the RESTful solution is not to pretend that REST is an > event-based integration style. What you want to do with REST is > re-architect the system into more isolated parts that are event-based > (usually a very small communication subsystem) and the remainder > as a layered information system. The reason to do this, presumably, > is to expose the RESTful interface to consumers instead of exposing > the much more complex (and brittle) event interface. 
> > For example, CQ5 has a content repository based on the JCR > interface, which includes both observation (change event notifiers) > and RESTful interaction. The observation is behind the resource > interface, so the fact that it isn't RESTful itself does not > interfere with the multi-organizational, long-lived applications > that might only use the Web interface. > > > > > > Applications like that are usually accomplished via code-on- > demand. > > > The problem you will run into here is implementation issues > > > regarding current browsers, not architectural issues and > > > certainly not a style issue. Most such tools are developed > > > as browser extensions or app-specific clients, mostly because > > > they need unfettered access to the filesystem and because > > > browsers (for some unknown reason) don't include integrity checks > > > in normal file uploads. > > > > I expect the clients of most of our services would count as "app > specific clients". These might be our other services, our app > servers that host user interfaces, or sometimes we will allow > external business entities (customers, partners, suppliers, etc...) > to write such apps directly. I don't mind going beyond browser > limitations. > > > > > > 2) Distributed Transactions - we need a paradigm to allow > state changes on multiple services to happen so that the changes > succeed or fail as a unit > > > > > > Again, not a characteristic of RESTful architectures. If the > > > client knows the transaction is distributed, you have failed. > > > There are lots of ways to solve this kind of problem on the > > > back-end of services interfaces, behind the resource abstraction, > > > but none of those are relevant to the REST architectural style > > > that might apply on the front-end of the service interface. > > > > This one I accept doesn't fit in a RESTful solution, and in other > posts in this thread, we are exploring several of other ways you > mention. 
> > > > > > 3) Long running operations - we need asynchronous invocations > between services and a mechanism for the invoking service to find > out when the invoked service is done given that this work may take > indefinitely long > > > > > > Any resource can behave as a long-running service. Just program > it that way. > > > > Right, the question is how, exactly. Good solutions have been > posted in this thread for this. Subbu's RESTful Web Services > Cookbook solves this in examples 1.10 and 1.11. I think this was > another slam dunk. > > > > I'm curious what you think about using so-called "web hooks" for > this kind of thing. Would you consider this a violation of the > client-server constraint? > > No, web hooks is just someone's marketing term for registering > notifications. The components that act on them are still either > clients or servers during the communication (i.e., they are not > trying to do both at the same time and functionality is still > split across components). This is not a new concept. E.g., > > http://www.xent.com/FoRK-archive/apr98/0445.html > > http://www.xent.com/FoRK-archive/august98/0307.html > > > > > > > 4) Workflow Orchestration - we would like to have > orchestration services that define business processes via > standardized representations (e.g. BPMN), then execute instances of > those processes and build up a process instance execution data > resource by interacting with other RESTful resources using message > exchange patterns that could specify the above behaviors. > > > > > > That is a system, not an integration problem. If you want to > > > solve it, buy a full-featured WCM system like Day's CQ5. > > > > > > http://www.day.com/day/en/products/web_content_management.html > > > > > > (sorry, I don't have a way to answer that one without sounding > > > like a sales plug -- it is, after all, why I work for a WCM > vendor). > > > > No need to apologize for pointing me to a product that might be > useful for us. 
I've been in several sales presentations in the last > couple weeks with different vendors who have big fancy workflow > engines. They all want to talk about WS-BPEL and orchestrating our > SOAP endpoints. I enjoy the look of confusion when I mention that we > are considering not allowing any new services to be created using > SOAP. That seems to get their attention. They say "what will you do > instead?" and I say use HTTP and they say "huh?". > > As much as I like doing things in HTTP, there are many closed systems > that are better implemented in an efficient RPC syntax or a wire > protocol specifically designed for message queues. Use whatever > works best for the specific architecture behind the resource interface > and then apply REST as the external facade to support large-scale > integration and reusability of the information produced/consumed. > > Note, however, that SOAP is fairly unique for being the least > efficient > way of doing anything. That's what happens when core protocol design > is driven by marketing. > > ....Roy > > > > > > > > > >
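The "repeat the PUT until you get a 412 Precondition Failed" delivery pattern discussed above can be sketched roughly as follows. This is an illustration only: the `send` callable is a hypothetical stand-in for a real HTTP client call, and the retry limit is my own assumption.

```python
def deliver(send, uri, body, max_attempts=10):
    """Repeat an idempotent, create-only PUT until either this attempt
    stores the resource or the server answers 412 Precondition Failed,
    which proves an earlier attempt already got through."""
    headers = {"If-None-Match": "*"}  # create-only: never overwrite
    for _ in range(max_attempts):
        try:
            status = send("PUT", uri, body, headers)
        except IOError:
            continue  # transient network failure: just try again
        if status in (200, 201, 204):
            return "stored"           # this attempt delivered the message
        if status == 412:
            return "already-stored"   # a previous attempt succeeded
        # any other status (e.g. 503): fall through and retry
    raise RuntimeError("delivery not confirmed after %d attempts" % max_attempts)
```

Because PUT is idempotent and If-None-Match: * makes it create-only, repeating the request is safe: either terminal outcome confirms the representation was stored exactly once.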
On Jul 8, 2010, at 7:52 AM, Glenn Block wrote: > > > Hi Roy > > Any thoughts on HTML5 Web Sockets wrt REST? Would it be viable to have REST resources that can communicate changes (events) through web sockets? Or would you say that it is orthogonal? While it is certainly possible (and even with pure HTTP[1]) it would be against the simplicity/understandability goals of REST because it complicates the architecture (client components take a server role). I'd try to go with polling as long as possible. Jan [1] http://sourceforge.net/projects/mod-pubsub/ ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
Thanks all! On Thu, Jul 8, 2010 at 2:11 AM, Jan Algermissen <algermissen1971@...> wrote: > While it is certainly possible (and even with pure HTTP[1]) it would be > against the simplicity/understandability goals of REST because it > complicates the architecture (client components take a server role). > > I'd try to go with polling as long as possible. > > Jan > > [1] http://sourceforge.net/projects/mod-pubsub/
On Jul 8, 2010, at 11:11 AM, Jan Algermissen wrote: > While it is certainly possible (and even with pure HTTP[1]) it would be against the simplicity/understandability goals of REST because it complicates the architecture (client components take a server role). > > I'd try to go with polling as long as possible. There is a presentation on InfoQ by Ian that nicely shows how to do this with Atom, BTW: http://www.infoq.com/presentations/robinson-restful-enterprise > > Jan > > [1] http://sourceforge.net/projects/mod-pubsub/ ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
On Jul 7, 2010, at 10:52 PM, Glenn Block wrote: > > Any thoughts on HTML5 Web Sockets wrt REST? Would it be viable to have REST resources that can communicate changes (events) through web sockets? Or would you say that it is orthogonal? It would be a different style of interaction. Generally speaking, REST is designed to avoid tying a server's connection-level resources to a single client using an opaque protocol that is indistinguishable from a denial of service attack. Go figure. ....Roy
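Jan's earlier advice to "go with polling as long as possible" usually means conditional GET, so that unchanged polls cost the server almost nothing. A minimal sketch of one polling round; the `fetch` callable and its `(status, etag, body)` return shape are assumptions for illustration, not a real API:

```python
def poll_step(fetch, uri, last_etag=None):
    """One polling round: a conditional GET that the server can answer
    with 304 Not Modified when the resource has not changed."""
    headers = {}
    if last_etag is not None:
        headers["If-None-Match"] = last_etag  # validator from the last poll
    status, etag, body = fetch(uri, headers)
    if status == 304:
        return last_etag, None  # no change; keep the old validator
    return etag, body           # new representation: process the events in it
```

Repeating this on a timer gives event delivery without any server-to-client connection, at the cost of polling latency.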
On Jul 6, 2010, at 1:00 AM, Jan Algermissen wrote: > Roy, > > On Jul 6, 2010, at 3:03 AM, Roy T. Fielding wrote: > > > Reliable upload of multiple files can be > > performed using a single zip file, but the assumption being made > > here is that the client has a shared understanding of what the > > server is intending to do with those files. That's coupling. > > Trying to test my understanding: > > By 'client' you are referring to 'user agent'? In this case, yes, though it is true for any client. > My understanding is that the user agent has no shared understanding beyond how to construct the submission request upon the activation of a hypermedia control. (Web browsers know how to create a POST request from a user's submission of a form) which it gets from the media type definition, yes. > The user however does have an understanding (expectation) of what the server is intending to do with those files. This expectation is the basis for choosing to activate the hypermedia control in the first place. A user (or configured robot) will understand their own intent, yes, but not necessarily how the server intends to accomplish that functionality. A user is unlikely to know that a given service needs guaranteed delivery, since best-effort delivery is the norm. One would have to add that to the interaction requirements, which means standardizing that kind of interaction through additional definitions in the media type or link relations and sending enough information with the request to enable the recipient to verify the received message integrity, and both sides need to know that the request needs to be repeated automatically if the checks fail. And that still doesn't tell us what to put in the representations being sent. That's why this kind of functionality is more likely found in javascript or a browser extension. There is also no need to limit yourself to one interface. 
Look at all the interfaces on Apache ActiveMQ, for example

   http://activemq.apache.org/protocols.html

The so-called REST protocol calls for POST to a given queue URI, which
I'll just assume isn't guaranteed delivery. Guaranteed delivery could
probably be added with a simple message integrity check if the messages
are unique, but I would prefer a more explicit pattern. For example, we
might define a message sink with a URI such that each client knows (by
definition) that it should append its own client-id (perhaps set by
cookie) and a message counter to the request URI, as in

   PUT URI/client-id/count HTTP/1.1
   MIC: a162b17f

and then the client can send as many messages as it wants, provided the
count is incremented for each new message, and the server must verify
(and store) the MIC before responding with a success code. Each message
can therefore be logged, verified, etc., just like a message queue with
guarantees.

We could try to standardize something like what I describe above, but
it would require multiple independent implementations and a lot more
free time than it probably deserves. In any case, it also begs the
question of why we would want to do this using HTTP [aside from just
avoiding firewall blocks, which is not a rational rationale]. The fact
is that most people write message queues for systems that are more
operational than informational -- i.e., they are doing something,
usually at a high rate of speed, that isn't intended to be viewed as an
information service, except in the form of an archive or summary of
past events.

Would a more RESTful message queue have significant architectural
properties that outweigh the trade-off on performance, or would it be
better to use a tightly coupled eventing protocol and merely provide
the resulting archive and summaries via normal RESTful interaction?
That kind of question needs to be answered by an architect familiar
with all of the design constraints for the proposed system.

....Roy
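Roy's message-sink pattern can be sketched in a few lines. The following is a minimal in-memory model, not real HTTP; the truncated-SHA-256 MIC and the zero-based per-client counter are assumptions filling in details the post leaves open:

```python
import hashlib

def mic(body: bytes) -> str:
    """Message integrity check; a truncated SHA-256 digest is assumed
    here, the actual algorithm is not specified in the post."""
    return hashlib.sha256(body).hexdigest()[:8]

def build_put(sink_uri: str, client_id: str, count: int, body: bytes):
    """Client side: append client-id and counter to the sink URI and
    attach the MIC header, as in 'PUT URI/client-id/count'."""
    return f"{sink_uri}/{client_id}/{count}", {"MIC": mic(body)}

class MessageSink:
    """Server side: verify (and store) the MIC before responding with a
    success code; counts are assumed to start at 0 per client."""
    def __init__(self):
        self.log = {}                        # (client_id, count) -> body

    def put(self, client_id: str, count: int, headers, body: bytes) -> int:
        if headers.get("MIC") != mic(body):
            return 400                       # integrity check failed
        if (client_id, count) in self.log:
            return 200                       # resend of a stored message
        if count != sum(1 for cid, _ in self.log if cid == client_id):
            return 409                       # gap in this client's sequence
        self.log[(client_id, count)] = body
        return 201                           # logged and verified

sink = MessageSink()
uri, headers = build_put("/queue/q1", "c42", 0, b"hello")
assert uri == "/queue/q1/c42/0"
assert sink.put("c42", 0, headers, b"hello") == 201   # stored
assert sink.put("c42", 0, headers, b"hello") == 200   # safe resend
```

Because a resend of an already-logged message is acknowledged without being stored twice, the client can simply repeat a request until it sees a success code, which is the "guarantee" in this scheme.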
Hi,
The resources I have always dealt with have started with "http://", i.e. they are URLs. I know a URN can identify "things" without providing a location or means for accessing that "thing", but how can you interact with that resource unless you know its location/access method (the URL)?
In short, why don't we use the more specific term (i.e. URL) when referring to RESTful
HTTP? Or when we use the term URI are we implicitly using URL?
Thanks,
Sean./
Sean Kennedy wrote:
>
> In short, why don't we use the more specific term (i.e. URL) when
> referring to RESTful HTTP? Or when we use the term URI are we
> implicitly using URL?
>

We say URI because often we're using the URL as an identifier, rather
than as something to dereference. Think of namespaces, Content-Location,
etc. where the URI is a reference. REST systems use URIs all over the
place; the same URI in one context is dereferenced, in another context
it is not, so it's just easier to always say URI. Atom IDs are URIs,
but they aren't dereferenced, so many folks use URNs, so we say URI
because we don't specifically mean URL. There's nothing wrong with
saying URL, or even URI (which should be IRI these days); the takeaway
is that the string is more of an identifier in practice than a rigid
location (locations have no meaning unless dereferenced).

-Eric
Hi Eric,
Thanks for that. Is this correct: the string, e.g. "http://www.example.org/somepath", when used as an identifier for resources (e.g. in RDF) or for identifying namespaces, is in fact a URN (and thus a URI) because it is not a location that one would type into a browser (dereference)? If you decided to put some explanatory text up at that "location", does that mean that the identifier is now both a URN and a URL - that intersection shown in the Wikipedia diagram [1]?
Thanks,
Sean.
[1] http://en.wikipedia.org/wiki/File:URI_Euler_Diagram_no_lone_URIs.svg
--- On Fri, 9/7/10, Eric J. Bowman <eric@...> wrote:
From: Eric J. Bowman <eric@...>
Subject: Re: [rest-discuss] URI always a URL?
To: "Sean Kennedy" <seandkennedy@...>
Cc: "Rest Discussion Group" <rest-discuss@yahoogroups.com>
Date: Friday, 9 July, 2010, 10:52
Sean Kennedy wrote:
>
> In short, why don't we use the more specific term (i.e. URL) when
> referring to RESTful HTTP? Or when we use the term URI are we
> implicitly using URL?
>
We say URI because often we're using the URL as an identifier, rather
than as something to dereference. Think of namespaces, Content-
Location, etc. where the URI is a reference. REST systems use URIs all
over the place, the same URI in one context is dereferenced, in another
context it is not, so it's just easier to always say URI. Atom IDs are
URIs, but they aren't dereferenced, so many folks use URNs, so we say
URI because we don't specifically mean URL. There's nothing wrong with
saying URL, or even URI (which should be IRI these days), the takeaway
is that the string is more of an identifier in practice than a rigid
location (locations have no meaning unless dereferenced).
-Eric
On Fri, Jul 9, 2010 at 5:23 AM, Sean Kennedy <seandkennedy@...> wrote:
>
> Hi Eric,
> Thanks for that. Is this correct : the string e.g.
> "http://www.example.org/somepath", when used as an identifier for
> resources e.g. RDF or for identifying namespaces, is in fact a URN
> (and thus a URI) because it is not a location that one would type
> into a browser (dereference)?
>

I could be mistaken, but I thought the format of a URN was different
from that of a URL, thus a URL used as a name is not a URN, but it is
strictly a URI. (Wikipedia reference
<http://en.wikipedia.org/wiki/Uniform_Resource_Name#URN_Syntax> for URN
syntax.)

Ryan Riley
Dong,

Your approach uses a pub/sub approach at the end. To switch it to a
more RESTful model, you would need to have the airport post the
potential pairing and have both the passenger and the taxi constantly
poll to check for their match. The bigger challenge is likely in the
agreement failure scenario. How do you allow the taxi and customer to
re-request a new pairing without moving them to the back of the line?

Bryan,

While a REST model might be found, your queueing solution seems the
best option. We're dealing with real queues. It might make sense to
have a REST facade layer on top to allow easy requesting/reporting,
but the underlying system would likely use a queue.

Ryan Riley

On Tue, Jul 6, 2010 at 7:53 AM, Dong Liu <edongliu@gmail.com> wrote:
>
> My trial here.
>
> This is a typical service orchestration scenario if we consider
> passengers and taxis as services and the airport implements an
> orchestration.
>
> The basic airport interfaces will be
>
> RESERVE /taxi
>
> The passenger can use this interface to get a taxi reservation
> ticket. In the request the passenger needs to provide an endpoint for
> callback, e.g. name/cell number/URL ...
>
> RESERVE /passenger
>
> The taxi can get a passenger reservation ticket. In the request the
> taxi needs to provide an endpoint for callback, e.g. plate
> number/cell number/URL ...
>
> Later the reservation ticket can be used to query about the place in
> line and also to cancel the reservation.
>
> When a match is made by the airport, the airport will issue
> notifications to both the passenger and the taxi.
>
> Does it look RESTful?
>
> Cheers,
>
> Dong
>
> On Sun, Jul 4, 2010 at 1:18 PM, Bryan Taylor <bryan_w_taylor@...> wrote:
>
>> Airport management has issued this as a request for proposal. They
>> are willing to implement systems suggested by the proposal and pay
>> for their development of the entire system. I'm hoping someone will
>> sketch a proposal based on a RESTful solution. The airport will
>> offer some kiosks passengers can use, and taxis also have a client
>> computer.
>>
>> We also have word that a rival contractor, Fuddy Duddy Enterprise
>> Solutions, has bid on the contract.
>>
>> The Fuddy proposal uses a message broker at the airport and creates
>> a queue in it where messages represent passenger taxi requests.
>> These messages are enqueued when passengers request a taxi in step
>> 2. They plan to create a standalone fat client application for
>> taxis. In step 3 taxis make a connection to the airport system, and
>> make a blocking dequeue request, which is answered in step 4 and the
>> taxi is told how to meet up with the passenger. The taxi app then
>> presents the driver with a screen where they enter Y/N on whether
>> the agreement was reached per steps 8 or 12. The customer can also
>> tell the airport this per step 12, using the same kiosk they used to
>> request the taxi service. Furthermore, the taxi app enforces the
>> timeout per step 8. The customer's message will be acknowledged and
>> removed from the queue by the taxi app's "Y" response. If either
>> party backs out or the time expires, the airport passenger's request
>> will be dispatched again to a new cab, and the original cab will
>> also get the next available passenger. The taxi client will
>> automatically throw an error back if an already rejected passenger
>> is redispatched to it, but it will reissue another dequeue request
>> first, so that it gets a new passenger assignment if one is to be
>> had. If not, and there is no other cab to take the passenger, then
>> the cab will repeat this pattern every few seconds. The same client
>> app can be used to query the queue size, per step 12.
>>
>> ----- Original Message ----
>> From: Jan Algermissen <algermissen1971@...>
>>
>> On Jul 3, 2010, at 12:45 PM, bryan_w_taylor wrote:
>>
>> I am not sure what you are up to with this. Do you want to develop a
>> system that simulates the above actors?
>>
>> Or are the actors actors in use cases? What are those use cases and
>> where is the software system that is to realize them?
>>
>> I guess what makes most sense is that the airport is the system, but
>> then you mentioned it as an actor, too.
>>
>> Can you clarify?
>>
>> Jan
Hi Ryan,

I think the approach of pub/sub like what pubsubhubbub does is pretty
"RESTful". However, it requires the participants, taxi and passenger,
to expose service endpoints for callbacks. If this is constrained,
then, as you said, polling will do the work via code-on-demand from the
airport.

How to deal with the failed match cases really depends on the business
requirements. If we consider failed matches as just normal, then for
each match that fails, is denied, or times out, the participant, either
taxi or passenger, can directly issue a new reservation request with
the failed match id. If it is fair for the system, then they can go to
the beginning of the line, but they will never be assigned to the
failed match any more. They may have this privilege for, say, 3 times,
then will be moved to the end of the line if they still want a match.

Cheers,

Dong

On Fri, Jul 9, 2010 at 10:06 AM, Ryan Riley <ryan.riley@...> wrote:
> Dong,
>
> Your approach uses a pub/sub approach at the end. To switch it to a
> more RESTful model, you would need to have the airport post the
> potential pairing and have both the passenger and the taxi constantly
> poll to check for their match. The bigger challenge is likely in the
> agreement failure scenario. How do you allow the taxi and customer to
> re-request a new pairing without moving them to the back of the line?
>
> Bryan,
>
> While a REST model might be found, your queueing solution seems the
> best option. We're dealing with real queues. It might make sense to
> have a REST facade layer on top to allow easy requesting/reporting,
> but the underlying system would likely use a queue.
>
> Ryan Riley
>
> On Tue, Jul 6, 2010 at 7:53 AM, Dong Liu <edongliu@...> wrote:
>
>> My trial here.
>>
>> This is a typical service orchestration scenario if we consider
>> passengers and taxis as services and the airport implements an
>> orchestration.
>> >> The basic airport interfaces will be >> >> RESERVE /taxi >> >> The passenger can use this interface to get a taxi reservation ticket. In >> the request the passenger needs to provide an endpoint for callback, e.g. >> Name/cellnumber/url ... >> >> RESERVE /passenger >> >> The taxi can get a passenger reservation ticket. In the request the taxi >> needs to provide an endpoint for callback, e.g. plate number/cellnumber/url >> ... >> >> Later the reservation ticket can be used for query about the place in line >> and also cancel the reservation. >> >> When a match is made by the airport, airport will issue notifications to >> both the passenger and the taxi. >> >> Does it looks RESTful? >> >> Cheers, >> >> Dong >> >> >> On Sun, Jul 4, 2010 at 1:18 PM, Bryan Taylor <bryan_w_taylor@...>wrote: >> >>> >>> >>> Airport management has issued this as a request for proposal. They are >>> willing to implement systems suggested by the proposal and pay for their >>> development of the entire system. I'd hoping someone will sketch a proposal >>> based on a RESTful solution. The airport will offer some kiosks passengers >>> can use, and taxis also have a client computer. >>> >>> We also have word that a rival contractor, Fuddy Duddy Enterprise >>> Solutions has bid on the contract. >>> >>> The Fuddy proposal uses an message broke at the airport and to create a >>> queue in it where messages represent passenger taxi requests. These messages >>> are enqueued when passengers request a taxi in step 2. They plan to create a >>> standalone fat client application for taxis. In step 3 taxis make a >>> connection to the airport system, and make a blocking dequeue request, which >>> is answered in step 4 and the taxi is told how to meet up with the >>> passenger. The taxi app then presents the driver with a screen where they >>> enter Y/N on whether the agreement was reached per steps 8 or 12. 
The >>> customer can also tell the airport this per step 12, using the same kiosk >>> they used to request the taxi service. Furthermore, the taxi app enforces >>> the timeout per step 8. The customers message will be acknowledged and >>> removed from the queue by the taxi app's "Y" response. If either party backs >>> out or the time expires, the airport passenger's request will be dispatched >>> again to a new >>> cab, and the original cab will also get the next available passenger. The >>> taxi client will automatically throw an error back if an already rejected >>> passenger is redispatched to it, but it will reissue another dequeue request >>> first, so that it get a new passenger assignment if one is to be had. If >>> not, and there is no other cab to take the passenger, then the cab will >>> repeat repeat this pattern every few seconds. The same client app can be >>> used to query the queue size, per step 12. >>> >>> >>> ----- Original Message ---- >>> From: Jan Algermissen <algermissen1971@...<algermissen1971%40mac.com> >>> > >>> >>> On Jul 3, 2010, at 12:45 PM, bryan_w_taylor wrote: >>> >>> I am not sure what you are up to with this. Do you want to develop a >>> system that simulates the above actors? >>> >>> Or are the actors actors in use cases? What are those use cases and where >>> is the software system that is to realize them? >>> >>> I guess what makes most sense is that the airport is the system, but then >>> you mentioned it as an actor, too. >>> >>> Can you clarify? >>> >>> Jan >>> >>> >> >> > >
I reckon this is now correct:
URI = URL (resolvable e.g. "http://..." that you can type into a browser) and URN ("urn:...").
The identifiers in RDF, and namespaces, that look just like a URL and that are called URIs (or URLs) are in fact just strings that follow the format of URLs.
Seem reasonable?
Sean.
--- On Fri, 9/7/10, Ryan Riley <ryan.riley@...> wrote:
From: Ryan Riley <ryan.riley@...>
Subject: Re: [rest-discuss] URI always a URL?
To: "Sean Kennedy" <seandkennedy@...>
Cc: "Eric J. Bowman" <eric@...>, "Rest Discussion Group" <rest-discuss@yahoogroups.com>
Date: Friday, 9 July, 2010, 15:51
On Fri, Jul 9, 2010 at 5:23 AM, Sean Kennedy <seandkennedy@...> wrote:
Hi Eric,
Thanks for that. Is this correct : the string e.g. "http://www.example.org/somepath", when used as an identifier for resources e.g. RDF or for identifying namespaces, is in fact a URN (and thus a URI) because it is not a location that one would type into a browser (dereference)?
I could be mistaken, but I thought the format of a URN was different from that of a URL, thus a URL used as a name is not a URN, but it is strictly a URI. (Wikipedia reference for URN syntax.)
Ryan Riley
To emphasize this last point, XML Namespaces ignore URL equivalence.
That is, while your browser knows that "http://www.example.com:80/foo",
"http://Www.Example.Com/foo", and "http://www.example.com/foo" are the
same thing (for caching purposes, for example), the XML Namespaces
specification explicitly treats each of these as a different namespace.
-Eric.
On 07/09/2010 10:09 AM, Sean Kennedy wrote:
>
>
> I reckon this is now correct:
> URI = URL (resolvable e.g. "http://..." that you can type into a
> browser) and URN ("urn:...").
> The identifiers in RDF, namespaces that look just like a URL and that
> are called URI's (or URL's) are in fact just strings that follow the
> format of URLs.
>
> Seem reasonable?
>
> Sean.
>
> --- On Fri, 9/7/10, Ryan Riley <ryan.riley@...> wrote:
>
>
> From: Ryan Riley <ryan.riley@...>
> Subject: Re: [rest-discuss] URI always a URL?
> To: "Sean Kennedy" <seandkennedy@...>
> Cc: "Eric J. Bowman" <eric@...>, "Rest Discussion
> Group" <rest-discuss@yahoogroups.com>
> Date: Friday, 9 July, 2010, 15:51
>
> On Fri, Jul 9, 2010 at 5:23 AM, Sean Kennedy
> <seandkennedy@...> wrote:
>
>
> Hi Eric,
> Thanks for that. Is this correct : the string e.g.
> "http://www.example.org/somepath", when used as an identifier
> for resources e.g. RDF or for identifying namespaces, is in
> fact a URN (and thus a URI) because it is not a location that
> one would type into a browser (dereference)?
>
>
> I could be mistaken, but I thought the format of a URN was
> different than a URL, thus a URL used as a name is not a URN, but
> it is strictly a URI. (Wikipedia reference
> <http://en.wikipedia.org/wiki/Uniform_Resource_Name#URN_Syntax> for URN
> schema.)
>
> Ryan Riley
>
>
>
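Eric's point is easy to verify with any namespace-aware XML parser. A small sketch using Python's stdlib ElementTree (the document and prefixes are invented for illustration):

```python
import xml.etree.ElementTree as ET

# Two namespace declarations that a browser would treat as the same URL,
# but which XML treats as distinct namespace names.
doc = """<root xmlns:a="http://www.example.com/foo"
              xmlns:b="http://www.example.com:80/foo">
  <a:item/>
  <b:item/>
</root>"""

root = ET.fromstring(doc)
# ElementTree expands each element name to {namespace-URI}local-name,
# comparing namespace URIs as plain strings -- no URL normalization.
tags = [child.tag for child in root]
assert tags[0] == "{http://www.example.com/foo}item"
assert tags[1] == "{http://www.example.com:80/foo}item"
assert tags[0] != tags[1]   # same "location", two different namespaces
```

The parser never dereferences or normalizes the namespace URI; it is only a string used as an identifier, which is exactly the URI-not-URL usage discussed above.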
Hi,
I am re-reading Stefan's article [1] and am looking at message reliability. I like Joe Gregorio's best practice [2] as it leverages PUT's idempotence. I am not aware of any approach that supports in-order delivery of a sequence of HTTP messages...
What if you needed in-order message delivery? I imagine for a banking application, the order of transactions on an account would be important...
Regards,
Sean.
[1] http://www.infoq.com/articles/tilkov-rest-doubts
[2] http://bitworking.org/news/201/RESTify-DayTrader
Hi Sean,

> What if you needed in-order message delivery? I imagine for a banking
> application, the order of transactions on an account would be
> important...

I think you could model it readily with HTTP status codes and
hypermedia. For example, if you interact with a resource which is in an
inconsistent state (from your point of view) because it didn't process
a prior representation, you could expect a 409 response which invites
you as a client to re-establish your view of the server-side state
before continuing. Hypermedia controls might be embedded in the
response representation if there's an opportunity for forward/backward
error recovery that the server can determine.

Still, another perhaps easier approach is simply to reverse
responsibilities: the sender becomes a server and pushes out a feed of
events to consumers who poll it. No chance of out-of-order messages
here, provided the client understands timestamps. It's also easy to
implement crash recovery on both sides with this approach, and to deal
with intermittent failures gracefully through caching.

Jim
On Jul 13, 2010, at 10:51 AM, Sean Kennedy wrote:

> What if you needed in-order message delivery? I imagine for a banking
> application, the order of transactions on an account would be
> important...

You can do this by including in the client's message a token that
expresses the client's assumptions about the state of the resource. The
server can use that token to verify that the client's expectation and
the actual resource state match. If they do not match, the server
instructs the client what to do next.

Roy somewhat explains this in [1]:

"Think of it instead as a series of individual POST requests that are
building up a combined resource that will eventually be a savings
account when finished. Each of those requests can include parameters
that perform the same role as an ETag -- basically, identifying the
client's view of the current state of the resource. Then, when a
request is repeated or a state-change lost, the server would see
that in the next request and tell the client to refresh its view
of the form before continuing to the next step."

[1] http://tech.groups.yahoo.com/group/rest-discuss/message/9805

-----------------------------------
Jan Algermissen, Consultant
NORD Software Consulting

Mail: algermissen@...
Blog: http://www.nordsc.com/blog/
Work: http://www.nordsc.com/
-----------------------------------
Hi Jim/Jan,
Thanks for taking the time. Does this sound correct:
assuming I have an application that has 3 requests R1, R2 and R3 that need to be processed in that order (with state representations x, y and z respectively).
cs = client state (view); ss = server state (view)
My assumption here is that the server and client are independently calculating their views of the state, i.e. the server is not informing the client in this instance of its current state in the representation. This seems to be a carefully selected breaking of HATEOAS - is that correct?
Scenario A : Message lost on way to server
Client Server
sends --> R1:x (vanilla view) ok (ss: x)
cs:x <-- 200 OK
sends --> R2:y (cs: x) Request Msg. Lost
re-sends --> R2:y (cs: x) cs (x) == ss (x)
new ss: x+y
cs:x+y <-- 200 OK
sends --> R3:z (cs: x+y) cs (x+y) == ss (x+y)
new ss: x+y+z
cs:x+y+z <-- 200 OK
Scenario B : Message lost on way from server
Client Server
sends --> R1:x (vanilla view) ok (ss: x)
cs:x <-- 200 OK
sends --> R2:y (cs: x) cs (x) == ss (x)
new ss: x+y
<-- LOST 200 OK
re-sends --> R2:y (cs: x) cs (x) <> ss (x+y)
cs:x+y <-- 409 Conflict (with maybe last
successful txn from client
and thus the client can
update its state to x+y)
sends --> R3:z (cs: x+y) cs (x+y) == ss (x+y)
new ss: x+y+z
cs:x+y+z <-- 200 OK
How does this look...
Sean.
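The two scenarios above can be simulated with a toy state-token server. This is a minimal in-memory sketch (no real HTTP; representing the client's view as a tuple of applied representations is an assumption, and scenario B is shown on R3 rather than R2):

```python
class Account:
    """Toy server: apply a representation only when the client's view
    token matches the actual state; otherwise answer 409 Conflict and
    return the current state so the client can resynchronize."""
    def __init__(self):
        self.state = ()                      # tuple of applied representations

    def apply(self, client_view, representation):
        if client_view != self.state:
            return 409, self.state           # views diverged: resync first
        self.state = self.state + (representation,)
        return 200, self.state

acct = Account()
cs = ()                                       # client's view of server state

code, cs = acct.apply(cs, "x")                # R1
assert code == 200

# Scenario A: R2 is lost in transit, so nothing reached the server and
# the re-send with the unchanged client view simply succeeds.
code, cs = acct.apply(cs, "y")
assert code == 200

# Scenario B: R3 is applied but the 200 OK is lost on the way back.
code, _ = acct.apply(cs, "z")                 # server is now x+y+z
assert code == 200
code, server_state = acct.apply(cs, "z")      # blind re-send of R3
assert code == 409                            # conflict: views diverged
cs = server_state                             # resync from the 409 response
assert cs == ("x", "y", "z")
```

Note how the lost-request and lost-response cases are distinguished purely by whether the re-send matches the server's state, which is the essence of the token scheme Jan describes.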
--- On Tue, 13/7/10, Jan Algermissen <algermissen1971@...> wrote:
From: Jan Algermissen <algermissen1971@...>
Subject: Re: [rest-discuss] HTTP reliability - in order msg delivery?
To: "Sean Kennedy" <seandkennedy@...>
Cc: "REST Discuss" <rest-discuss@yahoogroups.com>
Date: Tuesday, 13 July, 2010, 9:50
On Jul 13, 2010, at 10:51 AM, Sean Kennedy wrote:
> What if you needed in-order message delivery? I imagine for a banking application, the order of transactions on an account would be important...
You can do this by including in the
client's message a token that expresses the client's assumptions about the state of the resource. The server can use that token to verify that the client's expectation and the actual resource state match. If they do not match, the server instructs the client what to do next.
Roy somewhat explains this in [1]:
"Think of it instead as a series of individual POST requests that are
building up a combined resource that will eventually be a savings
account when finished. Each of those requests can include parameters
that perform the same role as an ETag -- basically, identifying the
client's view of the current state of the resource. Then, when a
request is repeated or a state-change lost, the server would see
that in the next request and tell the client to refresh its view
of the form before continuing to the next step."
[1] http://tech.groups.yahoo.com/group/rest-discuss/message/9805
-----------------------------------
Jan Algermissen, Consultant
NORD Software Consulting
Mail: algermissen@...
Blog: http://www.nordsc.com/blog/
Work: http://www.nordsc.com/
-----------------------------------
What's wrong with the simple approach: the client, who has to know the
message order a priori, should number the messages to reflect delivery
order and do an idempotent PUT of message N until it's verified
delivered, then move to message N+1. When message N is successfully
PUT, the representation returned should contain the number N+1 and the
URL for its PUT, and this resource should be separately GETable. The
client can drop retention of any message numbered N or less.
If you want to get fancy, you could allow batch PUT of records N+1 to M
in one message representation, with the understanding that the server
will discover and enforce within-message ordering of records. This is
just a go-fast technique.
This works, but is it really RESTful? The client's understanding of
things like message order is a form of application state that isn't
really hypermedia driven. You are really trying to create an
event-driven architecture that uses HTTP as a transport. Which is fine;
sometimes you have that problem. But I think Roy made the point to me
in a different thread that we should acknowledge that event-driven
architectural styles are different from RESTful ones.
________________________________
From: Sean Kennedy <seandkennedy@...>
To: Rest Discussion Group <rest-discuss@yahoogroups.com>
Sent: Tue, July 13, 2010 3:51:07 AM
Subject: [rest-discuss] HTTP reliability - in order msg delivery?
Hi,
I am re-reading Stefan's article [1] and am looking at msg reliability.
I like Joe Gregorio's best practice [2] as it leverages PUT's
idempotence. I am not aware of any approach that supports in-order
delivery of a sequence of HTTP messages...
What if you needed in-order message delivery? I imagine for a banking
application, the order of transactions on an account would be important...
Regards,
Sean.
[1] http://www.infoq.com/articles/tilkov-rest-doubts
[2] http://bitworking.org/news/201/RESTify-DayTrader
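Bryan's numbered-PUT scheme above can be sketched as follows. This is a minimal in-memory model, not real HTTP; the specific status codes and the resync-on-409 behavior are assumptions filling in details he leaves open:

```python
class OrderedSink:
    """Toy server: accepts message N only when N is the next expected
    number; a repeated PUT of an already-stored N is answered
    idempotently; every response carries the next expected number."""
    def __init__(self):
        self.messages = []

    def put(self, n, body):
        expected = len(self.messages)
        if n < expected:
            return 200, expected             # already delivered: safe resend
        if n > expected:
            return 409, expected             # gap: tell the client what's next
        self.messages.append(body)
        return 201, expected + 1

def deliver_in_order(sink, bodies):
    """Toy client: PUT message N until it is verified delivered, then
    move to N+1, using the number returned by the server."""
    n = 0
    for body in bodies:
        while True:
            code, nxt = sink.put(n, body)
            if code in (200, 201):           # verified delivered
                n = nxt
                break
            n = nxt                          # 409: resync to the server's count

sink = OrderedSink()
deliver_in_order(sink, ["m0", "m1", "m2"])
assert sink.messages == ["m0", "m1", "m2"]
assert sink.put(1, "m1") == (200, 3)         # duplicate PUT is harmless
```

The idempotence of PUT is what makes the retry loop safe: resending a message that was actually delivered changes nothing on the server, so the client can repeat until it sees a success code.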
On Tue, Jul 13, 2010 at 11:44 AM, Bryan Taylor <bryan_w_taylor@...> wrote:
>
> What's wrong with the simple approach: the client who has to know the
> message order a priori, should number the messages to reflect
> delivery order and do idempotent PUT of message N until it's verified
> delivered. Then move to message N+1. When message N is successfully
> PUT, the representation returned should contain the number N+1 and
> the URL for its PUT, and this resource should be separately GET'able.
> The client can drop retention of any message numbered N or less.

Well it comes down to who cares about the message ordering - the client
or the application. If the client does (i.e. the client is "harmed" by
not having messages processed in order), then it's really up to the
client to track its activity with the server to ensure its overall
experience. However, if the server cares about message ordering, and
the server's application state is adversely affected, then it's up to
the server to enforce it. It can't simply rely on the client to "do the
right thing". The example posited is one where the client failed and
lost track of the message. Idempotent PUT can somewhat alleviate this
behavior (in that message replay should be "safe") but doesn't solve
the problem of "skipped" messages.

> This works, but is it really RESTful? The client's understanding of
> things like message order is a form of application state that isn't
> really hypermedia driven. You are really trying to create an event
> driven architecture that uses HTTP as a transport. Which is fine,
> sometimes you have that problem. But I think Roy made the point to me
> in a different thread that we should acknowledge that event driven
> architectural styles are different from RESTful ones.

I think it can. In theory each time you PUT a message you could get an
appropriate link for the "next" message. In this way, the application
maintains the message ordering, since each link can be tied to the
message sequence.
In theory this should also eliminate the "skipped" message issue, since
a client won't know how to submit the 5th message without having
submitted the 4th. The problem occurs when the system gets out of sync:
when the client "thinks" it's sending the 5th message, but the link is
for the 4th. The only real way to solve that is to resend the entire
message sequence, or somehow figure out where the sequence got out of
sync and restart from that point.

That implies that there's some error detection protocol at the end, for
example after sending a message you get a link to the next message or
to "finish transaction". When you finish, you should get a checksum of
some kind (as simple as the number of messages or as complicated as a
CRC) that matches what you sent, ensuring that the client's intentions
match the server's.

Regards,

Will Hartung
(willh@...)
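Will's next-link-plus-checksum variant can be sketched the same way; a minimal in-memory model (the link template, status codes, and SHA-256 checksum are assumptions for illustration):

```python
import hashlib

class LinkedSink:
    """Toy server: each successful PUT answers with the link for the
    *next* message, so a client cannot submit message 5 without having
    submitted message 4; finish() yields a checksum over everything
    received, for the client to compare against what it sent."""
    def __init__(self):
        self.messages = []
        self.next_link = "/messages/0"       # hypothetical first link

    def put(self, link, body):
        if link != self.next_link:
            return 409, self.next_link       # out of sync: restart from here
        self.messages.append(body)
        self.next_link = f"/messages/{len(self.messages)}"
        return 201, self.next_link

    def finish(self):
        return hashlib.sha256("".join(self.messages).encode()).hexdigest()

sink = LinkedSink()
link = "/messages/0"
sent = []
for body in ["a", "b", "c"]:
    code, link = sink.put(link, body)
    assert code == 201
    sent.append(body)

# submitting with a stale link is rejected and the current link returned
assert sink.put("/messages/1", "b") == (409, "/messages/3")

# the client verifies the server saw exactly what it sent
assert sink.finish() == hashlib.sha256("".join(sent).encode()).hexdigest()
```

The 409 response carrying the current link is the error-detection step Will describes: it tells a confused client exactly where the sequence stands, without resending everything.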
It is said that in a well defined RESTful system, clients only need to know the root URI or a few well-known URIs, and the client shall discover all other links through these initial URIs. I do understand the benefits (decoupled clients) of this approach, but the downside for me is that the client needs to discover the links each time it tries to access something, i.e. given the following hierarchy of resources:
/collection1
collection1
|-sub1
|-sub1sub1
|-sub1sub1sub1
|-sub1sub1sub1sub1
|-sub1sub2
|-sub2
|-sub2sub1
|-sub2sub2
|-sub3
|-sub3sub1
|-sub3sub2
If we follow the "clients only need to know the root URI" approach, then a client shall only be aware of the root URI, i.e. /collection1 above, and the rest of the URIs should be discovered by the client through hypermedia links. I find this cumbersome: each time a client needs to do a GET, say on sub1sub1sub1sub1, should the client first do a GET on /collection1, then follow the link defined in the returned representation, and then do several more GETs on sub-resources to reach the desired resource? Or is my understanding about connectedness completely wrong?
Best regards,
Suresh
Suresh,

On Jul 14, 2010, at 9:17 AM, Suresh wrote:

> It is said that in a well defined RESTful system, the clients only
> need to know the root URI or few well known URIs and the client shall
> discover all other links through these initial URIs. I do understand
> the benefits (decoupled clients) from this approach but the downside
> for me is that the client needs to discover the links each time it
> tries access something

Yes - and no :-)

Yes, it will usually be guided by the server through the specific
states that constitute a given application, for example ordering a
book. The client usually discovers the URI to submit the order to at
runtime. We all do things like this every day on the Web.

In the general case there is no significant overhead created by this
approach because the client needs to take most of the steps anyhow (you
surely want to check the current price and availability before you
place your order, so remembering the submission target URI is not
really what you want).

While the user is stepping through an application, the user often
discovers other resources that are suitable entry points (e.g. a
product page, a search result, the history of a trouble ticket). The
URIs of those resources should be bookmarkable (cool URIs) for later
re-use. We all do this every day on the Web when we bookmark something
that we assume makes a good point for re-entering the application at
some later point in time.

Applications can have (very) many entry URIs; there is no implied limit
of a single one. However, clients only need to know one of them to
enter the application.

Jan

> i.e given the following hierarchy of resources:
>
> /collection1
> collection1
> |-sub1
> |-sub1sub1
> |-sub1sub1sub1
> |-sub1sub1sub1sub1
> |-sub1sub2
> |-sub2
> |-sub2sub1
> |-sub2sub2
> |-sub3
> |-sub3sub1
> |-sub3sub2
>
> If we follow the "Client only need to know the root URI" approach,
> then a client shall only be aware of the root URI i.e.
/collection1 above and the rest of URIs should be discovered by the
> clients through hypermedia links. I find this cumbersome because each
> time a client needs to do a GET, say on sub1sub1sub1sub1, should the
> client first do a GET on /collection1 and then follow the link
> defined in the returned representation and then do several more GETs
> on sub resources to reach the desired resource? or is my
> understanding about connectedness completely wrong?
>
> Best regards,
> Suresh

-----------------------------------
Jan Algermissen, Consultant
NORD Software Consulting

Mail: algermissen@...
Blog: http://www.nordsc.com/blog/
Work: http://www.nordsc.com/
-----------------------------------
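Jan's two points, following links at runtime and bookmarking discovered entry points, can be sketched against Suresh's hierarchy. This is an in-memory stand-in for a hypermedia API; the dict-of-links representation and the relation names are invented for illustration:

```python
# In-memory stand-in for a hypermedia API: each "representation" is a
# dict whose "links" map relation names to URIs (names are hypothetical,
# following Suresh's hierarchy).
REPRESENTATIONS = {
    "/collection1": {"links": {"sub1": "/collection1/sub1"}},
    "/collection1/sub1": {"links": {"sub1sub1": "/collection1/sub1/sub1sub1"}},
    "/collection1/sub1/sub1sub1": {"links": {}, "data": "leaf"},
}

def get(uri):
    """Stand-in for an HTTP GET returning a parsed representation."""
    return REPRESENTATIONS[uri]

def follow(entry_uri, *rels):
    """Enter at a known URI and follow one link relation per step."""
    uri = entry_uri
    for rel in rels:
        uri = get(uri)["links"][rel]
    return uri

deep = follow("/collection1", "sub1", "sub1sub1")
assert deep == "/collection1/sub1/sub1sub1"

# Jan's point: once discovered, the deep URI is bookmarkable, so the
# client can re-enter the application there directly next time.
assert get(deep)["data"] == "leaf"
```

The walk from the root is only needed the first time; after that, the discovered URI itself serves as an additional entry point, which is why the "one root URI" constraint is less costly than it looks.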
On Jul 13, 2010, at 2:15 PM, Sean Kennedy wrote: > > How does this look... > Sean, I am having trouble seeing what you are asking. Can you replace the formal expressions with HTTP request/response examples? Jan > Sean. > > --- On Tue, 13/7/10, Jan Algermissen <algermissen1971@...> wrote: > > From: Jan Algermissen <algermissen1971@...> > Subject: Re: [rest-discuss] HTTP reliability - in order msg delivery? > To: "Sean Kennedy" <seandkennedy@...> > Cc: "REST Discuss" <rest-discuss@yahoogroups.com> > Date: Tuesday, 13 July, 2010, 9:50 > > > On Jul 13, 2010, at 10:51 AM, Sean Kennedy wrote: > > > What if you needed in-order message delivery? I imagine for a banking application, the order of transactions on an account would be important... > > You can do this by including in the client's message a token that expresses the client's assumptions about the state of the resource. The server can use that token to verify that the client's expectation and the actual resource state match. If they do not match, the server instructs the client what to do next. > > Roy somewhat explains this in [1]: > > "Think of it instead as a series of individual POST requests that are > building up a combined resource that will eventually be a savings > account when finished. Each of those requests can include parameters > that perform the same role as an ETag -- basically, identifying the > client's view of the current state of the resource. Then, when a > request is repeated or a state-change lost, the server would see > that in the next request and tell the client to refresh its view > of the form before continuing to the next step." > > [1] http://tech.groups.yahoo.com/group/rest-discuss/message/9805 > > > ----------------------------------- > Jan Algermissen, Consultant > NORD Software Consulting > > Mail: algermissen@... 
> Blog: http://www.nordsc.com/blog/ > Work: http://www.nordsc.com/ > ----------------------------------- > > > > > > > > ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
Yes, I think your understanding is essentially correct. The caching constraint compensates for this, i.e. clients could hit their local cache instead of the network when traversing out from the entry point - which may be less cumbersome than it first seems. You can, of course, make your resources less granular and thus reduce the number of traversals necessary - however, this will likely make caching harder - it's a judgement call. Cheers, Mike On Wed, Jul 14, 2010 at 8:17 AM, Suresh <sureshkk@...> wrote: > It is said that in a well defined RESTful system, the clients only need to know the root URI or few well known URIs and the client shall discover all other links through these initial URIs. I do understand the benefits (decoupled clients) from this approach but the downside for me is that the client needs to discover the links each time it tries access something i.e given the following hierarchy of resources: > > /collection1 > collection1 > |-sub1 > |-sub1sub1 > |-sub1sub1sub1 > |-sub1sub1sub1sub1 > |-sub1sub2 > |-sub2 > |-sub2sub1 > |-sub2sub2 > |-sub3 > |-sub3sub1 > |-sub3sub2 > > If we follow the "Client only need to know the root URI" approach, then a client shall only be aware of the root URI i.e. /collection1 above and the rest of URIs should be discovered by the clients through hypermedia links. I find this cumbersome because each time a client needs to do a GET, say on sub1sub1sub1sub1, should the client first do a GET on /collection1 and the follow link defined in the returned representation and then do several more GETs on sub resources to reach the desired resource? or is my understanding about connectedness completely wrong? > > Best regards, > Suresh > > > > ------------------------------------ > > Yahoo! Groups Links > > > >
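Mike's point about the caching constraint compensating for repeated traversal can be sketched in code. The following is a minimal illustration, not code from the thread: an in-memory dict stands in for HTTP GET responses, and a client-side cache means only the first walk from the entry URI pays the network cost. All names and the representation format are made up for the example.

```python
# Sketch: hypermedia traversal from a single entry point, with a local cache.
# REPRESENTATIONS stands in for HTTP GET responses; a real client would make
# network requests and honour cache headers instead.

REPRESENTATIONS = {
    "/collection1": {"links": {"sub1": "/collection1/sub1"}},
    "/collection1/sub1": {"links": {"sub1sub1": "/collection1/sub1/sub1sub1"}},
    "/collection1/sub1/sub1sub1": {"links": {}, "data": "leaf"},
}

class Client:
    def __init__(self):
        self.cache = {}          # URI -> representation (local cache)
        self.network_gets = 0    # count of simulated network round-trips

    def get(self, uri):
        if uri in self.cache:
            return self.cache[uri]
        self.network_gets += 1
        rep = REPRESENTATIONS[uri]
        self.cache[uri] = rep
        return rep

    def follow(self, entry_uri, rels):
        """Walk link relations from the entry point to the target resource."""
        rep = self.get(entry_uri)
        uri = entry_uri
        for rel in rels:
            uri = rep["links"][rel]
            rep = self.get(uri)
        return uri, rep

client = Client()
uri, rep = client.follow("/collection1", ["sub1", "sub1sub1"])
first_cost = client.network_gets          # three GETs on the first traversal
client.follow("/collection1", ["sub1", "sub1sub1"])  # repeat: all cache hits
```

The repeat traversal costs no further network round-trips, which is the sense in which the caching constraint makes the "know only the root URI" discipline less cumbersome than it first seems.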
Hi Jan,
Apologies for the confusion. Hopefully this is clearer. Firstly, to confirm I am on firm ground: this situation only appears to arise when the client is unaware of the resource URI and therefore has to use POST instead of idempotent PUT - based on Roy's post that you kindly included, where he refers to a series of individual POST requests.
Secondly, I was looking at Bill de hOra's HTTPLR [1] last night and figured that his use of stateful URI's could be used to keep the client and server in sync i.e. no need for expensive ETag-type values.. Given that methodology, here is an example:
Client
Server
POST /someURI update resource state;
<details> /someUri goes to ".../ready" state
...
<clientViewOfState>
"http://.../initial" -->
</clientViewOfState>
</details>
<-- 200 OK gets lost
client re-sends:
POST /someURI
<details> server
detects conflict;
... informs client of what its view is
<clientViewOfState>
"http://.../initial" -->
</clientViewOfState>
</details>
<-- 409 Conflict
<serverStateView>
".../ready"
</serverStateView>
Thus, the client and server are kept in sync via the stateful URIs. This means that the server is maintaining some application state, i.e. breaking REST's statelessness constraint. However, if I am correct, constraints can be relaxed as and when the situation requires?
Does this seem reasonable...
Regards,
Sean.
[1] http://dehora.net/doc/httplr/draft-httplr-01.html
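[Editorial sketch.] The exchange above can be modelled minimally: the server compares the client's declared view of the resource state with its actual state, and answers 409 Conflict carrying its own view when they diverge. This is an illustration only; the function and dictionary names are hypothetical, while the state names ("initial", "ready") follow the example.

```python
# Hypothetical server-side handler for the conflict-detection pattern:
# the client sends its view of the resource state with each POST; the
# server refuses with 409 plus its own view when the client is stale
# (e.g. because the earlier 200 OK was lost on the wire).

resource_state = {"/someURI": "initial"}

def handle_post(uri, client_view):
    current = resource_state[uri]
    if client_view != current:
        # Client's expectation does not match actual state: tell it ours.
        return 409, {"serverStateView": current}
    resource_state[uri] = "ready"   # apply the state transition
    return 200, {"serverStateView": "ready"}

status, body = handle_post("/someURI", "initial")    # first POST succeeds
status2, body2 = handle_post("/someURI", "initial")  # blind re-send conflicts
```

The blind re-send gets 409 with the server's view, at which point the client can refresh and continue, which is exactly the recovery Roy describes.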
--- On Wed, 14/7/10, Jan Algermissen <algermissen1971@...> wrote:
From: Jan Algermissen
<algermissen1971@...>
Subject: Re: [rest-discuss] HTTP reliability - in order msg delivery?
To: "Sean Kennedy" <seandkennedy@...>
Cc: "Jim Webber" <jim@...>, "Rest Discussion Group" <rest-discuss@yahoogroups.com>
Date: Wednesday, 14 July, 2010, 7:41
On Jul 13, 2010, at 2:15 PM, Sean Kennedy wrote:
>
> How does this look...
>
Sean,
I am having trouble to see what you are asking. Can you replace the formal expressions with HTTP request/ response examples?
Jan
> Sean.
>
> --- On Tue, 13/7/10, Jan Algermissen <algermissen1971@...> wrote:
>
> From: Jan Algermissen <algermissen1971@...>
> Subject: Re: [rest-discuss] HTTP reliability - in order msg delivery?
> To: "Sean Kennedy" <seandkennedy@...>
> Cc: "REST Discuss" <rest-discuss@yahoogroups.com>
> Date: Tuesday, 13 July, 2010, 9:50
>
>
> On Jul 13, 2010, at 10:51 AM, Sean Kennedy wrote:
>
> > What if you needed in-order message delivery? I imagine for a banking application, the order of transactions on an account would be important...
>
> You can do this by including in the client's message a token that expresses the client's assumptions about the state of the resource. The server can use that token to verify that the client's expectation and the actual resource state match. If they do not match, the server instructs the client what to do next.
>
> Roy somewhat explains this in [1]:
>
> "Think of it instead as a series of individual POST requests that are
> building up a combined resource that will eventually be a savings
> account when finished. Each of those requests can include parameters
> that perform the same role as an ETag -- basically, identifying the
> client's view of the current state of the resource. Then, when a
> request is repeated or a state-change lost, the server would see
> that in the next request and tell the client to refresh its view
> of the form before continuing to the next step."
>
> [1] http://tech.groups.yahoo.com/group/rest-discuss/message/9805
>
>
> -----------------------------------
> Jan Algermissen, Consultant
> NORD Software Consulting
>
> Mail: algermissen@...
> Blog: http://www.nordsc.com/blog/
> Work: http://www.nordsc.com/
> -----------------------------------
>
>
>
>
>
>
>
>
-----------------------------------
Jan Algermissen, Consultant
NORD Software Consulting
Mail: algermissen@...
Blog: http://www.nordsc.com/blog/
Work: http://www.nordsc.com/
-----------------------------------
There is, IMHO, a big mistake in taking "connectedness" as "HATEOAS". The former implies that a system is simply "connected" in a somewhat static way, while the latter, especially through the concept of an "engine", implies a dynamic system, driven by the server. The former implies that if you go to /A you'll then be able to connect to /B and /C, while the latter just assumes that getting /A will allow you, or not, to follow other paths, which can be at some point in time /B and /C and at others /X and /Y. But of course, since I'm not even a native English speaker, maybe I'm just wrong. Nevertheless, in the situation you describe, if you as the system designer feel that /collection1/sub1/sub1sub1/sub1sub1sub1/sub1sub1sub1sub1 is so important to the clients, you just give it a URI like /theimportantmemberofcollection that can redirect to /collection1/sub1/sub1sub1/sub1sub1sub1/sub1sub1sub1sub1. This of course assumes it is not a "dynamic" resource (one whose representation is the result of a server process, and is thus bound to be different at different points in time); otherwise you'll have to follow all the steps through the hypermedia. But then again, if that were the case you'd never know whether you'll finally get to /collection1/sub1/sub1sub1/sub1sub1sub1/sub1sub1sub1sub1, because any of the (sub)resources' representations may or may not have a hypermedia link to the next one... On 14 July 2010 10:04, Mike Kelly <mike@mykanjo.co.uk> wrote: > > > Yes, I think your understanding is essentially correct. > > The caching constraint compensates for this, i.e. clients could hit > their local cache instead of the network when traversing out from the > entry point - which may be less cumbersome than it first seems. > > You can, of course, make your resources less granular and thus reduce > the number of traversals necessary - however, this will likely make > caching harder - it's a judgement call. 
> > Cheers, > Mike > > > On Wed, Jul 14, 2010 at 8:17 AM, Suresh <sureshkk@...<sureshkk%40gmail.com>> > wrote: > > It is said that in a well defined RESTful system, the clients only need > to know the root URI or few well known URIs and the client shall discover > all other links through these initial URIs. I do understand the benefits > (decoupled clients) from this approach but the downside for me is that the > client needs to discover the links each time it tries access something i.e > given the following hierarchy of resources: > > > > /collection1 > > collection1 > > |-sub1 > > |-sub1sub1 > > |-sub1sub1sub1 > > |-sub1sub1sub1sub1 > > |-sub1sub2 > > |-sub2 > > |-sub2sub1 > > |-sub2sub2 > > |-sub3 > > |-sub3sub1 > > |-sub3sub2 > > > > If we follow the "Client only need to know the root URI" approach, then a > client shall only be aware of the root URI i.e. /collection1 above and the > rest of URIs should be discovered by the clients through hypermedia links. I > find this cumbersome because each time a client needs to do a GET, say on > sub1sub1sub1sub1, should the client first do a GET on /collection1 and the > follow link defined in the returned representation and then do several more > GETs on sub resources to reach the desired resource? or is my understanding > about connectedness completely wrong? > > > > Best regards, > > Suresh > > > > > > > > ------------------------------------ > > > > Yahoo! Groups Links > > > > > > > > > >
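[Editorial sketch.] António's alias idea could look like the following; this is an illustration, not code from the thread. The short URI is the only one the server commits to, and it redirects (307 here, preserving the request method) to wherever the deep resource currently lives, so the server remains free to reorganise the hierarchy behind it.

```python
# Hypothetical routing table: a stable, committed alias URI that redirects
# to the current deep location of the resource. Only this mapping changes
# when the server reorganises its internal URI space.

ALIASES = {
    "/theimportantmemberofcollection":
        "/collection1/sub1/sub1sub1/sub1sub1sub1/sub1sub1sub1sub1",
}

def handle_get(path):
    if path in ALIASES:
        # 307 Temporary Redirect: the client retries the same method
        # against the Location; the alias itself never breaks.
        return 307, {"Location": ALIASES[path]}, None
    return 200, {}, "representation of " + path

status, headers, _ = handle_get("/theimportantmemberofcollection")
```

Note this only works for the "static" case António describes; a resource whose location is itself the outcome of a server process still has to be reached by following the hypermedia.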
On Wed, Jul 14, 2010 at 12:17 AM, Suresh <sureshkk@...> wrote: > It is said that in a well defined RESTful system, the clients only > need to know the root URI or few well known URIs and the client > shall discover all other links through these initial URIs. I do > understand the benefits (decoupled clients) from this approach > but the downside for me is that the client needs to discover > the links each time it tries access something i.e given the > following hierarchy of resources: One of the premises behind this is that the server need only COMMIT to supporting the entry point URIs. Ideally, these entry point URIs are the long term API that the server supports. Anything that's not a long term API entry point into the service is more easily changed by the server. Think of the long term APIs as the subroutine calls, and the internal links as the guts of the code. It's an abstraction method that promotes change and reduces rigidity. Obviously a single entry point is the "most flexible", but it has the execution costs that you mentioned -- fine-grained, constant dereferencing in order to get things accomplished. Adding more entry points gives finer-grained access to the system. But at the same time, you have more of an obligation to maintain those endpoints. So clearly one endpoint is probably too few. But having zillions becomes an issue as well. Obviously you can do whatever you want. You don't have to maintain any endpoints and can change them willy-nilly if you like. But consider a real-world example. Say, the DMV. Around here, you go to a single kiosk, tell them what you want, they give you the form or forms and redirect you to the proper station. As a DMV user, you only need to know how to fill out the forms properly, and how to hit the kiosk. The internal mapping of resources (clerks, capabilities, etc.) is handled by the DMV, and they guide you through the process (Now serving 123, please go to window 4). 
What you can't do at the DMV is walk straight up to Sally at window 3 when you come back, just because you worked with her last time (you cached her URI). Sally may not be there, someone else may be at window 3, or the task for which window 3 is used may have changed (now they do registrations instead of licensing). Keeping the entry point at the greeting kiosk lets the DMV reallocate its internal resources as it sees fit. Now, consider going to the County Hall of Records. You walk in, go to the Information Kiosk, and ask for the Birth Certificate Office, and the greeter sends you to the second floor, office 210. You head down there and proceed to work your transaction. Next time you come, you skip the info desk and go straight to the second floor, office 210. It was pretty clear that this is a long term entry point to the Hall of Records system that they maintain, and you get to be more efficient because you can skip the Information kiosk step. But this time you have a problem, and the clerk is kind and says "go do this and call me directly", giving you the ability to go directly to the clerk. But, you wait too long, things happen, and 6 months later try to call the clerk, and he's gone. While the system was able to provide you with a direct link to a resource, it didn't commit that this was a long term interface to be maintained. Since the clerk is gone, you have to go back to the beginning and restart your transaction. So, basically, you expose as much of your system as you're comfortable supporting, and that you're comfortable having clients rely upon. Small systems get tightly coupled because the same people write both sides of the system, and have intimate knowledge. But when the system's audience grows larger, you have more of an obligation of commitment to stability and long term use. Others are making investments in your system, so the API can't be shifting sand beneath their feet. So, think of it simply as a bureaucracy that grows over time. 
Regards, Will Hartung (willh@...)
So it is essentially up to me to decide on the REST API based on the clients' needs rather than strictly following "*Clients only need to know the root URI*". Thanks everybody for helping me understand this part of REST that I found confusing. Best regards, Suresh On Wed, Jul 14, 2010 at 11:04 PM, Will Hartung <willh@...> wrote: > On Wed, Jul 14, 2010 at 12:17 AM, Suresh <sureshkk@...> wrote: > > > It is said that in a well defined RESTful system, the clients only > > need to know the root URI or few well known URIs and the client > > shall discover all other links through these initial URIs. I do > > understand the benefits (decoupled clients) from this approach > > but the downside for me is that the client needs to discover > > the links each time it tries access something i.e given the > > following hierarchy of resources: > > One of the premises behind this is that the server need only COMMIT to > supporting the entry point URIs. Ideally, these entry point URIs are > the long term API that the server supports. > > Anything that's not a long term API entry point in to the service is > more easily changed by the server. Consider the long term APIs the > subroutine calls, while the internal links the guts of the code. It's > abstraction method that promotes change and reduces rigidity. > > Obviously a single entry point is the "most flexible", but it has the > execution costs that you mentioned -- fine grained, constant > dereferencing in order to get things accomplished. > > Adding more entry points, gives finer grained access to the system. > But at the same time, you have more of an obligation to maintain those > endpoints. > > So clearly one endpoint is probably too few. But having zillions > becomes an issue as well. > > Obviously you can do whatever you want. You don't have to maintain any > endpoints and can change the willy nilly if you like. > > But consider a real world example. Say, the DMV. 
Around here, you go > to a single kiosk, tell them what you want, they give you the form or > forms and redirect you to the proper station. As a DMV user, you only > need to know how to fill out the forms properly, and how to hit the > kiosk. The internal mapping of resources (clerks, capabilities, etc.) > is handled by the DMV, and they guide you through the process (Now > serving 123, please go to window 4). > > What you can't do at the DMV is walk straight up to Sally at window 3 > when you come back, just because you worked with her last time (you > cached her URI). Sally may not be there, someone else may be at window > 3, or the task for which window 3 may have changed (now they do > registrations instead of licensing). Keeping the entry point at the > greeting kiosk lets the DMV reallocate it internal resources as it > feels is best. > > Now, consider going to the County Hall of Records. You walk in, go to > the Information Kiosk, and ask for the Birth Certificate Office, and > the greeter sends you to the second floor, office 210. You head down > there and proceed to work your transaction. Next time you come, you > skip the info desk and go straight to the second floor, office 210. It > was pretty clear that this is a long term entry point to the Hall of > Records system that they maintain, and you get to be more efficient > because you can skip the Information kiosk step. > > But this time you have a problem, and the clerk is kind and says "go > do this and call me directly", giving you the ability to go directly > to the clerk. But, you wait to long, things happen, and 6 months later > try to call the clerk, and he's gone. While the system was able to > provide you with a direct link to a resource, it didn't commit that > this was a long term interface to be maintained. Since the clerk is > gone, you have to go back to the beginning and restart your > transaction. 
> > So, basically, you expose as much of your system as you're comfortable > supporting, and that you're comfortable having clients rely upon. > Small systems get tightly coupled because the same people write both > sides of the system, and have intimate knowledge. > > But when the systems audience grows larger, you have more of an > obligation of commitment to stability and long term use. Others are > making investments in to your system so the API can't be a shifting > sand beneath their feet. > > So, think of it simply as a bureaucracy that grows over time. > > Regards, > > Will Hartung > (willh@...) > -- When the facts change, I change my mind. What do you do, sir?
On Jul 14, 2010, at 10:45 AM, Sean Kennedy wrote: > > > Hi Jan, > Apologies for the confusion. Hopefully this is clearer. Firstly, to confirm I am on firm ground: this situation only appears to arise when the client is unaware of the resource URI and therefore has to use POST instead of idempotent PUT - based on Roy's post that you kindly included, where he refers to a series of individual POST requests. > Secondly, I was looking at Bill de hOra's HTTPLR [1] last night and figured that his use of stateful URI's could be used to keep the client and server in sync i.e. no need for expensive ETag-type values.. Given that methodology, here is an example: > > Client Server > > POST /someURI update resource state; > <details> /someUri goes to ".../ready" state > ... > <clientViewOfState> > "http://.../initial" --> > </clientViewOfState> > </details> > > <-- 200 OK gets lost > > client re-sends: > POST /someURI > <details> server detects conflict; > ... informs client of what its view is > <clientViewOfState> > "http://.../initial" --> > </clientViewOfState> > </details> > > <-- 409 Conflict > <serverStateView> > ".../ready" > </serverStateView> > > I am not sure what you are getting at with the URIs here but I see your point. Why not have the client do a GET on the resource it wants to update? > > Thus, the client and server are keeping in synch via the use of the stateful URI's. Hmm - what is a 'stateful URI'? > This means that the server is maintaining some application state i.e. breaking REST's statelessness constraint. How so? The application state is where the client is in the overall application. How is the server maintaining that information in your example? > However, if I am correct, constraints can be relaxed as and when the situation arises? Well, if you relax REST's constraints it ain't REST anymore and you'll have to do the 'induced properties analysis' all over :-) Jan > > Does this seem reasonable... > > Regards, > Sean. 
> > [1] http://dehora.net/doc/httplr/draft-httplr-01.html > > --- On Wed, 14/7/10, Jan Algermissen <algermissen1971@...> wrote: > > From: Jan Algermissen <algermissen1971@...> > Subject: Re: [rest-discuss] HTTP reliability - in order msg delivery? > To: "Sean Kennedy" <seandkennedy@...> > Cc: "Jim Webber" <jim@...>, "Rest Discussion Group" <rest-discuss@yahoogroups.com> > Date: Wednesday, 14 July, 2010, 7:41 > > > On Jul 13, 2010, at 2:15 PM, Sean Kennedy wrote: > > > > > How does this look... > > > > Sean, > > I am having trouble to see what you are asking. Can you replace the formal expressions with HTTP request/ response examples? > > Jan > > > Sean. > > > > --- On Tue, 13/7/10, Jan Algermissen <algermissen1971@...> wrote: > > > > From: Jan Algermissen <algermissen1971@...> > > Subject: Re: [rest-discuss] HTTP reliability - in order msg delivery? > > To: "Sean Kennedy" <seandkennedy@...> > > Cc: "REST Discuss" <rest-discuss@yahoogroups.com> > > Date: Tuesday, 13 July, 2010, 9:50 > > > > > > On Jul 13, 2010, at 10:51 AM, Sean Kennedy wrote: > > > > > What if you needed in-order message delivery? I imagine for a banking application, the order of transactions on an account would be important... > > > > You can do this by including in the client's message a token that expresses the client's assumptions about the state of the resource. The server can use that token to verify that the client's expectation and the actual resource state match. If they do not match, the server instructs the client what to do next. > > > > Roy somewhat explains this in [1]: > > > > "Think of it instead as a series of individual POST requests that are > > building up a combined resource that will eventually be a savings > > account when finished. Each of those requests can include parameters > > that perform the same role as an ETag -- basically, identifying the > > client's view of the current state of the resource. 
Then, when a > > request is repeated or a state-change lost, the server would see > > that in the next request and tell the client to refresh its view > > of the form before continuing to the next step." > > > > [1] http://tech.groups.yahoo.com/group/rest-discuss/message/9805 > > > > > > ----------------------------------- > > Jan Algermissen, Consultant > > NORD Software Consulting > > > > Mail: algermissen@... > > Blog: http://www.nordsc.com/blog/ > > Work: http://www.nordsc.com/ > > ----------------------------------- > > > > > > > > > > > > > > > > > > ----------------------------------- > Jan Algermissen, Consultant > NORD Software Consulting > > Mail: algermissen@... > Blog: http://www.nordsc.com/blog/ > Work: http://www.nordsc.com/ > ----------------------------------- > > > > > > ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
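[Editorial sketch.] Jan's suggestion that the client simply GET the resource it wants to update maps onto the standard HTTP validators (ETag/If-Match, per the HTTP conditional-request spec) rather than custom state tokens. A rough sketch under that assumption, with hypothetical function names and an in-memory store standing in for the server:

```python
# Sketch: conditional update with standard HTTP validators instead of
# custom "stateful URI" tokens. GET yields an ETag for the current state;
# the update carries If-Match; a stale tag is refused with 412.

import hashlib

store = {"/someURI": "initial"}

def etag_of(body):
    # Any strong validator works; a content hash is one simple choice.
    return hashlib.sha1(body.encode()).hexdigest()[:8]

def do_get(uri):
    body = store[uri]
    return 200, {"ETag": etag_of(body)}, body

def do_put(uri, body, if_match):
    if if_match != etag_of(store[uri]):
        return 412, {}, None      # 412 Precondition Failed: re-GET first
    store[uri] = body
    return 200, {"ETag": etag_of(body)}, body

_, headers, _ = do_get("/someURI")
status, _, _ = do_put("/someURI", "ready", headers["ETag"])    # succeeds
status2, _, _ = do_put("/someURI", "ready2", headers["ETag"])  # stale: 412
```

The effect is the same synchronisation Sean wants, but the server holds only resource state, not application state, so the statelessness constraint is untouched.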
+1, THIS was a very useful thread :-) On Thu, Jul 15, 2010 at 3:24 AM, Suresh Kumar <sureshkk@...> wrote: > > > So it is essentially up to me to decide on the REST API based on the client > needs rather than strictly following the "*Client only need to know the > root URI*". > > Thanks everybody for helping me understand this confusing part of REST to > me. > > Best regards, > Suresh > > > On Wed, Jul 14, 2010 at 11:04 PM, Will Hartung <willh@...>wrote: > >> On Wed, Jul 14, 2010 at 12:17 AM, Suresh <sureshkk@...> wrote: >> >> > It is said that in a well defined RESTful system, the clients only >> > need to know the root URI or few well known URIs and the client >> > shall discover all other links through these initial URIs. I do >> > understand the benefits (decoupled clients) from this approach >> > but the downside for me is that the client needs to discover >> > the links each time it tries access something i.e given the >> > following hierarchy of resources: >> >> One of the premises behind this is that the server need only COMMIT to >> supporting the entry point URIs. Ideally, these entry point URIs are >> the long term API that the server supports. >> >> Anything that's not a long term API entry point in to the service is >> more easily changed by the server. Consider the long term APIs the >> subroutine calls, while the internal links the guts of the code. It's >> abstraction method that promotes change and reduces rigidity. >> >> Obviously a single entry point is the "most flexible", but it has the >> execution costs that you mentioned -- fine grained, constant >> dereferencing in order to get things accomplished. >> >> Adding more entry points, gives finer grained access to the system. >> But at the same time, you have more of an obligation to maintain those >> endpoints. >> >> So clearly one endpoint is probably too few. But having zillions >> becomes an issue as well. >> >> Obviously you can do whatever you want. 
You don't have to maintain any >> endpoints and can change the willy nilly if you like. >> >> But consider a real world example. Say, the DMV. Around here, you go >> to a single kiosk, tell them what you want, they give you the form or >> forms and redirect you to the proper station. As a DMV user, you only >> need to know how to fill out the forms properly, and how to hit the >> kiosk. The internal mapping of resources (clerks, capabilities, etc.) >> is handled by the DMV, and they guide you through the process (Now >> serving 123, please go to window 4). >> >> What you can't do at the DMV is walk straight up to Sally at window 3 >> when you come back, just because you worked with her last time (you >> cached her URI). Sally may not be there, someone else may be at window >> 3, or the task for which window 3 may have changed (now they do >> registrations instead of licensing). Keeping the entry point at the >> greeting kiosk lets the DMV reallocate it internal resources as it >> feels is best. >> >> Now, consider going to the County Hall of Records. You walk in, go to >> the Information Kiosk, and ask for the Birth Certificate Office, and >> the greeter sends you to the second floor, office 210. You head down >> there and proceed to work your transaction. Next time you come, you >> skip the info desk and go straight to the second floor, office 210. It >> was pretty clear that this is a long term entry point to the Hall of >> Records system that they maintain, and you get to be more efficient >> because you can skip the Information kiosk step. >> >> But this time you have a problem, and the clerk is kind and says "go >> do this and call me directly", giving you the ability to go directly >> to the clerk. But, you wait to long, things happen, and 6 months later >> try to call the clerk, and he's gone. While the system was able to >> provide you with a direct link to a resource, it didn't commit that >> this was a long term interface to be maintained. 
Since the clerk is >> gone, you have to go back to the beginning and restart your >> transaction. >> >> So, basically, you expose as much of your system as you're comfortable >> supporting, and that you're comfortable having clients rely upon. >> Small systems get tightly coupled because the same people write both >> sides of the system, and have intimate knowledge. >> >> But when the systems audience grows larger, you have more of an >> obligation of commitment to stability and long term use. Others are >> making investments in to your system so the API can't be a shifting >> sand beneath their feet. >> >> So, think of it simply as a bureaucracy that grows over time. >> >> Regards, >> >> Will Hartung >> (willh@...) >> > > > > -- > When the facts change, I change my mind. What do you do, sir? > > >
On 15 July 2010 07:27, Jan Algermissen <algermissen1971@...> wrote: > > Well, if you relax REST's constraints it ain't REST anymore and you'll have > to do the 'induced properties analysis' all over :-) > > Jan > > I find this argument counter-productive, to say the least. It reminds me of an ongoing discussion on the LinkedIn Java Developers group, "Is Java a pure OOP language?" - in which almost everybody (I'm not one of them though) says it's not, because it has primitive types. Let's forget the arguments and agree with them. Should we not call Java an OOP language because of that? When I describe my app written in Java, must I say, well, it's not OO because it uses primitive types - although all the properties of OO are present? So let's say I drop the cache constraint - say because I'm on an intranet with a dozen users and big machines and I don't have latency problems - but I do apply all the remaining constraints because I see the value they bring - so I can't present my architecture as being REST based? I understand that calling something REST and doing the opposite (like tunneling RPC requests through POST) is something which should be exposed as non-REST, but "relaxing constraints"? I mean, I don't give a damn if I call it REST, RESTish or RESTwtcc or anything else, but saying that relaxing REST constraints is not REST is driving people away from a solution that could be worthwhile and, above all, that works. We relax OO in order to use it effectively. After all, styles, architectures and applications should be used to solve problems, not to comply with any given acronym. They should be used or applied because they were proven to be good ways of solving particular classes of problems. They are frameworks for a particular class of problems. If any given framework, or style, or architecture, solves 90% of my problems I'll go along with it. If I have to relax it to solve the other 10% I'll do it. Acronym or no acronym. But don't drive people away based on purity issues.
On Jul 15, 2010, at 9:36 AM, António Mota wrote: > But don't drive people away based on purity issues. Question is: Do people understand the consequences of relaxing a constraint? If you do and can live with the resulting loss of guaranteed system properties, fine. Go ahead. OTOH, relaxing the stateless server constraint at the cost of lost scalability and much reduced understandability will not make adopters of REST happy in the long run. I'll argue for purity every time. And I really do not see any problem with doing pure REST anyhow. Jan ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
I'm not arguing against purity; actually I'm not arguing against anything, nor trying to start a discussion about it. I'm just trying to point out that it can be counter-productive to tell people that REST is an all-or-nothing style. I'm not even saying it *is*, only that it can be. In real-life scenarios "all-or-nothing" rarely exists (well, it does, as fundamentalism in politics or religion - which is most of the time counter-productive), but not in IT anyhow... If people understand the properties that the constraints give rise to, if people are applying the REST style because it fits their "problem space" and not just because it's REST - basically, if people understand the consequences of applying a constraint - then they will also understand the consequences of relaxing one. 2010/7/15 Jan Algermissen <algermissen1971@...>: > > Question is: Do people understand the consequences of relaxing a constraint? > > If you do and can live with the resulting loss of guaranteed system properties, fine. Go ahead. > > OTOH, relaxing the stateless server constraint at the cost of lost scalability and much reduced understandability will not make adopters of REST happy in the long run. > > I'll argue for purity every time. And I really do not see any problem with doing pure REST anyhow. > > Jan > > > ----------------------------------- > Jan Algermissen, Consultant > NORD Software Consulting > > Mail: algermissen@... > Blog: http://www.nordsc.com/blog/ > Work: http://www.nordsc.com/ > -----------------------------------
On Jul 14, 2010, at 10:45 AM, Sean Kennedy wrote:
>
>
> Hi Jan,
> Apologies for the confusion. Hopefully this is clearer. Firstly, to confirm I am on firm ground: this situation only appears to arise when the client is unaware of the resource URI and therefore has to use POST instead of idempotent PUT - based on Roy's post that you kindly included, where he refers to a series of individual POST requests.
> Secondly, I was looking at Bill de hOra's HTTPLR [1] last night and figured that his use of stateful URIs could be used to keep the client and server in sync, i.e. no need for expensive ETag-type values. Given that methodology, here is an example:
>
> Client Server
>
> POST /someURI update resource state;
> <details> /someUri goes to ".../ready" state
> ...
> <clientViewOfState>
> "http://.../initial" -->
> </clientViewOfState>
> </details>
>
> <-- 200 OK gets lost
>
> client re-sends:
> POST /someURI
> <details> server detects conflict;
> ... informs client of what its view is
> <clientViewOfState>
> "http://.../initial" -->
> </clientViewOfState>
> </details>
>
> <-- 409 Conflict
> <serverStateView>
> ".../ready"
> </serverStateView>
>
>
I am not sure what you are getting at with the URIs here but I see your point. Why not have the client do a GET on the resource it wants to update? -- This comes back to the original question: how does HTTP implement in-order message delivery? I was following the link to Roy's reply that you sent me, i.e. multiple POSTs. Are you saying that a client could GET a representation which would contain the URIs to PUT in the correct sequence (where the client does not proceed until it gets a 200 OK from each individual PUT)? Seems a neat solution... but there would be no need for the client to send up a token to indicate its view of the resource state to the server (as I don't think it can get out of sync with PUTs)...
>
> Thus, the client and server are keeping in sync via the use of the stateful URIs.
Hmm - what is a 'stateful URI'?
-- for me, that is a string that looks like a URI that informs the client/server where it is in the application. Is this correct/incorrect?
> This means that the server is maintaining some application state i.e. breaking REST's statelessness constraint.
How so? The application state is where the client is in the overall application. How is the server maintaining that information in your example?
-- the URI string represents the current state of the application. I suppose if the server sticks it into a db then it becomes resource state and the issue is solved... Sean.
> [1] http://dehora.net/doc/httplr/draft-httplr-01.html
>
> --- On Wed, 14/7/10, Jan Algermissen <algermissen1971@...> wrote:
>
> From: Jan Algermissen <algermissen1971@...>
> Subject: Re: [rest-discuss] HTTP reliability - in order msg delivery?
> To: "Sean Kennedy" <seandkennedy@...>
> Cc: "Jim Webber" <jim@...>, "Rest Discussion Group" <rest-discuss@yahoogroups.com>
> Date: Wednesday, 14 July, 2010, 7:41
>
>
> On Jul 13, 2010, at 2:15 PM, Sean Kennedy wrote:
>
> >
> > How does this look...
> >
>
> Sean,
>
> I am having trouble seeing what you are asking. Can you replace the formal expressions with HTTP request/response examples?
>
> Jan
>
> > Sean.
> >
> > --- On Tue, 13/7/10, Jan Algermissen <algermissen1971@...> wrote:
> >
> > From: Jan Algermissen <algermissen1971@...>
> > Subject: Re: [rest-discuss] HTTP reliability - in order msg delivery?
> > To: "Sean Kennedy" <seandkennedy@...>
> > Cc: "REST Discuss" <rest-discuss@yahoogroups.com>
> > Date: Tuesday, 13 July, 2010, 9:50
> >
> >
> > On Jul 13, 2010, at 10:51 AM, Sean Kennedy wrote:
> >
> > > What if you needed in-order message delivery? I imagine for a banking application, the order of transactions on an account would be important...
> >
> > You can do this by including in the client's message a token that expresses the client's assumptions about the state of the resource. The server can use that token to verify that the client's expectation and the actual resource state match. If they do not match, the server instructs the client what to do next.
> >
> > Roy somewhat explains this in [1]:
> >
> > "Think of it instead as a series of individual POST requests that are
> > building up a combined resource that will eventually be a savings
> > account when finished. Each of those requests can include parameters
> > that perform the same role as an ETag -- basically, identifying the
> > client's view of the current state of the resource. Then, when a
> > request is repeated or a state-change lost, the server would see
> > that in the next request and tell the client to refresh its view
> > of the form before continuing to the next step."
> >
> > [1] http://tech.groups.yahoo.com/group/rest-discuss/message/9805
> >
> >
> > -----------------------------------
> > Jan Algermissen, Consultant
> > NORD Software Consulting
> >
> > Mail: algermissen@acm.org
> > Blog: http://www.nordsc.com/blog/
> > Work: http://www.nordsc.com/
> > -----------------------------------
> >
> >
> >
> >
> >
> >
> >
> >
>
> -----------------------------------
> Jan Algermissen, Consultant
> NORD Software Consulting
>
> Mail: algermissen@acm.org
> Blog: http://www.nordsc.com/blog/
> Work: http://www.nordsc.com/
> -----------------------------------
>
>
>
>
>
>
-----------------------------------
Jan Algermissen, Consultant
NORD Software Consulting
Mail: algermissen@...
Blog: http://www.nordsc.com/blog/
Work: http://www.nordsc.com/
-----------------------------------
Hello list! :-D I've read a lot of criticism of WADL (see for example http://bitworking.org/news/193/Do-we-need-WADL), since it could lead to something like WSDL/SOAP/RPC/Berlusconi & other human faults. BTW, I'd like to use it as hypertext, as in http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven Would that be possible? Actually a WADL file should have its own MIME type (does it have one?), but it seems quite good as a hypertext language as far as the client can handle it (through, for example, some code on demand). I'm not considering it a tool to generate such code (even though that would be possible and, as long as the code is downloaded with the WADL, still RESTful), but just a simple and clean way to connect the resources. It seems to me that there is nothing RESTfully wrong with using an alternative hypertext language instead of HTML, is there? Giacomo
On Jul 16, 2010, at 10:45 AM, Giacomo Tesio wrote: > > > Hello list! :-D Hello Giacomo :-) > > I've read a lot of criticism of WADL (see for example http://bitworking.org/news/193/Do-we-need-WADL), since it could lead to something like WSDL/SOAP/RPC/Berlusconi & other human faults. Basic problem with WADL is its design-time use (as you already understand, I think). RESTful systems do not need (and in fact forbid) knowledge as expressed by WADL at design time. > > BTW, I'd like to use it as hypertext, as in http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven Yes, you can use WADL as a forms mechanism (runtime use). > > Would that be possible? Yes - though I personally doubt the usefulness. I recommend the specification of a media type specific to your domain. That media type should provide the means for the necessary hypermedia controls (along the lines of application/atom+xml and application/atomsvc+xml). Jan > > Actually a WADL file should have its own MIME type (does it have one?), but it seems quite good as a hypertext language as far as the client can handle it (through, for example, some code on demand). > > I'm not considering it a tool to generate such code (even though that would be possible and, as long as the code is downloaded with the WADL, still RESTful), but just a simple and clean way to connect the resources. > > It seems to me that there is nothing RESTfully wrong with using an alternative hypertext language instead of HTML, is there? > > > Giacomo > > ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
I remember an article by Marc Hadley from the Jersey project about HATEOAS and WADL; try googling it. Nevertheless, I too think that its use should be avoided... On 16 Jul 2010 16:59, "Jan Algermissen" <algermissen1971@...> wrote: On Jul 16, 2010, at 10:45 AM, Giacomo Tesio wrote: > > > Hello list! :-D Hello Giacomo :-) > > I've read a lot of criticism of WADL (see for example http://bitworking.org/news/193/Do-we-ne... Basic problem with WADL is its design-time use (as you already understand, I think). RESTful systems do not need (and in fact forbid) knowledge as expressed by WADL at design time. > > BTW, I'd like to use it as hypertext, as in http://roy.gbiv.com/untangled/2008/rest-apis-mus... Yes, you can use WADL as a forms mechanism (runtime use). > > Would that be possible? Yes - though I personally doubt the usefulness. I recommend the specification of a media type specific to your domain. That media type should provide the means for the necessary hypermedia controls (along the lines of application/atom+xml and application/atomsvc+xml). Jan > > Actually a WADL file should have its own MIME type (does it have one?) but it seems quite goo... ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... <algermissen%40acm.org> Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
On Jul 15, 2010, at 10:28 AM, Sean Kennedy wrote: > > > > > > > > On Jul 14, 2010, at 10:45 AM, Sean Kennedy wrote: > > > > > > > Hi Jan, > > Apologies for the confusion. Hopefully this is clearer. Firstly, to confirm I am on firm ground: this situation only appears to arise when the client is unaware of the resource URI and therefore has to use POST instead of idempotent PUT - based on Roy's post that you kindly included, where he refers to a series of individual POST requests. > > Secondly, I was looking at Bill de hOra's HTTPLR [1] last night and figured that his use of stateful URI's could be used to keep the client and server in sync i.e. no need for expensive ETag-type values.. Given that methodology, here is an example: > > > > Client Server > > > > POST /someURI update resource state; > > <details> /someUri goes to ".../ready" state > > ... > > <clientViewOfState> > > "http://.../initial" --> > > </clientViewOfState> > > </details> > > > > <-- 200 OK gets lost > > > > client re-sends: > > POST /someURI > > <details> server detects conflict; > > ... informs client of what its view is > > <clientViewOfState> > > "http://.../initial" --> > > </clientViewOfState> > > </details> > > > > <-- 409 Conflict > > <serverStateView> > > ".../ready" > > </serverStateView> > > > > > > I am not sure what you are getting at with the URIs here but I see your point. Why not have the client do a GET on the resource it wants to update? > > -- This comes back to the original q: how does HTTP implement in-order message delivery? I was following the link to Roy's reply that you sent me i.e. multiple POSTs. Are you saying that a client could GET a repn which would contain the URI's to PUT in the correct sequence (where the client does not proceed until it gets a 200 OK from each individual PUT) ? Hmm, not really. The idea is that the client includes a token in the POST that reflects its own understanding what state the resource has. 
The server can then check whether that is true or not (and in the latter case send the current state). > Seems a neat solution...but there would be no need for the client to send up a token to indicate it's view of the resource state to the server (as I don't think it can get out of synch with PUTs)... > > > > > Thus, the client and server are keeping in synch via the use of the stateful URI's. > > Hmm - what is a 'stateful URI'? > > -- for me, that is a string that looks like a URI that informs the client/server where it is in the application. Is this correct/incorrect? Hmm - I guess any token would be fine. No need for a URI. > > > > This means that the server is maintaining some application state i.e. breaking REST's statelessness constraint. > > How so? The application state is where the client is in the overall application. How is the server maintaining that information in your example? > > -- the URI string represents the current state of the application. I suppose if the server sticks it into a db then it becomes resource state and the issue is solved... I think you are thinking too complicated about the solution. POST <details expectedState="1"> <address>Foo</address> </details> would do it. (Or we are talking past each other, maybe..?) Jan > > Sean. > > > [1] http://dehora.net/doc/httplr/draft-httplr-01.html > > > > --- On Wed, 14/7/10, Jan Algermissen <algermissen1971@...> wrote: > > > > From: Jan Algermissen <algermissen1971@...> > > Subject: Re: [rest-discuss] HTTP reliability - in order msg delivery? > > To: "Sean Kennedy" <seandkennedy@...> > > Cc: "Jim Webber" <jim@...>, "Rest Discussion Group" <rest-discuss@yahoogroups.com> > > Date: Wednesday, 14 July, 2010, 7:41 > > > > > > On Jul 13, 2010, at 2:15 PM, Sean Kennedy wrote: > > > > > > > > How does this look... > > > > > > > Sean, > > > > I am having trouble to see what you are asking. Can you replace the formal expressions with HTTP request/ response examples? > > > > Jan > > > > > Sean. 
> > > > > > --- On Tue, 13/7/10, Jan Algermissen <algermissen1971@...> wrote: > > > > > > From: Jan Algermissen <algermissen1971@...> > > > Subject: Re: [rest-discuss] HTTP reliability - in order msg delivery? > > > To: "Sean Kennedy" <seandkennedy@...> > > > Cc: "REST Discuss" <rest-discuss@yahoogroups.com> > > > Date: Tuesday, 13 July, 2010, 9:50 > > > > > > > > > On Jul 13, 2010, at 10:51 AM, Sean Kennedy wrote: > > > > > > > What if you needed in-order message delivery? I imagine for a banking application, the order of transactions on an account would be important... > > > > > > You can do this by including in the client's message a token that expresses the client's assumptions about the state of the resource. The server can use that token to verify that the client's expectation and the actual resource state match. If they do not match, the server instructs the client what to do next. > > > > > > Roy somewhat explains this in [1]: > > > > > > "Think of it instead as a series of individual POST requests that are > > > building up a combined resource that will eventually be a savings > > > account when finished. Each of those requests can include parameters > > > that perform the same role as an ETag -- basically, identifying the > > > client's view of the current state of the resource. Then, when a > > > request is repeated or a state-change lost, the server would see > > > that in the next request and tell the client to refresh its view > > > of the form before continuing to the next step." > > > > > > [1] http://tech.groups.yahoo.com/group/rest-discuss/message/9805 > > > > > > > > > ----------------------------------- > > > Jan Algermissen, Consultant > > > NORD Software Consulting > > > > > > Mail: algermissen@... 
> > > Blog: http://www.nordsc.com/blog/ > > > Work: http://www.nordsc.com/ > > > ----------------------------------- > > > > > > > > > > > > > > > > > > > > > > > > > > > > ----------------------------------- > > Jan Algermissen, Consultant > > NORD Software Consulting > > > > Mail: algermissen@... > > Blog: http://www.nordsc.com/blog/ > > Work: http://www.nordsc.com/ > > ----------------------------------- > > > > > > > > > > > > > > ----------------------------------- > Jan Algermissen, Consultant > NORD Software Consulting > > Mail: algermissen@... > Blog: http://www.nordsc.com/blog/ > Work: http://www.nordsc.com/ > ----------------------------------- > > > > > > ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
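Jan's `<details expectedState="1">` suggestion can be sketched as a server-side check. This is only an illustration, not code from the thread or from HTTPLR: the `Resource` class, the integer state counter, and the `serverStateView` field are all invented here, echoing the names used in the example exchange above.

```python
# A minimal sketch of the ETag-like token check Roy and Jan describe:
# the client sends the state it believes the resource is in; the server
# accepts the change only if that belief matches reality.

class Resource:
    def __init__(self):
        self.state = 1          # server-side version counter (the token)
        self.data = {}

    def handle_post(self, expected_state, payload):
        """Apply an update only if the client's view matches the server's."""
        if expected_state != self.state:
            # Client is out of sync (e.g. its 200 OK got lost and it
            # re-sent the request): answer 409 Conflict with the truth.
            return 409, {"serverStateView": self.state}
        self.data.update(payload)
        self.state += 1         # every accepted change advances the token
        return 200, {"serverStateView": self.state}

account = Resource()
status, body = account.handle_post(1, {"address": "Foo"})    # in sync: accepted
status2, body2 = account.handle_post(1, {"address": "Foo"})  # stale token: 409
```

A duplicated request (second call with the same token) is detected rather than applied twice, which is exactly the reliability property the thread is after.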
On Fri, Jul 16, 2010 at 5:58 PM, Jan Algermissen <algermissen1971@...> wrote: > > > > > It would be possible? > > Yes - though I personally doubt the usefulness. I recommend the > specification of a media type specific to your domain. That media type > should provide the means for the necessary hypermedia controls (along the > lines of application/atom+xml and application/atomsvc+xml). > > That's not clear to me... Should I write a specification for a "cargo" MIME type to deliver an application that shows such things? Should such a MIME type become a standard? It seems a little too complex... Is there some example where such a process has worked? Giacomo
G: I wrote a set of blog posts on this topic recently: http://www.amundsen.com/blog/archives/1041 It might give you some ideas. mca http://amundsen.com/blog/ http://mamund.com/foaf.rdf#me On Sat, Jul 17, 2010 at 18:00, Giacomo Tesio <giacomo@...> wrote: > > > On Fri, Jul 16, 2010 at 5:58 PM, Jan Algermissen <algermissen1971@... > > wrote: > > >> > >> > It would be possible? >> >> Yes - though I personnally doubt the usefulness. I recommend the >> specification of a media type specific to your domain. That media type >> should provide the means for the necessary hypermedia controls (along the >> lines of application/atom+xml and application/atomsrv+xml). >> >> > That's not clear to me... > > Should I write a specification for a "cargo" mime type to deliver an > application that show such things? > Should such mime type become a standard? > > It seem a little too complex... Is there some example of such a process > worked? > > > Giacomo > > > >
On Jul 18, 2010, at 12:00 AM, Giacomo Tesio wrote: > > > On Fri, Jul 16, 2010 at 5:58 PM, Jan Algermissen <algermissen1971@...> wrote: > > > > > It would be possible? > > Yes - though I personally doubt the usefulness. I recommend the specification of a media type specific to your domain. That media type should provide the means for the necessary hypermedia controls (along the lines of application/atom+xml and application/atomsvc+xml). > > > That's not clear to me... > > Should I write a specification for a "cargo" MIME type to deliver an application that shows such things? Think of the media type as domain-specific. The application itself is formed when the components (user agent, servers, intermediaries) start working together. IOW, the media type is not for *this* application but for the domain (even if that is a rather loose term :-). Take AtomPub or HTML as examples: when these are specified, the stuff that will later on be done with them (the applications) is not known. HTML can be used for displaying a Web page in a browser or for crawling and indexing a site. Both are applications and they *use* HTML; it is not made *for* them. > Should such a MIME type become a standard? Yes. It must be. However, 'standard' in this sense means more 'application independent' than 'IETF or W3C standard'. It is OK if the media type is only standard in your organisation. What is important is that it is not defined by the service under development. > > It seems a little too complex... Why does this seem complex? > Is there some example where such a process has worked? HTML AtomPub OpenSearch NewsML (to some extent) Jan > > > Giacomo > > > ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
I also agree. We don't want REST turning into SOAP; its power is its simplicity. On Friday, July 16, 2010, António Mota <amsmota@...> wrote: > I remember an article by Marc Hadley from the Jersey project about HATEOAS and WADL; try googling it. > Nevertheless, I too think that its use should be avoided... > On 16 Jul 2010 16:59, "Jan Algermissen" <algermissen1971@...> wrote: > > On Jul 16, 2010, at 10:45 AM, Giacomo Tesio wrote: > >> Hello list! :-D > > Hello Giacomo :-) > >> I've read a lot of criticism of WADL (see for example http://bitworking.org/news/193/Do-we-ne... > > Basic problem with WADL is its design-time use (as you already understand, I think). > > RESTful systems do not need (and in fact forbid) knowledge as expressed by WADL at design time. > >> BTW, I'd like to use it as hypertext, as in http://roy.gbiv.com/untangled/2008/rest-apis-mus... > > Yes, you can use WADL as a forms mechanism (runtime use). > >> Would that be possible? > > Yes - though I personally doubt the usefulness. I recommend the specification of a media type specific to your domain. That media type should provide the means for the necessary hypermedia controls (along the lines of application/atom+xml and application/atomsvc+xml). > > Jan > >> Actually a WADL file should have its own MIME type (does it have one?) but it seems quite goo... > ----------------------------------- > Jan Algermissen, Consultant > NORD Software Consulting > > Mail: algermissen@acm.org <algermissen%40acm.org> > Blog: http://www.nordsc.com/blog/ > Work: http://www.nordsc.com/ > -----------------------------------
On Sun, Jul 18, 2010 at 8:37 AM, Jan Algermissen <algermissen1971@...> wrote: > > > Should such a MIME type become a standard? > > Yes. It must be. However, 'standard' in this sense means more 'application > independent' than 'IETF or W3C standard'. It is OK if the media type is only > standard in your organisation. What is important is that it is not defined by the > service under development. > So, to represent a cargo I should send a Content-Type with it of type application/x-cargo+xml, for example? Is it right that different resources with such a MIME type MUST share the same XSD? > > It seems a little too complex... > > Why does this seem complex? > I find the RESTful architectural style applicable even in domains where there is no widespread standard. Designing an "application independent" standard representation format each time I want to deliver a different application to a customer seems far too much effort... And by the way, designing such formats does not seem to be a REST constraint: what do we need "code on demand" for otherwise? > > > Is there some example where such a process has worked? > > HTML > AtomPub > OpenSearch > NewsML (to some extent) > It seems to me that representing a chair, a cargo, a voyage, or a private banker advisory session in HTML is just as inappropriate (from a semantic point of view) as using XML (but without the XML strictness). And by the way, I don't think that a private banker advisory session is suitable for a standard "application independent" representation. I could embed that in an Atom extension, but that sounds like simply using XML. What do you think about this? Giacomo
2010/7/19 Glenn Block <glenn.block@...> > I also agree. We don't want REST turning into SOAP; its power is its > simplicity. > Neither do I. But keeping the simple things simple should not prevent us from using REST's simplicity in complex domains, should it? Giacomo
I'm thinking about this also, and wondering, what is wrong with a simple approach like: PUT /messages/1 -->202 Then the client repeats GET /messages/1 -->404 until GET /messages/1 -->200 and then just proceed with PUT /messages/2 -->202 and so on... On 16 July 2010 17:56, Jan Algermissen <algermissen1971@...> wrote: > > > > On Jul 15, 2010, at 10:28 AM, Sean Kennedy wrote: > > > > > > > > > > > > > > > > > On Jul 14, 2010, at 10:45 AM, Sean Kennedy wrote: > > > > > > > > > > > Hi Jan, > > > Apologies for the confusion. Hopefully this is clearer. Firstly, to > confirm I am on firm ground: this situation only appears to arise when the > client is unaware of the resource URI and therefore has to use POST instead > of idempotent PUT - based on Roy's post that you kindly included, where he > refers to a series of individual POST requests. > > > Secondly, I was looking at Bill de hOra's HTTPLR [1] last night and > figured that his use of stateful URI's could be used to keep the client and > server in sync i.e. no need for expensive ETag-type values.. Given that > methodology, here is an example: > > > > > > Client Server > > > > > > POST /someURI update resource state; > > > <details> /someUri goes to ".../ready" state > > > ... > > > <clientViewOfState> > > > "http://.../initial" --> > > > </clientViewOfState> > > > </details> > > > > > > <-- 200 OK gets lost > > > > > > client re-sends: > > > POST /someURI > > > <details> server detects conflict; > > > ... informs client of what its view is > > > <clientViewOfState> > > > "http://.../initial" --> > > > </clientViewOfState> > > > </details> > > > > > > <-- 409 Conflict > > > <serverStateView> > > > ".../ready" > > > </serverStateView> > > > > > > > > > > I am not sure what you are getting at with the URIs here but I see your > point. Why not have the client do a GET on the resource it wants to update? > > > > -- This comes back to the original q: how does HTTP implement in-order > message delivery? 
I was following the link to Roy's reply that you sent me > i.e. multiple POSTs. Are you saying that a client could GET a repn which > would contain the URI's to PUT in the correct sequence (where the client > does not proceed until it gets a 200 OK from each individual PUT) ? > > Hmm, not really. The idea is that the client includes a token in the POST > that reflects its own understanding what state the resource has. The server > can then check whether that is true or not (and in the latter case send the > current state). > > > > Seems a neat solution...but there would be no need for the client to send > up a token to indicate it's view of the resource state to the server (as I > don't think it can get out of synch with PUTs)... > > > > > > > > Thus, the client and server are keeping in synch via the use of the > stateful URI's. > > > > Hmm - what is a 'stateful URI'? > > > > -- for me, that is a string that looks like a URI that informs the > client/server where it is in the application. Is this correct/incorrect? > > Hmm - I guess any token would be fine. No need for a URI. > > > > > > > > This means that the server is maintaining some application state i.e. > breaking REST's statelessness constraint. > > > > How so? The application state is where the client is in the overall > application. How is the server maintaining that information in your example? > > > > -- the URI string represents the current state of the application. I > suppose if the server sticks it into a db then it becomes resource state and > the issue is solved... > > I think you are thinking too complicated about the solution. > > POST > > <details expectedState="1"> > <address>Foo</address> > </details> > > would do it. > > (Or we are talking past each other, maybe..?) > > Jan > > > > > > Sean. 
> > > > > [1] http://dehora.net/doc/httplr/draft-httplr-01.html > > > > > > --- On Wed, 14/7/10, Jan Algermissen <algermissen1971@...<algermissen1971%40mac.com>> > wrote: > > > > > > From: Jan Algermissen <algermissen1971@...<algermissen1971%40mac.com> > > > > > Subject: Re: [rest-discuss] HTTP reliability - in order msg delivery? > > > To: "Sean Kennedy" <seandkennedy@...<seandkennedy%40yahoo.co.uk> > > > > > Cc: "Jim Webber" <jim@... <jim%40webber.name>>, "Rest > Discussion Group" <rest-discuss@yahoogroups.com<rest-discuss%40yahoogroups.com> > > > > > Date: Wednesday, 14 July, 2010, 7:41 > > > > > > > > > On Jul 13, 2010, at 2:15 PM, Sean Kennedy wrote: > > > > > > > > > > > How does this look... > > > > > > > > > > Sean, > > > > > > I am having trouble to see what you are asking. Can you replace the > formal expressions with HTTP request/ response examples? > > > > > > Jan > > > > > > > Sean. > > > > > > > > --- On Tue, 13/7/10, Jan Algermissen <algermissen1971@mac.com<algermissen1971%40mac.com>> > wrote: > > > > > > > > From: Jan Algermissen <algermissen1971@...<algermissen1971%40mac.com> > > > > > > Subject: Re: [rest-discuss] HTTP reliability - in order msg delivery? > > > > To: "Sean Kennedy" <seandkennedy@...<seandkennedy%40yahoo.co.uk> > > > > > > Cc: "REST Discuss" <rest-discuss@yahoogroups.com<rest-discuss%40yahoogroups.com> > > > > > > Date: Tuesday, 13 July, 2010, 9:50 > > > > > > > > > > > > On Jul 13, 2010, at 10:51 AM, Sean Kennedy wrote: > > > > > > > > > What if you needed in-order message delivery? I imagine for a > banking application, the order of transactions on an account would be > important... > > > > > > > > You can do this by including in the client's message a token that > expresses the client's assumptions about the state of the resource. The > server can use that token to verify that the client's expectation and the > actual resource state match. If they do not match, the server instructs the > client what to do next. 
> > > > > > > > Roy somewhat explains this in [1]: > > > > > > > > "Think of it instead as a series of individual POST requests that are > > > > building up a combined resource that will eventually be a savings > > > > account when finished. Each of those requests can include parameters > > > > that perform the same role as an ETag -- basically, identifying the > > > > client's view of the current state of the resource. Then, when a > > > > request is repeated or a state-change lost, the server would see > > > > that in the next request and tell the client to refresh its view > > > > of the form before continuing to the next step." > > > > > > > > [1] http://tech.groups.yahoo.com/group/rest-discuss/message/9805 > > > > > > > > > > > > ----------------------------------- > > > > Jan Algermissen, Consultant > > > > NORD Software Consulting > > > > > > > > Mail: algermissen@... <algermissen%40acm.org> > > > > Blog: http://www.nordsc.com/blog/ > > > > Work: http://www.nordsc.com/ > > > > ----------------------------------- > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > ----------------------------------- > > > Jan Algermissen, Consultant > > > NORD Software Consulting > > > > > > Mail: algermissen@... <algermissen%40acm.org> > > > Blog: http://www.nordsc.com/blog/ > > > Work: http://www.nordsc.com/ > > > ----------------------------------- > > > > > > > > > > > > > > > > > > > > > > ----------------------------------- > > Jan Algermissen, Consultant > > NORD Software Consulting > > > > Mail: algermissen@... <algermissen%40acm.org> > > Blog: http://www.nordsc.com/blog/ > > Work: http://www.nordsc.com/ > > ----------------------------------- > > > > > > > > > > > > > > ----------------------------------- > Jan Algermissen, Consultant > NORD Software Consulting > > Mail: algermissen@... <algermissen%40acm.org> > Blog: http://www.nordsc.com/blog/ > Work: http://www.nordsc.com/ > ----------------------------------- > > >
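The PUT-then-poll idea proposed at the start of this message can be sketched as a client loop. Everything below is invented for illustration (the function and the fake transport are not from the thread, and a real client would also honour things like Retry-After and back off between polls):

```python
# Sketch of in-order delivery by confirmation: PUT /messages/N gets a 202,
# and the client does not send message N+1 until GET /messages/N returns 200.

def deliver_in_order(messages, put, get, max_polls=10):
    """PUT each message, then poll GET until it is visible before proceeding."""
    for i, msg in enumerate(messages, start=1):
        put(i, msg)                    # server answers 202 Accepted
        for _ in range(max_polls):
            if get(i) == 200:          # message N is now durable/visible
                break
        else:
            raise TimeoutError("message %d never became visible" % i)

# A fake server that makes each message visible only on the second poll,
# simulating the 202-then-eventually-200 behaviour.
store, polls = {}, {}

def fake_put(i, msg):
    store[i] = msg
    polls[i] = 0

def fake_get(i):
    polls[i] += 1
    return 200 if polls[i] >= 2 else 404

deliver_in_order(["a", "b", "c"], fake_put, fake_get)
```

Ordering falls out of the client never overlapping requests; the cost, compared with the token approach, is an extra round-trip (or several) per message.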
Probably a noob question, but I will ask anyway. Let's say I do a GET on a tasks resource which returns a list of tasks. Then I do a POST against the same resource in order to add a new task. In the meanwhile, someone else has posted a task to the same resource. Now I need to determine whether the collection I have on the client has changed on the server.

Is the recommended route to save the ETag from the GET against the tasks resource, and after I've posted, do a GET again against that collection using an If-Match header?

Thanks
Glenn
On Sun, 2010-07-18 at 08:37 +0200, Jan Algermissen wrote:
> On Jul 18, 2010, at 12:00 AM, Giacomo Tesio wrote:
>
> Think of the media type as domain specific. The application itself is formed when the components (user agent, servers, intermediaries) start working together. IOW, the media type is not for *this* application but for the domain (even if that is a rather loose term :-). Take AtomPub or HTML as examples: when these are specified, the stuff that will later be done with them (the applications) is not known. HTML can be used for displaying a Web page in a browser or for crawling and indexing a site. Both are applications, and they *use* HTML; HTML is not made *for* them.
>
> > Should such a mime type become a standard?
>
> Yes. It must be. However, 'standard' in this sense means more 'application independent' than 'IETF or W3C standard'. It is OK if the media type is only standard in your organisation. What is important is that it is not defined by the service under development.

Why is that important?

> > It seems a little too complex...
>
> Why does this seem complex?
>
> > Is there an example of where such a process has worked?
>
> HTML
> AtomPub
> OpenSearch
> NewsML (to some extent)

Which results in semantic tunnelling. Look at the work involved in mapping gdata/activitystreams/odata into Atom, or microformats/RDFa into HTML. Those are global/community efforts that dwarf the capabilities of a time-constrained business that needs to solve its problem rather than its industry's problem.

What would be 'wrong' with having a link to WADL in your media type? Until such a time as the RDF singularity, that is, although frankly TDF looks more and more like a better option than blocking the 'applications' and the 'service under development' on some non-existent 'standard'.

Bill
On Mon, Jul 19, 2010 at 9:58 PM, Bill de hÓra <bill@...> wrote:
> Which results in semantic tunnelling. Look at the work involved in mapping gdata/activitystreams/odata into Atom, or microformats/RDFa into HTML. Those are global/community efforts that dwarf the capabilities of a time-constrained business that needs to solve its problem rather than its industry's problem.
>
> What would be 'wrong' with having a link to WADL in your media type? Until such a time as the RDF singularity, that is, although frankly TDF looks more and more like a better option than blocking the 'applications' and the 'service under development' on some non-existent 'standard'.

That is exactly my fear.

Giacomo
It's quite interesting, actually. But would it be "right" (from a REST point of view) to use a home-made (potentially subject to change) MIME type like *application/list.man+xml*? This is actually the point.

Giacomo

On Sun, Jul 18, 2010 at 12:46 AM, mike amundsen <mamund@...> wrote:
> G:
>
> I wrote a set of blog posts on this topic recently:
> http://www.amundsen.com/blog/archives/1041
>
> It might give you some ideas.
>
> mca
> http://amundsen.com/blog/
> http://mamund.com/foaf.rdf#me
>
> On Sat, Jul 17, 2010 at 18:00, Giacomo Tesio <giacomo@...> wrote:
>> On Fri, Jul 16, 2010 at 5:58 PM, Jan Algermissen <algermissen1971@...> wrote:
>>> > It would be possible?
>>>
>>> Yes - though I personally doubt the usefulness. I recommend the specification of a media type specific to your domain. That media type should provide the means for the necessary hypermedia controls (along the lines of application/atom+xml and application/atomsvc+xml).
>>
>> That's not clear to me...
>>
>> Should I write a specification for a "cargo" MIME type to deliver an application that shows such things? Should such a mime type become a standard?
>>
>> It seems a little too complex... Is there an example of where such a process has worked?
>>
>> Giacomo
Glenn Block wrote:
> Probably a noob question, but I will ask anyway.

It's a FAQ with an answer, here:

http://www.w3.org/1999/04/Editing/

> Then I go and do a POST against the same resource in order to add a new task.

If the resource already exists, and is a list of tasks, then the semantics of adding a new task to the list would map to HTTP PUT.

> Is the recommended route to save the ETag when I did the GET against the tasks resource, and after I've posted do a GET again against that collection using an If-Match header?

No. The If-Match goes on the PUT or POST request. If you're doing it on the subsequent GET, there's no guarantee you won't get the stale response you're asking for from some intermediary.

I think what you're trying to do is compare the ETags before and after the change request, to see if the request was processed. You do this with 'max-age=0' to ensure a fresh response from the origin server, containing the ETag you're after.

-Eric
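As a wire-level sketch of this advice (the URI, ETag values, and media type are invented for illustration): the precondition travels on the change request itself, and 'max-age=0' forces revalidation with the origin on the follow-up GET.

```
PUT /tasks HTTP/1.1
If-Match: "v41"
Content-Type: application/xml

...replacement list...

HTTP/1.1 412 Precondition Failed    (someone changed the list first)


GET /tasks HTTP/1.1
Cache-Control: max-age=0            (fresh response from the origin server)

HTTP/1.1 200 OK
ETag: "v42"
```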
for each GET request, the response may contain Cache-Control, Expires, ETag, and/or Last-Modified headers. if they exist in the response, the client should keep track of each of these values for that URI and consult them for any subsequent GET request to the same URI. that includes the case you describe where you are sending POST requests to the same URI.

note that the POST may return a number of different "success" responses (200 OK + a body, 201 + Location header, 202 Accepted + a body, 204 No Content, 3xx w/ a new URI, etc.). of course 4xx and 5xx responses are a possibility, too. it is up to the client application to decide how to handle these responses.

mca
http://amundsen.com/blog/
http://mamund.com/foaf.rdf#me

On Mon, Jul 19, 2010 at 15:53, Glenn Block <glenn.block@...> wrote:
> Probably a noob question, but I will ask anyway. Let's say I do a GET on a tasks resource which returns a list of tasks. Then I do a POST against the same resource in order to add a new task. In the meanwhile, someone else has posted a task to the same resource. Now I need to determine whether the collection I have on the client has changed on the server.
>
> Is the recommended route to save the ETag from the GET against the tasks resource, and after I've posted, do a GET again against that collection using an If-Match header?
>
> Thanks
> Glenn
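The per-URI bookkeeping described above can be sketched as follows (purely illustrative; all names are invented, and a real HTTP client library normally does this for you):

```python
# Client-side tracking of cache validators per URI, as described above.
cache_info = {}  # URI -> dict of validator/freshness headers

def remember(uri, response_headers):
    """Record the validator headers from a response for later reuse."""
    tracked = ("Cache-Control", "Expires", "ETag", "Last-Modified")
    cache_info[uri] = {h: v for h, v in response_headers.items() if h in tracked}

def conditional_headers(uri):
    """Headers to attach to the next GET of the same URI."""
    info = cache_info.get(uri, {})
    headers = {}
    if "ETag" in info:
        headers["If-None-Match"] = info["ETag"]
    if "Last-Modified" in info:
        headers["If-Modified-Since"] = info["Last-Modified"]
    return headers

remember("/tasks", {"ETag": '"v41"', "Content-Type": "application/xml"})
assert conditional_headers("/tasks") == {"If-None-Match": '"v41"'}
```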
On Mon, Jul 19, 2010 at 12:53 PM, Glenn Block <glenn.block@...> wrote:
> Probably a noob question, but I will ask anyway. Let's say I do a GET on a tasks resource which returns a list of tasks. Then I do a POST against the same resource in order to add a new task. In the meanwhile, someone else has posted a task to the same resource. Now I need to determine whether the collection I have on the client has changed on the server.
> Is the recommended route to save the ETag from the GET against the tasks resource, and after I've posted, do a GET again against that collection using an If-Match header?

Well, arguably, after your POST, the ETag will change anyway (I mean, you just modified the resource, why wouldn't it change?). So you should likely follow your POST with a HEAD to get the latest info.

The problem, of course, is that the "latest info" may not match what you have. For example:

A. GET /tasks   // returns task collection
B. POST /tasks  // updates task collection
C. HEAD /tasks  // gets ETag and Last-Modified headers

The problem is that when you do the HEAD, the ETag you get back may well not match the collection of tasks you're maintaining locally. Someone could have changed /tasks between A and B (changed /tasks before you did), or even between B and C (after you posted changes, but before you fetched the modification information).

So, there's no real way you can ensure that your local copy of tasks matches the server's.

The better tactic, at least initially, is to do a conditional POST, passing If-Match for the ETag or If-Unmodified-Since for a Last-Modified date. That's the "optimistic locking" tactic. If the ETag doesn't match, the POST fails, and it's up to you to resolve the issue, getting a valid ETag value.

In the reply to the POST, I think it's valid for the response to include a new ETag value that matches the state of the resource as a result of the POST. So, if you wish to POST again, you can use that new ETag value.
Another thing tied to this is that you might want to be able to use the Last-Modified value to get changes since that time.

GET /tasks
// Gets full collection of tasks, capture Last-Modified

POST /tasks
// with If-Unmodified-Since header set to the Last-Modified value captured from the GET
// resource changed behind your back, and you get a 412 result because your If-Unmodified-Since condition fails, so you try to resync

GET /tasks?changessince=<Last-Modified value from initial GET>
// get changes, capture Last-Modified again
// integrate changes and make new POST

POST /tasks
// with If-Unmodified-Since
// Successful POST, capture Last-Modified again for next time

Race conditions still exist, but you at least make some effort to work around them, and they don't happen transparently. The optimistic locking part kicks in, notifying you of the changes.

Regards,

Will Hartung
(willh@...)
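The resync-and-retry flow above can be simulated with a few lines of code (purely illustrative in-memory sketch; names invented, an integer stands in for the Last-Modified timestamp, and 412 Precondition Failed is the status HTTP defines for a failed If-Unmodified-Since):

```python
# In-memory simulation of the optimistic-locking retry loop described above.

class TaskServer:
    def __init__(self):
        self.tasks = []
        self.last_modified = 0  # stands in for a Last-Modified timestamp

    def get(self):
        return list(self.tasks), self.last_modified

    def post(self, task, if_unmodified_since):
        if if_unmodified_since != self.last_modified:
            return 412  # precondition failed: client must resync
        self.tasks.append(task)
        self.last_modified += 1
        return 201

server = TaskServer()
_, stamp = server.get()                     # GET /tasks, capture Last-Modified
server.post("their task", stamp)            # someone else changes the list
status = server.post("my task", stamp)      # our conditional POST fails (412)
while status == 412:
    _, stamp = server.get()                 # resync: re-GET, capture stamp
    status = server.post("my task", stamp)  # retry the conditional POST
assert status == 201
```

The race window still exists between the resync GET and the retried POST, which is exactly Will's point: the technique surfaces conflicts rather than eliminating them.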
<snip>
But would it be "right" (from a REST point of view) to use a home-made (potentially subject to change) MIME type like application/list.man+xml
</snip>

the REST style has nothing to say on who authors the media type. and, in the beginning, all media types are "home made", of course. Andrew Wahbe published a nice post today on hypermedia types and REST [1].

Finally, no matter who the author is, where it's "registered", how old they are, etc., once it's published and in use, media types must be changed very carefully so as to not break promises to existing users (servers and clients). W3C has a nice piece that touches on most of these issues [2].

[1] http://linkednotbound.net/2010/07/19/self-descriptive-hypermedia/
[2] http://www.w3.org/2001/tag/doc/versioning-xml-20070326.html

mca
http://amundsen.com/blog/
http://mamund.com/foaf.rdf#me

On Mon, Jul 19, 2010 at 16:32, Giacomo Tesio <giacomo@...> wrote:
> It's quite interesting, actually. But would it be "right" (from a REST point of view) to use a home-made (potentially subject to change) MIME type like application/list.man+xml? This is actually the point.
>
> Giacomo
>
> On Sun, Jul 18, 2010 at 12:46 AM, mike amundsen <mamund@...> wrote:
>> G:
>> I wrote a set of blog posts on this topic recently:
>> http://www.amundsen.com/blog/archives/1041
>> It might give you some ideas.
>>
>> mca
>> http://amundsen.com/blog/
>> http://mamund.com/foaf.rdf#me
>>
>> On Sat, Jul 17, 2010 at 18:00, Giacomo Tesio <giacomo@...> wrote:
>>> On Fri, Jul 16, 2010 at 5:58 PM, Jan Algermissen <algermissen1971@...> wrote:
>>>> > It would be possible?
>>>>
>>>> Yes - though I personally doubt the usefulness. I recommend the specification of a media type specific to your domain. That media type should provide the means for the necessary hypermedia controls (along the lines of application/atom+xml and application/atomsvc+xml).
>>>
>>> That's not clear to me...
>>> Should I write a specification for a "cargo" MIME type to deliver an application that shows such things? Should such a mime type become a standard?
>>> It seems a little too complex... Is there an example of where such a process has worked?
>>>
>>> Giacomo
POST should be fine to *add* a new task to the task list. To use PUT, you need to know a good URL for the new task to PUT. If you want to update a task, then PUT + If-Match should do the work. Or, if you want to *update the whole task list*, PUT + If-Match also works.

Cheers,
Dong

On Mon, Jul 19, 2010 at 2:34 PM, Eric J. Bowman <eric@...> wrote:
> Glenn Block wrote:
> > Probably a noob question, but I will ask anyway.
>
> It's a FAQ with an answer, here:
> http://www.w3.org/1999/04/Editing/
>
> > Then I go and do a POST against the same resource in order to add a new task.
>
> If the resource already exists, and is a list of tasks, then the semantics of adding a new task to the list would map to HTTP PUT.
>
> > Is the recommended route to save the ETag when I did the GET against the tasks resource, and after I've posted do a GET again against that collection using an If-Match header?
>
> No. The If-Match goes on the PUT or POST request. If you're doing it on the subsequent GET, there's no guarantee you won't get the stale response you're asking for from some intermediary.
>
> I think what you're trying to do is compare the ETags before and after the change request, to see if the request was processed. You do this with 'max-age=0' to ensure a fresh response from the origin server, containing the ETag you're after.
>
> -Eric
Will Hartung wrote:
> So, there's no real way you can ensure that your local copy of tasks matches the server's.

Except for Content-MD5. ;-)

-Eric
Dong Liu wrote:
> POST should be fine to *add* a new task to the task list. To use PUT, you need to know a good URL for the new task to PUT.

This depends on implementation. I suggested a list of tasks, to which tasks may be added or removed. So the list's URI is known. Each change to the list replaces the list. Simple, and it would be un-RESTful to assign replacement semantics to POST when that's what PUT is for.

Now, if we're talking about a list which is a collection of individual task resources, then we're talking about some other implementation. If adding a task involves the creation of a resource, instead of replacing the existing list with an appended list, *then* POST is correct.

Or PUT, since PUT may also be assigned creation semantics. This depends on whether or not the URI for the created task is known in advance. It may very well be included in the hypertext representation.

But, again, I thought we were discussing a simple list of tasks with a known URI. In such a model, with replacement semantics assigned to PUT, creating a new to-do-list resource would fall to POST at some other URI.

-Eric
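A sketch of the two models being distinguished here, in wire terms (URIs and media type invented for illustration):

```
(1) Simple list with a known URI: PUT replaces the whole list.

    PUT /tasks HTTP/1.1
    If-Match: "v7"
    Content-Type: application/xml

    ...the appended list...

(2) Creating a new to-do-list resource falls to POST at some other URI.

    POST /lists HTTP/1.1
    Content-Type: application/xml

    HTTP/1.1 201 Created
    Location: /lists/42
```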
On Jul 19, 2010, at 9:58 PM, Bill de hÓra wrote:

> On Sun, 2010-07-18 at 08:37 +0200, Jan Algermissen wrote:
>> On Jul 18, 2010, at 12:00 AM, Giacomo Tesio wrote:
>>
>> Think of the media type as domain specific. The application itself is formed when the components (user agent, servers, intermediaries) start working together. IOW, the media type is not for *this* application but for the domain (even if that is a rather loose term :-). Take AtomPub or HTML as examples: when these are specified, the stuff that will later be done with them (the applications) is not known. HTML can be used for displaying a Web page in a browser or for crawling and indexing a site. Both are applications, and they *use* HTML; HTML is not made *for* them.
>>
>>> Should such a mime type become a standard?
>>
>> Yes. It must be. However, 'standard' in this sense means more 'application independent' than 'IETF or W3C standard'. It is OK if the media type is only standard in your organisation. What is important is that it is not defined by the service under development.
>
> Why is that important?

It is important because we want to avoid defining new stuff for every new service. The overall media type (or small set of types) should ideally already provide what is necessary.

>>> It seems a little too complex...
>>
>> Why does this seem complex?
>>
>>> Is there an example of where such a process has worked?
>>
>> HTML
>> AtomPub
>> OpenSearch
>> NewsML (to some extent)
>
> Which results in semantic tunnelling. Look at the work involved in mapping gdata/activitystreams/odata into Atom, or microformats/RDFa into HTML. Those are global/community efforts that dwarf the capabilities of a time-constrained business that needs to solve its problem rather than its industry's problem.

Personally, I have very ambivalent feelings towards extensions. I think they are to be used for incremental evolution of the media type.
IOW, they allow for targeted experimentation, and when extensions turn out to be useful, they should flow back into the type. At the very least, profile parameters should IMHO be used to make conneg explicit.

> What would be 'wrong' with having a link to WADL in your media type?

What would that buy us? The fact that a client chooses to interact with a certain resource is something that happens at the 'intent' level. If the goal is to submit an order, you turn to an order-accepting resource, not just any resource. This decision and HTTP standard semantics should usually be enough to construct the request. Can you provide an example of what WADL could describe at runtime that needs to be described?

> Until such a time as the RDF singularity, that is, although frankly TDF looks more and more like a better option than blocking the 'applications' and the 'service under development' on some non-existent 'standard'.

Do you think that in the case of AtomPub it would have been better to create an AtomService and have that service define and own the format?

Jan

-----------------------------------
Jan Algermissen, Consultant
NORD Software Consulting

Mail: algermissen@...
Blog: http://www.nordsc.com/blog/
Work: http://www.nordsc.com/
-----------------------------------
On Jul 19, 2010, at 10:21 PM, Giacomo Tesio wrote:

> On Mon, Jul 19, 2010 at 9:58 PM, Bill de hÓra <bill@...> wrote:
>> Which results in semantic tunnelling. Look at the work involved in mapping gdata/activitystreams/odata into Atom, or microformats/RDFa into HTML. Those are global/community efforts that dwarf the capabilities of a time-constrained business that needs to solve its problem rather than its industry's problem.
>>
>> What would be 'wrong' with having a link to WADL in your media type? Until such a time as the RDF singularity, that is, although frankly TDF looks more and more like a better option than blocking the 'applications' and the 'service under development' on some non-existent 'standard'.
>
> That is exactly my fear.

Why do you fear that? It is at the heart of what is necessary to overcome coupling of clients to servers. When you apply REST to your problem, media type design is the primary design activity. Services come later and are guided by the media type you then have. Just like it was done for AtomPub.

Jan

-----------------------------------
Jan Algermissen, Consultant
NORD Software Consulting

Mail: algermissen@...
Blog: http://www.nordsc.com/blog/
Work: http://www.nordsc.com/
-----------------------------------
On Jul 19, 2010, at 9:53 PM, Glenn Block wrote:

> Probably a noob question, but I will ask anyway. Let's say I do a GET on a tasks resource which returns a list of tasks. Then I do a POST against the same resource in order to add a new task. In the meanwhile, someone else has posted a task to the same resource. Now I need to determine whether the collection I have on the client has changed on the server.
>
> Is the recommended route to save the ETag from the GET against the tasks resource, and after I've posted, do a GET again against that collection using an If-Match header?

Yes, exactly. (Though it would be If-None-Match: yourRememberedTag.)

Alternatively, you can do a HEAD if you just want to poke but not retrieve.

Jan

-----------------------------------
Jan Algermissen, Consultant
NORD Software Consulting

Mail: algermissen@...
Blog: http://www.nordsc.com/blog/
Work: http://www.nordsc.com/
-----------------------------------
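In wire terms, the revalidation being suggested looks like this (the URI and ETag value are invented for illustration):

```
GET /tasks HTTP/1.1
If-None-Match: "v41"         (the ETag remembered from the earlier GET)

HTTP/1.1 304 Not Modified    (collection unchanged: your copy is current)
```

A 200 response with a new ETag and body would instead tell the client the collection has changed.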
On Mon, 2010-07-19 at 23:32 +0200, Jan Algermissen wrote:

> Personally, I have very ambivalent feelings towards extensions. I think they are to be used for incremental evolution of the media type. IOW, they allow for targeted experimentation, and when they turn out to be useful, they should flow back into the type.

If the type cannot be extended, extensions cannot flow back, because extensions cannot exist to begin with. I think the reasoning is circular.

> At the very least, profile parameters should IMHO be used to make conneg explicit.

Practical examples?

> The fact that a client chooses to interact with a certain resource is something that happens at the 'intent' level. If the goal is to submit an order, you turn to an order-accepting resource, not just any resource. This decision and HTTP standard semantics should usually be enough to construct the request.

If the order method is uniform, then the media type does not need to signal it for the resource. If the order method is not uniform, then the media type must somehow convey that intent beyond the uniform interface for the resource, according to your architectural technique. Again, I think the reasoning is circular, and the notion of an 'intent level' seems very vague for a software-intensive system, unless you mean it's done out of band.

> Can you provide an example of what WADL could describe at runtime that needs to be described?

Ignoring the faux-Socratic technique, and hence feeling no obligation to provide an example - I have no clear idea what you're asking for. But let's say I can't provide one, and there isn't anything WADL can describe 'at runtime' - given the two logical holes above, what would be the point?

> > Until such a time as the RDF singularity, that is, although frankly TDF looks more and more like a better option than blocking the 'applications' and the 'service under development' on some non-existent 'standard'.
> Do you think that in the case of AtomPub it would have been better to create an AtomService and have that service define and own the format?

No. And I suspect that doesn't help reduce your sense of ambivalence - true?

Bill
I was thinking each task is a resource, which is why I said POST, not PUT.

On Mon, Jul 19, 2010 at 2:39 PM, Eric J. Bowman <eric@...> wrote:
> Dong Liu wrote:
> > POST should be fine to *add* a new task to the task list. To use PUT, you need to know a good URL for the new task to PUT.
>
> This depends on implementation. I suggested a list of tasks, to which tasks may be added or removed. So the list's URI is known. Each change to the list replaces the list. Simple, and it would be un-RESTful to assign replacement semantics to POST when that's what PUT is for.
>
> Now, if we're talking about a list which is a collection of individual task resources, then we're talking about some other implementation. If adding a task involves the creation of a resource, instead of replacing the existing list with an appended list, *then* POST is correct.
>
> Or PUT, since PUT may also be assigned creation semantics. This depends on whether or not the URI for the created task is known in advance. It may very well be included in the hypertext representation.
>
> But, again, I thought we were discussing a simple list of tasks with a known URI. In such a model, with replacement semantics assigned to PUT, creating a new to-do-list resource would fall to POST at some other URI.
>
> -Eric
Thanks, Jan.

On Mon, Jul 19, 2010 at 2:34 PM, Jan Algermissen <algermissen1971@...> wrote:
> On Jul 19, 2010, at 9:53 PM, Glenn Block wrote:
> > Probably a noob question, but I will ask anyway. Let's say I do a GET on a tasks resource which returns a list of tasks. Then I do a POST against the same resource in order to add a new task. In the meanwhile, someone else has posted a task to the same resource. Now I need to determine whether the collection I have on the client has changed on the server.
> >
> > Is the recommended route to save the ETag from the GET against the tasks resource, and after I've posted, do a GET again against that collection using an If-Match header?
>
> Yes, exactly. (Though it would be If-None-Match: yourRememberedTag.)
>
> Alternatively, you can do a HEAD if you just want to poke but not retrieve.
>
> Jan
Giacomo Tesio wrote:
> It seems to me that there is nothing RESTfully wrong with using an alternative hypertext language instead of HTML, is there?

Not if it's a standard, no. REST argues specifically against creating a new media type in the face of an existing, ubiquitous media type which already solves the problem. I'm not saying HTML and Atom are always the solution -- they're just *almost* always the solution. In a nutshell, REST says that most online systems, of any complexity, may be represented as a standard HTML/HTTP website. There's more to it than that, but this is where REST is coming from.

By way of example, let's consider the problem space of online genetic databases (which are proliferating as more genomes are mapped). If each genetic-database website implements a "REST API" by developing a proprietary media type (even if it's registered and public, if it's only used by one system I say it's proprietary), then the result is a mishmash of different systems requiring different clients. This isn't REST -- no Uniform Interface in sight.

OTOH, if each website agrees to use HTML + RDFa and Atom, then each team can evaluate the work of other teams using common Web browsers. The data is a combination of tables and lists, so using HTML means everyone agrees on general markup semantics. What will differ is the RDFa metadata used to express specific markup semantics.

This approach would eventually yield a consensus schema (XSD, RELAX NG + Schematron, DTD, whatever) for representing genetic data as XHTML, and a domain-specific metadata vocabulary to annotate the general semantics of the markup language to be specific to the genetic-data problem area. A common search syntax for URIs would also be developed.

The result would be myriad genetic-database websites, all dedicated to different fields of research, all looking as different as they do now, style-wise, but which share the same API by virtue of being designed to a uniform interface.
A researcher who knows how to search one, knows how to search them all, using a Web browser -- only if those REST APIs, taken as a whole, amount to a Uniform Interface. (Taken as a whole, the interoperable RESTful blogosphere shares a Uniform Interface to syndicated collections of journal entries. Instead of everyone's weblog having proprietary media types, a small set of ubiquitous media types is used to express Atom Protocol interfaces, such that if you know how to post to one Atom Protocol-based REST-API weblog, you know how to post to them all, even in the face of significant variation in the stylistic presentations.)

A bunch of genetic-database websites using proprietary media types in their "REST APIs" won't interoperate, because none of them actually has a Uniform Interface. It is my contention that only by following a disciplined approach of working through the development of a system using ubiquitous media types can the parameters of any new media types ultimately required be properly determined.

IOW, it may be that the genetic-database community determines that Web browsers can't do certain things, or identifies some shortcomings in the approach of using ubiquitous media types, or otherwise decides that a new media type is in order. That media type would fill a legitimate need that isn't otherwise met, while not re-inventing any of the wheels that make up the existing HTML Web (like lists, tables, linking, accessibility).

The point I'm trying to make is that approaching REST development by creating media types misses the point. IMO, there's a greater than 95% chance that the media type or types your system needs already exist. Their ubiquity is what makes using them RESTful -- genetic data presented using HTML + Atom presents the opportunity for serendipitous re-use precisely because a new media type *wasn't* created.

> So, to represent a cargo I should send a Content-Type with it of type application/x-cargo+xml, for example?
What is a 'cargo' other than a list of items? Right off the top of my head, the general semantics would be those of a definition list. Any additional semantics may be added using RDFa. I see no compelling need to create a media type that reinvents the definition list in an application-specific fashion, when to do so accomplishes nothing which can't be done using the ubiquitous HTML media types.

Or, shorter version of me: why *not* be able to use a Web browser to review a cargo manifest?

> Is it right that different resources with such a mime type MUST share the same XSD?

No. A REST system assigns set semantics to protocol methods, i.e. if you have application/atom+xml resources and implement Atom Protocol, you constrain POST to create and PUT to replace, for all resources on your system, as method semantics must not vary by media type. Domain-specific vocabularies are expressed to the user agent within the media type, but there is no constraint which requires all resources to use the same domain-specific vocabulary -- that's an implementation detail, hidden behind the uniform interface.

> Designing an "application independent" standard representation format each time I want to deliver a different application to a customer seems like far too much effort...

Exactly. That's why REST requires standard, not just registered, media types -- re-use. I've come across very few custom application interfaces that can't be presented using HTML, which is nice because I don't have to reinvent the accessibility wheel for every system I develop. While it's true that an "evolving set of standard media types" requires initial implementations of a media type to be proprietary, if your custom media type is only ever used by a couple of systems, then it defeats the purpose of self-descriptive messaging.
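As a sketch of what a cargo manifest as a plain definition list could look like (the vocabulary URI and property names are invented for illustration, not taken from any real schema):

```html
<!-- Cargo manifest as an ordinary HTML definition list; RDFa Lite
     attributes layer the domain-specific vocabulary on top. -->
<dl vocab="http://example.org/cargo#" typeof="Manifest">
  <dt property="item">Deck chairs</dt>
  <dd property="quantity">120</dd>
  <dt property="item">Chaise longues</dt>
  <dd property="quantity">40</dd>
</dl>
```

Any Web browser renders this as a readable list, while an RDFa-aware consumer can extract the domain-specific data from the same representation.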
It's also much easier to hire someone to maintain a running system that's built around ubiquitous media types they already know, instead of having to train new hires to a proprietary media type they've never encountered. If there's a compelling need for your new media type, then it ought to be widely adopted and standardized. If it's specific to your implementation and doesn't fit anyone else's needs, or doesn't do anything that can't already be done using ubiquitous media types, then it won't proliferate -- no serendipitous re-use. If it does proliferate, then creating it was the right call. If it doesn't proliferate, then what benefit is it to the RESTful goals of your system?

> And by the way, designing such formats does not seem to be a REST constraint: what do we need "code on demand" for otherwise?

Yes and no. Technically, code on demand is used to extend a user agent's understanding of a media type, not as a substitute for creating a media type.

> It seems to me that representing a chair, a cargo, a voyage, or a private banker advisory session in HTML is just as inappropriate (from a semantic point of view) as using XML (but without the XML strictness).

Why? A list of deck chairs on the Titanic, by voyage, seems like list/tabular data to me. So why not represent it using HTML tables and lists? For those service consumers which need to know that a chaise lounge is a type of deck chair, the table can be marked up with RDFa to express that domain-specific vocabulary.

Developing a media type specific to the requirement of listing the deck chairs on the Titanic by voyage runs counter to the REST style, where "The trade-off... is that a uniform interface degrades efficiency, since information is transferred in a standardized form rather than one which is specific to an application's needs." Ubiquitous media types are the standardized form Roy's talking about. Creating new media types for every resource you develop is proprietary.
> > And btw, I don't think that a private banker advisory session is > suitable for a standard "application indipendent" reppresentation. > I don't see what actions in such a use case, couldn't be modeled using HTML. Anyone with authorization could then use the system from any Web browser, instead of first needing to download and install some sort of specialized client component. The former is the whole point of REST as an alternative to the latter. -Eric
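Eric's suggestion above -- a plain HTML list carrying domain vocabulary through RDFa rather than a custom media type -- can be sketched in a few lines. The `cargo:` vocabulary URI and property names here are hypothetical, invented purely for illustration:

```python
# Sketch: a cargo manifest rendered as plain HTML, with domain semantics
# layered on via RDFa attributes instead of a custom media type.
# The vocabulary URI and property names are hypothetical.

CARGO_VOCAB = "http://example.org/cargo#"  # hypothetical vocabulary

def manifest_as_html(items):
    """Render (name, quantity) pairs as an RDFa-annotated definition list."""
    rows = "\n".join(
        f'  <dt property="cargo:item">{name}</dt>\n'
        f'  <dd property="cargo:quantity">{qty}</dd>'
        for name, qty in items
    )
    return (f'<dl prefix="cargo: {CARGO_VOCAB}" typeof="cargo:Manifest">\n'
            f"{rows}\n</dl>")

html = manifest_as_html([("deck chair", 120), ("chaise lounge", 14)])
print(html)
```

A browser renders this as an ordinary definition list, while an RDFa-aware consumer can extract `cargo:item`/`cargo:quantity` statements from the same bytes -- one representation serving both audiences.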
--- In rest-discuss@yahoogroups.com, Bill de hÓra <bill@...> wrote: > > On Sun, 2010-07-18 at 08:37 +0200, Jan Algermissen wrote: > > > > On Jul 18, 2010, at 12:00 AM, Giacomo Tesio wrote: > > > > > Think of the media type as domain specific. The application itself is > > formed when the components (user agent, servers, intermediaries) start > > working together. IOW, the media type is not for *this* application > > but for the domain (even if that is a rather loose term :-). Take > > AtomPub or HTML as examples: when these are specified, the stuff that > > will later on be done with them (the applications) is not known. HTML > > can be used for displaying a Web page in a browser or for crawling and > > indexing a site. Both are applications and they *use* HTML; HTML is not made > > *for* them. > > > > > Should such mime type become a standard? > > > > Yes. It must be. However, 'standard' in this sense means more > > 'application independent' than 'IETF or W3C standard'. It is ok if the > > media type is only standard in your organisation. What is important is that it > > is not defined by the service under development. > > Why is that important? > > > > > > > It seems a little too complex... > > > > Why does this seem complex? > > > > > Is there some example of such a process that worked? > > > > HTML > > AtomPub > > OpenSearch > > NewsML (to some extent) > > > Which result in semantic tunnelling. Look at the work involved in > mapping gdata/activitystreams/odata into Atom or microformats/rdfa into > HTML. Those are global/community efforts that dwarf the capabilities of a > time-constrained business that needs to solve its problem rather than > its industry's problem. Agreed. "Semantic tunnelling" is a good phrase. The benefits, in the case of Atom/AtomPub, include viewing in a feed reader, validating w/ the feed validator, and testing w/ Tim Bray's APE. But often individual implementations became Frankensteins of add-ons, etc. 
for which those benefits are not really significant (or just don't work). I still think the process of developing Atom/AtomPub was a hugely useful exercise, many of the lessons of which are applicable elsewhere. (Give me the AtomPub spec over any API docs I've ever seen any day). To say "use Atom or HTML" is getting to sound pedantic to me. "Modeling" my resources into one of those two is rife w/ challenges & pitfalls. And heretical as it sounds, what is the real value of a shared media type? To me, it's the definition of link semantics (the protocol, e.g. HTTP, defines the uniform interface). Perhaps we don't need a Web Application Description Language, we need a "Media Type Description Language" or even a "link semantics" language (how to identify links and their "relations" in the representation). I'm convinced that there is another highly useful, broadly applicable media type (or media meta-type) waiting to emerge alongside HTML. And it is going to look a lot like JSON. How we get link semantics defined is an important consideration. --peter keane > > What would be 'wrong' with having a link to WADL in your media > type? Until such a time as the RDF singularity that is, although frankly > RDF looks more and more like a better option than blocking the > 'applications' and the 'service under development' on some non-existent > 'standard'. > > Bill >
I don't remember seeing anyone say it's all or nothing, just that you shouldn't call it REST if it doesn't meet all the constraints. As you said, if it's useful, use it. What's wrong with clarifying such an approach is not, therefore, RESTful? By definition, it's not. But if it works, why do you care? Sent from my iPhone On Jul 15, 2010, at 2:10 AM, António Mota <amsmota@...> wrote: > I'm not arguing against purity, actually I'm not arguing against > anything nor trying to start a discussion about it. I'm just trying to > point that it can be counter-productive to tell people that REST is an > all-or-nothing style. I'm not even saying it *is*, only that it can > be. In real life scenarios there is no "all-or-nothing" (well, > there is, like fundamentalism in politics or religion - which > are most of the time counter-productive) but not in IT anyhow... > > If people do understand the properties that constraints originate, if > people are applying REST style because it applies to their "problem > space" and not just because it's REST, basically, if people understand > the consequences of applying a constraint, then they will understand > the consequences of relaxing a constraint. > > 2010/7/15 Jan Algermissen <algermissen1971@...>: > > > > Question is: Do people understand the consequences of relaxing a constraint? > > > > If you do and can live with the resulting loss of guaranteed system properties, fine. Go ahead. > > > > OTOH, relaxing the stateless server constraint at the cost of lost scalability and much reduced understandability will not make adopters of REST happy in the long run. > > > > I'll argue for purity every time. And I really do not see any problem with doing pure REST anyhow. > > > > Jan > > > > > > ----------------------------------- > > Jan Algermissen, Consultant > > NORD Software Consulting > > > > Mail: algermissen@... 
> > Blog: http://www.nordsc.com/blog/ > > Work: http://www.nordsc.com/ > > ----------------------------------- > > > > > > > > > > >
Because it drives people away from REST. And a style is just a style, nothing more than that. Implementations are all about compromising. One thing is to break a constraint, another is not to apply it. I reckon that sometimes, but not always, not to apply it indeed means to break it - but again, not always. The problem is, for non-REST people, to say "that is not REST" is half-way to them understanding "REST is not for you, go back to SOAP or whatever you came from". That's what I think is counter-productive, to drive people away on account of purism (or sometimes fanatic or religious) points of view. This is just technology, all around us technology is made of compromises, even if some people like to make it, and have fun with, some kind of war (like Java vs .NET, MS vs. OpenSource, iPhone vs Android, and of course REST vs WS-*). But in the end most of those people will work, will mix and match, will adapt every other technology as they see fit. REST is just that, a style to be applied (and thus, pragmatically), not a religious holy grail... 2010/7/20 Ryan Riley <ryan.riley@...> > I don't remember seeing anyone say it's all or nothing, just that you > shouldn't call it REST if it doesn't meet all the constraints. As you said, > if it's useful, use it. What's wrong with clarifying such an approach is > not, therefore, RESTful? By definition, it's not. But if it works, why do > you care? > > Sent from my iPhone > > On Jul 15, 2010, at 2:10 AM, António Mota <amsmota@...> wrote: > > > > I'm not arguing against purity, actually I'm not arguing against > anything nor trying to start a discussion about it. I'm just trying to > point that it can be counter-productive to tell people that REST is an > all-or-nothing style. I'm not even saying it *is*, only that it can > be. In real life scenarios there is no "all-or-nothing" (well, > there is, like fundamentalism in politics or religion - which > are most of the time counter-productive) but not in IT anyhow... 
> > If people do understand the properties that constraints originate, if > people are applying REST style because it applies to their "problem > space" and not just because it's REST, basically, if people understand > the consequences of applying a constraint, then they will understand > the consequences of relaxing a constraint. > > 2010/7/15 Jan Algermissen < <algermissen1971%40mac.com> > algermissen1971@...>: > > > > Question is: Do people understand the consequences of relaxing a > constraint? > > > > If you do and can live with the resulting loss of guaranteed system > properties, fine. Go ahead. > > > > OTH, relaxing the stateless server constraint at the cost of lost > scalability and much reduced understandability will not make adopters of > REST happy in the long run. > > > > I'll argue for purity every time. And I really do not see any problem > with doing pure REST anyhow. > > > > Jan > > > > > > ----------------------------------- > > Jan Algermissen, Consultant > > NORD Software Consulting > > > > Mail: <algermissen%40acm.org>algermissen@... > > Blog: <http://www.nordsc.com/blog/>http://www.nordsc.com/blog/ > > Work: <http://www.nordsc.com/>http://www.nordsc.com/ > > ----------------------------------- > > > > > > > > > > > > >
BTW, pragmatically speaking, what I really wanted to know is what is wrong with this approach to the subject: PUT /messages/1 -->202 Then the client repeats GET /messages/1 -->404 until GET /messages/1 -->200 and then just proceeds with PUT /messages/2 -->202 and so on... 2010/7/20 António Mota <amsmota@...> > Because it drives people away from REST. And a style is just a style, > nothing more than that. Implementations are all about compromising. One > thing is to break a constraint, another > is not to apply it. I reckon that sometimes, but not always, not to apply > it indeed means to break it - but again, not always. > > The problem is, for non-REST people, to say "that is not REST" is half-way > for them to understand "REST is not for you, go back to SOAP or whatever you > came from". That's what I think is counter-productive, to drive people away > on account of purism (or sometimes fanatic or religious) points of view. > This is just technology, all around us technology is made of compromises, > even if some people like to make it, and have fun with, some kind of war > (like Java vs .NET, MS vs. OpenSource, iPhone vs Android, and of course REST > vs WS-*). But in the end most of those people will work, will mix and match, > will adapt every other technology as they see fit. > > REST is just that, a style to be applied (and thus, pragmatically), not a > religious holy grail... > > > 2010/7/20 Ryan Riley <ryan.riley@panesofglass.org> > > I don't remember seeing anyone say it's all or nothing, just that you >> shouldn't call it REST if it doesn't meet all the constraints. As you said, >> if it's useful, use it. What's wrong with clarifying such an approach is >> not, therefore, RESTful? By definition, it's not. But if it works, why do >> you care? >> >> Sent from my iPhone >> >> On Jul 15, 2010, at 2:10 AM, António Mota <amsmota@...> wrote: >> >> >> >> I'm not arguing against purity, actually I'm not arguing against >> anything nor trying to start a discussion about it. 
I'm just trying to >> point that it can be counter-productive to tell people that REST is a >> all-or-nothing style. I'm not even saying it *is*, only that it can >> be. In real life scenarios there is not a "all-or-nothing" (well, >> there is, like fundamentalism being in politics or religion - which >> are most of the times counter-productive) but not in IT anyhow... >> >> If people do understand the properties that constraints originate, if >> people are applying REST style because it applies to their "problem >> space" and not just because it's REST, basically, if people understand >> the consequences of applying a constraint, then they will understand >> the consequences of relaxing a constraint. >> >> 2010/7/15 Jan Algermissen < <algermissen1971%40mac.com> >> algermissen1971@...>: >> > >> > Question is: Do people understand the consequences of relaxing a >> constraint? >> > >> > If you do and can live with the resulting loss of guaranteed system >> properties, fine. Go ahead. >> > >> > OTH, relaxing the stateless server constraint at the cost of lost >> scalability and much reduced understandability will not make adopters of >> REST happy in the long run. >> > >> > I'll argue for purity every time. And I really do not see any problem >> with doing pure REST anyhow. >> > >> > Jan >> > >> > >> > ----------------------------------- >> > Jan Algermissen, Consultant >> > NORD Software Consulting >> > >> > Mail: <algermissen%40acm.org>algermissen@... >> > Blog: <http://www.nordsc.com/blog/>http://www.nordsc.com/blog/ >> > Work: <http://www.nordsc.com/>http://www.nordsc.com/ >> > ----------------------------------- >> > >> > >> > >> > >> > >> >> >> >
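The PUT-then-poll sequence António describes above can be written down as a small client loop. The transport below is a stand-in stub (not real HTTP), so the sketch stays self-contained; against a real server you would substitute an HTTP library and add a delay between polls:

```python
# Sketch of the in-order delivery pattern from the post: PUT a message,
# then poll with GET until the server reports 200, before sending the next.
# `http` here is a hypothetical transport stub standing in for real HTTP.

def make_fake_server():
    """Accepts PUTs with 202; each message becomes visible after 2 GET polls."""
    pending = {}  # uri -> number of polls remaining before visible

    def http(method, uri):
        if method == "PUT":
            pending[uri] = 2          # becomes visible after two polls
            return 202
        if method == "GET":
            if uri not in pending:
                return 404
            if pending[uri] > 0:
                pending[uri] -= 1
                return 404            # not processed yet
            return 200
        return 405

    return http

def deliver_in_order(http, uris):
    """PUT each URI in turn, polling until the previous one is visible."""
    log = []
    for uri in uris:
        log.append((uri, http("PUT", uri)))   # expect 202 Accepted
        while http("GET", uri) != 200:        # poll until processed
            pass
        log.append((uri, 200))
    return log

log = deliver_in_order(make_fake_server(), ["/messages/1", "/messages/2"])
print(log)
```

The loop never issues PUT /messages/2 before GET /messages/1 has returned 200, which is exactly the ordering guarantee the post asks about.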
On Tue, Jul 20, 2010 at 2:14 AM, Eric J. Bowman <eric@...> wrote: > Giacomo Tesio wrote: > > > > So, to represent a cargo I should send a Content-Type with it of type > > application/x-cargo+xml for example? > > > > What is a 'cargo' other than a list of items? It could be a history of handling, for example. And it could have strict business rules about which handlings can be done, where and when. XML still seems to me exactly the same as HTML if I have to supplement the HTML with a human- (and not machine-) readable description of what the different items of the list mean. Note that I'm really trying to understand your point. But please consider that for any "real" object (be it a cargo, a chair, a contract between an employee and a company, etc.) there are many different possible interesting representations, showing many different perspectives on the same object itself. The point of view is related to the application's domain. That's not simply a serialization problem (xml vs json vs html vs atom and so on): the same representation could be serialized in any known format, but actually that covers the Y of the "application/X+Y" mime type. I'm wondering about the X. > > > > And by the way, designing such formats does not seem a REST constraint: > > what do we need "code on demand" for otherwise? > > > > Yes and no. Technically, code on demand is used to extend a user > agent's understanding of a media type, not as a substitute for creating > a media type. > I agree. But given an application/X+Y mime type, the code on demand can help using the X part. Can't it? And BTW it could also handle the custom domain-specific media type. > > > > > It seems to me that representing a chair, a cargo, a voyage, a private > > banker advisory session in HTML is just as inappropriate (from a > > semantic point of view) as using XML (but without the XML strictness). > > > > Why? A list of deck chairs on the Titanic, by voyage, seems like list/tabular data to me. 
So why not represent it using HTML tables and > lists? For those service consumers which need to know that a chaise > lounge is a type of deck chair, the table can be marked up with RDFa to > express that domain-specific vocabulary. > That would be useful to humans but might not be enough for software. There is more than an HTML table in that page: something that software could use to determine the chairs still available. So even if HTML+XML could be the Y, the point is that the X is not HTML. > > > > And btw, I don't think that a private banker advisory session is > > suitable for a standard "application independent" representation. > > > > I don't see what actions in such a use case couldn't be modeled using > HTML. Anyone with authorization could then use the system from any Web > browser, instead of first needing to download and install some sort of > specialized client component. The former is the whole point of REST as > an alternative to the latter. > > Whenever the browser has JavaScript enabled the difference disappears, don't you think? It seems that this is a deployment point. In the Fielding dissertation he talks about Java applets. Giacomo
You also have the situation when you want to use those same media-types for in-house application integration (even if not in a REST based arch.) as well as in your external REST web services. That makes it even more difficult to "standardize". Of course there are ways to solve this, like having (if you use XML) a simple XSLT to convert from the "standard" model to the "in-house" model, but that means more maintenance and point-of-failure problems... On 20 July 2010 10:24, Giacomo Tesio <giacomo@...> wrote: > > > > > On Tue, Jul 20, 2010 at 2:14 AM, Eric J. Bowman <eric@...>wrote: > > Giacomo Tesio wrote: >> > >> > So, to represent a cargo I should send a Content-Type with it of type >> > application/x-cargo+xml for example? >> > >> >> What is a 'cargo' other than a list of items? > > > It could be an history of handling, For example. And it could have strict > business rules in what handlings can be done, where and when. > > XML still seem to me exactly, the same than HTML if I have to supplement > the HTML with human (and not machine) readable description of what the > different items of the list mean. > > Note that I'm really tring to understand your point. > > But please consider that for any "real" object (being it a cargo, a chair, > a contract between an employee and a company, etc..) there are many > different possible interesting reppresentations, showing many different > prospective on the same object itself. > > The point of view is related to the application's domain. > That's not simply a serialization problem (xml vs json vs html vs atom and > so on): the same representation could be serialized in any known format, but > actually that cover the Y of the "application/X+Y" mime type. > > I'm wondering about the X. > > > >> > >> > And by the way, designing such formats do not seem a REST constraint: >> > what do we need "code on demand" for otherwise? >> > >> >> Yes and no. 
Technically, code on demand is used to extend a user >> agent's understanding of a media type, not as a substitute for creating >> a media type. >> > > I agree. But given an application/X+Y mime type the code on demand can help > using the X part. > Can't it? > > And BTW it could also handle the custom domain specific media type. > > >> >> > >> > It seem to me that representing a chair, a cargo, a voyage, an private >> > banker advisory session in HTML is just as inappropriate (from a >> > semantic point of view) as using XML (but without the xml strictness). >> > >> >> Why? A list of deck chairs on the Titanic, by voyage, seems like list/ >> tabular data to me. So why not represent it using HTML tables and >> lists? For those service consumers which need to know that a chaise >> lounge is a type of deck chair, the table can be marked up with RDFa to >> express that domain-specific vocabulary. >> > > That would be useful to humans but could be not enought for softwares. > > There is more than an HTML table in that page, something that a software > could use to define the chairs still available. > > So even if HTML+XML could be the Y the point is that the X is not HTML. > > > >> > >> > And btw, I don't think that a private banker advisory session is >> > suitable for a standard "application indipendent" reppresentation. >> > >> >> I don't see what actions in such a use case, couldn't be modeled using >> HTML. Anyone with authorization could then use the system from any Web >> browser, instead of first needing to download and install some sort of >> specialized client component. The former is the whole point of REST as >> an alternative to the latter. >> >> > Whenever the browser have javascript enabled the difference disappear, > don't you think? > > It seem that this is a deployment point. In the Fielding dissertation he > talks about java applets. > > > Giacomo > >
BTW, the post I mentioned about HATEOAS WADL is here: http://weblogs.java.net/blog/2009/04/02/hateoas-wadl What I don't like about this is that WADL ends up being a way to describe your resources almost in a "physical" sense, and to use that instead of media-types, where you can describe the "logical" sense of what you're doing, would be much more "reductive"... On the subject, did you see the thread "UDDI dead?" / "WADL usage?" http://tech.groups.yahoo.com/group/rest-discuss/message/15632 2010/7/20 António Mota <amsmota@...> > > You also have the situation when you want to use those same media-types for in-house application integration (even if not in a REST based arch.) as well in your external REST web services. That makes is it even more difficult to "standardize". Of course there are ways to solve this, like having (if you use XML) a simple XSLT to convert from the "standard" model to the "in-house" model, but that means more maintenance and POF's problems... > > On 20 July 2010 10:24, Giacomo Tesio <giacomo@tesio.it> wrote: >> >> >> >> On Tue, Jul 20, 2010 at 2:14 AM, Eric J. Bowman <eric@...> wrote: >>> >>> Giacomo Tesio wrote: >>> > >>> > So, to represent a cargo I should send a Content-Type with it of type >>> > application/x-cargo+xml for example? >>> > >>> >>> What is a 'cargo' other than a list of items? >> >> It could be an history of handling, For example. And it could have strict business rules in what handlings can be done, where and when. >> >> XML still seem to me exactly, the same than HTML if I have to supplement the HTML with human (and not machine) readable description of what the different items of the list mean. >> >> Note that I'm really tring to understand your point. >> >> But please consider that for any "real" object (being it a cargo, a chair, a contract between an employee and a company, etc..) there are many different possible interesting reppresentations, showing many different prospective on the same object itself. 
>> >> The point of view is related to the application's domain. >> That's not simply a serialization problem (xml vs json vs html vs atom and so on): the same representation could be serialized in any known format, but actually that cover the Y of the "application/X+Y" mime type. >> >> I'm wondering about the X. >> >> >>> >>> > >>> > And by the way, designing such formats do not seem a REST constraint: >>> > what do we need "code on demand" for otherwise? >>> > >>> >>> Yes and no. Technically, code on demand is used to extend a user >>> agent's understanding of a media type, not as a substitute for creating >>> a media type. >> >> I agree. But given an application/X+Y mime type the code on demand can help using the X part. >> Can't it? >> >> And BTW it could also handle the custom domain specific media type. >> >>> >>> > >>> > It seem to me that representing a chair, a cargo, a voyage, an private >>> > banker advisory session in HTML is just as inappropriate (from a >>> > semantic point of view) as using XML (but without the xml strictness). >>> > >>> >>> Why? A list of deck chairs on the Titanic, by voyage, seems like list/ >>> tabular data to me. So why not represent it using HTML tables and >>> lists? For those service consumers which need to know that a chaise >>> lounge is a type of deck chair, the table can be marked up with RDFa to >>> express that domain-specific vocabulary. >> >> That would be useful to humans but could be not enought for softwares. >> >> There is more than an HTML table in that page, something that a software could use to define the chairs still available. >> >> So even if HTML+XML could be the Y the point is that the X is not HTML. >> >> >>> >>> > >>> > And btw, I don't think that a private banker advisory session is >>> > suitable for a standard "application indipendent" reppresentation. >>> > >>> >>> I don't see what actions in such a use case, couldn't be modeled using >>> HTML. 
Anyone with authorization could then use the system from any Web >>> browser, instead of first needing to download and install some sort of >>> specialized client component. The former is the whole point of REST as >>> an alternative to the latter. >>> >> >> Whenever the browser have javascript enabled the difference disappear, don't you think? >> >> It seem that this is a deployment point. In the Fielding dissertation he talks about java applets. >> >> >> Giacomo >>
Hello Giacomo. Let me jump into the late part of the discussion (I almost always do that). 1. The late discovery constraint is there to allow flexibility in the system. You can change anything, anytime, and the client will not break; that is the idea. 2. WADL, just as WSDL (which, BTW, has a mime type too), can be used for design-time discovery, and thus for making fixed clients that break when something changes. Still, WSDL and WADL can be used dynamically, if the client loads them at runtime and processes them accordingly. Note that here they are used as hypertext. 3. Now, if you don't use them, you can still make breakable clients. For instance, there are people that code the client knowing URLs and semantics, and if something changes, havoc ensues. Note that using all that is needed for REST, including HATEOAS, gives you the top level in Ruby's maturity model, but eliminating HATEOAS you drop to the level below. Even some REST frameworks do not provide HATEOAS, which can tell you how hard it is to accomplish that. 4. Ok. But there is a design problem with all this, and it is about semantics. Media types are simple artifacts that you know how to handle, but they can have different semantics. A media type like WSDL is there to describe web services, so it is a protocol-level, semantics-driven thing. A Cargo media type is an element from the application domain, so its semantics are different. When modeling, mixing semantics is not a good thing. So, following a line of discovery can be done following a particular semantic line for the app, or following a general semantic line for the protocol. That is, I can write a general-purpose client using the WSDL, or a specific-purpose client following the Cargo. Particularly, as I just mentioned, having protocol information inside the Cargo definition does not appeal to me. Mixing. It is better to design the links in the cargo as domain-driven links, not protocol-driven. 
So, any client that understands what a cargo is can follow the link, since it is semantically meaningful to it. Complicated, I know. 5. Lastly, SOAP is not that ugly. It has been very badly used, and may not fit as nicely with REST. We can think of it as a layer on top of REST that reduces REST capabilities, but does so because it is a general-purpose thing that can be used on other protocols, not only HTTP, and thus not only the Web. Hope this clarifies; it is like a summary of what has been expressed. Sort of. Cheers. William Martinez. --- In rest-discuss@yahoogroups.com, Giacomo Tesio <giacomo@...> wrote: > > Hello list! :-D > > I've read a lot of criticism of WADL (see for example > http://bitworking.org/news/193/Do-we-need-WADL), since it could lead to > something like WSDL/SOAP/RPC/Berlusconi & other human faults. > > BTW, I'd like to use it as a hypertext as in > http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven > > Would that be possible? > > Actually a wadl file should have its own mime type (does it have one?) but > it seems quite good as a hypertext language as far as the client can handle it > (through, for example, some code on demand). > > I'm not considering it a tool to generate such code (even if it would be > possible, and as far as the code is downloaded with the wadl, still > restful), but just a simple and clean way to connect the resources. > > It seems to me that there is nothing RESTfully wrong in using an alternative hypertext > language instead of HTML, is there? > > > Giacomo >
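William's distinction between protocol-driven and domain-driven links can be sketched as a client that hard-codes only the link *relations* it understands, never the URIs, so the server remains free to restructure its URI space. The representation shape and the "next-handling" relation below are hypothetical:

```python
# Sketch of "domain driven links": the client knows the meaning of a
# hypothetical "next-handling" relation from the cargo domain, and follows
# whatever URI the server attached to it. No URI is hard-coded.

def find_link(representation, rel):
    """Pick a link out of a parsed representation by its relation name."""
    for link in representation.get("links", []):
        if link.get("rel") == rel:
            return link.get("href")
    return None

# A cargo representation as the server might send it (illustrative shape).
cargo = {
    "id": "cargo-42",
    "links": [
        {"rel": "self", "href": "/cargoes/42"},
        {"rel": "next-handling", "href": "/cargoes/42/handlings/next"},
    ],
}

next_uri = find_link(cargo, "next-handling")
print(next_uri)  # the client follows this, whatever the server chose
```

If the server later renames its URIs, the client keeps working unchanged, which is the flexibility William's point 1 describes.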
This looks nice and straightforward. However, the client needs to know the URIs to PUT to; if not, then the client will have to POST to a collection resource. How does the client "know" the PUT URIs - could the client do a GET to a special resource that will return the URIs set up by the server?
If it has to be multiple POSTs then I think Jan's suggestion is nice - a token in the representation to keep the client and server in sync...
Sean.
--- On Tue, 20/7/10, António Mota <amsmota@...> wrote:
From: António Mota <amsmota@...>
Subject: Re: [rest-discuss] HTTP reliability - in order msg delivery?
To: "Ryan Riley" <ryan.riley@...>
Cc: "Jan Algermissen" <algermissen1971@...>, "Sean Kennedy" <seandkennedy@yahoo.co.uk>, "Rest Discussion Group" <rest-discuss@yahoogroups.com>, "Jim Webber" <jim@...>
Date: Tuesday, 20 July, 2010, 8:51
BTW, pragmatically speaking, what I really wanted to know is what is wrong with this approach to the subject:
PUT /messages/1
-->202
Then the client repeats
GET /messages/1
-->404
until
GET /messages/1
-->200
and then just proceed with
PUT /messages/2
-->202
and so on...
2010/7/20 António Mota <amsmota@...>
Because it drives people away from REST. And a style is just a style, nothing more than that. Implementations are all about compromising. One thing is to break a constraint, another
is not to apply it. I reckon that sometimes, but not always, not to apply it indeed means to break it - but again, not always.
The problem is, for non-REST people, to say "that is not REST" is half-way to them understanding "REST is not for you, go back to SOAP or whatever you came from". That's what I think is counter-productive, to drive people away on account of purism (or sometimes fanatic or religious) points of view. This is just technology, all around us technology is made of compromises, even if some people like to make it, and have fun with, some kind of war (like Java vs .NET, MS vs. OpenSource, iPhone vs Android, and of course REST vs WS-*). But in the end most of those people will work, will mix and match, will adapt every other technology as they see fit.
REST is just that, a style to be applied (and thus, pragmatically), not a religious holy-grail...
2010/7/20 Ryan Riley <ryan.riley@panesofglass.org>
I don't remember seeing anyone say it's all or nothing, just that you shouldn't call it REST if it doesn't meet all the constraints. As you said, if it's useful, use it. What's wrong with clarifying such an approach is not, therefore, RESTful? By definition, it's not. But if it works, why do you care?
Sent from my iPhone
On Jul 15, 2010, at 2:10 AM, António Mota <amsmota@...> wrote:
I'm not arguing against purity, actually I'm not arguing against
anything nor trying to start a discussion about it. I'm just trying to
point that it can be counter-productive to tell people that REST is an
all-or-nothing style. I'm not even saying it *is*, only that it can
be. In real life scenarios there is no "all-or-nothing" (well,
there is, like fundamentalism in politics or religion - which
are most of the time counter-productive) but not in IT anyhow...
If people do understand the properties that constraints originate, if
people are applying REST style because it applies to their "problem
space" and not just because it's REST, basically, if people understand
the consequences of applying a constraint, then they will understand
the consequences of relaxing a constraint.
2010/7/15 Jan Algermissen <algermissen1971@...>:
>
> Question is: Do people understand the consequences of relaxing a constraint?
>
> If you do and can live with the resulting loss of guaranteed system properties, fine. Go ahead.
>
> OTOH, relaxing the stateless server constraint at the cost of lost scalability and much reduced understandability will not make adopters of REST happy in the long run.
>
> I'll argue for purity every time. And I really do not see any problem with doing pure REST anyhow.
>
> Jan
>
>
> -----------------------------------
> Jan Algermissen, Consultant
> NORD Software Consulting
>
> Mail: algermissen@...
> Blog: http://www.nordsc.com/blog/
> Work: http://www.nordsc.com/
> -----------------------------------
>
>
>
>
>
On Jul 20, 2010, at 9:07 AM, Ryan Riley wrote: > > > I don't remember seeing anyone say it's all or nothing, just that you shouldn't call it REST if it doesn't meet all the constraints. Yes. However, it is important to understand that you do *not* gain the system properties induced by REST if you violate a constraint. It is important to understand what the consequences of omitting a certain constraint are. See my analysis: http://www.nordsc.com/ext/classification_of_http_based_apis.html (Server currently down, hopefully it reboots normally. Otherwise use the Google cache). > As you said, if it's useful, use it. What's wrong with clarifying such an approach is not, therefore, RESTful? By definition, it's not. But if it works, why do you care? I care because I am pretty sure that people usually propose some "half-REST" approach because they think doing REST is somehow difficult. I see no burden in applying Web architecture correctly. The only thing that needs to happen is that one develops a different point of view regarding the design of networked applications. Doing "Half-REST" does not help that at all. In fact, it only makes the necessary shift of mind harder because it causes the impression that the rest of REST is just a scientific exercise. And: there is always the question whether it is really better to build a half-REST system than it is to go with traditional RPC (given that the coupling and complexity easily amounts to the same). The consequence can be that people in the end are disappointed by REST because it causes them the same problems they set out to solve. That is why I care and why I recommend using REST as defined by Roy. Jan > > Sent from my iPhone > > On Jul 15, 2010, at 2:10 AM, António Mota <amsmota@...> wrote: > >> I'm not arguing against purity, actually I'm not arguing against >> anything nor trying to start a discussion about it. 
>> I'm just trying to point out that it can be counter-productive to tell
>> people that REST is an all-or-nothing style. I'm not even saying it
>> *is*, only that it can be. In real-life scenarios there is no
>> "all-or-nothing" (well, there is, like fundamentalism in politics or
>> religion - which is most of the time counter-productive), but not in
>> IT anyhow...
>>
>> If people understand the properties that the constraints give rise to,
>> if people are applying the REST style because it fits their "problem
>> space" and not just because it's REST (basically, if people understand
>> the consequences of applying a constraint), then they will understand
>> the consequences of relaxing a constraint.
>>
>> 2010/7/15 Jan Algermissen <algermissen1971@...>:
>> >
>> > The question is: do people understand the consequences of relaxing a constraint?
>> >
>> > If you do and can live with the resulting loss of guaranteed system properties, fine. Go ahead.
>> >
>> > OTOH, relaxing the stateless server constraint at the cost of lost scalability and much reduced understandability will not make adopters of REST happy in the long run.
>> >
>> > I'll argue for purity every time. And I really do not see any problem with doing pure REST anyhow.
>> >
>> > Jan
>> >
>> > -----------------------------------
>> > Jan Algermissen, Consultant
>> > NORD Software Consulting
>> >
>> > Mail: algermissen@...
>> > Blog: http://www.nordsc.com/blog/
>> > Work: http://www.nordsc.com/
>> > -----------------------------------

-----------------------------------
Jan Algermissen, Consultant
NORD Software Consulting

Mail: algermissen@acm.org
Blog: http://www.nordsc.com/blog/
Work: http://www.nordsc.com/
-----------------------------------
Giacomo Tesio wrote:
>
> > Why? A list of deck chairs on the Titanic, by voyage, seems like
> > list/tabular data to me. So why not represent it using HTML
> > tables and lists? For those service consumers which need to know
> > that a chaise lounge is a type of deck chair, the table can be
> > marked up with RDFa to express that domain-specific vocabulary.
>
> That would be useful to humans but could be not enough for software.

What is it about an HTML table marked up with RDFa that makes it not machine-readable? This point of view always leaves me befuddled -- why should an m2m language *not* be human-readable for maintenance purposes? Frankly, I see no benefit to humans from RDFa embedded in HTML. That metadata is specifically targeted at machines, not humans.

-Eric
2010/7/20 Sean Kennedy <seandkennedy@...>

> This looks nice and straightforward. However, the client needs to know the
> URIs to PUT to; if not, then the client will have to POST to a collection
> resource. How does the client "know" the PUT URIs - could the client do a
> GET to a special resource that will return the URIs set up by the server?
>
> If it has to be multiple POSTs then I think Jan's suggestion is nice - a
> token in the representation to keep the client and server in sync...
>
> Sean.

Yes, that crossed my mind also, the URL to PUT to. However, if I have to use POST, everything stays pretty much the same:

GET /wellknowuri --> 200; post-uri: /messages
POST /messages id:1; msg:blahablala --> 202; Location: /anyurltheserverwants/anyid

Then the client repeats

GET /anyurltheserverwants/anyid --> 404

until

GET /anyurltheserverwants/anyid --> 200

and then just proceeds with

GET /wellknowuri --> 200; post-uri: /nowiwantmymessageshere
POST /nowiwantmymessageshere id:2; msg:blahablala2 --> 202; Location: /anyurltheserverwants/anyotherid

and so on...

However, the question of using PUT to create resources within a HATEOAS context is interesting. How can a client decide what the URL for the new resource will be, if that should be driven by the server? This one seems odd to me:

GET /whereshouldiput --> 200; Location: /iwantmyputshere/123 (a URI for a resource that does not exist yet!!!)
PUT /iwantmyputshere/123 --> 201

Actually there may be other ways, like using URL templates (kind of), for which I have another use case. But this question of PUT in HATEOAS should probably be a matter for another thread...
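The POST-then-poll exchange in that trace can be written out as runnable code. This is only an illustrative Python sketch: the `FakeServer` class stands in for the real HTTP server (so the flow runs without a network), and the URIs (`/messages`, `/anyurltheserverwants/...`) are the hypothetical ones from the trace.

```python
class FakeServer:
    """In-memory stand-in for the HTTP server in the trace above."""

    def __init__(self):
        self.store = {}    # uri -> stored representation
        self.pending = {}  # uri -> GETs still answered 404 while "processing"

    def post(self, collection_uri, body):
        # 202 Accepted: the server picks the URI (/anyurltheserverwants/...)
        new_uri = "/anyurltheserverwants/%d" % (len(self.store) + 1)
        self.store[new_uri] = body
        self.pending[new_uri] = 2  # simulate async processing: two 404s first
        return 202, {"Location": new_uri}

    def get(self, uri):
        if self.pending.get(uri, 0) > 0:
            self.pending[uri] -= 1
            return 404, None
        if uri in self.store:
            return 200, self.store[uri]
        return 404, None


def create_and_wait(server, collection_uri, body, max_polls=10):
    """POST to the collection, then poll the Location URI until it returns 200."""
    status, headers = server.post(collection_uri, body)
    assert status == 202
    location = headers["Location"]
    for _ in range(max_polls):
        status, representation = server.get(location)
        if status == 200:
            return location, representation
    raise TimeoutError("resource never became available")


server = FakeServer()
uri, rep = create_and_wait(server, "/messages", {"id": 1, "msg": "blahablala"})
print(uri)  # /anyurltheserverwants/1
```

Note that the client never constructs the created resource's URI itself; it only follows the Location header the server handed back, which is what keeps the exchange hypertext-driven.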
On Tue, Jul 20, 2010 at 2:52 PM, William Martinez Pomares <wmartinez@...> wrote:

> Hello Giacomo.
> Let me jump in to the late part of the discussion (I almost always do that).
>
> 1. The late-discovery constraint is there to allow flexibility in the
> system. You can change anything, anytime, and the client will not break;
> that is the idea.

Ok. You make me realize that REST allows this by using two approaches:

1. Code on demand: like a Java applet able to handle a particular "specialized" media type. Such an approach allows the code provided to evolve with the media type.
2. *Out-of-band* knowledge known as "standard MIME types".

Even if it seems to me that in the REST community there's no general consensus on the topic, I think we could agree about this. The problem with an application/xml resource is probably that the client doesn't know how to present it in a useful way (while an Atom or HTML file would be easy to present in a usable way).

That said, my conclusions from this discussion are:

- WADL can be used as a hypertext language.
- Whenever the WADL file links a different type of resource, it should define the media type (even when home-made).
- "Home-made" media types should be documented (for example with an XSD when appropriate).
- Code on demand could be provided as a resource from the WADL file to present such media types (including XSLT to HTML or Atom when appropriate).

> 2. WADL, just as WSDL (which, BTW, has a mime type too), can be used for
> design-time discovery, and thus for making fixed clients that break when
> something changes. Still, WSDL, and WADL, can be used dynamically, if the
> client loads them at runtime and processes them accordingly. Note that here
> they are used as hypertext.

That is exactly what I need (AFAIK).

> 4. Ok. But there is a design problem with all this, and it is about
> semantics. Media types are simple artifacts that you know how to handle, but
> they can have different semantics.
> A media type like WSDL is there to describe web services, so it is a
> protocol-level, semantics-driven thing. A Cargo media type is an element
> from the application domain, so its semantics are different. When modeling,
> mixing semantics is not a good thing. So, following a line of discovery can
> be done following a particular semantic line for the app, or following a
> general semantic line for the protocol. That is, I can write a
> general-purpose client using the WSDL, or a specific-purpose client
> following the Cargo. Particularly, as I just mentioned, having protocol
> information inside the Cargo definition does not appeal to me. Mixing. It
> is better to design the links in the cargo as domain-driven links, not
> protocol-driven. So, any client that understands what a cargo is can follow
> the link, since it is semantically representative to it. Complicated, I know.

Maybe I'm missing the complexity: why should it be complicated? Simply write an application/x-cargo+xml media type with an annotated XSD able to identify URIs. Would this be unRESTful? Maybe I'm missing something, by the way.

Giacomo
Mike,

Good post. One question. Shouldn't the date range query URI look like this:

http://www.example.org/list/?date-range&date-start=2010-03-01&date-stop=2010-03-31

not this:

http://www.example.org/list/?date-start=2010-03-01&date-stop=2010-03-31

Since your query "semantics" say:

If the <query /> element in the list manager document has child <data /> elements, the name and value attributes of those elements should be *added* to the URI to form a valid query.

Also, I think it is debatable whether "it was done without resorting to documenting URI conventions". The above sentence regarding how to compose a URI from XML elements is arguably documenting a "URI convention", albeit one I like.

-- Nick

Nick Gall
Phone: +1.781.608.5871
Twitter: ironick
AOL IM: Nicholas Gall
Yahoo IM: nick_gall_1117
MSN IM: (same as email)
Google Talk: (same as email)
Email: nick.gall AT-SIGN gmail DOT com
Weblog: http://ironick.typepad.com/ironick/

On Sat, Jul 17, 2010 at 6:46 PM, mike amundsen <mamund@...> wrote:

> G:
>
> I wrote a set of blog posts on this topic recently:
> http://www.amundsen.com/blog/archives/1041
>
> It might give you some ideas.
>
> mca
> http://amundsen.com/blog/
> http://mamund.com/foaf.rdf#me
>
> On Sat, Jul 17, 2010 at 18:00, Giacomo Tesio <giacomo@...> wrote:
>
>> On Fri, Jul 16, 2010 at 5:58 PM, Jan Algermissen <algermissen1971@...> wrote:
>>
>>> > It would be possible?
>>>
>>> Yes - though I personally doubt the usefulness. I recommend the
>>> specification of a media type specific to your domain. That media type
>>> should provide the means for the necessary hypermedia controls (along the
>>> lines of application/atom+xml and application/atomsvc+xml).
>>
>> That's not clear to me...
>>
>> Should I write a specification for a "cargo" mime type to deliver an
>> application that shows such things?
>> Should such a mime type become a standard?
>>
>> It seems a little too complex...
>> Is there some example where such a process worked?
>>
>> Giacomo
Giacomo Tesio wrote:
>
> Simply write an application/x-cargo+xml mime type with an annotated
> xsd able to identify URIs. Would this be unRESTful?

This is where it would help to have some reference to link to, going in depth about media type design -- REST's missing chapter.

What you have there is no different, and just as useless to REST, as application/xml. Sure, you can whip up an XML language, give it a schema, and assign it a media type identifier. But that doesn't make it a media type, certainly not one that's of use for driving a REST system. What does application/x-cargo+xml tell me that's any different from what application/xml tells me?

Is there some definition (and no, XSD doesn't begin to cut it) of how to render and interact with such a document? If your cargo manifest is HTML tables and lists, then block-level vs. inline elements are clearly understood by anyone looking at your HTTP headers. Anyone looking at the HTTP headers sees, by the media type, how linking is handled -- this particular out-of-band knowledge is common knowledge.

If, to make these determinations, I have to look beyond your media type identifier to some other resource to determine what a link is, and there's nothing to refer to anywhere telling me which elements are block and which are inline, then you've missed the point of REST entirely -- your out-of-band knowledge isn't common knowledge, it's proprietary to your system.

(Your system may not have the concerns of laying out the document, but still, your "media type" doesn't really tell me anything about what methods to use. An HTML media type tells me lots about how to interact using GET, POST and urlencoded queries before I've even looked at the message body -- this is self-descriptiveness. The application/xml media type tells me nothing of the sort, and neither does any *+xml type that is functionally no different from application/xml.)

But the majority crowd, who equate HTTP with REST, will be satisfied to call what you propose RESTful.

-Eric
Nick:

Thanks for the feedback.

The examples given for the query could be done lots of different ways. I picked what I thought was the simplest for that short blog post. One approach would be to follow the HTML model and provide a set of elements that represent the variables:

<data name="date-start" />
<data name="date-stop" />

and let the client know that these need to be converted into URI parameters using the [name]=[value] model. The implementation details can be left to the client.

Another would be to employ a URI template approach[1]. By passing the "map" to the parameters in the representation, clients can focus on the "fill in the blanks" details and generate the URI as needed.

So yes, it would be better to return the model within the representation instead of documenting that model out of band. I skipped over those details in that post.

FWIW, as for "URI convention" usage, this is already happening in HTML; for example, image maps[2] use a URI convention for argument-passing. I recently put together an example for handling resource properties (based on an idea by Fielding)[3] that takes a similar approach. In these two cases, I think the use of convention is fine.

[1] http://tools.ietf.org/html/draft-gregorio-uritemplate-04
[2] http://www.w3.org/TR/REC-html40/struct/objects.html#h-13.6
[3] http://www.amundsen.com/examples/fielding-props/

mca
http://amundsen.com/blog/
http://mamund.com/foaf.rdf#me

On Tue, Jul 20, 2010 at 10:42, Nick Gall <nick.gall@...> wrote:

> Mike,
>
> Good post. One question. Shouldn't the date range query URI look like this:
>
> http://www.example.org/list/?date-range&date-start=2010-03-01&date-stop=2010-03-31
>
> not this:
>
> http://www.example.org/list/?date-start=2010-03-01&date-stop=2010-03-31
>
> Since your query "semantics" say:
>
> If the <query /> element in the list manager document has child <data /> elements,
> the name and value attributes of those elements should be *added* to the
> URI to form a valid query.
>
> Also, I think it is debatable whether "it was done without resorting to
> documenting URI conventions". Since the above sentence regarding how to
> compose a URI from XML elements is arguably documenting a "URI convention",
> albeit one I like.
>
> -- Nick
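The [name]=[value] conversion Mike describes can be sketched in a few lines of client code. A hypothetical helper, assuming the <query>/<data> markup from the blog post's example; only the standard library's `urlencode` does the real work, and the client "fills in the blanks" with its own values.

```python
from urllib.parse import urlencode
import xml.etree.ElementTree as ET

# Representation fragment as described in the thread: a query element whose
# child <data /> elements name the variables the client must supply.
query_xml = """
<query href="http://www.example.org/list/">
  <data name="date-start" />
  <data name="date-stop" />
</query>
"""


def build_query_uri(xml_text, values):
    """Turn the <data name="..."/> elements into [name]=[value] URI parameters."""
    root = ET.fromstring(xml_text)
    names = [d.get("name") for d in root.findall("data")]
    params = [(n, values[n]) for n in names]  # client fills in the blanks
    return root.get("href") + "?" + urlencode(params)


uri = build_query_uri(query_xml, {"date-start": "2010-03-01",
                                  "date-stop": "2010-03-31"})
print(uri)
# http://www.example.org/list/?date-start=2010-03-01&date-stop=2010-03-31
```

Because the variable names come from the representation rather than from documentation, the server can add or rename query parameters without breaking this client, which is the point of returning the model in-band.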
Giacomo Tesio wrote:
>
> > I don't see what actions in such a use case couldn't be modeled
> > using HTML. Anyone with authorization could then use the system
> > from any Web browser, instead of first needing to download and
> > install some sort of specialized client component. The former is
> > the whole point of REST as an alternative to the latter.
>
> Whenever the browser has javascript enabled the difference
> disappears, don't you think?

No. Hypertext markup is declarative code; javascript is not. An interface written in markup is easy to decipher (self-documenting); javascript, not so much. BTW, what I meant by specialized client component is a custom application, not code-on-demand. When your application interfaces use HTML, you get lots of value-added benefits, primarily accessibility. If your hypertext interface is WADL, it won't be accessible to those using alternative reading/input devices.

> It seems that this is a deployment point. In the Fielding dissertation
> he talks about Java applets.

Code-on-demand is an optional constraint, because it reduces visibility. Thus, it is not a substitute for selecting, or creating, an appropriate media type. Code-on-demand has its uses, sure; making WADL human-interactive in lieu of using HTML is not one of them -- doing that violates the self-descriptive messaging constraint.

-Eric
--- In rest-discuss@yahoogroups.com, "Eric J. Bowman" <eric@...> wrote:
>
> Giacomo Tesio wrote:
> >
> > Simply write an application/x-cargo+xml mime type with an annotated
> > xsd able to identify URIs. Would this be unRESTful?
>
> This is where it would help to have some reference to link to, going in
> depth about media type design -- REST's missing chapter.

I wonder, though, if it would be possible and/or useful to determine exactly what it is that a standard media type offers, and come up with a language for encoding that in a declarative manner (i.e. make the media type description machine-readable). So if a client needs to "know" about the media type, this machine-readable media-type definition could be delivered at runtime (say, linked to with an HTTP header). I'm not saying we have such a description language; I am just speculating, what if....

Certainly this is sub-optimal when a well-known media type (HTML, Atom, etc.) would do the trick. But are there instances when such an approach would be defensible from a REST perspective?

--peter

> But, the majority crowd who equate HTTP with REST, will be satisfied to
> call what you propose RESTful.
>
> -Eric
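Peter's "what if" can be made concrete with a toy example. Everything here is invented for illustration: a tiny declarative description of where links live in a hypothetical application/x-cargo+xml document, which a generic client interprets at runtime instead of hard-coding knowledge of the format.

```python
import xml.etree.ElementTree as ET

# Hypothetical machine-readable media-type description: "in
# application/x-cargo+xml, <dest> elements carry links in their href attribute".
media_type_description = """
<mediatype name="application/x-cargo+xml">
  <link element="dest" attribute="href" />
</mediatype>
"""

# A hypothetical instance document of that media type.
cargo_document = """
<cargo id="123">
  <dest href="/ports/rotterdam" />
  <dest href="/ports/genoa" />
</cargo>
"""


def extract_links(description_xml, instance_xml):
    """Generic client: read the link rule from the description, apply it."""
    rule = ET.fromstring(description_xml).find("link")
    doc = ET.fromstring(instance_xml)
    return [e.get(rule.get("attribute"))
            for e in doc.iter(rule.get("element"))]


print(extract_links(media_type_description, cargo_document))
# ['/ports/rotterdam', '/ports/genoa']
```

The client code knows nothing about cargo; it only knows the description language. Whether delivering such a description at runtime counts as common knowledge in Eric's sense, or merely relocates the out-of-band agreement, is exactly the open question in this thread.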
On Tue, Jul 20, 2010 at 5:15 PM, Eric J. Bowman <eric@...> wrote:

> Giacomo Tesio wrote:
> >
> > Simply write an application/x-cargo+xml mime type with an annotated
> > xsd able to identify URIs. Would this be unRESTful?
>
> This is where it would help to have some reference to link to, going in
> depth about media type design -- REST's missing chapter.

You are surely right. I can't find an authoritative answer to such a question on the web. This is really a missing chapter. But it seems to me more related to which out-of-band knowledge is allowed and which is not.

Saying that only standard media types can be used in a RESTful-styled application means that there is a huge number of domains not suitable for that style (everywhere there's no standard media type yet).

I'm not saying that HTML should not be provided; it should! And BTW, I could provide (and obviously link) a simple XSLT that could translate my custom x-cargo+xml type to HTML. But it seems to me quite reductive to allow only HTML documents to represent resources. As far as the client can find and interpret the links, I would say that even HATEOAS with binary data could be right according to the RESTful constraints.

> What you have there is no different, and just as useless to REST, as
> application/xml.

Quite strangely, I can't understand what you mean. Consider that I'm not trying to use an RPC approach. I'm just trying to discriminate what is REST and what is not. Knowing that AtomPub is REST by design is not enough. I'm trying to understand whether and why the RESTful style REQUIRES such out-of-band / common / standard knowledge.

> (Your system may not have the concerns of laying out the document, but
> still, your "media type" doesn't really tell me anything about what
> methods to use. An HTML media type tells me lots about how to interact
> using GET, POST and urlencoded queries before I've even looked at the
> message body -- this is self-descriptiveness. The application/xml
> media type tells me nothing of the sort, neither does any *+xml type
> that is functionally no different from application/xml.)

Why should you not be able to PUT or DELETE a video-typed resource? If so, why should you not be able to PUT or DELETE an x-cargo-typed resource? Which operations you can actually perform could depend on the OPTIONS provided by the resource itself, but the WADL file could also tell the client such information, exactly as an HTML file does.

> But, the majority crowd who equate HTTP with REST, will be satisfied to
> call what you propose RESTful.

That's not what I need. I don't think I'm equating HTTP with REST, but maybe I'm missing something. On the other hand, it seems to me that you are equating HTML with REST (or at least with "hypertext").

This thread has grown beyond my expectations, and actually I'm not sure I have understood the pros and cons of the proposed approach. The only concrete problem I see is related to the misuse of WADL files to generate statically typed classes. But as far as I can see, there's no argument (concrete or theoretical) against its proper use as a hypertext representation. Moreover, I still can't understand whether home-made media types violate the RESTful constraints or not. It's clear that representing any resource as HTML (or, when appropriate, Atom) is good. I just do not understand why an XML representation would be wrong, as far as the client is able to handle it (maybe with the help of code on demand).

Giacomo

PS: thank you all for your patience...
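The runtime OPTIONS check Giacomo mentions could look like this on the client side. A sketch only: the Allow header value is hand-written here rather than fetched from a real resource, and the helper is hypothetical.

```python
def allowed_methods(allow_header):
    """Parse an HTTP Allow header (e.g. 'GET, HEAD, PUT') into a method set."""
    return {m.strip().upper() for m in allow_header.split(",") if m.strip()}


# Hypothetical headers from an OPTIONS response on a cargo resource.
options_response = {
    "Allow": "GET, HEAD, PUT, DELETE",
    "Content-Type": "application/x-cargo+xml",
}

methods = allowed_methods(options_response["Allow"])
if "PUT" in methods:
    print("client may PUT a new representation")  # prints: client may PUT ...
```

This only answers "which methods does this resource accept right now"; it says nothing about what a PUT body should look like, which is the part a media type (or, in Giacomo's proposal, the WADL file) still has to supply.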
Peter:

<snip>
I wonder, though, if it would be possible and/or useful to determine
exactly what it is that a standard media type offers and come up with
a language for encoding that in a declarative manner (i.e. make that
media type description machine readable).
</snip>

I've actually started an example hyper media-type[1] (a prototypic example) that does what I think you are suggesting. This one loads in a browser and displays similar to HTML (just because I wanted this prototype to be viewable in a browser). Doing a "view source" on that URL will show you the raw markup. I have a few more examples based on this proto-media-type, but haven't published them.

That prototype uses elements based on my Hypermedia Factors[2].

I also published a blog post on designing hypermedia types that contains a sample machine-oriented media type example[3]. This is actually an early version of a machine-oriented declarative markup that I've not yet published. I think there are a few minor things to change, and this could be a clean machine-readable media type for a wide range of uses. While the example there is expressed in XML, the general rules used in that example could easily be translated into other data formats, including JSON.

I will also mention that Mike Kelly has published an example of a machine-readable media-type (HAL)[4] that is worth reviewing.

[1] http://amundsen.com/hypermedia/examples/doc.xml
[2] http://amundsen.com/hypermedia/hfactor/
[3] http://amundsen.com/blog/archives/1041
[4] http://restafari.blogspot.com/2010/06/please-accept-applicationhalxml.html

mca
http://amundsen.com/blog/
http://mamund.com/foaf.rdf#me

On Tue, Jul 20, 2010 at 12:37, Peter <pkeane@...> wrote:

> --- In rest-discuss@yahoogroups.com, "Eric J. Bowman" <eric@...> wrote:
>>
>> Giacomo Tesio wrote:
>> >
>> > Simply write an application/x-cargo+xml mime type with an annotated
>> > xsd able to identify URIs. Would this be unRESTful?
>>
>> This is where it would help to have some reference to link to, going in
>> depth about media type design -- REST's missing chapter.
>
> I wonder, though, if it would be possible and/or useful to determine exactly what it is that a standard media type offers and come up with a language for encoding that in a declarative manner (i.e. make that media type description machine readable). So if a client needs to "know" about the media type, this machine-readable media type definition could be delivered at runtime (say, linked to with an HTTP header). I'm not saying we have such a description language; I am just speculating, what if.... Certainly this is sub-optimal when a well-known media type (HTML, Atom, etc.) would do the trick. But are there instances when such an approach would be defensible from a REST perspective?
>
> --peter
great stuff, Mike. Thanks much! --peter On Tue, Jul 20, 2010 at 12:15 PM, mike amundsen <mamund@...> wrote: > Peter: > > <snip> > I wonder, though, if it would be possible and/or useful to determine > exactly what it is that a standard media type offers and come up with > a language for encoding that in a declarative manner (i.e. make that > media type description machine readable). > </snip> > > I've actually started an example hyper media-type[1] (a prototypic > example) that does what I think you are suggesting. This one loads in > a browser and displays similar to HTML (just because I wanted this > prototype to be viewable in a browser). Doing a "view source" on that > URL will show you the raw markup. I have a few more examples based on > this proto-media-type, but haven't published them. > > That prototype uses elements based on my Hypermedia Factors[2]. > > I also published a blog post on designing hypermedia types that > contains a sample machine-oriented media type example[3]. This is > actually an early version of a machine-oriented declarative markup > that I've not yet published. I think there are a few minor things to > change and this could be a clean machine-readable media-type for a > wide range of uses. While the example there is expressed in XML, the > general rules used in that example could easily be translated into > other data formats including JSON. > > I will also mention that Mike Kelly has published an example of a > machine-readable media-type (Hal) that is worth reviewing. > > [1] http://amundsen.com/hypermedia/examples/doc.xml > [2] http://amundsen.com/hypermedia/hfactor/ > [3] http://amundsen.com/blog/archives/1041 > [4] http://restafari.blogspot.com/2010/06/please-accept-applicationhalxml.html > > mca > http://amundsen.com/blog/ > http://mamund.com/foaf.rdf#me > > > > > On Tue, Jul 20, 2010 at 12:37, Peter <pkeane@...> wrote: >> >> >> --- In rest-discuss@yahoogroups.com, "Eric J. 
Bowman" <eric@...> wrote: >>> >>> Giacomo Tesio wrote: >>> > >>> > Simply write an application/x-cargo+xml mime type with an annotated >>> > xsd able to identify URIs. Would this be unRESTful? >>> > >>> >>> This is where it would help to have some reference to link to, going in >>> depth about media type design -- REST's missing chapter. >>> >>> What you have there is no different, and just as useless to REST, as >>> application/xml. Sure, you can whip up an XML language, give it a >>> schema, and assign it a media type identifier. But that doesn't make >>> it a media type, certainly not one that's of use for driving a REST >>> system. What does application/x-cargo+xml tell me that's any different >>> from what application/xml tells me? >>> >>> Is there some definition (and no, XSD doesn't begin to cut it) of how >>> to render and interact with such a document? If your cargo manifest is >>> HTML tables and lists, then block-level vs. inline elements are clearly >>> understood by anyone looking at your HTTP headers. Anyone looking at >>> the HTTP headers sees by the media type, how linking is handled -- this >>> particular out-of-band knowledge, is common knowledge. >>> >>> If, to make these determinations, I have to look beyond your media type >>> identifier to some other resource to determine what a link is, and >>> there's nothing to refer to anywhere telling me which elements are >>> block and which are inline, then you've missed the point of REST >>> entirely -- your out-of-band knowledge isn't common knowledge, it's >>> proprietary to your system. >>> >>> (Your system may not have the concerns of laying out the document, but >>> still, your "media type" doesn't really tell me anything about what >>> methods to use. An HTML media type tells me lots about how to interact >>> using GET, POST and urlencoded queries before I've even looked at the >>> message body -- this is self-descriptiveness. 
The application/xml >>> media type tells me nothing of the sort, neither does any *+xml type >>> that is functionally no different from application/xml.) >>> >> >> I wonder, though, if it would be possible and/or useful to determine exactly what it is that a standard media type offers and come up with a language for encoding that in a declarative manner (i.e. make that media type description machine readable). So if a client needs to "know" about the media type, this machine-readable media type definition could be delivered runtime (say, linked-to w/ an http header). I'm not saying we have such a description language, I am just speculating, what if.... Certainly this is sub-optimal when a well-known media type (HTML, Atom, etc.) would do the trick. But are there instances when such an approach would be defensible from a REST perspective? >> >> --peter >> >> >>> But, the majority crowd who equate HTTP with REST, will be satisfied to >>> call what you propose RESTful. >>> >>> -Eric >>>
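Eric's point about self-descriptiveness can be made concrete with a small sketch (the URIs and field names below are invented): a client that knows only the text/html media type already knows, from the HTML specification alone, that this fragment describes a safe GET link and a POST with a urlencoded body -- no agreement specific to this particular service is required.

```html
<!-- Everything here is common knowledge from the HTML spec,
     not from this particular service (URIs invented): -->
<a href="/manifests/42">Manifest 42</a>  <!-- safe, cacheable GET -->
<form method="post" action="/manifests/42/items">
  <!-- the body will be application/x-www-form-urlencoded -->
  <input type="text" name="description"/>
  <input type="submit" value="Add item"/>
</form>
```

By contrast, an application/xml document carrying the same data would tell a generic client none of this.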
--- In rest-discuss@yahoogroups.com, Giacomo Tesio <giacomo@...> wrote: > That's not what I need. > I don't think I'm equating HTTP with REST, but maybe I'm missing something. > > On the other hand, it seems to me that you are equating HTML with REST (or at > least with "HyperText"). > > > This thread has grown beyond my expectations, and actually I'm not sure I > have understood the pros and cons of the proposed approach. > > The only concrete problem I see is related to the misuse of wadl files to > generate statically typed classes. > > But as far as I can see, there's no argument (concrete or theoretical) > against its proper use as a hypertext representation. > > Moreover I still can't understand whether home-made MIME types violate RESTful > constraints or not. > > > It's clear that representing any resource as HTML (or, when appropriate, > Atom) is good. > I just do not understand why an XML representation would be wrong as long as > the client is able to handle it (maybe with the help of code on demand). > > > > Giacomo > PS: thank you all for your patience... > I don't have time right now for a long response, but in the meantime the short response is this -- a lot of your confusion seems to come from the fact that you are approaching the problem from the perspective of building an API. Instead, think about this from the perspective of a) building a new kind of browser and associated media type(s); and b) designing "sites" for that new kind of browser. Some other quick points: 1) "Standard" doesn't mean what you think it means in the context of REST. New standards come from somewhere, often grass-roots efforts. What is important is that in whatever domain the client and server are working, there is a well-understood way to map a media type identifier to a specification. That's the IANA registry on the open Internet. 
If you are doing something within the enterprise it could be something as simple as "All our internal media types are described on Bob's wiki page". In that "in between" stage, a community could adopt a "standard" pre-registration but as the format grows it needs to get registered so folks not part of the original community become part of the eco-system. The main problem with "application/xml" is that it maps to the XML specification which is likely not the right thing required to process the format properly. 2) You don't want a media type for each kind of resource -- that makes evolution hard. It's better to design media types around client domains than service domains. For example, a web browser is happy to see every resource in the Web as marked-up text + forms + graphics (etc) because that is sufficient to let it do what it needs to do with the information and navigate the app. Other types of clients will likely have their own ways of looking at the data -- design your media types around those views and you will give the services & resources a lot more room to change and evolve -- just as they can on the Web. Also, this mapping from raw resource data to the "client's view" of data gives you data encapsulation. 3) WADL is the opposite of what I described in (2) -- it specifies precisely what the service does. If this is the agreement between the client and server, then every change of the service changes the agreement. It just makes the cost of change high. You don't have that on the web. The HTML spec doesn't change because Facebook added a feature. If the browsers start adding features then the HTML spec changes as we are now seeing with HTML5. My recent blog entries might help you here: http://linkednotbound.net/2010/06/09/hypermedia-is-the-clients-lens/ http://linkednotbound.net/2010/07/19/self-descriptive-hypermedia/ (Hmm that wasn't so short after all!) Regards, Andrew
Yes, 2) was what I was trying to describe as the "logical" view of a resource as opposed to the "physical" view that here is 3). And why 2) is better than 3)... Very nice explanation, cheers. On 20 Jul 2010 18:36, "wahbedahbe" <andrew.wahbe@...> wrote: --- In rest-discuss@yahoogroups.com, Giacomo Tesio <giacomo@...> wrote: > That's not what I need...
Giacomo Tesio wrote: > > > What is a 'cargo' other than a list of items? > > It could be a history of handling, for example. And it could have > strict business rules in what handlings can be done, where and when. > No different than an airline reservation. Yet, airline reservations are commonly made online via HTML/HTTP APIs. Such systems aren't (last I checked) RESTful, but that doesn't mean they couldn't be. A Uniform Interface for airline reservation systems would mean an agreement as to an industry-specific metadata language to mark up existing reservation forms to have the same semantics. It would not involve the creation of a new media type for airline reservations -- as the goal is to allow online reservations, it makes sense to stick with ubiquitous media types that browsers already know. A new media type, requiring extensive code-on-demand or even a special client-component application for reservationists, would go against the REST style. Business rules are a back-end implementation detail. A REST API abstracts these rules away behind a uniform interface frontend. Just like online airline reservations already do, in practice. -Eric
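To illustrate the kind of agreement Eric describes -- a sketch only, with an invented vocabulary -- the media type stays plain HTML, and the industry-specific semantics ride on top of existing markup, here as agreed class names:

```html
<!-- Ordinary HTML; the class names are a hypothetical
     industry vocabulary layered on top, not a new media type: -->
<form method="post" action="/reservations" class="airline-reservation">
  <input name="from" class="origin-airport"/>
  <input name="to" class="destination-airport"/>
  <input name="date" class="departure-date"/>
  <input type="submit" value="Search flights"/>
</form>
```

Any browser can use the form as-is; a reservation-aware client can additionally recognize the shared vocabulary.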
If I'm allowed some imagery, I'll illustrate it like this: Suppose you come to Dublin and at some point we meet in the street and you ask me: Where am I? I could answer 2) you're near Temple Bar, or 3) you're 53° 20' N and 6° 15' W. Now suppose your intent is to have a pint of Guinness and have good craic. With 2) you would know exactly what to do, as it is common knowledge what to do at Temple Bar. With 3) you'll probably fulfill your intent as well, but you would have to ask many more questions... And 2) is reusable: every time you're somewhat near Temple Bar you could apply the same rules to craic. With 3), if you ask the other day and you get a response "53° 22' N and 6° 14' W", you would have to start all over again... On 20 Jul 2010 18:53, "António Mota" <amsmota@...> wrote: Yes, 2) was what I was trying to describe as the "logical" view of a resource as opposed to the "physical" view that here is 3). And why 2) is better than 3)... Very nice explanation, cheers. > > On 20 Jul 2010 18:36, "wahbedahbe" <andrew.wahbe@...> wrote: > > > > > > > --- In rest-discuss@yahoogroups.com, Giacomo Tesio <giacomo@...> wrote: > > That's not ...
However, a browser does have handlers to deal not only with text/html but also image/gif, image/jpeg, application/pdf and so on... If things were as you describe, why not specify all of them in the same media type? On 20 Jul 2010 19:14, "Eric J. Bowman" <eric@...> wrote: Giacomo Tesio wrote: > > > What is a 'cargo' other than a list of items? > > It could be an histo...
> REST is just that, a style to be applied > (and thus, pragmatically), not a religious > holy-grail... You are mixing things up. Instead of criticising people who know what they're talking about, you should give your style a name. It's reasonable for people who understand the approach (or modern architectural practice in general) to object to a different set of constraints being called 'REST'. The difference with REST and, say, Web Services or N-Tier Middleware, or LAMP, is that REST writes down the constraints to achieve a desired set of properties and the tradeoffs involved, and the others are extremely vague. In that sense, it's a relatively scientific approach. The REST constraints and trade-offs are fairly holistic, not quite a house of cards, but they are meant to work in concert to achieve an overall effect. I think one reason people paint REST proponents as 'religious' is because designers who understand REST are aware of what is lost when certain constraints are lifted, and when said designers point that out, the same people assume this is dogmatism instead of an objective comment. Which is both ironic and stupid, not unlike an alchemist calling a scientist a witch. Bill On Tue, 2010-07-20 at 09:49 +0100, António Mota wrote: > > Because it drives people away from REST. And a style is just a style, > nothing more than that. Implementations are all about compromising. > One thing is to break a constraint, another > is not to apply it. I reckon that sometimes, but not always, not to > apply it indeed means to break it - but again, not always. > > The problem is, for non-REST people, to say "that is not REST" is > half-way for them to understand "REST is not for you, go back to SOAP > or whatever you came from". That's what I think is counter-productive, > to drive people away on account of purism (or sometimes fanatic or > religious) points of view. 
This is just technology, all around us > technology is made of compromises, even if some people like to make > it, and have fun with, some kind of war (like Java vs .NET, MS vs. > OpenSource, iPhone vs Android, and of course REST vs WS-*). But in the > end most of those people will work, will mix and match, will adapt > every other technology as they see fit. > > REST is just that, a style to be applied (and thus, pragmatically), > not a religious holy-grail... > > > > > 2010/7/20 Ryan Riley <ryan.riley@...> > I don't remember seeing anyone say it's all or nothing, just > that you shouldn't call it REST if it doesn't meet all the > constraints. As you said, if it's useful, use it. What's wrong > with clarifying that such an approach is not, therefore, RESTful? > By definition, it's not. But if it works, why do you care? > > Sent from my iPhone > > > On Jul 15, 2010, at 2:10 AM, António Mota <amsmota@...> > wrote: > > > > > > > > > I'm not arguing against purity, actually I'm not arguing > > against > > anything nor trying to start a discussion about it. I'm just > > trying to > > point out that it can be counter-productive to tell people that > > REST is an > > all-or-nothing style. I'm not even saying it *is*, only that > > it can > > be. In real life scenarios there is no > > "all-or-nothing" (well, > > there is, like fundamentalism in politics or religion > > - which > > are most of the times counter-productive) but not in IT > > anyhow... > > > > If people do understand the properties that constraints > > originate, if > > people are applying REST style because it applies to their > > "problem > > space" and not just because it's REST, basically, if people > > understand > > the consequences of applying a constraint, then they will > > understand > > the consequences of relaxing a constraint. > > > > 2010/7/15 Jan Algermissen <algermissen1971@...>: > > > > > > Question is: Do people understand the consequences of > > relaxing a constraint? 
> > > > > > If you do and can live with the resulting loss of > > guaranteed system properties, fine. Go ahead. > > > > > > OTOH, relaxing the stateless server constraint at the cost > > of lost scalability and much reduced understandability will > > not make adopters of REST happy in the long run. > > > > > > I'll argue for purity every time. And I really do not see > > any problem with doing pure REST anyhow. > > > > > > Jan > > > > > > > > > ----------------------------------- > > > Jan Algermissen, Consultant > > > NORD Software Consulting > > > > > > Mail: algermissen@... > > > Blog: http://www.nordsc.com/blog/ > > > Work: http://www.nordsc.com/ > > > -----------------------------------
On Tue, 2010-07-20 at 15:42 +0200, Jan Algermissen wrote: > > > On Jul 20, 2010, at 9:07 AM, Ryan Riley wrote: > > > > > > > I don't remember seeing anyone say it's all or nothing, just that > you shouldn't call it REST if it doesn't meet all the constraints. > > Yes. However, it is important to understand that you do *not* gain the > system properties induced by REST if you violate a constraint. It is > important to understand what the consequences of omitting a certain > constraint are. 100% agree. Bill
Giacomo Tesio wrote: > > Saying that only standard media types can be used in a RESTful styled > application means that there are a huge number of domains not suitable > to be implemented with such style (everywhere there's no standard > mimetype yet). > That isn't what I said. While there are legitimate cases for new media types to evolve, 99.9% of custom media types aren't. Atom and Atom Protocol filled a need that was unmet in existing media types, like the notion of collections and members. The need was so pressing, that the media types created became ubiquitous before they were even finalized. The problem is this: Obviously, a single media type defeats the purpose of self-descriptive messaging; just as obvious is the fact that too great a proliferation of media types defeats the purpose of self-descriptive messaging. The disagreement lies with how many media types constitutes "just right," with me favoring a very low number. Some argue for creating new media types willy-nilly, but it is my belief that of the innumerable media types which stand to be created in the next decade, only one or two (if that) will become ubiquitous. The ubiquitous media types which already exist may be combined in so many ways that there just aren't any really big holes left (like the one Atom filled). > > And BTW I could provide (and obviously link) a simple XSLT that could > translate my custom x-cargo+xml mime to html. > But it seems to me quite reductive to allow only HTML documents to > represent resources. > That isn't what I said, either. Of course you can use XSLT to transform a backend data format into frontend HTML. But that also holds true if your media type is application/xml. Or Atom containing HTML markup. If you have a collection of containers holding cargo, then HTML's <li> indicates an item in a collection, while Atom's <entry> indicates a container, while <feed> lists the containers in a vessel's manifest. 
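A sketch of that last point, using invented identifiers: a vessel's manifest can reuse Atom's ubiquitous collection semantics rather than a new x-cargo+xml type.

```xml
<!-- One container per entry; the <feed> is the manifest.
     All identifiers here are invented for illustration: -->
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>Manifest, MV Example</title>
  <id>urn:example:manifest:1</id>
  <updated>2010-07-20T12:00:00Z</updated>
  <entry>
    <title>Container ABCD1234567</title>
    <id>urn:example:container:ABCD1234567</id>
    <updated>2010-07-20T12:00:00Z</updated>
    <link rel="edit" href="/containers/ABCD1234567"/>
    <content type="xhtml">
      <div xmlns="http://www.w3.org/1999/xhtml">
        <ul>
          <li>12 pallets, machine parts</li>
        </ul>
      </div>
    </content>
  </entry>
</feed>
```

Any AtomPub-aware client already understands the collection/member and edit semantics in play here, with no new out-of-band agreement.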
The semantics of collection/member, at the document level and at the protocol level, are evident from the media types used in support of whatever media type (typically HTML) drives the hypertext application. Common out-of-band knowledge (ubiquitous methods, media types and link relations) explains linking and interaction. All those aspects of presenting an interface to a user over the Web have already been worked out (or are being worked out) in HTML. So HTML is truly the language of choice for REST APIs, not something which should be so easily dismissed upon project inception (as most REST projects do, with the result being most REST projects, aren't). > > Knowing that AtomPub is REST by design is not enough. > I've also never said that; just the opposite in fact. > > I'm trying to understand whether and why the RESTful style REQUIRES > such out-of-band / common / standard knowledge. > REST requires that all out-of-band information be encapsulated by media type definitions. In order to truly achieve REST's benefits, a system's out-of-band knowledge must be common knowledge, not proprietary knowledge. Thus, REST APIs leverage the ubiquity of ubiquitous media types -- the self-descriptive messaging constraint. > > Why should you not be able to PUT or DELETE a video MIME-typed > resource? > Who said you couldn't? ;-) All I know from looking at an HTML media type, is that GET and POST are commonly understood. However, some clients may understand extensions to the media type -- XForms, HTML 5 etc. which define the use of PUT and DELETE for the HTML media types. As I've explained here many times before, you can't just declare that DELETE is allowed on a resource and call it REST. A hypertext API may present a list of resources, and instruct a client how to call the DELETE method of one or more of those resources. That deletion form is what REST is all about, not some IDL (like WADL) listing DELETE as an allowable method to call on a resource. 
You have one application state, a list of resources. The user agent presents this to the user with a delete button -- this form is presenting the user with a choice of transitions to the next application state (the list of resources, minus those the user wanted removed). When the user decides to remove a resource by highlighting it in a selection list and clicking a delete button, the hypertext instructs the client what URI to call and what method to use -- most likely DELETE, but DELETE may be tunneled over POST. WADL just informs a user agent that a resource is deletable, it doesn't provide instructions to a user agent about *how* to delete resources. So yeah, WADL's hypertext, but something's being hypertext doesn't make it suitable for driving a REST API -- it can serve a supporting role, just not take the lead. > > Which operations you could actually do could depend on the OPTIONS > provided by the resource itself, but the wadl file could also tell > the client such information exactly as an HTML file does. > I didn't realize there was a media type for WADL whose description says anything about how to render the document into a choice of state transitions the user can select from. ;-) It is not REST to do an OPTIONS request, see Allow: DELETE (or some other notation like an IDL), then call the DELETE method of the resource. The out-of-band knowledge needed to make a client behave this way isn't encompassed in a media type description, and isn't being driven by hypertext, so it wouldn't be REST. > > On the other hand, it seems to me that you are equating HTML with REST > (or at least with "HyperText"). > Not at all. But it is true that there are very few media types which suffice to drive a hypertext API -- 'application/xml' is not one of them, 'application/svg+xml' is. > > But as far as I can see, there's no argument (concrete or theoretical) > against its proper use as a hypertext representation. > I've made several of each. 
Accessibility is a very concrete argument against using WADL as the hypertext to drive an API. If you can make WADL emulate HTML, it would still lack all the accessibility hooks that make it possible for other media types to interoperate with alternative input/output methods/devices. Transforming WADL into HTML makes more sense, because you get all the value-adds like accessibility. See, the work of presenting a user interface in a browser has already been done -- you just need to leverage it. Instead of reinventing the capabilities of HTML for another media type, choose the media type with the capabilities you need. If the capabilities you need are to present a user interface over the Web, then why would you consider anything *but* HTML to drive your API? > > Moreover I still can't understand whether home-made MIME types violate > RESTful constraints or not. > It isn't a yes-or-no question. If, like application/atom+xml, it meets a pressing universal need and will likely be adopted far and wide over time, then go ahead. This application/cargo+xml that you propose doesn't offer any advantage over application/xml with a schema, and relies on re-inventing several wheels -- giving it no hope of ever becoming ubiquitous. The resulting system, being based on a proprietary media type with insignificant adoption, can't be considered REST -- a media type nobody's ever heard of doesn't make a message self-descriptive, using ubiquitous media types does. > > It's clear that representing any resource as HTML (or, when > appropriate, Atom) is good. > I just do not understand why an XML representation would be wrong as > long as the client is able to handle it (maybe with the help of code > on demand). > What you should be doing is developing an API that interoperates with the deployed Web as much as possible. Yeah, you can make a browser understand a custom media type, but can an intermediary pre-fetch anything, like they can with a GET form in HTML? 
The reason such intermediaries work, is due to the ubiquity of the media type. So if you want to leverage the serendipitous re-use and scaling of the REST style, it's counter-productive to start by selecting or creating a media type nobody has ever heard of. Why not create a hypertext API that allows intermediaries to prefetch? No cost to you, and better user-perceived performance, provided you stick with ubiquitous media types. -Eric
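The deletion form Eric describes earlier might look like this in plain HTML -- a sketch only, with an invented URI, using the `_method` convention some web frameworks adopted for tunneling DELETE over POST:

```html
<!-- The hypertext itself, not an IDL, tells the client which URI
     to call and how; here DELETE is tunneled over POST: -->
<form method="post" action="/containers/ABCD1234567">
  <input type="hidden" name="_method" value="DELETE"/>
  <input type="submit" value="Remove container"/>
</form>
```

The client needs no resource-specific out-of-band knowledge: the form is the instruction, delivered at runtime as part of the current application state.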
I'm not in the least interested in this "purist" discussion, especially here in this list of course. The point I was trying to make is that people in this list, especially those "who know what they're talking about" - of which I'm not one, I'm on the other side - should refrain from saying things like "that's not REST" or "you can't call it REST" or "give it another name" when they are answering questions of people new to this list who don't belong to that group of people "who know what they're talking about"... Because that will more often than not drive people away from what they were asking in the first place. I've seen it happen personally. It was not my intention to criticise those "who know what they're talking about", at least not in what they know, maybe a little in the "form" they express their knowledge. Of course, if the discussion is only between those people "who know what they're talking about" then those expressions may be correctly used. Or not, I don't really know... And all this because I was under the impression that the initial author of this thread was new to the list. Now, much more interesting than the people "who know what they're talking about" criticising me for a remark on the "form" in which they were expressing their ideas (not about the ideas themselves, even if I do find they sometimes are arguable - but that was not what my remark was about), much more interesting than that would be those people answering simple questions with simple answers... As for me, I'm willing to continue to learn from the people "who know what they're talking about" even if sometimes they are offended by some down-to-earth remarks coming from people who don't know as much as them. Who knows, maybe someday I'll learn the secret handshake too... (it's a joke, not an offense...) On 20 Jul 2010 20:01, "Bill de hÓra" <bill@...> wrote: > REST is just that, a style to be applied > (and thus, pragmatically), not a religious > holy-gra... You are mixing things up. 
I hear you Antonio. But a core tenet (IMHO, not being one who "knows what he's talking about") of REST is the limitations and constraints imposed by it, and what, by assuming those constraints, the system enables you to do. It's important, perhaps less now than in the past, to keep REST "pure" for a reason. REST has been horribly muddied. Most feel that it's nothing more than POX over HTTP, as witnessed by the myriad of "REST" apis that have been published, but really aren't. So, the adherents have been quite vocal, rightly I think, to try and keep the term "REST" meaningful, and not lost. REST is not like pornography, in that "we know it when we see it", but otherwise can't describe it in detail. It has been described in detail. And the only way to measure its effectiveness, specifically, IMHO, in B2B systems doing M2M transactions (i.e. not just "browsers" and "the web"), is to see it practiced as described. That's the only way we can find out if it's actually usable and useful (in that domain). So constraints are important, and calling out when discussion drifts from those constraints is also important. Otherwise, really, it's just Stuff over HTTP. And I don't think we need a dedicated list just for that. And finally, most here will be the first to admit that REST is not the only architecture, nor should it be the only architecture. Another reason to try and "stay pure" with the notions of REST. If what is needed for a specific system does not "work" with the REST idiom, then by all means, finish your system, but simply don't call it REST. Regards, Will Hartung (willh@...)
On Tue, 2010-07-20 at 22:56 +0100, António Mota wrote: > > I'm not the least interested in this "purist" discussion, especially > here in this list of course. Who is? The goal, I think, should be technical accuracy, not technical purism. > The point I was trying to make is that people in this list, especially > those "who know what they're talking about" - of which I'm not one, I'm > on the other side - should refrain from saying things like "that's not > REST" or "you can't call it REST" or "give it another name" when they > are answering questions of people new to this list who don't > belong to that group of people "who know what they're talking > about"... Because that will more often than not drive people away from > what they were asking in the first place. Perhaps, and perhaps being strict in definitions isn't for everyone. It's a given that terms in our industry get muddied, and for 'REST' it seems worse now that it has entered a hype phase than when it wasn't a generally accepted approach; apparently that's normal: http://martinfowler.com/bliki/SemanticDiffusion.html OTOH, no-one is saying don't relax or add design constraints. Bill
Will, I completely agree with what you say in the first paragraph. And the second. And parts of the others too... However, and I didn't want to say this because it's going to annoy people here, and that is not my intention, and also because it's just my feeling and I don't have evidence to support it, I suspect that it was this "purist" approach that drove all those self-proclaimed REST frameworks to simply ignore the fundamentals and go with the REST moniker all the same. Because if all the experts say is "if you don't apply this then it's not REST", without a sound explanation of the whys, they probably follow the easy path of applying what they can, leaving the rest, and still calling it REST for marketing reasons. Now if the experts, instead of just repeating that, could come up with a sensible explanation of what each constraint gives and, most important, what the system loses for each constraint dropped or relaxed - instead of immediately saying "it's not REST, call it something else" - maybe an effort of classification like the "Type I, Type II..." one (I'm sorry, I'm on my mobile; it's hard to google for it) would be generalized enough to be accepted by the industry, and we could have a REST Maturity Model (I think that's what it's called) that unequivocally said that REST Type I is not fully REST because it misses constraints A, B, C, that REST Type II misses constraint X, and that REST Type V, or whatever, does indeed follow all the constraints and thus complies with Roy's thesis. And if that were recognized and accepted by the industry, maybe companies would have the honesty to explicitly claim their applications conform to REST Type I, where everybody knew a priori what that means. And they'd still have the marketing buzzword... They could even leverage that marketing tool - mine is more RESTful than yours - or present themselves as the good guys - our application is REST Type II but we aim to make it fully REST in the next months, and the upgrade will be only a small fee...
Instead, what do we have? Confusion!!! Companies claiming it's REST, experts claiming it's not, companies not giving a crap about what the experts say... All this because of the all-or-nothing, don't-call-it-REST attitude of the experts... Don't get me wrong, I do care about REST, I do appreciate the experts, and what I'm saying does not have the single purpose of annoying them, as they probably think it does. I just think more flexibility and clarity or transparency would be good for everyone... But the truth is, it seems I can already hear it: I don't have years of REST, I'm trying to redefine REST, I'm not willing to learn from what the experts say, I'm just criticising people who know what they're talking about, I'm not paying my respects to the experts... But it's not that, I care, I really care :) On 20 Jul 2010 23:11, "Will Hartung" <willh@...> wrote: > I hear you Antonio. [snip]
On Jul 21, 2010, at 1:41 AM, António Mota wrote: > Now if the experts instead of just repeating that, could come with a sensible explanation of what each constraint gives, and most important, what the system loses for each constraint dropped or relaxed [snip] António, I have found the experts on this list to be exceptionally helpful and exceptionally patient in helping people make the necessary mind shifts. This reminds me of Mark, who did an incredible job between 2002 and 2008 in staying true to what REST is until whoever was truly interested grokked it (or at least part of it). I thank him for the explanations he gave and that, after all, appear to have come at a cost[1]. Jan [1] http://www.markbaker.ca/blog/2008/01/rest-vs-soap-the-personal-cost/ ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
<snip> Now if the experts instead of just repeating that, could come with a sensible explanation of what each constraint gives, and most important, what the system loses for each constraint dropped or relaxed - instead of immediately saying "it's not REST, call it something else" - maybe an effort of classification like the "Type I, Type II..." one (I'm sorry I'm on my mobile, is hard to google for it) would be generalized enough to be accepted by the industry, and we could have a REST Maturity Model (I think that's what it's called) that unequivocally said that REST Type I is not fully REST because it misses constraints A, B, C, and REST Type II misses constraint X, and REST Type V, or whatever, does indeed follow all the constraints and thus complies with Roy's thesis. </snip> I think you may know of the Richardson Maturity Model [1]. Jan Algermissen has also taken some time to devise a table for comparing various Web implementations using a "Type" system[2]. A while back I summarized (bullet points, really)[3] the pros/cons of the various top-level REST constraints identified in Fielding's dissertation[4]. The WS-REST Workshop @ WWW in Raleigh, NC this past April had a number of interesting papers on REST[5]. The one titled "Towards a Practical Model to Facilitate Reasoning about REST Extensions and Reuse" was, I thought, very good, as it proposed a working model for adding/removing key constraints and calculating the possible resulting effects.
[1] http://martinfowler.com/articles/richardsonMaturityModel.html
[2] http://nordsc.com/ext/classification_of_http_based_apis.html
[3] http://www.amundsen.com/blog/archives/1009
[4] http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm#sec_5_1
[5] http://www.ws-rest.org/files/WSREST2010-Preliminary-Proceedings.pdf

mca http://amundsen.com/blog/ http://mamund.com/foaf.rdf#me

2010/7/20 António Mota <amsmota@gmail.com>
> Will, I completely agree with what you say in the first paragraph. And the second. And parts of the others too... [snip]
On Jul 21, 2010, at 1:41 AM, António Mota wrote: > Instead, what do we have? Confusion!!! Companies claiming it's REST, experts claiming it's not, companies not giving a crap about what the experts say... All this because of the all-or-nothing, don't-call-it-REST attitude of the experts... Hmm,.... do you really think that this is true? Jan ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
2010/7/20 Jan Algermissen <algermissen1971@...> > On Jul 21, 2010, at 1:41 AM, António Mota wrote: > > Instead, what do we have? Confusion!!! Companies claiming it's REST, experts claiming it's not, companies not giving a crap about what the experts say... All this because of the all-or-nothing, don't-call-it-REST attitude of the experts... > Hmm,.... do you really think that this is true? I'm not António, but no, I don't think that is true. I think our business is often one of fads and hype rather than substance, so when REST became fashionable, people wanted to adopt the label without understanding the substance. This list has been a corrective. But still, I think the arguments here would be clearer if they were less dogmatic (as in, "do it this way because we say so" or "because Roy's dissertation says so") and more technically explanatory. By that I mean: explain why the REST constraints exist and what you gain or lose by abandoning each constraint, rather than arguing from authority. I am often faced with solving a problem. Sometimes the problem is difficult to solve following all the advice on this list, yet I must solve it. If I do not follow all the REST constraints, I do not call my solution RESTful. But if I follow some (but not all) of the REST constraints, I still get some of the benefits (but I also lose some). P.S. I also learned whatever I understand about REST from Mark Baker...
Giacomo Tesio wrote: > > XML still seems to me exactly the same as HTML, if I have to > supplement the HTML with a human-readable (and not machine-readable) description > of what the different items of the list mean. > Exactly. The dual nature of hypertext, as both machine and human readable, is the reason hypertext has turned out to be such a potent solution to the distributed application problem. The ability to "view source" to introspect the API behind any Web system is likely the major factor behind the phenomenal success of the Web. Even if the user is a machine, the development and maintenance of the system is a human endeavor. A hyperbinary system just wouldn't fit with the REST style. HTML tables and definition lists have the elements <th> and <dt>, the contents of which are human-readable on screen. Human sight allows for the visual association of <td> and <dd> elements, when tables and definition lists are rendered on-screen. The structure of such markup provides semantics which allow, for example, vision-impaired users to make the same associations between headings and cells -- regardless of the nature of the data (cargo manifest, airline schedule, etc.). These elements, and data structures like <ol> and <ul> nested inside of <dd> and <td> elements, can express m2m relationships as simple as name/value pairs, to any level of complexity required to model data of almost any type -- particularly data that's inherently tabular/list-ish like a cargo manifest -- right up to the set of problems solved by using Atom as a container for HTML (resources as members of collections, etc.). The human-readable content that appears on-screen may not be suitable to generate, say, a name/value pair of the proper syntax for m2m interoperability. That's where metadata attributes come in, like @abbr or @title, plus those attributes defined by RDFa. Such metadata piggybacks off the natural data structures' semantic markup.
I can list the upsides to this all day, without seeing any drawback. This way, a human reads that there are "five chaise lounges" on deck, while a machine reads "class=deckchair, type=chaise, count=5", from the _same_ markup. When it comes time to maintain the system, all a human needs to do is "view source" to access what amounts to thoroughly-commented m2m code, particularly when linked to a schema and/or an ontology. Even without any knowledge of the schema/ontology, the data structures are self-evident due to the markup semantics of the ubiquitous media type used. Hypertext is, by definition, human-readable. That REST requires any machine-readable payload used to drive an API to also be human-readable (the hypertext constraint) is a feature, not a bug. The resulting API is self-documenting. This highly-desirable quality is what allows the system to be deciphered via "view source", which is the key enabler of the serendipitous re-use responsible for the Web's explosive growth. > > Note that I'm really trying to understand your point. > And I'm really trying to help you understand REST. Please don't mistake my bluntness and multiple responses for impatience or browbeating. -Eric
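[Editor's sketch: the deckchair example above might be marked up roughly like this. The class name and @title contents are invented for illustration, not taken from any real vocabulary; the mechanism (human-readable text plus metadata attributes such as @title, as Eric describes) is what matters.]

```html
<!-- One element serves both audiences: a human reads "five chaise
     lounges"; a machine reads the class and @title metadata.
     All names here are hypothetical. -->
<dl class="manifest">
  <dt>Deck furniture</dt>
  <dd class="deckchair" title="type=chaise; count=5">
    five chaise lounges
  </dd>
</dl>
```

A "view source" on such a document exposes the m2m contract at the same time as the page renders for people, which is the point being made above.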
António Mota wrote: > > You also have the situation when you want to use those same > media-types for in-house application integration (even if not in a > REST based arch.) as well as in your external REST web services. > An in-house media type is ubiquitous within the scope of the system. REST simply requires that a media type description exists, in such a case. Basically, document your out-of-band knowledge and point to it with a media type identifier. Pragmatically, everything I mention about the ease of developing and maintaining a system built around standard Web media types applies just as much for intranets as it does on the Web, even if REST only requires it for the Web. Common sense must prevail. You're right, sending it outside the house will require using XSLT or somesuch to convert it to an appropriate media type for the Web. And yeah, it's a tradeoff. Routine tradeoffs like having to write and maintain a script or some XSLT are hardly limited to REST development, though, so you're off-topic. The tradeoffs of violating a REST constraint by *not* transforming an in-house media type for use over the Web are what matter here. It is a feature of the style that a gateway can perform such a task, so that it won't impact the in-house system. Which would be enough of a benefit to completely moot the issue of having to maintain an XSLT stylesheet. Discussing such implementation details can only serve to cloud the issue, since we only care about what passes between connectors, not how it's derived. -Eric
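[Editor's sketch: the gateway idea above, made concrete. This is a hypothetical stylesheet with invented element names - it converts an in-house `<manifest><item name="..." count="..."/></manifest>` format into plain HTML at the boundary, so the internal media type never leaks onto the Web.]

```xml
<!-- Hypothetical gateway transform: in-house manifest XML to HTML. -->
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/manifest">
    <ul>
      <xsl:apply-templates select="item"/>
    </ul>
  </xsl:template>
  <xsl:template match="item">
    <li>
      <xsl:value-of select="@count"/>
      <xsl:text> x </xsl:text>
      <xsl:value-of select="@name"/>
    </li>
  </xsl:template>
</xsl:stylesheet>
```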
António Mota wrote: > > However, a browser does have handlers to deal not only with text/html > but also image/gif, image/jpg, application/pdf and so on... > Where did I imply otherwise? Let me try again to make this clear. If your purpose is to deliver a distributed hypertext application over the Web, then it makes nothing but sense to use those ubiquitous media types which browsers know how to render into user interfaces. That's HTML, CSS, Javascript, images, PDF, whatever you need provided that you're USING STANDARD MEDIA TYPES. > > If things were as you describe, why not specify all of them in the > same media type? > It's very, very hard not to flame you when you postulate argumentative strawmen like that. Gee, yeah, you're right, obviously what I'm saying means media types can't possibly serve any purpose... /sarcasm If nobody can agree on a common set of media types to use, and as a result media types in common use proliferate to the point that nobody can keep track of them any more, then the resulting architecture would lack self-descriptive messaging and therefore have no resemblance to REST. This would be functionally no different than not having media types. The REST style works best when the largest number of developers can agree on the fewest number of media types actually required to express hypertext APIs. Thus the requirement to USE STANDARD MEDIA TYPES. -Eric
Hi Bill,
Thanks for the reply. I agree: I think both Jim's and Jan's ideas on this have been very helpful. I believe it comes down to a central thought process - the server knows when it gets a message out of order and, by using the correct response code (409) and links, can inform the client how to get back in sync.
Sean.
--- On Wed, 21/7/10, bdehora <bill@dehora.net> wrote:
From: bdehora <bill@...>
Subject: Re: HTTP reliability - in order msg delivery?
To: "Sean Kennedy" <seandkennedy@...>
Date: Wednesday, 21 July, 2010, 0:02
Hi Sean,
I pretty much agree with Jim: putting links for state transitions into the representation from the server helps greatly. It makes partial ordering, as in ordering transitions over a single resource, possible, which is an option for modeling in-order message delivery. Global ordering, or coordinating over multiple resources, is much harder ;)
Btw - when I wrote HTTPLR, I did think a format was needed to solve its problems but was kind of fumbling around with the idea. I've learned since then that doing any kind of collection structuring over HTTP indeed requires a format (eg AtomPub). It was mainly an attempt to show that queues/message passing were doable with HTTP (debated at the time). I would just take it as a design reference whose main idea is that defining a resource to identify the state of a message transfer, independent of the message itself, is a viable tactic. I suspect if you are working with resources directly and not emulating queues, then Jim's approach will get you a better result.
Bill
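[Editor's sketch: the resync pattern discussed in this thread - the client states its view of the resource, and the server replies 409 Conflict carrying its actual view on a mismatch - can be reduced to a few lines. The state URIs and function names below are invented for illustration, and a real server would of course speak HTTP.]

```python
# Sketch of the 409-resync idea from this thread (names are hypothetical).
# The client includes the URI of the state it believes the resource is in;
# the server compares it with the actual state and, on mismatch, answers
# 409 Conflict carrying its own view so the client can get back in sync.

RESOURCE_STATE = {"/someURI": "http://example.org/states/ready"}

def handle_post(path, client_view_of_state):
    actual = RESOURCE_STATE[path]
    if client_view_of_state != actual:
        # Repeated or out-of-order request: tell the client where we are.
        return 409, {"serverStateView": actual}
    # Views agree: apply the transition and advance the resource state.
    RESOURCE_STATE[path] = "http://example.org/states/next"
    return 200, {"serverStateView": RESOURCE_STATE[path]}
```

A client whose 200 OK was lost and that re-sends with the stale ".../initial" view would get a 409 carrying ".../ready", exactly as in Sean's example below.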
--- In rest-discuss@yahoogroups.com, Sean Kennedy <seandkennedy@...> wrote:
>
> Hi Jan,
> Apologies for the confusion. Hopefully this is clearer. Firstly, to confirm I am on firm ground: this situation only appears to arise when the client is unaware of the resource URI and therefore has to use POST instead of idempotent PUT - based on Roy's post that you kindly included, where he refers to a series of individual POST requests.
> Secondly, I was looking at Bill de hOra's HTTPLR [1] last night and figured that his use of stateful URIs could be used to keep the client and server in sync, i.e. no need for expensive ETag-type values. Given that methodology, here is an example:
>
> Client                                    Server
> ------                                    ------
> POST /someURI
> <details>
>   ...
>   <clientViewOfState>
>     "http://.../initial"
>   </clientViewOfState>
> </details>
>                              --> updates resource state;
>                                  /someURI goes to ".../ready" state
>                              <-- 200 OK (gets lost)
>
> client re-sends:
> POST /someURI
> <details>
>   ...
>   <clientViewOfState>
>     "http://.../initial"
>   </clientViewOfState>
> </details>
>                              --> server detects conflict;
>                                  informs client of what its view is
>                              <-- 409 Conflict
>                                  <serverStateView>
>                                    ".../ready"
>                                  </serverStateView>
> Thus, the client and server are kept in sync via the use of stateful URIs. This means that the server is maintaining some application state, i.e. breaking REST's statelessness constraint. However, if I am correct, constraints can be relaxed as and when the situation arises?
>
> Does this seem reasonable...
>
> Regards,
> Sean.
>
> [1] http://dehora.net/doc/httplr/draft-httplr-01.html
>
> --- On Wed, 14/7/10, Jan Algermissen <algermissen1971@...> wrote:
>
> From: Jan Algermissen
> <algermissen1971@...>
> Subject: Re: [rest-discuss] HTTP reliability - in order msg delivery?
> To: "Sean Kennedy" <seandkennedy@...>
> Cc: "Jim Webber" <jim@...>, "Rest Discussion Group" <rest-discuss@yahoogroups.com>
> Date: Wednesday, 14 July, 2010, 7:41
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
> On Jul 13, 2010, at 2:15 PM, Sean Kennedy wrote:
>
>
>
> >
>
> > How does this look...
>
> >
>
>
>
> Sean,
>
>
>
> I am having trouble seeing what you are asking. Can you replace the formal expressions with HTTP request/response examples?
>
>
>
> Jan
>
>
>
> > Sean.
>
> >
>
> > --- On Tue, 13/7/10, Jan Algermissen <algermissen1971@...> wrote:
>
> >
>
> > From: Jan Algermissen <algermissen1971@...>
>
> > Subject: Re: [rest-discuss] HTTP reliability - in order msg delivery?
>
> > To: "Sean Kennedy" <seandkennedy@...>
>
> > Cc: "REST Discuss" <rest-discuss@yahoogroups.com>
>
> > Date: Tuesday, 13 July, 2010, 9:50
>
> >
>
> >
>
> > On Jul 13, 2010, at 10:51 AM, Sean Kennedy wrote:
>
> >
>
> > > What if you needed in-order message delivery? I imagine for a banking application, the order of transactions on an account would be important...
>
> >
>
> > You can do this by including in the client's message a token that expresses the client's assumptions about the state of the resource. The server can use that token to verify that the client's expectation and the actual resource state match. If they do not match, the server instructs the client what to do next.
>
> >
>
> > Roy somewhat explains this in [1]:
>
> >
>
> > "Think of it instead as a series of individual POST requests that are
>
> > building up a combined resource that will eventually be a savings
>
> > account when finished. Each of those requests can include parameters
>
> > that perform the same role as an ETag -- basically, identifying the
>
> > client's view of the current state of the resource. Then, when a
>
> > request is repeated or a state-change lost, the server would see
>
> > that in the next request and tell the client to refresh its view
>
> > of the form before continuing to the next step."
>
> >
>
> > [1] http://tech.groups.yahoo.com/group/rest-discuss/message/9805
>
> >
>
> >
>
> > -----------------------------------
>
> > Jan Algermissen, Consultant
>
> > NORD Software Consulting
>
> >
>
> > Mail: algermissen@...
>
> > Blog: http://www.nordsc.com/blog/
>
> > Work: http://www.nordsc.com/
>
> > -----------------------------------
>
> >
>
> >
>
> >
>
> >
>
> >
>
> >
>
> >
>
> >
>
>
>
> -----------------------------------
>
> Jan Algermissen, Consultant
>
> NORD Software Consulting
>
>
>
> Mail: algermissen@...
>
> Blog: http://www.nordsc.com/blog/
>
> Work: http://www.nordsc.com/
>
> -----------------------------------
>
Well, I agree that it is better to use standards than not, even if that implies "bending" what we really want in order to conform to those standards. However, those standards you mention are standards for a "human consumable" web; why should we adapt that to the "machines talking to machines" web, as TBL mentions in his "I have a dream for the Web" view? [1] HTML is as much about structuring text as it is about presentation (although the tendency and recommendation is to use CSS for that, but nevertheless the markup is there). So, if my main goal is not to make a website but to allow services to access my data in a "machines talking to machines" way, why should I care about <b>bold</b> and <i>italics</i>? Why should I care about accessibility at all? That is a problem for web designers to deal with, not for a "data provider" service designer... So why not put the focus on XML, which "is a set of rules for encoding documents in machine-readable form", "widely used for the representation of arbitrary data structures" [2]? Why use HTML, which "tells you how data should look", instead of XML, which "tells you what it means", "applies context to the data", and "separates data content from data presentation" [3]? Wouldn't you agree that REST is much more than a glorified way of making web sites? So I would say: use HTML for your general-purpose (data/presentation) services, use XML for your specific-purpose (data) services. Create your own media type. Create hundreds, thousands of new media types. Don't wait for some organization to define them top-down. Do it yourself from the ground up and encourage others to do the same. From all of those, I'm sure much more than 0.01% will be adopted for specific areas of business - it will happen as soon as they see it works... [1] http://logicerror.com/timsDream [2] http://en.wikipedia.org/wiki/XML [3] http://xml.gov/presentations/gsa/sld001.htm 2010/7/21 Eric J. Bowman <eric@bisonsystems.net>: > António Mota wrote: >> However, a browser does have handlers to deal not only with text/html >> but also image/gif, image/jpg, application/pdf and so on... > [snip]
Yes, I was trying to refer to [1] and [2], and I also had read your [3] before. I didn't knew [5] though. Thanks for the links. 2010/7/21 mike amundsen <mamund@...> > <snip> > Now if the experts instead of just repeating that, could come with a > sensible explanation of what each constraint gives, and most important, what > the system looses for each constraint dropped or relaxed - instead of > immediatly say "its not REST, call it other thing" - maybe a effort of > classification like the one "Type I, Type II..." (I'm sorry I'm on my > mobile, is hard to google for it) would be generalized enough as to be > accepted by the industry, and we could have a REST Maturity Model (I think > it was that's called) that inequivocly said that REST Type I is not fully > REST because misses constraint A,B,C and REST Type II misses constraint X, > and REST Type V, or whatever, does indeed follow all the constraints and > thus indeed complies with Roy thesis. > </snip> > > I think you may know of the Richardson Maturity Model [1]. > > Jan Algermissen has also taken some time to devise a table for comparing > various Web implementations using a "Type" system[2]. > > A while back I summarized (bullet points, really)[3] the pros/cons of the > various top-level REST constraints identified in Fielding's > Dissertation[4]. > > The WS-REST Workshop @ WWW in Raliegh, NC this past April had a number of > interesting papers on REST[5]. The one titled "Towards a Practical Model to > Facilitate Reasoning about REST Extensions and Reuse" was, I thought, very > good as it proposed a working model for adding/removing key constraints and > calculating the possible resulting affects. 
>
> [1] http://martinfowler.com/articles/richardsonMaturityModel.html
> [2] http://nordsc.com/ext/classification_of_http_based_apis.html
> [3] http://www.amundsen.com/blog/archives/1009
> [4] http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm#sec_5_1
> [5] http://www.ws-rest.org/files/WSREST2010-Preliminary-Proceedings.pdf
>
> mca
> http://amundsen.com/blog/
> http://mamund.com/foaf.rdf#me
>
> 2010/7/20 António Mota <amsmota@...>
>
>> Will, I completely agree with what you say in the first paragraph. And the
>> second. And parts of the others too...
>>
>> However - and I didn't want to say this because it's going to piss people
>> off here, and that is not my intention, and also because it's just my feeling
>> and I don't have evidence to support it - I suspect that it was this "purist"
>> approach that drove all those self-styled REST frameworks to simply
>> ignore the fundamentals and go with the REST moniker all the same. Because
>> if all the experts say is "if you don't apply this then it's not REST",
>> without a sound explanation of the whys, they probably follow the easy path
>> of applying what they can, leaving the rest, and still call it REST for
>> market reasons.
>>
>> Now if the experts, instead of just repeating that, could come up with a
>> sensible explanation of what each constraint gives, and most importantly what
>> the system loses for each constraint dropped or relaxed - instead of
>> immediately saying "it's not REST, call it something else" - maybe an effort of
>> classification like the one "Type I, Type II..." 
(I'm sorry, I'm on my
>> mobile, it's hard to google for it) would be generalized enough to be
>> accepted by the industry, and we could have a REST Maturity Model (I think
>> that's what it's called) that unequivocally said that REST Type I is not fully
>> REST because it misses constraints A, B, C and REST Type II misses constraint X,
>> and REST Type V, or whatever, does indeed follow all the constraints and
>> thus indeed complies with Roy's thesis.
>>
>> And if that were recognized and accepted by the industry, maybe
>> companies would have the honesty to explicitly declare their applications to
>> conform with REST Type I, where everybody knew a priori what that means. And
>> they would still have the marketing buzzword... They could even leverage that
>> marketing tool - mine is more RESTful than yours - or present themselves as
>> the good guys - our application is REST Type II but we aim to make it fully
>> REST in the next months, and the upgrade will be only a small fee...
>>
>> Instead, what do we have? Confusion!!! Companies claiming it's REST,
>> experts claiming it's not, companies not giving a crap about what the experts
>> say... All this because of the all-or-nothing, don't-call-it-REST attitude of
>> the experts...
>>
>> Don't get me wrong, I do care about REST, I do appreciate the experts, and
>> what I'm saying does not have the single purpose of pissing them off, as they
>> probably think it does. I just think more flexibility and clarity or
>> transparency would be good for everyone...
>>
>> But the truth is, it seems I can already hear it: I don't have years of
>> REST, I'm trying to redefine REST, I'm not willing to learn from what the
>> experts say, I'm just criticising people who know what they're talking
>> about, I'm not paying my respects to the experts...
>>
>> But it's not that, I care, I really care :)
>>
>> On 20 Jul 2010 23:11, "Will Hartung" <willh@...> wrote:
>>
>> I hear you Antonio. 
>> >> But a core tenet (IMHO, not being one who "knows what he's talking about") >> of REST is the limitations and constraints imposed by it, and what, by >> assuming those constraints, the system enables you to do. >> >> It's important, perhaps less now than in the past, to keep REST "pure" for >> a reason. REST has been horribly muddied. Most feel that it's nothing more >> than POX over HTTP, as witnessed by the myriad of "REST" apis that have been >> published, but really aren't. >> >> So, the adherents have been quite vocal, rightly I think, to try and keep >> the term "REST" meaningful, and not lost. >> >> REST is not like pornography, in that "we know it when we see it", but >> otherwise can't describe it in detail. It has been described in detail. And >> the only way to measure its effectiveness, specifically, IMHO, in B2B >> systems doing M2M transactions (i.e. not just "browsers" and "the web"), is >> to see it practiced as described. That's the only way we can find out if >> it's actually usable and useful (in that domain). >> >> So constraints are important, and calling out when discussion drifts from >> those constraints is also important. Otherwise, really, it's just Stuff over >> HTTP. And I don't think we need a dedicated list just for that. >> >> And finally, most here will be the first to admit that REST is not the >> only architecture, nor should it be the only architecture. Another reason to >> try and "stay pure" with the notions of REST. If what is needed for a >> specific system does not "work" with the REST idiom, then by all means, >> finish your system, but simply don't call it REST. >> >> >> >> Regards, >> >> Will Hartung >> (willh@...) >> >> >> >> >> > >
On Jul 21, 2010, at 12:00 PM, António Mota wrote:

> Create your own Media-Type. Create hundreds, thousands of new Media-Types.
> Don't wait for some organization to define them from the top-down. Do
> it yourself from the ground-up and encourage others to do the same.
> From all of those I'm sure much more than 0.01% will be adopted for
> specific areas of business - it will happen as soon as they see it
> works...

Exactly. What matters is that the media type is not specific to a certain service but that it covers enough hypermedia controls to enable all of the anticipated use cases of a domain.

Surely this is an iterative process, and surely it is inspired/initially driven by a set of anticipated services, because you cannot know the perfect media type out of the blue.

And that is why Roy wrote in the Thesis "an *evolving* set of standard data types" (emphasis added).

Jan

> [1] http://logicerror.com/timsDream
> [2] http://en.wikipedia.org/wiki/XML
> [3] http://xml.gov/presentations/gsa/sld001.htm
>
> 2010/7/21 Eric J. Bowman <eric@...>:
>> António Mota wrote:
>>>
>>> However, a browser does have handlers to deal not only with text/html
>>> but also image/gif, image/jpg, application/pdf and so on...
>>
>> Where did I imply otherwise? Let me try again to make this clear. If
>> your purpose is to deliver a distributed hypertext application over the
>> Web, then it makes nothing but sense to use those ubiquitous media
>> types which browsers know how to render into user interfaces. That's
>> HTML, CSS, Javascript, images, PDF, whatever you need, provided that
>> you're USING STANDARD MEDIA TYPES.
>>
>>> If things were as you describe, why not specify all of them in the
>>> same media type?
>>
>> It's very, very hard not to flame you when you postulate argumentative
>> strawmen like that. Gee, yeah, you're right, obviously what I'm saying
>> means media types can't possibly serve any purpose... 
/sarcasm >> >> If nobody can agree on a common set of media types to use, and as a >> result media types in common use proliferate to the point that nobody >> can keep track of them any more, then the resulting architecture would >> lack self-descriptive messaging and therefore have no resemblance to >> REST. >> >> This would be functionally no different than not having media types. >> >> The REST style works best when the largest number of developers can >> agree on the fewest number of media types actually required to express >> hypertext APIs. Thus the requirement to USE STANDARD MEDIA TYPES. >> >> -Eric >> > > > ------------------------------------ > > Yahoo! Groups Links > > > ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
On 21 July 2010 10:06, Sean Kennedy <seandkennedy@...> wrote:

> Hi Bill,
> Thanks for the reply. I agree: I think both Jim's and Jan's ideas on
> this have been very helpful. I believe it comes down to a central thought
> process - the server knows when it gets a message out of order and, by using the
> correct response code (409) and links, can inform the client how to get back
> in sync.
>
> Sean.

I don't agree with that, not even a little bit. That smells like maintaining conversation state on the server...
Jan after Eric wrote: > Surely this is an iterative process and surely it is inspired/initially driven by a set of anticipated services > because you cannot know the perfect media type out of the blue. > > And that is why Roy wrote in the Thesis "an *evolving* set of standard data types" (emphasis added) Having lived in the EDI world for a bit, this is very true. The trouble that people get into with system integrations is that they think there's a Holy Grail document. I have seen this in the EDIFACT world, the ANSI X12 world, and it seems to have reappeared in the Microformat world. There just isn't a perfect document and there won't be a perfect media type. Get over it. If I could go back in time and talk to our "perfect" document creators, I'd tell them to spend more time on identifying dictionary items and not whole documents. Evolve a standard for locations, items, people, dates, etc. and let users put those together into documents that carry their own semantics. Microformats seemed to head off into that direction at the start but the document people appeared to take over the last time I looked... Mark W.
I've been discussing PUT for create with some coworkers. This is certainly valid
HTTP, but I'm wondering if people consider it RESTful. It seems to me that
giving the client control over part of the URI requires them to understand how
resources are organized and forces them to construct URIs as non-opaque strings.
So I wonder if this conflicts with HATEOAS. It potentially also puts a burden on
the client to avoid namespace collisions, so that it must adopt some uniqueness
logic which again requires application state that seems problematic.
On Wed, Jul 21, 2010 at 10:25 AM, Bryan Taylor <bryan_w_taylor@...> wrote:

> I've been discussing PUT for create with some coworkers. This is certainly
> valid HTTP, but I'm wondering if people consider it RESTful. It seems to me
> that giving the client control over part of the URI requires them to understand
> how resources are organized and forces them to construct URIs as non-opaque
> strings. So I wonder if this conflicts with HATEOAS. It potentially also puts a
> burden on the client to avoid namespace collisions, so that it must adopt some
> uniqueness logic, which again requires application state that seems problematic.

While "client control over part of the URI" can be a problem, sometimes it is exactly what you want. Consider uploading an image to a photo sharing site -- you (the client) might very much care what the ultimate filename is, and probably also what folder it gets put in.

I would tend to think more about the difference in idempotency (if the same request gets submitted twice, say because the client didn't hear the initial response for some reason, do two things get created or just one?) between PUT and POST. In the case of PUT, it's up to the server to do the right thing, whereas the fact that POST is not idempotent shifts that responsibility to the client.

That all being said, thinking back over all the web services I have built over the last few years, POST is used for creating new resources in nearly every case.

Craig
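Craig's idempotency point can be sketched with a toy in-memory store (purely illustrative -- the class and URIs below are invented, not any real framework): replaying a PUT leaves one resource in place, while blindly replaying a POST mints a duplicate.

```python
import itertools

class ToyStore:
    """Minimal in-memory sketch of PUT-vs-POST create semantics."""

    def __init__(self):
        self.resources = {}
        self._ids = itertools.count(1)

    def put(self, uri, body):
        # PUT is idempotent: the client names the URI, so a retry
        # overwrites the same resource rather than creating a second one.
        created = uri not in self.resources
        self.resources[uri] = body
        return 201 if created else 200

    def post(self, collection_uri, body):
        # POST is not idempotent: the server mints a fresh URI each time,
        # so a blind retry after a lost response yields a duplicate.
        uri = "%s/%d" % (collection_uri, next(self._ids))
        self.resources[uri] = body
        return 201, uri

store = ToyStore()
store.put("/photos/cat.jpg", b"...")
store.put("/photos/cat.jpg", b"...")   # retry: still one resource
store.post("/photos", b"...")
store.post("/photos", b"...")          # retry: now a duplicate
print(sorted(store.resources))
```

This is why a client that times out can safely repeat a PUT, but must do extra work (or get server-side help) before repeating a POST.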
Funny that I wrote about this two days ago. Quoting myself...

<quote>
(...) the question of using PUT to create resources within a HATEOAS context is interesting. How can a client decide what the URL for the new resource will be if that should be driven by the server? This one seems odd to me:

GET /whereshouldiput
200; Location: /iwantmyputshere/123 --> a URI for a resource that does not exist yet!!!

PUT /iwantmyputshere/123
201

Actually there may be other ways, like using URL templates (kind of), for which I have another use case. But this question of PUT in HATEOAS should probably be matter for another thread..
</quote>

On 21 Jul 2010 18:25, "Bryan Taylor" <bryan_w_taylor@...> wrote:

I've been discussing PUT for create with some coworkers. This is certainly valid HTTP, but I'm wondering if people consider it RESTful. It seems to me that giving the client control over part of the URI requires them to understand how resources are organized and forces them to construct URIs as non-opaque strings. So I wonder if this conflicts with HATEOAS. It potentially also puts a burden on the client to avoid namespace collisions, so that it must adopt some uniqueness logic, which again requires application state that seems problematic.
--- In rest-discuss@yahoogroups.com, Craig McClanahan <craigmcc@...> wrote: > > On Wed, Jul 21, 2010 at 10:25 AM, Bryan Taylor <bryan_w_taylor@...>wrote: > > > > > > > I've been discussing PUT for create with some coworkers. This is certainly > > valid > > HTTP, but I'm wondering if people consider it RESTful. > > While "client control over part of the URI" can be a problem, sometimes it > is exactly what you want. Consider uploading an image to a photo sharing > site -- you (the client) might very much care what the ultimate filename is, > and probably also what folder it gets put in. > > I would tend to think more about the difference in idempotency (if the same > request gets submitted twice, say because the client didn't hear the initial > response for some reason, do two things get created or just one?) between > PUT and POST. In the case of PUT, its up the server to do the right thing, > whereas the fact that POST is not idempotent shifts that responsibility to > the client. > > That all being said, thinking back over all the web services I have built > over the last few years, POST is used for creating new resources in nearly > every case. > > Craig > Also, just because you are using PUT doesn't necessarily mean that the client cooks up the URI from scratch. The hypermedia document could give the client the URI to PUT to, it could also provide a URI template that the client fills in. If-None-Match headers could be used to avoid collisions but each client could be given their own URI "sub-space" to play in too (controlled by the URI template and HTTP auth on the URI space). I agree that POST gets used more (almost exclusively) in the wild though. I'd think that the extra work to guide the client to construct the right URLs (which it could potentially ignore or get wrong, requiring further work on the server to validate things) is likely not worth the benefits of idempotence to most folks. There are probably other tradeoffs that I'm missing too... Andrew
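Andrew's URI-template idea can be sketched with simple `{var}` expansion (the form later standardised as RFC 6570 Level 1). The template and variable names here are invented for illustration; the point is that the server hands the client this string via hypermedia, so the client fills in blanks rather than understanding URI structure.

```python
import re
from urllib.parse import quote

def expand(template, variables):
    """Expand a simple {var} URI template, percent-encoding the values."""
    def repl(match):
        name = match.group(1)
        return quote(str(variables[name]), safe="")
    return re.sub(r"\{(\w+)\}", repl, template)

# A hypermedia response could hand the client this template plus its own
# URI "sub-space", so the client never invents URI structure itself.
template = "/users/{user}/photos/{slug}"
print(expand(template, {"user": "andrew", "slug": "summer holiday"}))
# -> /users/andrew/photos/summer%20holiday
```

Pairing the resulting PUT with `If-None-Match: *` (as Andrew suggests) would make the server reject the write if the URI is already taken, covering the collision case.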
If client-control over the URI is what you are after, have you considered making a "suggested URI" part of the message you POST when creating a new resource? The server then has the option to take up your suggestion or to ignore it. The actual URI of the new resource is returned in the 'Location' response header of the "201 Created" you will get back. That's how we implemented this for RESTx (http://restx.mulesoft.org).

--
Juergen Brendel
Architect, MuleSoft Inc.
http://restx.mulesoft.org
<snip>
> If client-control over the URI is what you are after, have you
> considered making a "suggested URI" part of the message you POST when
> creating a new resource?
</snip>

Atom uses the Slug header[1] as a way to suggest URI details when using POST. As already mentioned, clients can be instructed (e.g. led via hypermedia) to request a URI to use when doing an idempotent write (PUT), too.

[1] http://tools.ietf.org/html/rfc5023#section-9.7

mca
http://amundsen.com/blog/
http://mamund.com/foaf.rdf#me

On Wed, Jul 21, 2010 at 14:13, Juergen Brendel <juergen.brendel@...> wrote:

> If client-control over the URI is what you are after, have you
> considered making a "suggested URI" part of the message you POST when
> creating a new resource? The server then has the option to take up your
> suggestion or to ignore it. The actual URI of the new resource is
> returned in the 'Location' response header of the "201 Created" you will
> get back. That's how we implemented this for RESTx
> (http://restx.mulesoft.org).
>
> --
> Juergen Brendel
> Architect, MuleSoft Inc.
> http://restx.mulesoft.org
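One way a server might turn an Atom Slug header into a member URI. RFC 5023 leaves the mapping entirely up to the server (the slug is only a suggestion, which is why the client must still read the Location header of the 201 response); this particular derivation is just an illustration.

```python
import re
import unicodedata

def uri_from_slug(collection_uri, slug_header):
    """Derive a member URI from a Slug request header (sketch).

    RFC 5023 says the slug is only a *suggestion*; the server may
    transform it or ignore it altogether, so clients must still read
    the Location header of the 201 Created response for the real URI.
    """
    # Fold accents away, then keep a conservative URI-safe subset.
    text = unicodedata.normalize("NFKD", slug_header)
    text = text.encode("ascii", "ignore").decode("ascii")
    text = re.sub(r"[^A-Za-z0-9]+", "-", text).strip("-").lower()
    return "%s/%s" % (collection_uri.rstrip("/"), text or "entry")

print(uri_from_slug("/photos/", "The Beach at Sète"))
# -> /photos/the-beach-at-sete
```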
Bryan Taylor wrote:

> I've been discussing PUT for create with some coworkers. This is
> certainly valid HTTP, but I'm wondering if people consider it
> RESTful. It seems to me that giving the client control over part
> of the URI requires them to understand how resources are organized
> and forces them to construct URIs as non-opaque strings.

If your client is forced to construct URIs as non-opaque strings, then your media type is lacking. The response to the previous request should contain a form or other URI-construction element that tells the client how to construct the URI (and possibly the request body) in a fashion that is as opaque as possible to the client. The fact that HTML does not provide this for PUT is an unfortunate oversight.

Some protocols make this knowledge so uniform that it doesn't need the media type to provide a form -- it's embedded in the definition of the protocol itself.

Robert Brewer
fumanchu@...
Mark Wonsil wrote: > > There just isn't a perfect document and there won't > be a perfect media type. Get over it. > Exactly. Nobody is suggesting that the ubiquitous media types of the Web are "perfect". This is not even expected: "Implementations are decoupled from the services they provide, which encourages independent evolvability. The trade-off, though, is that a uniform interface degrades efficiency, since information is transferred in a standardized form rather than one which is specific to an application's needs." When you start a REST project by dismissing ubiquitous media types, in favor of creating a format specific to your application's needs, you are not developing in the REST style. You are, in fact, coupling your implementation to the service you are providing. HTML is not perfect, but it *does* decouple your implementation from your service. Yes, your REST system should strongly resemble a Website. How many times does Roy have to say so, before it takes? Once again, I'm totally befuddled by the pushback here. Yes, media types can evolve. But, of the thousands of media types "REST" developers have coined this year, how many meet REST's requirement of being standardized? Minting media types willy-nilly has _nothing_ to do with the REST style. -Eric
António Mota wrote:

> However, those standards you mention are standards for a "human
> consumable" web, why should we adapt that to the "machines talking to
> machines" web as TB-L mentions in his "I have a dream for the Web"
> view? [1]

What are you even talking about? HTML media types have always been machine-readable. There is no human involved, correct me if I'm wrong, in Google's index of the Web. The very fact that alternate browsing devices can guide the blind through an HTML table proves the machine-readability of the media types I mention. That's not even taking RDFa into consideration.

How on Earth do you read that reference, and decide that it means HTML should only be written for machines?

> HTML is as much about structuring text as it is about presentation
> (although the tendency and recommendation is to use CSS for that, but
> nevertheless the markup is there).

Yes, HTML can be misused for presentation. How does that change the fact that, properly used, it's about structuring data? Not just text, data. Since 2004, all my markup is based on structuring data and documents logically, separate from their presentation, all of which may be found in external CSS documents.

How on Earth does pointing out that it's *possible* to mix structure and presentation begin to make what I'm saying wrong? Is it possible for you to tone down the argumentativeness of, well, _all_ your posts?

> So, if my main goal is not to make a website but to allow services
> to access my data in a "machines talking to machines" way,

Who says the main goal of a REST system is making a website? What a REST system *is*, is a distributed hypertext application. A user, who could be human or machine, is attempting to perform a task, using an interface transferred over the network. The main goal of a REST system is to provide a hypertext application interface over the network.

> why should I care about <b>bolds</b> and <i>italics</i>? 
> This is yet another question for the sake of being argumentative, isn't it? Inline markup has nothing to do with the structure I was speaking of, which comes from block-level markup. If your data doesn't include natural language which may contain emphasized passages, then, *obviously*, these tags are irrelevant to your system. A cargo manifest is still just tables and lists, structurally. Same with genetic data. Or flight reservations. Or concert tickets. Or anything else you can think about interacting with over the Web -- you can model it using ubiquitous media types. BTW, <b> and <i> aren't automatically presentational. Semantically, <em>USS Constitution</em> is improper -- ship names are marked up with <i>, so if the cargo manifest has anything to do with maritime shipping, then yeah, you need to care about <i>. Or re-invent that wheel in your custom media type. > > Why should I care about accessibility at all? That is > a problem for web designers to deal, not to a "data provider" service > designer... > You're right. Why shouldn't the ability to see be a prerequisite for developing and maintaining your system? After all, it's not like there's any solution out there to the problem of communicating text to the blind... I don't know about you, but that sounds mighty discriminatory to me. I don't care if your system is targeted to machine users. You don't have a bunch of Cylons developing it, do you? No? You're using humans? Yeah, I thought so. How on Earth is a human going to develop, let alone debug, a hypertext interface that's only machine readable? So, as it turns out, humans will have to use your hypertext after all... which is also the basis of the Web's success. How will another entity interoperate with, or re-use, your system if their humans can't access your interface? Again, the reason for choosing HTML as your hypertext for driving the application, is because the option is there for accessibility. Fine, you don't give a rat's ass. 
But what if, next month, your boss does? Perhaps as the result of an anti-discrimination in the workplace lawsuit, since you're excluding the blind from a job that only involves reading and manipulating text... Wouldn't it be easier to just add in the accessibility markup to your existing HTML, than have to re-invent that wheel for your custom media type? > > So why not put the focus on XML, that "is a set of rules for encoding > documents in machine-readable form","widely used for the > representation of arbitrary data structures" [2]? > Oh, good grief. Do I honestly have to type (X)HTML every time? Isn't it obvious when I say "HTML media types" that I'm referring to text/html _and_ application/xhtml+xml? Nitpicking semantics is, again, argumentative. > > Why use HTML that "tells you how data should look" instead of XML that > "tells you what it means", that "applies context to the data", and > "separates data content from data presentation"[3]? > Because HTML doesn't tell you how data should look, you're making false assumptions. > > Wouldn't you agree that REST is much more than a glorified way of > making web sites? > Yes. But your habit of asking questions in that form tempts me to argue against it to the bitter end. A telephony system that doesn't involve HTML or browsers can be RESTful, and if it is, a data-view analysis of it will look exactly like the data-view analysis of a website. This holds true for any REST system. > > So I would say: ... Create your own Media-Type. Create hundreds, > thousands of new Media-Types. Don't wait for some organization to > define then from the top-down. Do it yourself from the ground-up and > encourage others to do the same. > Yeah, you would -- you're always arguing against the importance of some aspect of REST or another. This is absolutely, unequivocally NOT the REST style, and represents an utter failure to grasp the notion of a media type. 
Look, HTML 5 will use the same media type as HTML 4, and HTML 3, and HTML 2, and HTML before that. HTML 5 will be used to do all the same complicated things, like airline reservations, over the Web, using the same media type that's always been used. This is the point of REST, folks. If you're putting a hypertext interface out on the Web for human consumption, then it does not matter what the nature of your data is, you can create an HTML interface. The worst possible thing you can do, is go about minting new media types for each resource type you create. If you do that, then I expect the JPEG images in each image gallery you create, to have unique media types too, entirely defeating the purpose and, most critically, breaking the self-descriptive messaging constraint. REST is based on the principle of generality here, which is why it says, clearly in black-and-white, to re-use _standard_ media types. That it also says the set of such types is evolving, is not some loophole for the creation of thousands of new media types every month, not one of which will EVER become a standard. NOT REST!!! -Eric
On Wed, Jul 21, 2010 at 11:52 AM, Eric J. Bowman <eric@...> wrote:

> When you start a REST project by dismissing ubiquitous media types, in
> favor of creating a format specific to your application's needs, you
> are not developing in the REST style. You are, in fact, coupling your
> implementation to the service you are providing. HTML is not perfect,
> but it *does* decouple your implementation from your service.

Eric has been banging this drum for some time, and it's a beat you can dance to.

My only concern with it, and it may simply be just me missing something, is that by doing this, the media type is no longer enough to communicate to the system the specifics of how to process the data. application/xhtml+xml doesn't tell the program what the payload is (beyond a general sense -- it's XHTML). application/vnd.example.purchaseorder+xml is (ideally) much more specific.

In theory, I can find the "public" document describing application/vnd.example.purchaseorder+xml to learn the semantics of how that data is used. But I can't do the same with application/xhtml+xml. Instead I have to reflect upon the internal structure of the payload to see if it happens to be something that I'm looking for. Perhaps dredging for microformats or links or RDFa hints, and basically hoping I'll find what I'm looking for (i.e. the client sent not just application/xhtml+xml, but a document containing Interesting Things that I'm looking for).

How do you resolve this kind of conflict when the common, standard types are extremely generic and you want to use them for a specific domain?

Regards,

Will Hartung
(willh@...)
<snip>
> How do you resolve this kind of conflict when the common, standard
> types are extremely generic and you want to use them for a specific
> domain?
</snip>

I have run into this issue before. How do you extend (X)HTML semantics in ways that allow clients to understand what's going on?

One solution I employed a few times w/ general success was using the "profile" attribute[1] to carry the added info. In my case, I include a URI that points to a document that details the additional semantics (mostly @rel values and their meaning, etc.).

In my case, this was a highly scripted browser app and the added work of dereferencing the profile URI and parsing the semantics was not a very big deal. This made XHTML documents relatively easy for desktop clients to work w/, too. It's a bummer that there is some visibility loss here, but it was effective for what I was doing for some internal solutions at the time.

Several years ago, Tantek Celik proposed a pattern for linking the profile attribute to a document[2]. I wrote about this a while back in my blog[3].

[1] http://www.w3.org/TR/html401/struct/global.html#profiles
[2] http://gmpg.org/xmdp/description
[3] http://amundsen.com/blog/archives/1043

mca
http://amundsen.com/blog/
http://mamund.com/foaf.rdf#me

On Wed, Jul 21, 2010 at 15:59, Will Hartung <willh@...> wrote:

> On Wed, Jul 21, 2010 at 11:52 AM, Eric J. Bowman <eric@bisonsystems.net> wrote:
>
>> When you start a REST project by dismissing ubiquitous media types, in
>> favor of creating a format specific to your application's needs, you
>> are not developing in the REST style. You are, in fact, coupling your
>> implementation to the service you are providing. HTML is not perfect,
>> but it *does* decouple your implementation from your service.
>
> Eric has been banging this drum for some time, and it's a beat you can dance to. 
>
> My only concerns with it, and it may simply be just me missing
> something, is that by doing this, the media type is no longer enough
> to communicate to the system the specifics of how to process the data.
> application/xhtml+xml doesn't tell the program what the payload is
> (beyond a general sense -- it's XHTML).
> application/vnd.example.purchaseorder+xml is (ideally) much more
> specific.
>
> In theory, I can find the "public" document describing the
> application/vnd.example.purchaseorder+xml to learn the semantics of
> how that data is used. But I can't do the same with
> application/xhtml+xml. Instead I have to reflect upon the internal
> structure of the payload to see if it happens to be something that I'm
> looking for. Perhaps dredging for microformats or links or RDFa hints,
> and basically hoping I'll find what I'm looking for (i.e. the client
> sent not just application/xhtml+xml, but a document containing
> Interesting Things that I'm looking for).
>
> How do you resolve this kind of conflict when the common, standard
> type are extremely generic and you want to use them for a specific
> domain?
>
> Regards,
>
> Will Hartung
> (willh@...)
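The profile-attribute pattern described above can be sketched with the stdlib HTML parser: pull the `profile` URI from `<head>` and collect the extension @rel values that a client would then look up in the profile document. The markup, profile URI, and rel names below are invented for illustration.

```python
from html.parser import HTMLParser

class ProfileScanner(HTMLParser):
    """Collect <head profile="..."> plus all a/link @rel values."""

    def __init__(self):
        super().__init__()
        self.profile = None
        self.rels = set()

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "head" and "profile" in attrs:
            self.profile = attrs["profile"]
        if tag in ("a", "link") and "rel" in attrs:
            # @rel is space-separated; each token is a link relation.
            self.rels.update(attrs["rel"].split())

doc = """
<html><head profile="http://example.org/profiles/orders">
<title>Order 42</title></head>
<body><a rel="payment" href="/orders/42/payment">pay</a>
<a rel="cancel" href="/orders/42/cancel">cancel</a></body></html>
"""

scanner = ProfileScanner()
scanner.feed(doc)
print(scanner.profile)        # the document the client dereferences
print(sorted(scanner.rels))   # -> ['cancel', 'payment']
```

A client that doesn't recognize the profile URI simply falls back to treating the document as plain XHTML, which is the graceful-degradation property mca describes.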
Will Hartung wrote: > > My only concerns with it, and it may simply be just me missing > something, is that by doing this, the media type is no longer enough > to communicate to the system the specifics of how to process the data. > Intermediaries, and user agents, don't need to know application specifics. Only the consumer, i.e. the human or machine user, needs to know those specifics -- what's referred to as domain-specific vocabulary. Such vocabulary can be linked, or embedded (RDFa). > > application/xhtml+xml doesn't tell the program what the payload is > (beyond a general sense -- it's XHTML). > application/vnd.example.purchaseorder+xml is (ideally) much more > specific. > It's actually less specific. I've never heard of that vnd.* media type, it isn't common knowledge. So to me (or an intermediary), it's just application/xml, which tells me nothing by comparison to application/xhtml+xml. > > In theory, I can find the "public" document describing the > application/vnd.example.purchaseorder+xml to learn the semantics of > how that data is used. > Maybe. But not if the public document is just a schema, then you know nothing about rendering the purchase order beyond what application/xml tells you (which is nothing). If it's a well-designed media type, then it will re-invent the wheel of expressing a data table horizontally. But is that really well-designed? Not to me, no, because it isn't doing anything the ubiquitous HTML media types weren't designed to do. > > But I can't do the same with application/xhtml+xml. Instead I have to > reflect upon the internal structure of the payload to see if it > happens to be something that I'm looking for. > I disagree. If I'm developing a purchase-order payload, by starting with application/xhtml+xml I can first design something that *looks* like a purchase order. 
While tables may be abused for presentation, if the data is tabular, then laying it out as a table is structural -- what the table looks like (cellpadding, borders/rules etc.), its presentation, is handled by CSS. Now that I have an HTML document which looks like a purchase order because it's structured like a purchase order, I can then proceed to annotate it to conform to whatever domain-specific vocabulary I choose (e.g. GoodRelations). Bingo! Human- and machine-readable. Even if it's not meant for human consumption, it's eminently debuggable and maintainable by humans because it's self-documenting. When a system based on application/vnd.example.purchaseorder+xml breaks down, the human fixing it isn't pulling up a document that looks like a purchase order, making things that much harder to debug -- talk about having to reflect on the internals of the data -- not to mention develop in the first place. There is nothing about the data structure of a purchase order that can't be modeled in HTML + RDFa. You may have some internal system where an application-specific media type is better suited. Fine. But if your goal is to use that resource as part of a Web system, then there's every reason to convert it into a ubiquitous media type such that anyone can decipher your API with a browser and 'view source'. It is a benefit of the REST style that you can pull up your representations in a browser to debug them. > > Perhaps dredging for microformats or links or RDFa hints, and > basically hoping I'll find what I'm looking for (i.e. the client sent > not just application/xhtml+xml, but a document containing > Interesting Things that I'm looking for). > That's exactly how it's done. None of the things covered in the GoodRelations ontology *matter* to intermediaries or user agents, only to the user. Since the domain-specific vocabulary is agreed upon by user and provider, it goes inside the payload and is not exposed at the protocol level. 
All those common-knowledge things, like recognizing the horizontal layout of tabular-structure data, the name/value pairs represented by that structure, etc. are what belong in the media type. At the protocol level, *that* a payload consists of name/value pairs is of interest, not *what* those name/value pairs represent, if you catch my drift. > > How do you resolve this kind of conflict when the common, standard > type are extremely generic and you want to use them for a specific > domain? > By not seeing it as a conflict. Standard and RDFa metadata attributes can annotate the generic semantics of HTML with the specific semantics of (most) any problem domain. Every vendor of online shopping carts has a different take on marking up the same problem space. This is not as large a problem as it once seemed, because the GoodRelations ontology is proliferating, and with it, interoperability. Bringing me back to my toilet-paper-resupply analogy. It becomes trivial to write an agent which regularly orders my preferred brand, color and scent of TP at best price from a list of supplier websites. All these sites have different markup, but it's the-same-enough that agents can easily glean the item number, then use it to fill out the 'item' and 'quantity' fields of an order form -- provided that the vendors have all agreed to the same domain-specific vocabulary by annotating their diverse markup with the same GoodRelations metadata. There is a strictly limited choice of 'item' and 'quantity' markup available in HTML. While all shopping-cart vendors will have different implementations, we're still talking about an agent which solves the finite problem of determining which <input> elements to target -- if those vendors agree to a common (domain-specific) metadata vocabulary, this is indeed trivial. My TP agent becomes distinctly non-trivial when it must understand an entirely different media type for each shopping-cart application vendor. 
Instead of looking for an <input> element matching well-known criteria, the agent must first determine the equivalent of an <input> element for each non-ubiquitous media type that re-invents that particular wheel. -Eric
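Eric's toilet-paper-agent scenario can be sketched in a few lines of Python. This is a rough illustration, not anything from the thread: both vendor forms and the `shop:` vocabulary are invented, standing in for GoodRelations-style RDFa annotations. The point is that the agent targets the shared annotation, not each vendor's markup.

```python
# Two vendors mark up their order forms differently, but both annotate the
# relevant <input> elements with the same (made-up) domain-specific
# vocabulary via RDFa-style 'property' attributes.
from html.parser import HTMLParser

class AnnotatedInputFinder(HTMLParser):
    """Collects <input> elements carrying a 'property' annotation."""
    def __init__(self):
        super().__init__()
        self.fields = {}
    def handle_starttag(self, tag, attrs):
        if tag == "input":
            a = dict(attrs)
            if "property" in a:
                self.fields[a["property"]] = a.get("name")

# Structurally different vendor forms, same vocabulary annotations.
vendor_a = ('<form><input name="sku" property="shop:item">'
            '<input name="qty" property="shop:quantity"></form>')
vendor_b = ('<form><table><tr><td><input name="itemNumber" property="shop:item">'
            '</td><td><input name="amount" property="shop:quantity"></td></tr>'
            '</table></form>')

for markup in (vendor_a, vendor_b):
    finder = AnnotatedInputFinder()
    finder.feed(markup)
    # Same question, same answer, despite the differing markup:
    print(finder.fields["shop:item"], "/", finder.fields["shop:quantity"])
```

The agent never needs to learn a per-vendor media type; it only needs the common vocabulary the vendors agreed on.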
Maybe I'm getting the point (even if it's quite hard to deeply understand all your posts :-D). I've found something very interesting here: http://linkednotbound.net/2010/07/19/self-descriptive-hypermedia/ While I think I've got Eric's point about the usefulness of standard media types (I've never thought of discarding them, but of exposing them along with custom ones), I've finally found links to these posts of Fielding's: http://tech.groups.yahoo.com/group/rest-discuss/message/6594 http://tech.groups.yahoo.com/group/rest-discuss/message/6613 So let me clarify the architecture I have in mind, so that you can tell me "how much" of a RESTful-style architectural instance it is. I'm in a huge but _closed_ network, with (almost) no interaction with the external internet. I have to write a quite complex application where I have full control over the clients and the servers, but not the intermediaries. Some of the clients are for humans (almost equally old browsers and Silverlight applications). But many of the clients also work like (web) servers for other users (needing my data and elaborations in order to present them). Some of those clients could, in the future, go onto the internet, but my application will not. Since I'm not planning to write a protocol, I thought that I could use HTTP (which I know quite well). And since I've read RESTful Web Services, it seemed to me quite a good approach. Since we will write both the client and the server, we are exactly in the condition described by Andrew: > a) building a new kind of browser and associated media type(s); and b) > designing "sites" for that new kind of browser. It seems that we "easily" adopt all the constraints apart from media type standardization. This, paraphrasing Fielding, would be "less RESTful" than an application that would serve HTML. 
But, AFAIK (currently), it should be a small effort to make it more RESTful by simply providing "filters" that translate the custom mime types into HTML or Atom. Moreover, since we will write the client, I think that we will use WADL as a hypertext language. This is because we will have a variable number of representations lacking connectedness (they are like plain GIFs or bitmaps, and we will expose them as binary data / JSON / XML as appropriate), one that will provide some connections to other resources, and the WADL itself. I'll document properly the +binary, the +json and the +xml mimes. Probably none of these home-made mime types will become IANA-registered standards (unless we surpass both Oracle and Microsoft in the financial application development market). To make it "more" RESTful we would translate both the WADL and the other connected mime types into HTML or Atom, and properly transform the unconnected resources into human-readable HTML pages. Am I still missing something big? Could this be called RESTful? Giacomo PS: I really have to thank you all for the effort of clarifying all these things. I think that I'll apply all your suggestions verbatim, starting from the next internet application I'll have to design.
Giacomo Tesio wrote: > > Probably none of these home made mime types will become IANA > registered standards. > Don't confuse 'registered' with 'standardized'. The procedure for registering a media type identifier is a two-minute phone call which will tell you either that it's already taken, or that you can have it. The purpose of this is simply to avoid collisions on the Web, i.e. application/vnd.foo+xml should only refer to one media type, not a set of media types from different vendors. Obviously, this is not a concern on your intranet, where there's no danger that your outfit's application/vnd.foo+xml gets confused with some other outfit's application/vnd.foo+xml. Media type identifiers don't need to point to standardized media types in order to be registered. Media type identifiers only need to be registered if they're to be used on the public Internet. -Eric
It's funny how I do think that sometimes I post things too hastily, without any concern to justify what I wrote, because in the end what I wrote are my opinions and nothing else. Of course they are opinions taken from elsewhere, but ones for which I'm solely responsible. When I'm wrong, it's me who's wrong. But I never present my opinions as if they were "the truth". However, since you were so nice as to respond to my posts instead of just ignoring them, this time I did take the trouble to try to support my opinions with references to some external, more or less authoritative sources, so you wouldn't think I'm talking just for the sake of talking, and so we could have some external, common ground on which to try to understand each other. Not that I was expecting you to try that, but nevertheless. So I must admit I was surprised by your tone, even more arrogant than usual. But not surprised that you once more present your opinions as "the truth", with little or no reasoning besides "because that's the way it is" and your quotations "ad eternum" -- not to say "ad nauseam" -- of Roy Fielding, more often than not without context. If I'm allowed the imagery, it's like you're convinced that Roy is God and you are His Prophet... Actually, I found it interesting that both you and Jan (whom I consider to be one of the experts on this list, whose opinions I hold in high regard, independently of whether I agree or not) have quoted exactly the same passage from Roy's dissertation in support of two opposite conclusions... Now for the hard part: to comment on your comments without flaming, although some of them were already addressed by other members of this list with far more knowledge than me. On Wednesday, July 21, 2010, Eric J. Bowman <eric@...t> wrote: > António Mota wrote: >> >> However, those standards you mention are standards for a "human >> consumable" web, why should we adapt that to the "machines talking to >> machines" web as TB-L mention in his "I have a dream for the Web" >> view? [1] >> > > What are you even talking about? 
HTML media types have always been > machine-readable. There is no human involved, correct me if I'm wrong, > in Google's index of the Web. The very fact that alternate browsing > devices can guide the blind through an HTML table, proves the machine-readability of the media types I mention. That's not even taking RDFa > into consideration. I was referring to that m2m vision of the future, where direct human consumption in a structured way, as there is now using browsers, will no longer be necessary. It will still be possible, but not necessary. (Other, non-structured ways will certainly appear, as is characteristic of "pervasive computing".) One of the characteristics of XML, as presented in ref. [2], is that XML should be human-readable, even if its purpose is m2m consumption. I think this also answers the nasty remarks below where you refer to blind people. And this vision is not mine; it's from TB-L, whom I cited thinking you would consider him "authoritative" and someone who knows what he is talking about. > > How on Earth do you read that reference, and decide that it means HTML > should only be written for machines? > I don't understand where I said that; if I had meant something along those lines (and I did not), it would be the opposite. >> >> HTML is as much as structuring text as it is about presentation >> (although the tendency and recommendation is to use CSS for that, but >> nevertheless the markup is there). >> > > Yes, HTML can be misused for presentation. How does that change the > fact that, properly used, it's about structuring data? Not just text, > data. Since 2004, all my markup is based on structuring data and > documents logically, separate from their presentation, all of which may > be found in external CSS documents. > > How on Earth does pointing out that it's *possible* to mix structure > and presentation, begin to make what I'm saying wrong? Is it possible > for you to tone down the argumentativeness of, well, _all_ your posts? 
> Can be misused? HTML was created for that, and the fact that at some point it was *recommended* that part of the formatting would be better done separately does not make that false. It's like saying it is a misuse to write on paper, since we now have word processors... What I was saying is that, although HTML can be used for m2m communication, there are better, more specialized tools for it. >> >> So, if my main goal is not to make >> a website but to allow services to access my data in a "machines >> talking to machines" way, >> > > Who says the main goal of a REST system is making a website? What a > REST system *is*, is a distributed hypertext application. A user, > could be human or machine, is attempting to perform a task, using an > interface transferred over the network. The main goal of a REST system > is to provide a hypertext application interface over the network. > And where did I say that the main goal was that? C'mon, I'm not *that* stupid; either my English is very bad or you misread what I say, intentionally or not. >> >> why should I care about <b>bolds</> and >> <i>italics</i>? >> > > This is yet another question for the sake of being argumentative, isn't > it? Inline markup has nothing to do with the structure I was speaking > of, which comes from block-level markup. If your data doesn't include > natural language which may contain emphasized passages, then, > *obviously*, these tags are irrelevant to your system. Funny how later on you contradict that. But again, either I don't know how to explain myself or you understand things the way it suits you. You once again mistook the trees for the forest. I and B are trees; HTML is the forest. So once again, why should I use a general-purpose markup language that was designed to structure *and* format general data, when I can use XML, which was designed "as a set of rules for encoding documents in machine-readable form", "widely used for the representation of arbitrary data structures"? Is this definition wrong? 
I know what you say, HTML is ubiquitous, etc... But isn't XML ubiquitous as well? In HTML you already have meaning like TD and TR. And what? You have a table. Don't you have to define whether that table is a table of potatoes or apples or whatever? Don't you have to give a specific meaning to the general meaning of table? So why not do it in XML? The only advantage in using an HTML table is that it renders in browsers as a table. Formatting... > > A cargo manifest is still just tables and lists, structurally. Same > with genetic data. Or flight reservations. Or concert tickets. Or > anything else you can think about interacting with over the Web -- you > can model it using ubiquitous media types. BTW, <b> and <i> aren't > automatically presentational. Semantically, <em>USS Constitution</em> > is improper -- ship names are marked up with <i>, so if the cargo > manifest has anything to do with maritime shipping, then yeah, you need > to care about <i>. Or re-invent that wheel in your custom media type. > You contradict what you said, but that's not important. Nor unusual. >> >> Why should I care about accessibility at all? That is >> a problem for web designers to deal, not to a "data provider" service >> designer... >> > > You're right. Why shouldn't the ability to see be a prerequisite for > developing and maintaining your system? After all, it's not like > there's any solution out there to the problem of communicating text to > the blind... I don't know about you, but that sounds mighty > discriminatory to me. > > I don't care if your system is targeted to machine users. You don't > have a bunch of Cylons developing it, do you? No? You're using > humans? Yeah, I thought so. How on Earth is a human going to develop, > let alone debug, a hypertext interface that's only machine readable? > > So, as it turns out, humans will have to use your hypertext after all... > which is also the basis of the Web's success. 
How will another entity > interoperate with, or re-use, your system if their humans can't access > your interface? Again, the reason for choosing HTML as your hypertext > for driving the application, is because the option is there for > accessibility. > > Fine, you don't give a rat's ass. But what if, next month, your boss > does? Perhaps as the result of an anti-discrimination in the workplace > lawsuit, since you're excluding the blind from a job that only involves > reading and manipulating text... Wouldn't it be easier to just add in > the accessibility markup to your existing HTML, than have to re-invent > that wheel for your custom media type? > That's the part where you are just being nasty, isn't it? >> >> So why not put the focus on XML, that "is a set of rules for encoding >> documents in machine-readable form","widely used for the >> representation of arbitrary data structures" [2]? >> > > Oh, good grief. Do I honestly have to type (X)HTML every time? Isn't > it obvious when I say "HTML media types" that I'm referring to > text/html _and_ application/xhtml+xml? Nitpicking semantics is, again, > argumentative. > >> >> Why use HTML that "tells you how data should look" instead of XML that >> "tells you what it means", that "applies context to the data", and >> "separates data content from data presentation"[3]? >> > > Because HTML doesn't tell you how data should look, you're making false > assumptions. > Not me then, I was just quoting... But I already commented about this. >> >> Wouldn't you agree that REST is much more than a glorified way of >> making web sites? >> > > Yes. But your habit of asking questions in that form tempts me to > argue against it to the bitter end. A telephony system that doesn't > involve HTML or browsers can be RESTful, and if it is, a data-view > analysis of it will look exactly like the data-view analysis of a > website. This holds true for any REST system. > >> >> So I would say: ... Create your own Media-Type. 
Create hundreds, >> thousands of new Media-Types. Don't wait for some organization to >> define then from the top-down. Do it yourself from the ground-up and >> encourage others to do the same. >> > > Yeah, you would -- you're always arguing against the importance of some > aspect of REST or another. This is absolutely, unequivocally NOT the > REST style, and represents an utter failure to grasp the notion of a > media type. Look, HTML 5 will use the same media type as HTML 4, and > HTML 3, and HTML 2, and HTML before that. HTML 5 will be used to do > all the same complicated things, like airline reservations, over the > Web, using the same media type that's always been used. > > This is the point of REST, folks. If you're putting a hypertext > interface out on the Web for human consumption, then it does not matter > what the nature of your data is, you can create an HTML interface. The > worst possible thing you can do, is go about minting new media types > for each resource type you create. > > If you do that, then I expect the JPEG images in each image gallery you > create, to have unique media types too, entirely defeating the purpose > and, most critically, breaking the self-descriptive messaging > constraint. REST is based on the principle of generality here, which > is why it says, clearly in black-and-white, to re-use _standard_ media > types. That it also says the set of such types is evolving, is not > some loophole for the creation of thousands of new media types every > month, not one of which will EVER become a standard. NOT REST!!! > Well, Jan and others have already commented on that, I think. And better than I could. Lots of people whose knowledge is at least as respectable as yours don't agree with you. Instead of just shouting NOT REST, maybe you should add "in my not-so-humble opinion". 
It's late and I'm tired, but I don't want to finish without saying that I do enjoy debating with you, and I think debating is a good way of learning, and in no way do I feel any disrespect for you, as was implied elsewhere. > -Eric >
On Thu, 22 Jul 2010 02:32:32 +0100 António Mota <amsmota@...> wrote: > > I think this will also answer the nasty remarks below where you refer > blind people. > Advocating for accessibility, such that the blind may actually work with what amounts to just text, is nasty how? > > And this vision its not mine, is from TB-L, that I used thinking you > would consider him "authoritative" and someone who knows what is > talking about. > There is nothing in Tim's vision which obsoletes HTML, certainly not now that we have RDFa, which he couldn't have anticipated back when he wrote that, could he? Stop posting things which don't dispute what I say, and framing it as if they do. -Eric
António Mota wrote: > > So once again, why should I use a general purpose markup language that > was designed to structure *and* format general data, when I can use > XML that was designed "as a set of rules for encoding documents in > machine-readable form","widely used for the representation of > arbitrary data structures"? Is this definition wrong? > No, but it does fit perfectly with using XHTML, so I miss your point. > > I know what you say, HTML is ubiquious, etc... But isn't XML ubiquos > as well? In HTML you already have meaning like TD and TR. And what? > You have a table. Don't you have to define if that table is a table of > potatos or apples or whatever? Don't you have to give a specific > meaning to the general meaning of table? So why don't you do it in > XML? > The only aspects of your system which care whether a table represents apples or potatoes, are those which are out-of-scope to REST. That the data is structured as a table, is something the media type definition tells you. So you don't get that in XML, unless you're using XHTML, or some other hypertext language which has defined tabular-data semantics, like DocBook. > > The only advantage in using a HTML table is that it renders in > browsers as a table. Formatting... > Structure. To have a table is to imply a structure of horizontal rows and vertical columns. To use a table as layout control, is formatting. > > > Fine, you don't give a rat's ass. But what if, next month, your > > boss does? Perhaps as the result of an anti-discrimination in the > > workplace lawsuit, since you're excluding the blind from a job that > > only involves reading and manipulating text... Wouldn't it be > > easier to just add in the accessibility markup to your existing > > HTML, than have to re-invent that wheel for your custom media type? > > > > That's the part where you are just being nasty, isn't it? > That's colorful language. Nastiness is in the eye of the reader. 
The argument remains, that if you're creating a hypertext interface, there's no excuse for it not to be accessible to humans, even those with disabilities. I don't care if it's an m2m payload. -Eric
That depends on how you maintain the conversation state. If you put it in a database as resource state, then you are OK; if you maintain it in-memory as a session, then you are not. In the first scenario, you can link to it, bookmark it, etc.; in the other, it is unnamed in-memory state that will cause problems if you want to scale with clustered systems. That's the take-away I got from Stefan's excellent Software Engineering Radio interview [1].
Sean.
[1] http://www.se-radio.net/podcast/2008-05/episode-98-stefan-tilkov-rest
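Sean's distinction can be illustrated with a toy sketch (all names are hypothetical; a dict stands in for the shared database): conversation state stored as an addressable resource survives clustering and can be linked to or bookmarked, whereas per-process session state cannot.

```python
# Conversation state as resource state: the state gets its own URI, so any
# server with access to the shared store can resolve it, and the client
# carries a link rather than the server carrying a session.
import uuid

resource_store = {}   # stand-in for a shared database, keyed by URI

def begin_conversation(data):
    """Mint a URI for the conversation state -- linkable, bookmarkable."""
    uri = f"/conversations/{uuid.uuid4().hex}"
    resource_store[uri] = data
    return uri

def get_conversation(uri):
    """Any node in the cluster can dereference the link."""
    return resource_store[uri]

uri = begin_conversation({"step": 1})
assert get_conversation(uri) == {"step": 1}
```

With the in-memory-session alternative, the equivalent of `resource_store` lives inside one process, so the state has no name the client can hold onto and no other node can serve it.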
--- On Wed, 21/7/10, António Mota <amsmota@...> wrote:
From: António Mota <amsmota@...>
Subject: Re: [rest-discuss] Fw: Re: HTTP reliability - in order msg delivery?
To: "Sean Kennedy" <seandkennedy@yahoo.co.uk>
Cc: "Rest Discussion Group" <rest-discuss@yahoogroups.com>
Date: Wednesday, 21 July, 2010, 10:32
On 21 July 2010 10:06, Sean Kennedy <seandkennedy@...> wrote:
Hi Bill,
Thanks for the reply. I agree: I think both Jim's and Jan's ideas on this have been very helpful. I believe it comes down to a central thought process - the server knows when it gets a msg out of order and, by using the correct response code (409) and links, can inform the client how to get back in sync.
Sean.
I don't agree with that, not one little bit. That smells like maintaining conversation state on the server...
There should be no conversation state. To hide it behind a DB is just that: hiding it. There could be application state, but unless there are business reasons for it, it's still just hidden. But if there is a business reason to store that as part of application state, then, since application state is driven by the server, there will never be a "msg out of order" - because it is the server that instructs the client what msg to send, and the server already knows the order. Unless the client is trying to cheat, in which case the server can assume that it is a malicious client, and it should *not* instruct it to do the right thing; it should instead send it a GFY code. (This is just the way I look at it. I think I should start adding a "Disclaimer: The opinions expressed here are just my opinions and only by chance do they correspond to true facts." :) 2010/7/22 Sean Kennedy <seandkennedy@...> > > That depends on how you maintain the conversation state. If you put it in a database as resource state then you are ok; if you maintain it in-memory as a session then you are not. In the first scenario, you can link to it, bookmark it etc..; in the other it is unamed in-memory state that will cause problems if you want to scale with clustered systems. That's the take-away I got from Stefan's excellent software engineering radio interview [1]. > > Sean. > > [1] http://www.se-radio.net/podcast/2008-05/episode-98-stefan-tilkov-rest > > --- On Wed, 21/7/10, António Mota <amsmota@...> wrote: > > From: António Mota <amsmota@...> > Subject: Re: [rest-discuss] Fw: Re: HTTP reliability - in order msg delivery? > To: "Sean Kennedy" <seandkennedy@...> > Cc: "Rest Discussion Group" <rest-discuss@yahoogroups.com> > Date: Wednesday, 21 July, 2010, 10:32 > > On 21 July 2010 10:06, Sean Kennedy <seandkennedy@...> wrote: > > > > Hi Bill, > Thanks for the reply. I agree: I think both Jim's and Jan's ideas on this have been very helpful. 
I believe it come down to a central thought process - the server knows when it gets a msg out of order and by using the correct response code (409) and links, can inform the client how to get back in sync. > > Sean. > > > I don't agree with that not a little bit. That smells like maintaining conversation state in the server... >
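The position above can be sketched as a toy server-driven workflow (the workflow steps and URIs are invented for illustration): the order of steps lives on the server, each response names the only URI the client may hit next, and an out-of-order request is simply refused with a 409.

```python
# Server-driven application state: since the server hands out exactly one
# 'next' link per response, a well-behaved client can never get out of
# order; a client that skips ahead is refused, not resynchronised.
workflow = ["submit", "confirm", "pay"]

def next_link(state):
    """The single hypermedia link the server offers at this state."""
    return f"/orders/1/{workflow[state]}"

def handle(request_uri, state):
    if request_uri != next_link(state):
        return 409   # cheating or confused client: no help offered
    return 200

assert handle("/orders/1/submit", 0) == 200
assert handle("/orders/1/pay", 0) == 409  # tried to skip ahead
```

Whether to answer the bad request with links back into the workflow (Sean's view) or with a bare refusal (the "GFY code") is exactly the point under debate.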
--- In rest-discuss@yahoogroups.com, Giacomo Tesio <giacomo@...> wrote: > > May be I'm getting the point (even if it's quite hard to deeply understand > all your posts :-D). > > I've found here something very interesting > http://linkednotbound.net/2010/07/19/self-descriptive-hypermedia/ > > <http://linkednotbound.net/2010/07/19/self-descriptive-hypermedia/>While I > think I've got the Eric point about the usefulness of standard media types > (I've never thought to discard them, but to show them along with custom > ones) I've finally found links to these Fielding's posts > http://tech.groups.yahoo.com/group/rest-discuss/message/6594 > <http://tech.groups.yahoo.com/group/rest-discuss/message/6594> > http://tech.groups.yahoo.com/group/rest-discuss/message/6613 > <http://tech.groups.yahoo.com/group/rest-discuss/message/6613> > So let me clarify the architecture I've in mind so that you could tell me > "how much" it is a RESTful style architectural instance. > I hope you came away from the above links with the sense that, as the designer, you have to make the key decisions on what makes sense for your architecture. There is no silver bullet approach - you have to consider the tradeoffs and decide what will work best for you. That said, while I don't have all the background into what you are doing, I can provide a few comments based on what you've provided. > I'm in a huge but _closed_ network, with (almost) none interaction with the > external internet. > I've to write a quite complex application where I've full control over the > client and the servers, but not the intermediaries. > Ok. So based on that I would say that if you think it would make things better, it is perfectly fine to use a custom hypermedia type if nothing else out there meets your needs. Essentially, what you should do is adopt a convention something like: "We will use IANA standard media types, plus the custom media types documented at internal URI X". I think this is ok for a closed system. 
As an aside, for a "closed" system, full-blown REST might also be overkill. But since you also described your system as "huge", I could see REST still being useful. > Some of the clients are for humans (almost equally old browsers and > silverlight applications). > But many of the clients also work like (web) servers for other users > (needing my data and elaborations for presenting them). > Some of those clients could, in the future go into the internet, but my > application will not. > > Since I'm not planning to write a protocol I've thought that I could use > HTTP (that I know quite well). > And since I read RESTful Web Service it seemed to me quite a good approach. > There's nothing wrong with using HTTP if your architecture is not fully RESTful. > Since we will write both the client and the server we are exactly in the > condition described from andrew: > > > a) building a new kind of browser and associated media type(s); and b) > > designing "sites" for that new kind of browser. > > > Ir seem that we "easily" adopt all the constraints a part from the media > type standardization. > As I was getting at above. "Standard" within your closed system is the next best thing and in my opinion still RESTful. > This, paraphasing Fielding, would be "less RESTful" that an application that > would serve HTML. > Depends on how you read it. Maybe "less RESTful" but not "un-RESTful", mainly because your system is closed. > But, AFAIK (currently), it should be a small effort to make it more RESTful > by simply providing "filters" that translate the custom mime types in HTML > or ATOM. > I don't see why this makes it more RESTful (if your clients are still using the custom mime types). It may be useful for debugging and management purposes though. > Moreover, since we will write the client I think that we will use WADL as an > hypertext language. 
> This becouse we will have a variable amount of representations lacking > connectedness (they are like a plain gifs or bitmaps, and we will expose as > binary data / json / xml as appropriate), one that will provide some > connections to other resources and the WADL itself. > Ok. This means that you won't always have HATEOAS? Could be OK if you don't need its benefits -- but it would likely help with the "huge" aspect of the system. > I'll document properly both the +binary, the +json and the +xml mimes. > Good. > Probably none of these home made mime types will become IANA registered > standards (unless we will surclass both Oracle and Microsoft in the > financial application development market). > That's OK if they are just standard for your system. > To make it "more" RESTful we would translate both the wadl and the other > connected mime type to HTML or ATOM and properly transform the unconnected > resources in human readable html pages. > Again, not sure this really makes it more RESTful, but it could still be a useful thing to do. > > Am I still missing something big? > Could this be called RESTful? > Don't get hung up on whether this is RESTful or not. You should be more concerned with having a good architecture that has the right properties to meet your needs. Andrew
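The "filter" idea Giacomo floats can be sketched as follows. Everything here is invented for illustration (the media type name, the data shape, the function names): the service keeps one internal representation and negotiates either the custom type or a human-readable HTML rendering per request, so debugging with a browser costs only a translation layer.

```python
# Toy content-negotiation filter: one internal representation, two
# renderings selected by the (simplified) Accept value.
from html import escape

def render(order, accept):
    if accept == "application/vnd.example.order+xml":  # hypothetical custom type
        return "<order><item>%s</item><qty>%d</qty></order>" % (
            escape(order["item"]), order["qty"])
    if accept == "text/html":  # the "filter": a human-readable table
        return ("<table><tr><th>item</th><th>qty</th></tr>"
                "<tr><td>%s</td><td>%d</td></tr></table>"
                % (escape(order["item"]), order["qty"]))
    raise ValueError("406 Not Acceptable")

order = {"item": "TP-42", "qty": 3}
html_view = render(order, "text/html")          # for browsers / debugging
xml_view = render(order, "application/vnd.example.order+xml")
```

As Andrew notes, this mainly buys debuggability and management visibility rather than extra RESTfulness, as long as the machine clients keep speaking the custom type.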
On Thu, Jul 22, 2010 at 4:14 PM, wahbedahbe <andrew.wahbe@...> wrote: > > > I hope you came away from the above links with the sense that, as the > designer, you have to make the key decisions on what makes sense for your > architecture. There is no silver bullet approach - you have to consider the > tradeoffs and decide what will work best for you. > That was clear even before the links. > This, paraphasing Fielding, would be "less RESTful" that an application > that > > would serve HTML. > > > > > Depends on how you read it. Maybe "less RESTful" but not "un-RESTful" > mainly because your system is closed. > > Exactly. So, as far as it's closed, it would be less RESTful. > > > But, AFAIK (currently), it should be a small effort to make it more > RESTful > > by simply providing "filters" that translate the custom mime types in > HTML > > or ATOM. > > > > I don't see why this makes it more RESTful (if your clients are still using > the custom mime types). It maybe be useful for debugging and management > purposes though. > That would be for the (really remote) case in which it becomes public, with browser clients. > > > Moreover, since we will write the client I think that we will use WADL as > an > > hypertext language. > > This becouse we will have a variable amount of representations lacking > > connectedness (they are like a plain gifs or bitmaps, and we will expose > as > > binary data / json / xml as appropriate), one that will provide some > > connections to other resources and the WADL itself. > > > > Ok. This means that you wont' always have HATEOAS? Could be ok if you don't > need it's benefits -- but it would likely help with the "huge" aspect of the > system. > No, I do have HATEOAS: the WADL will point to the unconnected resources and sometimes to the connected ones. Maybe some WADLs will point to other WADLs. Don't get hung up on if this is RESTful or not. 
You should be more > concerned with having a good architecture that has the right properties to > meet your needs. > Sure. Actually, being RESTful is not a requirement. :-) But being RESTful would mean following a "simple" and well-known path, which is surely a plus (given that all the other components of the project are at the cutting edge in their domain, and thus could miserably fail). Thanks a lot. Giacomo
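The "+binary / +json / +xml" variants discussed above imply some server-side content negotiation on the Accept header. A minimal sketch follows; the vendor media-type names are invented for illustration, and q-values are ignored (a real implementation would honor them and their ordering):

```python
# Hypothetical vendor media types modeled on the "+json / +xml / +binary"
# variants described in the thread; the names are illustrative only.
SUPPORTED = [
    "application/vnd.example.quote+json",
    "application/vnd.example.quote+xml",
    "application/vnd.example.quote+binary",
]

def negotiate(accept_header):
    """Return the first supported media type listed in Accept, else None.

    Simplified: q-values and wildcards are ignored; the client's listed
    order wins.
    """
    requested = [part.split(";")[0].strip() for part in accept_header.split(",")]
    for media_type in requested:
        if media_type in SUPPORTED:
            return media_type
    return None
```

A purpose-built client would then send exactly the variant it understands, e.g. `Accept: application/vnd.example.quote+json`.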
I think that is where PUT can be used to create a resource. Consider two services that handle a collection of entities. Each entity is unique in that collection. The collection needs to be transferred from one service to the other. Collections containing the same set of entities can have different URLs on different services. In order to create an entity in a newly initialized collection, PUT is used. So if the entity is already there, the service will consider whether an update is needed. If the entity is not there yet, it creates the entity. Cheers, Dong On Wed, Jul 21, 2010 at 11:51 AM, Craig McClanahan <craigmcc@...> wrote: > > > > > On Wed, Jul 21, 2010 at 10:25 AM, Bryan Taylor <bryan_w_taylor@...> wrote: > >> >> >> I've been discussing PUT for create with some coworkers. This is certainly >> valid >> HTTP, but I'm wondering if people consider it RESTful. It seems to me that >> >> giving the client control over part of the URI requires them to understand >> how >> resources are organized and forces them to construct URIs as non-opaque >> strings. >> So I wonder if this conflicts with HATEOAS. It potentially also puts a >> burden on >> the client to avoid namespace collisions, so that it must adopt some >> uniqueness >> logic which again requires application state that seems problematic. >> >> While "client control over part of the URI" can be a problem, sometimes it > is exactly what you want. Consider uploading an image to a photo sharing > site -- you (the client) might very much care what the ultimate filename is, > and probably also what folder it gets put in. > > I would tend to think more about the difference in idempotency (if the same > request gets submitted twice, say because the client didn't hear the initial > response for some reason, do two things get created or just one?) between > PUT and POST. In the case of PUT, it's up to the server to do the right thing, > whereas the fact that POST is not idempotent shifts that responsibility to > the client. 
> > That all being said, thinking back over all the web services I have built > over the last few years, POST is used for creating new resources in nearly > every case. > > Craig > > >
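Craig's idempotency point can be made concrete in a few lines. This is only a sketch against an in-memory dict standing in for a collection resource; the URI keys and status codes are illustrative, not any particular framework's API:

```python
# Sketch of the create-or-update semantics of PUT discussed above.
# Replaying the same PUT leaves the store unchanged -- the idempotency
# property that distinguishes PUT-create from POST-create.

def put_entity(store, uri, entity):
    """Create the entity at `uri` if absent, else replace it.

    Returns an HTTP-style status: 201 (Created) or 200 (OK).
    """
    created = uri not in store
    store[uri] = entity
    return 201 if created else 200

store = {}
first = put_entity(store, "/entities/42", {"name": "A"})   # creates -> 201
replay = put_entity(store, "/entities/42", {"name": "A"})  # replay -> 200, no duplicate
```

If the client's initial response got lost and it retries the same PUT, the server ends up in the same state either way, which is exactly why the burden Craig mentions shifts to the client only under POST.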
You recognize this, right? (thanks Giacomo for the links) The *** are mine, of course. The broader question is what does it take to create an *evolving* > set of standard data types? > > ***Obviously, I can't say that all data types have to be *the* standard > before they are used in a REST-based architecture.*** > > At the same time, I do require enough standardization > to allow the data format sent to be understood as such by the > recipient. Hence, both sender and recipient agree to a common > registration authority (the standard) for associating media types > with data format descriptions. > > ***The degree to which the format chosen is a commonly accepted standard is > less important than making sure that the sender and recipient agree to the > same thing*** > > , and that's all I meant by an evolving set of standard data types. > > Sure, it is easier to deploy the use of a commonly understood data > format. However, > > ***it is also more efficient to use a format that is more specifically > intended for a given application.*** > > Where those two trade-offs intersect is often dependent on the > application. > > ***REST does not demand that everyone agree on a single > format for the exchange of data -- only that the participants in the > communication agree.*** > > Beyond that, designers need to apply their > own common sense and ***choose/create*** the best formats for the job. > But I suppose where I see white you see black... Please don't use the stratagem of misquoting what I said, because I'm guessing you're going to say I'm advocating the use of custom xml media types only, so remember I said: use HTML for your general purpose (data/presentation) > services, use XML for your specific purpose (data) services. > > BTW > The argument remains, that if you're creating a hypertext interface, > there's no excuse for it not to be accessible to humans, even those > with disabilities. I don't care if it's an m2m payload. 
> In relation to this, I don't believe you're being serious, you're just picking on me. I can have hypermedia formats in binary form, do you also want to apply direct accessibility to that? Am I, as a designer of a data access service API, responsible for what the final use of that data will be? You know what an API is, right? And you know that not all data ends up in web sites, which is where that accessibility counts, don't you? Do you use REST for situations other than building web sites at all? Ah, it's just a waste of time to take you too seriously... I have to lighten up and read you lightly.
Giacomo, we had a similar situation to yours (in-house, machine clients...) - with one big difference, ours is a multi-protocol infrastructure, not only HTTP - so I'll quickly outline what we did: - no HTML, only XML - the XML in most situations "mimics" objects in our business domain - no "standard" definition of media types, they are agreed on in common by the client devs and the server devs, and published in a Wiki - we didn't give much consideration to caching or intermediaries - since the fat clients do much of the processing, we aren't fully driven by the server/hypermedia - we use WADL only in a reporting service, basically to discover at runtime the number/types of parameters for each report. So, basically, in the words of Eric, it's NOT REST. But we did what we could and I hope we can restify it more as long as we're learning...
One more thing: when you have your first design, try to think ahead 1 or 2 years, try to figure out if something you leave out now will be needed in that hypothetical future. If yes, try to weigh the pros and cons of spending more time now developing things that maybe will never be needed, against the hypothetical benefits that hypothetical future could bring. Because chances are that if you don't do it now while it's fresh you'll never do it in the future. On Thursday, July 22, 2010, António Mota <amsmota@...> wrote: > > Giacomo, we had a similar situation to yours (in-house, machine clients...) - with one big difference, ours is a multi-protocol infrastructure, not only HTTP - so I'll quickly outline what we did: > > - no HTML, only XML > - the XML in most situations "mimics" objects in our business domain > - no "standard" definition of media types, they are agreed on in common by the client devs and the server devs, and published in a Wiki > - we didn't give much consideration to caching or intermediaries > - since the fat clients do much of the processing, we aren't fully driven by the server/hypermedia > - we use WADL only in a reporting service, basically to discover at runtime the number/types of parameters for each report. > > So, basically, in the words of Eric, it's NOT REST. But we did what we could and I hope we can restify it more as long as we're learning... > -- *Disclaimer: The opinions expressed herein are just my opinions and only by chance they are right.*
António Mota wrote: > > Ah, it's just a waste of time to take you too seriously... I have to > lighten up and read you lightly. > I assure you I am serious. With the above remark, I will no longer be answering your questions. You are obviously more interested in turning every thread you participate in into flaming uselessness. This goes for threads I don't participate in, as well. Your mind is closed to anyone who tells you what you don't want to hear. If anyone thinks you're asking me a legitimate question, I'll let them re-phrase it; I will no longer be responding to you, as your interest lies in conflict, not learning. -Eric
António Mota wrote: > > So, basically, in the words of Eric, it's NOT REST. But we did what > we could and I hope we can restify it more as long as we're > learning... > If you aren't implementing the constraints of REST, then your project is NOT REST, by definition, not by the words of Eric. As to learning, well, that requires an open mind of the sort you don't appear to have. -Eric
> Your mind is closed to anyone who tells you what you don't want to hear. It's funny that that sentence is very close to how I would describe you. You see, it's not the first nor the second time you pick small, relatively unimportant parts of what I say and transform them into the center of the discussion, leaving the important things unanswered - especially, I noticed, when those things don't fit or contradict what you say. Like now, where you will not comment on the quotes from Roy that in my understanding disprove your points of view. No harm will descend upon the world if you don't answer my questions; of course you were never obliged to do it. Nevertheless it was never my intention to flame the discussion; actually I was in a good mood when I wrote, not pissed off by your response. So if you feel that I flamed you, I apologize. -- *Disclaimer: The opinions expressed herein are just my opinions and only by chance they are right.*
I *know* it is not fully REST but unlike many people in this list I don't see it as a black or white, all-or-nothing thing. I see value in what the partially REST infrastructure does, I reckon where its failures are, and I even know how to correct them, were I given the time to do it. All thanks to what I've been learning in the last two years. On Thursday, July 22, 2010, Eric J. Bowman <eric@...> wrote: > António Mota wrote: >> >> So, basically, in the words of Eric, it's NOT REST. But we did what >> we could and I hope we can restify it more as long as we're >> learning... >> > > If you aren't implementing the constraints of REST, then your project > is NOT REST, by definition, not by the words of Eric. As to learning, > well, that requires an open mind of the sort you don't appear to have. > > -Eric > -- *Disclaimer: The opinions expressed herein are just my opinions and only by chance they are right.*
António Mota wrote: > > You see, it's not the first nor the second time you pick small, > relatively unimportant parts of what I say, and transform them into the > center of the discussion, leaving the important things unanswered - > especially, I noticed, when those things don't fit or contradict what > you say. Like now, where you will not comment on the quotes from Roy that > in my understanding disprove your points of view. > What I say doesn't contradict Roy. You only *think* it does. Therein lies the essence of my objection to your presence here. Despite not having any experience with REST and not understanding it, you make authoritative statements in response to others, or you make authoritative statements like I'm contradicting Roy, when you just don't know what you're talking about. You don't ask questions, you challenge others with strawman restatements of what they've said while bitching that everyone *else* is arrogant, or whatever. I've had enough. Cluttering up every thread with quotes from Roy that you think I'm contradicting, then requiring me to straighten you out, is far too time-consuming to bother with. You don't know what you're talking about, because you have yet to learn REST, but if anyone disagrees with you it's because of some flaw on their part (like how this week, in your chat with Bill, you have once again resorted to casting aspersions and ad hominems against the entire REST community -- like a stuck record). Your abrasive attitude and unwillingness to learn brings out the worst in me, in response. Therefore, no more responses from me. -Eric
António Mota wrote: > > I *know* it is not fully REST but unlike many people in this list I > don't see it as a black or white, all-or-nothing thing. I see value in > what the partially REST infrastructure does, I reckon where its > failures are, and I even know how to correct them, were I given the > time to do it. > And all this, without having invested even a fraction of the time it's taken for *anyone* else, Roy included, to figure out REST. You, who couldn't possibly know REST because you're just starting with it by your own admission, are absolutely certain that you know better than those who have, to the point where you are comfortable finding fault with every expert in the field for not seeing REST as you do. I spent many, many years _listening_ and _asking_ before I ever presumed to start _teaching_. You're like an incoming freshman who thinks he knows better than his professors -- obnoxious as hell. -Eric
Well, it is true that I don't think of this list as made of a few "enlightened" ones who made it their life's purpose to teach us, the mere ignorant, the mysteries of something that only those highnesses possess. I do think this is a list of peers, professionals of the same area, and as far as I'm concerned all at the same level of professional worthiness. I said it once and I repeat, since you want this discussion to go to this level: who the hell died and made you high priest? Why the frack do you think you're more deserving of respect than the others, and that your word should not be disputed? Is it because you've been trying to understand REST for the last 12 years and you haven't succeeded yet? If I had to spend 12 years on every technology I've worked with in my life I would be 200 years old by now. C'mon, in my book someone who spends 12 years trying to learn something and all he has is questionable opinions, I would call that person, well, limited, to refrain myself from harsher words... Get real, the time you spend learning means squat, zero, rien, nada... It's what you have learned, not the time. All I know of OSGi, for example, I learned from a guy half my age with maybe 1/10 of my professional years... and he did know about it, and I did learn from him. C'mon guys, REST is just a technology, stop acting like it's the most important thing on earth since sliced bread... That being said, I think it's good for me if you stop answering(!) my posts. Less time I waste reading your questionable opinions, to be kind in words. Of course, this is the third time you've tried to piss me off since you said that, so we'll never know. But please do it. You know what? GFY... It's "good for you", just in case... On 22 Jul 2010 21:23, "Eric J. Bowman" <eric@...> wrote: António Mota wrote: > > I *know* it is not fully REST but unlike many people in this list I > don't ... 
I think this is the very first time I quote Roy, and yet you're accusing me of "cluttering"... which, by the way, is what you do the most... Of course it contradicts, both in letter and in spirit. I hope you won't spend 12 more years to see it. You object to my presence here? Oh my, should I be worried? After all, you're the big cheese around here, aren't you? But I am going to refrain from posting in this list; there's no chance of learning from people that think they have some kind of higher power that elevates them above the others. Maybe if I don't have to read your inflated-ego mumbo-jumbo I could have a little more rest... On 22 Jul 2010 21:19, "Eric J. Bowman" <eric@bisonsystems.net> wrote: António Mota wrote: > > You see, it's not the first nor the second time you pick small, > relatively ... What I say doesn't contradict Roy. You only *think* it does. Therein lies the essence of my objection to your presence here. Despite not having any experience with REST and not understanding it, you make authoritative statements in response to others, or you make authoritative statements like I'm contradicting Roy, when you just don't know what you're talking about. You don't ask questions, you challenge others with strawman restatements of what they've said while bitching that everyone *else* is arrogant, or whatever. I've had enough. Cluttering up every thread with quotes from Roy that you think I'm contradicting, then requiring me to straighten you out, is far too time-consuming to bother with. You don't know what you're talking about, because you have yet to learn REST, but if anyone disagrees with you it's because of some flaw on their part (like how this week, in your chat with Bill, you have once again resorted to casting aspersions and ad hominems against the entire REST community -- like a stuck record). Your abrasive attitude and unwillingness to learn brings out the worst in me, in response. Therefore, no more responses from me. -Eric
Giacomo Tesio wrote: > > May be I'm getting the point (even if it's quite hard to deeply > understand all your posts :-D). > Unfortunately with REST, it often comes down to figuring out how to cancel the incorrect preconceptions newbies have been exposed to, before they can be taught properly. This thread has convinced me to semi-retire from this list for a while, until I've written a proper introductory article I can link to. No existing article stresses the hypertext notion, or explains how you've done REST before without even knowing it. Which certainly won't solve the problem I constantly run into here, which is completely unexpected pushback against fundamental truths, like XHTML being machine-readable, or assigning URIs to variants. As I've been saying a lot this year, REST isn't taught properly, which only makes it that much harder to teach REST, and this is the critical problem to address right now since the proliferation of non-REST REST APIs is prima facie evidence of a widespread failure to teach REST. I took freshman honors Chemistry from Nobel laureate Dr. Tom Cech. The prerequisite for honors Chemistry, was High School Chemistry. Dr. Cech started his first lecture by telling us to forget everything we ever learned in High School Chemistry, because it was wrong and would only get in the way. I feel I must apply such an attitude to REST. I'm probably the right person for that job. Roy's background is that he has formal training in the field of applied networked software architecture. Most of the other experts here are converts from other solutions like CORBA or WS-*. My first attempt at scaling a website, was a site I built by following REST's precursor, HTTP Request Object. Boy was I glad that I hadn't followed the herd and just used cookies! So my background, is as someone who never needed to be converted away from using SOAP, or cookies, or RPC, so I bring no such baggage to my understanding of REST as a style intimately related to websites. 
I simply never bought into anything else, because those other solutions were so damn complicated by comparison, and assumed that HTTP was only useful for simple things like websites, which I knew to be false. This website-ness of REST systems notion gets severe pushback, for reasons that elude me since the thesis comes right out and says so. This doesn't mean REST systems have to be websites, no not at all, far from it. But a telephony system that's built to work like a (properly-executed) website, even without browsers or even HTTP, is a REST system. The Web is lightning. REST is an attempt to bottle that lightning, but not just on the Web. > > I'm in a huge but _closed_ network, with (almost) no interaction > with the external internet. > I have to write a quite complex application where I have full control > over the client and the servers, but not the intermediaries. > So your intermediaries won't understand any custom media type you create. Right? > > Some of those clients could, in the future, go onto the internet, but > my application will not. > That's a confusing statement. In REST, "application" means the hypertext the user agent is executing. Are you saying that some clients may have access to the Web, but your REST _system_ (your resources) will remain behind the firewall? > > Since I'm not planning to write a protocol I've thought that I could > use HTTP (which I know quite well). > And since I read RESTful Web Services it seemed to me quite a good > approach. > You're mostly on the right track. > > Since we will write both the client and the server we are exactly in > the condition described by Andrew: > > > a) building a new kind of browser and associated media type(s); and > > b) designing "sites" for that new kind of browser. > That condition only applies if there's no media type suitable for the task at hand that's already ubiquitous enough that you don't have to write your own client library. This should be a last resort. 
> > It seems that we "easily" adopt all the constraints apart from the > media-type standardization. > If your media type is only to be used on your intranet, then it is standardized and ubiquitous, for all intents and purposes, within the boundaries of your system. > > This, paraphrasing Fielding, would be "less RESTful" than an > application that would serve HTML. > You're misinterpreting Roy's comments, which have the context of the Web at large, not an intranet. It's also a contradiction of what Roy has said elsewhere, that there are no shades of REST; RESTfulness is a binary proposition. So I disagree with the above-quoted comment, in favor of REST/NOT REST. What Roy's saying is that if the client and the server both agree to and understand the media type, REST is satisfied. The reason it's "less RESTful" than HTML is that to an intermediary, it isn't a self-descriptive message, due to the proprietary, non-ubiquitous nature of the media type identifier. If we're talking shades of REST, then it isn't a violation of the self-descriptive messaging constraint, because the user agent understands the media type. If we're talking REST/NOT REST, it is a violation of the self-descriptive messaging constraint, because intermediaries don't understand the media type. > > But, AFAIK (currently), it should be a small effort to make it more > RESTful by simply providing "filters" that translate the custom mime > types into HTML or ATOM. > REST is a layered system, so you can implement a transcoding gateway, which will perform that conversion for data bound for the Web. This has no bearing on whether your internal system is RESTful or not. > > Moreover, since we will write the client I think that we will use > WADL as an hypertext language. > WADL is hypertext, but it isn't generally suited for driving a REST API, even with a purpose-built client. In my system, WADL will be used as the output of OPTIONS requests, i.e. as an IDL. 
Maintenance bots will be coded against this WADL to exercise the protocol and ensure that the system is operating according to its specification -- if a response doesn't match the IDL, an error is reported to the log. Such bots are RESTful, but they aren't exercising the application, only the application interfaces. HTML can instruct a user agent dereferencing resource A that resource A may be manipulated via a PUT to resource B. All an IDL can instruct a user agent is that resource B accepts PUT, and resource A doesn't. The relationship, that updating B updates A, can't be expressed in WADL. WADL is not meant to present a user with a choice of transitions to the next application state, only what state transitions are allowed on a given resource. Using HTML where you need HTML-like capability is re-use according to the principle of generality, which is a key part of the REST style. Repurposing WADL to serve the same function as HTML goes against the REST style. > > This because we will have a variable number of representations lacking > connectedness (they are like plain gifs or bitmaps, and we will > expose them as binary data / json / xml as appropriate), one that will > provide some connections to other resources and the WADL itself. > The working REST demo I posted shows how XHTML representations may link to XSLT stylesheets to render XHTML application steady-states from Atom source documents. Once the interfaces expressed in XHTML are understood, a purpose-built client may interact with the Atom directly, bypassing the XHTML entirely. This would still be a REST system. REST doesn't require that a user agent follow the hypertext API; REST only requires that such an API exist. That API may even be derived from your WADL documents. I don't understand what you mean by "representations lacking connectedness". Image files are still hypertext, when served over HTTP. 
All an image file needs to be "connected" is a URI, and somewhere, a link to that URI (like from an HTML or Atom document). > > Am I still missing something big? > I think you're underestimating the work involved in repurposing WADL to function like HTML, particularly if you ever need accessibility. This is pragmatism, though, because from a REST standpoint I can't say that the resulting media type won't replace HTML, or otherwise proliferate. I can say that needing HTML-like capability and not choosing HTML violates the *spirit* of REST (which is what I mean when I say something isn't in the REST style, vs. saying something violates a constraint). -Eric
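Eric's "transcoding gateway" remark can be sketched in a few lines: a layered intermediary that rewrites a custom "+json" representation as HTML for Web-bound traffic, and passes everything else through untouched. The media type and the "links" field below are invented for illustration, not part of any system described in this thread:

```python
import json
from html import escape

def transcode_to_html(body, content_type):
    """Pass non-JSON representations through; rewrite +json ones as HTML.

    A toy stand-in for a transcoding gateway: the internal clients keep
    using the custom media type while Web clients see text/html.
    """
    if not content_type.endswith("+json"):
        return body, content_type  # not ours to convert
    data = json.loads(body)
    items = "".join(
        '<li><a href="%s">%s</a></li>' % (escape(link["href"]), escape(link["title"]))
        for link in data.get("links", [])
    )
    return "<ul>%s</ul>" % items, "text/html"
```

Because REST is a layered system, inserting such a gateway changes nothing for the origin server or the purpose-built clients behind it.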
António Mota wrote: > > Who the hell died and made you high priest? Why the frack do you > think you're more deserving of respect than the others, and that > your word should not be disputed? > Nobody, and I've never made such claims. Nor has anyone but you ever made such allegations against me. You came to this list with a big chip on your shoulder against anyone who thinks they know more about REST than you do, which is pretty much everyone. Nobody who actually does understand REST will dispute the fact that it takes years to learn. Anyone who jumps into this list like they've mastered it inside a month will be treated with deserved skepticism. You're the arrogant one who comes in here and dismisses what the experts have to say, as if you know REST as well as anyone else does. That's disrespectful, and I'm deserving of at least as much respect as you give to those who don't know REST -- same with the rest of us who have spent long enough learning this to be able to teach it. Instead, all we get from you is bitching and scorn, which doesn't help *anyone* learn REST. You'll never learn REST if your starting point is an assumption that you already know it. -Eric
Actually, I'm a bit surprised about the flame war that developed here... We are not talking about anything that merits (or requires) a religious war. It would be absolutely good to define some "still open points" in the RESTful architecture (if they exist), that the community can't agree upon. On Thu, Jul 22, 2010 at 11:50 PM, Eric J. Bowman <eric@...> wrote: > This thread has convinced me to > semi-retire from this list for a while, until I've written a proper > introductory article I can link to. No existing article stresses the > hypertext notion, or explains how you've done REST before without even > knowing it. > I've previously read this, on the topic: http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven (note, btw, Fielding's last point). > > > I'm in a huge but _closed_ network, with (almost) no interaction > > with the external internet. > > I have to write a quite complex application where I have full control > > over the client and the servers, but not the intermediaries. > > > > So your intermediaries won't understand any custom media type you > create. Right? > Exactly. I'm afraid I'm underestimating the problems I could encounter with this; can you list some of them, please? > > > > > Some of those clients could, in the future, go onto the internet, but > > my application will not. > > > > That's a confusing statement. In REST, "application" means the > hypertext the user agent is executing. Are you saying that some > clients may have access to the Web, but your REST _system_ (your > resources) will remain behind the firewall? > > Some of my clients will be ASP.NET applications that could in the future become public on the internet. I have no control over how the client is coded, but I do over the way they interact with my application, since we will provide the browser (as a library to link). Such a library could obviously not use HTTP at all; we have considered WCF and even Remoting, but all have drawbacks we can't accept in this use case. 
So writing a library that works like a WADL + custom-media-type browser better fits our needs and the customer requirements. > > > > It seems that we "easily" adopt all the constraints apart from the > > media-type standardization. > > > > If your media type is only to be used on your intranet, then it is > standardized and ubiquitous, for all intents and purposes, within the > boundaries of your system. > So it would be RESTful, wouldn't it? It may seem strange, but I'd like to receive an authoritative "YES": as I said, the whole project is a bet that makes the coding part quite secondary. > If we're talking shades of REST, then it isn't a violation of the self- > descriptive messaging constraint, because the user agent understands > the media type. If we're talking REST/NOT REST, it is a violation of > the self-descriptive messaging constraint, because intermediaries don't > understand the media type. > I'd like to know the disadvantages. I can't understand why firewalls and proxies should be interested in the content they transfer. Giacomo
What's the best way to achieve authentication over HTTP in a RESTful application? Even if it's a bit outdated, I've found this: http://www.artima.com/weblogs/viewpost.jsp?thread=155252 The bad news is that the current state of security with HTTP is bad. The best > interoperable solution is Basic over HTTPS. > > Is it right? And why is it so bad? Giacomo
My apps usually support BASIC and DIGEST over either HTTP or HTTPS. I let each connecting client decide which they want to use. mca http://amundsen.com/blog/ http://mamund.com/foaf.rdf#me On Fri, Jul 23, 2010 at 10:46, Giacomo Tesio <giacomo@...> wrote: > > > What's the better way to achieve authentication over HTTP in a RESTful application? > > Even if it's a bit outdated I've found this: http://www.artima.com/weblogs/viewpost.jsp?thread=155252 > >> The bad news is that current state of security with HTTP is bad. The best interoperable solution is Basic over HTTPS. >> > > > Is it right? And why it's so bad? > > > Giacomo > > > >
On Fri, Jul 23, 2010 at 7:46 AM, Giacomo Tesio <giacomo@...> wrote: > What's the better way to achieve authentication over HTTP in a RESTful application? > > Even if it's a bit outdated I've found this: http://www.artima.com/weblogs/viewpost.jsp?thread=155252 > >> The bad news is that current state of security with HTTP is bad. The best >> interoperable solution is Basic over HTTPS. > > Is it right? And why it's so bad? All of those points still seem quite valid. The primary complaint is simply that the browser has control over the authentication experience, not the application itself. For MtoM systems, this is a non-issue. For interactive applications, I think all of those issues are pretty valid. The User Experience surrounding HTTP Authentication is pretty awful. Regards, Will Hartung (willh@...)
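For the machine-to-machine case Will calls a non-issue, Basic over HTTPS is mechanically trivial, which is part of why the article recommends it. A minimal sketch of how the header is built (credentials are made up); note the value is only base64-*encoded*, not encrypted, which is exactly why it must ride on HTTPS:

```python
import base64

def basic_auth_header(username, password):
    """Build the HTTP Basic Authorization header: base64("user:password")."""
    token = base64.b64encode(f"{username}:{password}".encode("utf-8")).decode("ascii")
    return "Authorization", f"Basic {token}"

header = basic_auth_header("alice", "secret")
# -> ("Authorization", "Basic YWxpY2U6c2VjcmV0")
```

Since the credentials accompany every request, Basic is also stateless in the way REST favors: no login step, no session cookie.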
Hello! Is there a standardized way to describe a new hypermedia format? Let me provide some context: Yesterday, I had a long discussion on Twitter with Andrew Wahbe about the topic of whether it's even possible to create a hypermedia format without knowledge of the client. More specifically, the concern was that whatever hypermedia format you come up with would 'bind' the client to that server: It's difficult to have a generic client, since instead you end up with clients that have to have specific knowledge about the particular hypermedia format you use. There might be some public format you could use, but what if you need to define your own? Is there a way to do this so that a generic client could deal with it? The thought then occurred that there doesn't seem to be a standardized way to define a hypermedia format. For example, while everyone agrees what a link looks like in HTML, this is not the case for XML or even JSON (or YAML, or ...). Some blogs have been written about this (<http://www.amundsen.com/blog/archives/1054>, <http://www.subbu.org/blog/2008/04/hypermedia-and-json>) and suggestions have been made, but as far as I can tell, no consensus has emerged. And it's not only links, it's also parameters that aren't defined in a standardized way. For example, compare the definition of parameters here <http://www.amundsen.com/blog/archives/1054> (not containing info about types and default values) with the definition of parameters here <http://restx.mulesoft.org/restful-server-api> (includes type, default and is-mandatory info). So, the question then is: Could/should we try to come up with a standard to describe a custom hypermedia format? I'm not talking about a blow-by-blow definition of all available services and resources of a particular application, but about something one level higher up: For example, wouldn't it be nice if we could agree on THE standard way to describe a link in JSON? Or in XML? And then strive to always use this? 
One could then say: I'm using application/json+foo for this (some custom format), but it follows the standard way to describe links and parameters.

Whether this description itself can be formalized to the point where it is machine readable is another story. It would force clients to be a bit more generic, but would also allow for quick adaptations of a format if for whatever reason the standard doesn't quite fit. However, I'd be happy just to know that there is an agreed-upon standard way to define links, parameters and whatever else you need in JSON, XML and other 'base' formats.

Consider also that it would be very nice to point people who are new to REST to such a 'standard document', since this topic often seems very mysterious and odd to them.

Just a thought...

Juergen

--
http://restx.mulesoft.org
RESTx <http://restx.org>
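[No such agreed convention exists today; purely as an illustration of what Juergen is asking for, here is one possible -- entirely hypothetical -- JSON link convention (loosely modeled on Atom's link element), and the generic lookup it would enable in a client:]

```python
import json

# Hypothetical convention: every hypermedia control is an object under
# a reserved "links" key, carrying at least "rel" and "href".
doc = json.loads("""
{
  "name": "Widget 42",
  "links": [
    {"rel": "self",  "href": "http://example.org/widgets/42"},
    {"rel": "parts", "href": "http://example.org/widgets/42/parts"}
  ]
}
""")

def find_link(document, rel):
    """Return the href of the first link with the given relation, or
    None -- the kind of lookup a generic client could perform if the
    convention were shared across application/json+foo-style formats."""
    for link in document.get("links", []):
        if link.get("rel") == rel:
            return link["href"]
    return None

print(find_link(doc, "self"))  # http://example.org/widgets/42
```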
from my POV, there is nothing wrong with the way HTTP provides for authentication; including the ability to support new auth schemes. instead, the problem is that common Web browsers have a lousy built-in UI experience when handling HTTP auth. this despite repeated attempts (in 1999[1], 2004[2], 2007[3], etc.) to improve it. [sigh]

[1] http://www.w3.org/TR/NOTE-authentform
[2] http://www.mnot.net/blog/2004/08/26/form_auth
[3] http://www.w3.org/html/wg/tracker/issues/13

mca
http://amundsen.com/blog/
http://mamund.com/foaf.rdf#me

On Fri, Jul 23, 2010 at 13:36, Will Hartung <willh@mirthcorp.com> wrote:
> [quoted message and list footer snipped]
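[mca's point that HTTP's auth framework already admits new schemes can be sketched: the server challenges with WWW-Authenticate naming its scheme, and the client answers with a matching Authorization header. The "MyVendorAuth" scheme name and token check below are invented purely for illustration:]

```python
def challenge(realm):
    """Build a 401 challenge advertising a (hypothetical) custom scheme."""
    return {"status": 401,
            "WWW-Authenticate": 'MyVendorAuth realm="%s"' % realm}

def authorize(header_value, valid_tokens):
    """Accept 'Authorization: MyVendorAuth <token>' if the token is known."""
    scheme, _, token = header_value.partition(" ")
    return scheme == "MyVendorAuth" and token in valid_tokens

# The generic challenge/response shape is what HTTP standardizes;
# the scheme semantics are left to whoever defines the scheme.
print(challenge("api"))
print(authorize("MyVendorAuth XYZ", {"XYZ"}))
```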
You might be interested in http://json-schema.org/

Personally, I don't see the point. There is so much more to a format than "where are the links".

If no consensus has emerged, perhaps it's because there isn't enough experience yet to commoditize the solution space. My advice: give people time to experiment with URI templates and other innovations before trying to go all meta.

Robert Brewer
fumanchu@...

From: rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com] On Behalf Of brendel.juergen
Sent: Friday, July 23, 2010 10:36 AM
To: rest-discuss@yahoogroups.com
Subject: [rest-discuss] A standardized way to describe a new hypermedia format?

[original message snipped]
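[To make Robert's json-schema.org pointer concrete: a draft-style JSON Schema for a link object might look like the sketch below. The schema shape follows the 2010-era drafts (boolean "required" on a property); the checker is a deliberately minimal stand-in for a real validator library, not the json-schema.org implementation:]

```python
# A (hypothetical) machine-readable description of "a link in JSON".
link_schema = {
    "type": "object",
    "properties": {
        "rel":  {"type": "string", "required": True},
        "href": {"type": "string", "required": True},
    },
}

def roughly_valid(instance, schema):
    """Check only 'type: object' plus required string properties --
    a tiny subset of what a real JSON Schema validator does."""
    if schema["type"] == "object" and not isinstance(instance, dict):
        return False
    for name, prop in schema["properties"].items():
        if prop.get("required") and name not in instance:
            return False
        if name in instance and prop["type"] == "string" \
                and not isinstance(instance[name], str):
            return False
    return True

print(roughly_valid({"rel": "self", "href": "/orders/1"}, link_schema))
```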
--- In rest-discuss@yahoogroups.com, "Robert Brewer" <fumanchu@...> wrote:
>
> You might be interested in http://json-schema.org/
>
> Personally, I don't see the point. There is so much more to a format
> than "where are the links".
>

I don't necessarily disagree, but I'm curious what else you think is important to know about a format besides how to recognize a link and how to determine link relations.

--peter keane

> If no consensus has emerged, perhaps it's because there isn't enough
> experience yet to commoditize the solution space. My advice: give people
> time to experiment with URI templates and other innovations before
> trying to go all meta.
>
> [rest of quoted message snipped]
On Jul 23, 2010, at 7:36 PM, brendel.juergen wrote:
>
> Hello!
>
> Is there a standardized way to describe a new hypermedia format?

No, but you might be interested in

http://www.nordsc.com/blog/?p=6 and
http://www.nordsc.com/blog/?p=8

> Let me provide some context:
>
> Yesterday, I had a long discussion on Twitter with Andrew Wahbe about the topic of whether it's even possible to create a hypermedia format without knowledge of the client.
>
> More specifically, the concern was that whatever hypermedia format you come up with would 'bind' the client to that server: It's difficult to have a generic client, since instead you end up with clients that have to have specific knowledge about the particular hypermedia format you use.

Yes, that is natural (and ok). User agents provide the interaction point between users and the application. They do this by

- exposing information (e.g. rendering the content of the <title> element in the browser window head, or storing the result of a link-check run for a site in a database or file)
- performing automatic transitions (e.g. GETting an HTML inline image, or recursively traversing the links of a site for validation)
- providing a means for the user to activate hypermedia controls (e.g. making links found in a page clickable, displaying forms)

Doing all this requires in-depth knowledge of the media type, and coding for all this requires in-depth, hard-coded knowledge about the expected media types.

It is simply ok that browsers implement HTML and that AtomPub clients implement the AtomPub media types. It is also ok that Google's indexer likely understands all media types it knows to be in use on the Web.

> There might be some public format you could use, but what if you need to define your own? Is there a way to do this so that a generic client could deal with it?

There is no point in doing that.

> The thought then occurred that there doesn't seem to be a standardized way to define a hypermedia format.
> For example, while everyone agrees what a link looks like in HTML, this is not the case for XML or even JSON (or YAML, or ...). Some blogs have been written about this and suggestions have been made, but as far as I can tell, no consensus has emerged.

What is the point of making links generic (like XLink does, for example) aside from enabling the creation of generic crawlers? (Note that whatever the crawlers would do beyond traversing the links would require knowledge of the specific type anyhow.)

> And it's not only links, it's also parameters that aren't defined in a standardized way. For example, compare the definition of parameters here (not containing info about types and default values) with the definition of parameters here (includes type, default and is-mandatory info).

What is the value of making parameters generic?

> So, the question then is: Could/should we try to come up with a standard to describe a custom hypermedia format? I'm not talking about a blow-by-blow definition of all available services and resources of a particular application, but about something one level higher up: For example, wouldn't it be nice if we could agree on THE standard way to describe a link in JSON? Or in XML?

There is XLink already -- wouldn't it be nice if everyone would agree to *use* that agreed standard? :-)

> And then strive to always use this? One could then say: I'm using application/json+foo for this (some custom format), but it follows the standard way to describe links and parameters.

I agree that it would be useful to do the same things the same way in media type design (patterns), but we should probably start building media types before that :-)

Jan

> Whether this description itself can be formalized to the point where it is machine readable is another story. It would force clients to be a bit more generic, but would also allow for quick adaptations of a format if for whatever reason the standard doesn't quite fit.
However, I'd be happy to just know that there is an agreed upon standard way to define links, parameters and whatever else you need in JSON, XML and other 'base' formats. > > Consider also that it would be very nice to point people that are new to REST to such a 'standard document', since this topic often seems to be very mysterious and odd them. > > Just a thought... > > Juergen > > -- > http://restx.mulesoft.org > RESTx > > > > > ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
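[Jan's mention of XLink can be made concrete: because XLink puts link semantics in a reserved namespace (http://www.w3.org/1999/xlink), a generic client can harvest links without understanding the document's own vocabulary. A small sketch; the "order" vocabulary and the URIs are invented:]

```python
import xml.etree.ElementTree as ET

XLINK = "http://www.w3.org/1999/xlink"

# A toy document using XLink simple links, so any XLink-aware client
# can find them without knowing what an "order" or "invoice" is.
xml_doc = """
<order xmlns:xlink="http://www.w3.org/1999/xlink">
  <customer xlink:type="simple" xlink:href="http://example.org/customers/7"/>
  <invoice  xlink:type="simple" xlink:href="http://example.org/invoices/99"/>
</order>
"""

def extract_xlinks(xml_text):
    """Collect every xlink:href in document order, generically."""
    root = ET.fromstring(xml_text)
    hrefs = []
    for elem in root.iter():
        href = elem.get("{%s}href" % XLINK)
        if href is not None:
            hrefs.append(href)
    return hrefs

print(extract_xlinks(xml_doc))
```

This is also Jan's caveat in miniature: the crawler can traverse these links, but anything beyond traversal still needs knowledge of the specific type.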
Giacomo Tesio wrote:
>
> I've previously read this, on the topic:
> http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven
> (note, btw, Fielding's last point).
>

Yes, I believe everyone's familiar with that post. But it's hardly introductory-level. ;-) As to Roy's last point, plenty of people get that wrong, by interpreting it to mean that a service must have a well-known URI as the endpoint through which the service is accessed. Roy actually meant just the opposite.

> > So your intermediaries won't understand any custom media type you
> > create. Right?
>
> Exactly. I'm afraid I'm underestimating the problems I could
> encounter with this; can you list some of them, please?

I was just making sure you're aware of the problem. If your internal intermediaries aren't "doing" anything with their knowledge of HTML media types, then there is no problem. If you're going over the Web, then the whole point is that you don't know *what* intermediaries are doing with their knowledge of HTML media types. Whatever it is they're doing -- transcoding or prefetching or what-have-you -- these are the sorts of things which make the Web "anarchically scalable". You don't get this "serendipitous re-use" unless you're using ubiquitous media types. REST defines a sweet spot of the deployed Web architecture that you miss out on with custom media types.

> > So it would be RESTful, wouldn't it?
>
> May seem strange, but I'd like to receive an authoritative "YES": as
> said, the whole project is a guess, which makes the coding part quite
> a secondary part.

There is no authoritative answer that may be given based on speculative musings. The best I can do is say it *could* be RESTful. There are plenty of "REST" APIs out there which properly contain all out-of-band knowledge within custom media types, yet those custom media types do silly things like assign partial-update semantics to PUT, or otherwise break various REST constraints.
Only if you put an actual system on-line, and it allows me to actually curl some request/response pairs, can I give you that answer, even if you're using standard media types. REST is a set of interdependent constraints, so it's impossible to focus on one aspect of a proposed system and give you the yes-or-no answer you seek.

> > If we're talking shades of REST, then it isn't a violation of the
> > self-descriptive messaging constraint, because the user agent
> > understands the media type. If we're talking REST/NOT REST, it is
> > a violation of the self-descriptive messaging constraint, because
> > intermediaries don't understand the media type.
>
> I'd like to know the disadvantages.
> I can't understand why firewalls and proxies should be interested
> in the content they transfer.

It happens all the time without you even knowing it. ISPs use caches which look for (amongst other things) common errors, like HTML scaling a large image to fit in a Web page, then reformat the image to the proper size on-the-fly. This is one technique of many (including prefetching linked-to content) which are marketed together as dialup "acceleration" for modem users.

Many smartphones require application/xhtml+xml, so the networks they run on use transcoding gateways and standard libraries like TagSoup or HTML Tidy to reformat text/html on-the-fly. This was prevalent for years, but is less so now. One point is that you can't know; another is that this sort of thing can only work with ubiquitous media types; and another is that this is why REST says to use standard media types -- how _else_ are you going to achieve Web scale, if you go against how the Web scales?

-Eric
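[Eric's transcoding-gateway point can be sketched as follows: an intermediary can only act on media types it already has handlers for, so a custom type always falls through untouched. The handler table and the tidy function below are toy stand-ins, not a real gateway or a real HTML Tidy:]

```python
def tidy_html(body):
    # Stand-in for a real transcoder such as HTML Tidy: here we just
    # close void elements so text/html becomes well-formed XHTML-ish.
    return body.replace("<br>", "<br/>")

# The intermediary's capabilities: (source type, target type) pairs it
# understands. Ubiquitous types get entries; a custom type never will.
TRANSCODERS = {
    ("text/html", "application/xhtml+xml"): tidy_html,
}

def transcode(body, content_type, accept):
    """Rewrite the body if a transcoder exists for this media-type
    pair; otherwise pass it through unchanged -- which is all an
    intermediary can do with an unknown custom type."""
    fn = TRANSCODERS.get((content_type, accept))
    return (fn(body), accept) if fn else (body, content_type)

print(transcode("a<br>b", "text/html", "application/xhtml+xml"))
```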
Eric J. Bowman wrote: > Many smartphones require application/xhtml+xml, so the networks they > run on use transcoding gateways and standard libraries like TagSoup or > HTML Tidy, to reformat text/html on-the-fly. This was prevalent for > years, but is less so, now. The point is that you can't know, another > point is that this sort of thing can only work with ubiquitous media > types, and another point is that this is why REST says to use standard > media types -- how _else_ are you going to achieve Web scale, if you go > against how the Web scales? Not to discount your points in the least, but shipping Javascript to the client that then knows how to interpret the custom media-type seems to be a very popular approach to the "how else" these days. It certainly doesn't promote "serendipitous reuse" for clients that don't do Javascript <wink>, but for those that do, it leverages one ubiquitous media-type (javascript) to lift another, less ubiquitous one. Robert Brewer fumanchu@...
--- In rest-discuss@yahoogroups.com, Jan Algermissen <algermissen1971@...> wrote:
>
> [snip]
>
> It is simply ok that browsers implement HTML and that AtomPub clients
> implement the AtomPub media types. It is also ok that Google's indexer
> likely understands all media types it knows to be in use on the Web.
>

Ah, but remember that the context here is that the hypermedia format is designed without knowledge of what the client does.
So in the case of HTML, the client can do all the things you mention quite easily because HTML is explicitly designed for them -- explicitly for the browser examples like <title>, and perhaps implicitly for the spider examples by virtue of HTML following the Principle of Least Power. In short: HTML was designed around a client domain (interactive information presentation).

If the hypermedia format was not designed around the client domain, then the client would be tasked with mapping the data to its domain (which may not even be feasible). Also, the client would be bound to that domain -- restricting it from doing other things.

For example, if Amazon had designed its hypermedia format before browsers and HTML existed, it would likely revolve around buying books and other items. It would likely be impossible to express something like Facebook in this language; you'd need some new format. An "Amazon browser" would not be able to interact with Facebook and vice versa, even though the client domain (again, interactive information presentation) was the same.

And before anyone says anything about this having to do with the user guiding the browser, let me address that. CCXML is an example of a hypermedia language that does not drive clients with UIs. CCXML drives a call control platform that, among other things, accepts, places and connects phone calls. It can be used for many applications -- for example, you could create a Google-Voice-like application that looks at the number dialed and calls multiple phone numbers, allowing you to have one phone number that rings your cell and your deskphone. Alternatively, a call center application might look at the number of the caller, figure out who they are and whether they are a "gold", "silver" or "bronze" customer (via a DB lookup) and connect them to the right agents for that customer pool.
So the key difference is that in the first example we route based on the number dialed, and in the second based on the caller's number -- but the finer details of the apps can be quite different. Because CCXML is designed around the raw telephony capabilities of the client and not the applications, the same hypermedia format and clients can be used for both types of applications.

So it seems that you can get a "broader reaching" hypermedia format by designing it around the client. But the original question was: if you have no knowledge of the client domain, can you design something that qualifies as a "hypermedia format"? Could you really implement a client that achieves HATEOAS based on a format that was designed without any consideration for the client domain? In my earlier Amazon browser example, I'd imagined that the hypermedia format was designed for "interactive information presentation", but that the data structures and controls were customized for Amazon's domain -- hence the client is an "Amazon browser".

One might propose Atom/AtomPub. But I tend to think of this as designed with clients in mind -- a client is a feed processor that may also possibly allow publishing to the feeds. There are some very specific client workflows in mind here. Say that the content in the feeds is employee data. You can have a feed for all the employees in the company that allows clients to process the data in terms of a big list. But if the client wants to process the data as a hierarchy, it can't do it. Of course you could define a link relation to capture the hierarchy, but now you're just extending the hypermedia format to meet the needs of the client. (And that's maybe a dumb, toy example, but I couldn't think of another on the spot.)

So can you design a hypermedia format that cuts across the full range of clients that one might want to have interact with a service, or does HATEOAS _require_ that hypermedia formats are designed for a specific client domain?
Does anyone have an example of a client-agnostic format that allowed a truly RESTful client to be built?

Regards,

Andrew
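[Andrew's employee-hierarchy example can be illustrated: expressing the hierarchy in Atom means inventing a link relation, i.e. extending the format to fit the client. The "parent" relation and the URIs below are hypothetical, not registered Atom relations:]

```python
import xml.etree.ElementTree as ET

ATOM = "http://www.w3.org/2005/Atom"

# An Atom entry for one employee, extended with a made-up "parent"
# link relation so a client could reconstruct the org hierarchy.
entry_xml = """
<entry xmlns="http://www.w3.org/2005/Atom">
  <title>Alice</title>
  <link rel="self"   href="http://example.org/employees/alice"/>
  <link rel="parent" href="http://example.org/employees/bob"/>
</entry>
"""

def link_by_rel(entry_text, rel):
    """Find an entry's link href by relation name -- the generic part
    any Atom client can do; interpreting "parent" is the extension."""
    entry = ET.fromstring(entry_text)
    for link in entry.findall("{%s}link" % ATOM):
        if link.get("rel") == rel:
            return link.get("href")
    return None

print(link_by_rel(entry_xml, "parent"))
```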
"Robert Brewer" wrote: > > Not to discount your points in the least, but shipping Javascript to > the client that then knows how to interpret the custom media-type > seems to be a very popular approach to the "how else" these days. > I agree, it's very popular to ignore what REST actually says, and use code-on-demand as a starting point instead of a last resort, then point to code-on-demand as a loophole. It isn't. It's still a violation of the self-descriptive messaging constraint, particularly as Javascript is not declarative. I'll elaborate later, right now it's time for the free concert over in Steamboat -- Rhythm Devils featuring Keller Williams. -Eric
--- In rest-discuss@yahoogroups.com, "Eric J. Bowman" <eric@...> wrote:
>
> "Robert Brewer" wrote:
> >
> > Not to discount your points in the least, but shipping Javascript to
> > the client that then knows how to interpret the custom media-type
> > seems to be a very popular approach to the "how else" these days.
> >
>
> I agree, it's very popular to ignore what REST actually says, and use
> code-on-demand as a starting point instead of a last resort, then point
> to code-on-demand as a loophole. It isn't. It's still a violation of
> the self-descriptive messaging constraint, particularly as Javascript
> is not declarative. I'll elaborate later, right now it's time for the
> free concert over in Steamboat -- Rhythm Devils featuring Keller
> Williams.
>
> -Eric
>

Interested to hear your argument against -- I've puzzled over where Ajax fits into REST quite a bit. To make a bit of a devil's advocate argument for it, I'll say the following:

The point of code-on-demand is to allow the capabilities of the UA to be extended. Extending it to understand a data format seems like quite a reasonable thing to do. Using a base serialization format such as XML or JSON for your data format (as well as the appropriate mime-type) provides a reasonable amount of visibility, and there is a certain amount of native support in the UA for these serialization formats as well.

Javascript code that understands the schema and semantics of your XML/JSON is not significantly unlike a script that understands constraints you've put on your HTML -- for example, code that knows that some <span> elements will have a specific @class value implying that certain behavior should occur when the element is clicked. Where's the violation of REST's constraints? I would say you've gone too far only when you are using code-on-demand to implement something that the UA already does natively (with little or no gain in non-functional areas such as visible latencies or perhaps portability).
Thoughts? Regards, Andrew
I read the initial question as distinguishing between the client and the client/problem domain. I concur. Almost by definition, a hypermedia format must know about the domain of the problem. Doesn't need to know how the client works in that domain, though. For a trivial example, the difference between Firefox and Lynx - I can use both to browse large portions of the web - but the presentations are radically different. -Eric J. On 07/23/2010 03:28 PM, wahbedahbe wrote: > > > > > --- In rest-discuss@yahoogroups.com > <mailto:rest-discuss%40yahoogroups.com>, Jan Algermissen > <algermissen1971@...> wrote: > > > > > > On Jul 23, 2010, at 7:36 PM, brendel.juergen wrote: > > > > > > > > > > > Hello! > > > > > > Is there a standardized way to describe a new hypermedia format? > > > > No, but you might be interested in > > > > http://www.nordsc.com/blog/?p=6 and > > http://www.nordsc.com/blog/?p=8 > > > > > > > > Let me provide some context: > > > > > > Yesterday, I had a long discussion on Twitter with Andrew Wahbe > about the topic of whether it's even possible to create a hypermedia > format without knowledge of the client. > > > > > > More specifically, the concern was that whatever hypermedia format > you come up with would 'bind' the client to that server: It's > difficult to have a generic client, since instead you end up with > clients that have to have specific knowledge about the particular > hypermedia format you use. > > > > Yes, that is natural (and ok). User agents provide the interaction > point between users and the application. They do this by > > > > - exposing information (e.g. render the content of <title> element in > > the browser window head or store the result of a link check run for > > a site in a database or file) > > - performing automatic transitions (e.g. GET an HTML inline image or > > recursively traverse links of a site for validation) > > - provide a means for the user to activate hypermedia controls > > (e.g. 
make links found in a page clickable, display forms) > > > > Doing all this requires in-depth knowledge of the media type and > coding for all this requires in-depth, hard coded knowledge about the > expected media types. > > > > It is simply ok that browsers implement HTML and that AtomPub > clients implement the AtomPub media types. It is also ok that Google's > indexed likely understands all media types it knows to be in use on > the Web. > > > > Ah, but remember that the context here is that the hypermedia format > is designed without knowledge of what the client does. So in the case > of HTML, the client can do all the things you mention quite easily > because HTML is explicitly designed for them -- well explicitly for > the browser examples like <title> and perhaps implicitly for the > spider examples by virtue of HTML following the Principle of Least > Power. In short -- HTML was designed around a client domain > (interactive information presentation). > > If the hypermedia format was not designed around the client domain, > then the client would be tasked with mapping the data to its domain > (which may not even be feasible). Also, the client would be bound to > that domain -- restricting it from doing other things. > > For example, if Amazon designed it's hypermedia format before > browser's and HTML existed it would likely revolve around buying books > and other items. It would likely be impossible to express something > like Facebook in this language - you'd need some new format. An > "Amazon browser" would not be able to interact with Facebook and vice > versa, even though the client domain (again interactive information > presentation) was the same. > > And before anyones says anything about this having to do anything with > the user guiding the browser let me address this. CCXML is an example > of a hypermedia language that does not drive clients with UIs. 
CCXML > drives a call control platform that among other things, accepts, > places and connects phone calls. It can be used for many applications > -- for example you could create a Google-Voice-like application that > looks at the number dialed and calls multiple phone numbers allowing > you to have one phone number that rings your cell and your deskphone. > Alternatively, a call center application might look at the number of > the caller, figure out who they are whether they are a "gold", > "silver" or "bronze" customer (via a DB lookup) and connect them to > the right agents for that customer pool. So the key difference is that > in the first example we route based on number dialed and in the second > based on the caller's number -- but the finer details of the apps can > be quite different. Because CCXML is designed around what the raw > telephony capabilities of the client and not the applications, the > same hypermedia format and clients can be used for both types of > applications. > > So it seems that you can get a "broader reaching" hypermedia format by > designing it around the client. But the original question was if you > have no knowledge of the client domain, can you design something that > qualifies as a "hypermedia format"? Could you really implement a > client that achieves HATEOAS based on a format that was designed > without any consideration for the client domain? In my earlier Amazon > browser example, I'd imagined that the hypermedia format was designed > for "interactive information presentation" but that the data > structures and controls were customized for Amazon's domain -- hence > the client is an "Amazon browser". > > One might propose Atom/AtomPub. But I tend to think of this as > designed with clients in mind -- a client is a feed processor that may > also possibly allow publishing to the feeds. There are some very > specific client workflows in mind here. Say that the content in the > feeds is employee data. 
You can have a feed for all the employees in > the company that allows clients to process the data in terms of a big > list. But if the client wants to process the data as a hierarchy it > can't do it. Of course you could define a link relation to capture the > hierarchy, but now you're just extending the hypermedia format to meet > the needs of the client. (And that's maybe a dumb, toy example, but I > couldn't think of another on the spot.) > > So can you design a hypermedia format that cuts across the full range > of clients that one might want to have interact with a service, or does > HATEOAS _require_ that hypermedia formats are designed for a specific > client domain? > > Does anyone have an example of client-agnostic formats that allowed a > truly RESTful client to be built? > > Regards, > > Andrew > >
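Andrew's toy example can be made concrete. The entry below is a sketch only, with invented URIs and IDs -- it shows how a link relation (here the registered "up" relation) could be grafted onto an Atom entry to carry the reporting hierarchy that a feed-as-flat-list cannot express, which is exactly the kind of client-driven extension being questioned:

```
<!-- Illustrative sketch only: an Atom entry for one employee, with an
     "up" link pointing at the entry for that employee's manager.
     All URIs and IDs are invented for this example. -->
<entry xmlns="http://www.w3.org/2005/Atom">
  <id>urn:example:employee:1042</id>
  <title>Employee 1042</title>
  <updated>2010-07-24T00:00:00Z</updated>
  <link rel="self" href="http://example.com/employees/1042"/>
  <link rel="up"   href="http://example.com/employees/907"/>
</entry>
```

A client that only knows generic Atom still sees a valid entry; only a client taught the hierarchy semantics of "up" gains the tree view -- which is the point of the objection.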
<snip> > Interested to hear your argument against -- I've puzzled over where Ajax fits into REST quite a bit. </snip> IMO, the more the client relies on COD, the less value is being delivered by the base media-type. IOW, high dependence on COD is an indicator that the media-type in use (and usually the client that understands that base media-type) is insufficient for the work at hand (the protocol in use, the application tasks, the UX, etc.). That is why, lately, I've been barking about designing hypermedia types and, consequently, implementing other HTTP-aware clients. I think there has been an explosion of code for web apps that would not be needed if we started to rethink the client and the media-types that could be used. mca http://amundsen.com/blog/ http://mamund.com/foaf.rdf#me On Fri, Jul 23, 2010 at 19:05, wahbedahbe <andrew.wahbe@...> wrote: > > > --- In rest-discuss@yahoogroups.com, "Eric J. Bowman" <eric@...> wrote: >> >> "Robert Brewer" wrote: >> > >> > Not to discount your points in the least, but shipping Javascript to >> > the client that then knows how to interpret the custom media-type >> > seems to be a very popular approach to the "how else" these days. >> > >> >> I agree, it's very popular to ignore what REST actually says, and use >> code-on-demand as a starting point instead of a last resort, then point >> to code-on-demand as a loophole. It isn't. It's still a violation of >> the self-descriptive messaging constraint, particularly as Javascript >> is not declarative. I'll elaborate later, right now it's time for the >> free concert over in Steamboat -- Rhythm Devils featuring Keller >> Williams. >> >> -Eric >> > > Interested to hear your argument against -- I've puzzled over where Ajax fits into REST quite a bit. To make a bit of a devil's advocate argument for it I'll say the following: > > The point of code-on-demand is to allow the capabilities of the UA to be extended. 
Extending it to understand a data format seems like quite a reasonable thing to do. Using a base serialization format such as XML or JSON for your data format (as well as the appropriate mime-type) does provide a reasonable amount of visibility. There is also a certain amount of native support in the UA for these serialization formats. Javascript code that understands the schema and semantics of your XML/JSON is not significantly unlike a script that understands constraints that you've put on your HTML, for example, code that knows that some <span> elements will have a specific @class value implying that certain behavior should occur when the element is clicked. > > Where's the violation of REST's constraints? I would say you've gone too far only when you are using code-on-demand to implement something that the UA already does natively (with little or no gain in non-functional areas such as visible latencies or perhaps portability). Thoughts? > > Regards, > > Andrew
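The devil's-advocate position can be sketched in a few lines. The function below is purely illustrative (the media type name and field shapes are invented): it is the kind of downloaded script that knows which members of a custom JSON document are hypermedia controls, so the format knowledge lives in code-on-demand rather than in the client core.

```javascript
// Hypothetical media type "application/vnd.example.task+json" -- invented
// for this sketch. The convention assumed here: any member whose value
// has both "href" and "rel" is a hypermedia control.
function extractControls(doc) {
  const controls = [];
  for (const [name, value] of Object.entries(doc)) {
    if (value !== null && typeof value === "object"
        && "href" in value && "rel" in value) {
      controls.push({ name, href: value.href, rel: value.rel });
    }
  }
  return controls;
}

// Example document a server might return with this media type:
const taskDoc = {
  title: "Ship release",
  status: "open",
  close: { href: "/tasks/7/close", rel: "http://example.com/rels/close" }
};

// Logs the single control derived from the "close" member.
console.log(extractControls(taskDoc));
```

The visibility argument holds up to a point: intermediaries still see JSON, but only the shipped script knows that `close` is a state transition rather than plain data.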
The way I see it, and maybe this is only a semantic choice of words, a hypermedia format should be general; a media-type should know about the domain of the problem. Like, a media-type should be a specialization, or an application, of a hypermedia format. On Saturday, July 24, 2010, Eric Johnson <eric@...> wrote: > I read the initial question as distinguishing between the client and > the client/problem domain. > > I concur. Almost by definition, a hypermedia format must know about > the domain of the problem. Doesn't need to know how the client works > in that domain, though. For a trivial example, the difference between > Firefox and Lynx - I can use both to browse large portions of the web - > but the presentations are radically different. > > -Eric J. > > On 07/23/2010 03:28 PM, wahbedahbe wrote: > [snip - thread quoted in full above] -- *Disclaimer: The opinions expressed herein are just my opinions and only by chance they are right.* Please click on the image to enlarge it <http://lh3.ggpht.com/_1aTCd17_nho/TEblN4fV-_I/AAAAAAAAAHw/wZ51kXrfJcs/qrcode_bc_1.jpg>
This, of course, supposing that the media-type conforms to the hypermedia format, like HTML being a specialization (a subset) of XML, and so on. On Saturday, July 24, 2010, António Mota <amsmota@...> wrote: > [snip - message quoted in full above]
I wrote (as it turns out) a REST application in 1995 that's long-since disappeared, but serves to make a point. A pizza parlor wanted its customers to be able to order pizza for pickup or delivery, from their website. The order page allowed size, style and toppings to be selected. The confirmation page re-stated the order, adding a total. Cancelling the order returned the customer to the blank order form; placing the order went as follows: The customer saw a 'thank you' page informing them they would receive an 'order confirmed' e-mail, and a phone call shortly before the order was ready in the case of a pickup. This page linked back to the menu. The server, upon receiving a POST of the data in hidden input fields from the confirmation page, executed a Perl script to convert the name/ value pairs into a human-readable order. This order text was then e-mailed to the pizza parlor, where it would be printed out. The incoming mailbox would only receive from the Perl script, so it would only receive orders, so the printout was automatic, as was the autoresponder confirming to the customer that the order was received by the kitchen, not just the website (by using the customer's e-mail as reply-to). The frontend customer application was REST. The backend m2m process was NOT REST. Why should it have been? The need was for reliable messaging that HTTP doesn't have (if I'd been concerned about message order, then I'd have used multipart), and a one-to-one connection rather than HTTP's one-to- many. So the correct choice of protocol, then and now, was and is SMTP which is not a RESTful protocol. REST is not the solution to all problems in Web services development. Could another protocol, perhaps HTTP, be used to make this m2m process RESTful? Sure! But it would be convoluted, so what's the point? I've never advocated REST for the sake of being RESTful, or bending over backwards to solve a problem with REST that's better and more easily solved otherwise. 
For the pizza problem, REST is only relevant to the customer interaction. Applying REST elsewhere may be possible, but if the purpose and benefit of REST is simplicity, such a complex REST solution misses the point of the style completely. I believe there's a problem out there, where systems have been designed for SOA and are now converting to ROA as if it's merely a different "serialization," if you will. So we see efforts to apply REST to problems that REST isn't meant to solve, instead of re-architecting systems in terms of problems REST can solve, while recognizing that parts of the system are better off being NOT REST. As I've said before, saying REST/NOT REST is not a value judgement. In fact, sometimes, for something to be RESTful can be a mistake. -Eric
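The conversion step Eric describes is simple enough to sketch. This is not the original Perl script (long gone, per the story); it is a guess at its shape, written in JavaScript with invented field names, just to show how little machinery the NOT-REST backend needed:

```javascript
// Hypothetical reconstruction of the order-formatting step: turn the
// name/value pairs POSTed from the confirmation page's hidden inputs
// into the human-readable text that was e-mailed to the pizza parlor.
// Field names are invented; the real script was Perl and is long gone.
function orderText(fields) {
  return [
    "*** PIZZA ORDER ***",
    "Size:     " + fields.size,
    "Style:    " + fields.style,
    "Toppings: " + (fields.toppings.length ? fields.toppings.join(", ") : "none"),
    "Method:   " + fields.method,
    "Total:    $" + fields.total
  ].join("\n");
}

console.log(orderText({
  size: "large", style: "thin crust",
  toppings: ["mushroom", "olive"],
  method: "pickup", total: "14.50"
}));
```

Everything after this function -- SMTP delivery, the print-on-arrival mailbox, the autoresponder -- is the part that was deliberately NOT REST.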
I'll tie this post into REST, but first... > > it's time for the free concert over in Steamboat -- Rhythm Devils > featuring Keller Williams. > Like jam bands? http://www.rhythmdevils.net/ Don't miss this! Wow! Iko Iko, Fire on the Mountain, In the Midnight Hour... wrapped up with a beautiful sunset/Venus-and-moonrise, temp 78-72F (I'm sorry, is it hot where you are?), the highlight of the summer for everyone there. I heard lots of folks say it was the best of the 20-year-old free summer concert series -- and we've seen some doozies from the likes of Big Head Todd, War, Little Feat, Karl Denson, Susan Tedeschi, Bela Fleck, John Hiatt, Nitty Gritty Dirt Band, Keb Mo, String Cheese Incident (their paid shows -- Blues Traveler opened -- at the ski area still stand as the best live music we've ever had here), list goes on... The crowd is almost all locals, everyone brings their kids of all ages, there's beer and food, anyone you don't know turns out to know someone you're with, one County Commissioner was there dancing two years after her double knee replacement, all us geezers were wearing vintage tie-dye from our Deadhead days... The idea behind the free concerts is to give something back to the workers who are responsible for Steamboat's tourist-friendly reputation, and just be a fun party for the whole community. Summer just doesn't get any better than seeing an awesome concert, free, in such an atmosphere. IMNSHO. But this doesn't bias my review of the music. I'm exhausted and my voice is still shot a day later. Howelsen Hill amphitheatre is the out-run of the century-old nordic ski jumping facility built into the hill -- this really reverbs the bass back to the audience, and the naturally-good sound always amps up whoever plays there on our tiny little stage. Tapers were there, when this show winds up online it's worth a listen. I know half the people responsible for the concert series, so getting backstage after was easy (OK, even if you didn't have connections). 
I wanted to meet Mickey and Billy (I saw some mediocre Dead shows, but those two were always on fire) but didn't, so we hung with Keller for a while (used to play the bars here so much he's an honorary local, like SCI, this was back when I was a local ISP cranking out interactive session-less websites). We met Davey, the 23-yr-old guitar prodigy (I don't use the term lightly) who sings like Gregg Allman and plays like Jerry Garcia. Davey's a hard guy to talk to, mainly because of all the (mostly underage) hottie girls who kept interrupting us to get their picture taken with him! They didn't even notice Keller. Andy the bass player was really razzing Davey about his "groupie problem," which was funny. I liked Davey -- he played 8 or 9 of his 12 touring guitars, and one mandolin, over the course of the evening, used the slide some. I've seen Keller's one-man show over a dozen times, and once with SCI. I went up to him, re-introduced myself, reminded him we'd hung out after-hours in certain Steamboat Springs watering holes which no longer exist, at which point he sorta-recognized me. Then I complimented him on the show, and congratulated him for landing the tour gig. Then I asked him if he learned all he needed to know about improvisational jamming from SCI, or if Mickey and Billy are taking him to school (they are). I just treat these folks like regular people -- I'd never walk up to a total stranger in the middle of a conversation, interrupt to ask to have my picture taken, then leave as suddenly as I came. Famous or not. But we were interrupted twice by dudes who wanted their picture taken with him (the new autograph). I just wanted to tell them to chill, I don't know Keller any more than they do (well, a little), we're having an interesting conversation so just hang, listen, and try to think of something intelligent to say -- *then* ask for the photo, if you must. 
So no, I have no photographic proof to post of this encounter (I don't even own a mobile phone) despite having actually had conversations with these people. As to those other folks, what's the point of having a photo of yourself with someone you've never actually met? Kinda like, what's the point of calling an API REST, when it doesn't exhibit the properties of a REST system? My point here is not to say that taking REST advice from me is like taking financial advice from Lenny Dykstra. But I am a college-dropout Deadhead ski-bum entrepreneur with no formal training in this field, or corporate experience -- and this *does* bias my vision of how REST should be applied. So I don't care if people take my advice or leave it. But don't get upset if it doesn't fit your preconceptions of what enterprises (particularly your own) need. Just don't insist, or try to get me to say, that what I say is REST/NOT REST is only my *opinion*, because I believe REST to be hard science. My assertions aren't always right, I recognize this, and am always ready to be proven wrong. But you'll really piss me off if, instead of rational argument of the merits of a position in the language of science, you resort to insisting that I'm expressing an opinion or accusing me of purism, ideology, or religious devotion -- since by avoiding debate in the language of science *you're* the one who's trying to make it an ideological debate of opinions. Which is NOT REST. I'm interested in the simplest solution to a problem, which means not forcing REST to be the answer, and coming up with the simplest RESTful solution I can to problems that are best solved by REST. Since there's always more than one RESTful solution to any such problem, my way may not always be the best way, but one thing it will be at this point in my experience, is RESTful, because it is hard science and once anyone learns it as such they will also have that level of confidence (not arrogance) in their work. 
I kept my Web development lazy-Deadhead simple, never following the corporate crowd into the SOA pit. I will never have a service-centric vision of REST because I never bought into the idea of service orientation (my philosophy is 'services as supplement'). Now that the corporate crowd is climbing back out of the SOA pit, they're bringing the filth with them (so to speak ;-), meaning that most of what's written about REST is written by SOA folks for SOA folks, and REST is increasingly being discussed in terms of SOA, as if it's an evolution of SOA. Thus, the obsession with IDLs like WADL... http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven#comment-746 Others have noticed this as well, although perhaps have not put it in as blunt of terms as I use. I've seen "RESTful Web Services" described as REST's "New Testament" and Roy's thesis as the "Old Testament". I believe Roy was right all along, which I guess puts me squarely in the "Old Testament" camp, in that I believe systems should be conceived as hypertext APIs rather than as services bound to business rules. I see SOA as an evolutionary dead end, so I refuse to discuss REST in those terms. I don't change my views of REST based on whether a system is "enterprise" or not, any more than I change my behavior towards others based on whether they're famous or not. Now, don't get me wrong. I recommend "RESTful Web Services," just not as the be-all end-all reference to REST. Not enough emphasis on the hypertext constraint, which is unfortunate because it plays right to the crowd who want to re-serialize SOA systems as ROA systems, instead of re-thinking their entire approach in terms of the hypertext capabilities of a common "client domain" (to use Andrew's term) and taking things from there. The biggest clash between Old Testament and New right now, seems to be the issue of media type proliferation. 
On that point, please refer to: http://roy.gbiv.com/untangled/2008/paper-tigers-and-hidden-dragons Notice that Roy's solution to the problem space is a sparse-bit array. Instead of creating a new media type, Roy's thought process is to consider what ubiquitous media type may be repurposed to this need. His choice is image/gif. That's so REST! In fact, that's so REST it's just hard for folks to accept it. Once you have, REST becomes much easier to learn. Hard science, yeah, but there's a Zen to it that goes against everything you ever thought you knew about distributed software architecture. More than anything else, it's the Zen aspect of the science of REST that makes it so hard to teach. Sometimes a fact must be accepted as a fact, and its understanding left for later, as the purpose won't otherwise become obvious. Since REST is a real-world proven goal, is this any harder a way to learn than going some other direction where the goal is an unknown that isn't real-world proven? Once folks *have* gotten there, they're on their own to figure out how to show anyone else the way -- like where to say "this is a fact" and "just trust me on this for now". That's just how it is. If you don't trust my judgment on these issues I have no real guidance on, then just don't listen to me, and please don't challenge me on these issues with who-accredited-me-to-teach crap and such, and this goes for anyone trying to teach REST, not just me. Find someone whose teaching style is more to your liking to ask questions of, and try not to drive the others batshit crazy arguing their shortcomings from your perspective, particularly if you're so new to this that such perspective is uninformed. Anyone trying to teach REST is also by definition learning how to teach REST, as a result both are becoming easier over time. I envy those just starting with REST now, because they'll be as capable as I am with it in a fraction of the number of years it took me. 
The resources and technologies we have today, and the number of folks who have advanced to a high level of knowledge on this list and are willing to offer help, simply didn't exist even a few years ago. I can't claim to have learned REST -- it's my specialty, but it's also evolving steadily enough that I'll probably always be learning more about it. There are different approaches to REST, but there are no alternate interpretations of it. There are alternate interpretations of everything else that passes for an architectural style on the Web, from CORBA to SOA and everything in between, but the difference is that all the others are buzzwords -- only REST is a proven model. So while you can call just about anything SOA, that degree of flexibility just isn't there with REST. It isn't about a secret handshake, it's about having exhibited that you not only grasp the constraints, but the reasons they're inviolable and interdependent. Like a good Dead show with the band (plus anyone sitting in) firing on all cylinders, the interdependence of the members yields a result that's greater than the sum of its parts, that you can't get from any ensemble playing the same notes written as sheet music. Lightning. REST is bottled lightning, releasing it relies on the interdependence of all its constraints. Not loopholing your way into being able to say, "Well, technically we aren't violating any constraints, CoD you see..." but I'll get to that in another post. I have the patience to teach REST, but only to those with the patience to learn REST. I could make a fine turkey-hunting analogy here, omitted for brevity. ;-) A REST system may become so complex for the sake of being REST, that it violates no constraints, yet fails to rise above the noodling mediocrity too many Dead shows descended into, where the band just never got into the groove, yet without sucking either. 
In my *opinion*, this is the best-case scenario when SOA is re-factored into ROA, instead of being re-envisioned as a hypertext system. Take away a constraint from REST, and the result is a free-form architectural style that isn't based on a proven, successful model. Like the Dead without Jerry -- would you really expect it to scale? :-) Maybe that's why they turned down Carlos Santana's offer and ended it. -Eric <playlist status='current' rotation='heavy' artist='Grateful Dead' href= 'http://www.archive.org/details/gd1992-05-31.sbd.miller.87281.sbeok.flac16'> Las Vegas, May 31, 1992, with Steve Miller</playlist>
Eric, I'm totally with you. REST is not a sticker for my car, or a badge for my uniform, not a cool label on my t-shirt. It is a style, suitable for some problems, not for all. When we start changing our solution into a "convoluted REST", just for the sake of being RESTful, then we are wagging the dog. Cheers. William Martinez. --- In rest-discuss@yahoogroups.com, "Eric J. Bowman" <eric@...> wrote: > [snip - pizza-parlor example quoted in full above] >
Hello Bryan. That is actually an interesting question, and it is usually answered the way many of the great guys here did. Still, let's review what's in a URI.

1. A resource is not a URI. The URI is the identifier, the name, that identifies a resource. A resource can be anything, and it can have many names.
2. A client may not know all the names of a resource. In fact, the resource may already exist under a different name the client is unaware of.
3. The client may have control over the URIs it uses, but it should never have control over the server's URIs.
4. I don't see why a resource cannot have a name not given by the server, but I do see that a server should not be forced to accept a name for a resource.
5. The client should not infer anything from a URI. No folders, no types, nothing. That is why I refrain from using templates. Too tempting.

So, what does all that mean? It means you can use PUT with any name you want. That is your name, the URI from the client. But the server owns its namespace, so the resource may be created under whatever name the server likes. Still, the server can note that you, as a client, gave a special, particular name to that new resource. So, whenever your client requests that URI, the server knows which resource it refers to. If someone else, or even your client, finds that resource via a search or the like, the URI returned will be the server's. See? The resource in this case has two URIs. PUT has not forced a name onto the server; the server keeps its autonomy. The URI can be a cool URI on the client side, built from templates and structured however the client likes. The server doesn't care; it is just polite enough to remember your name for that resource. Furthermore, as I said, the URI is not the resource. You can PUT a resource with a name, but if the resource already exists (the resource, NOT the name), it will fail. See? 
The server can check the body of the PUT, and if creating a resource from it would duplicate an already existing resource, and that is not permitted, it will fail even if the URI you PUT to is completely new. This is very important. We are putting too much importance on URIs, when they are simply names for the really important guy: the resource. Cheers. William Martinez Pomares --- In rest-discuss@yahoogroups.com, Bryan Taylor <bryan_w_taylor@...> wrote: > > I've been discussing PUT for create with some coworkers. This is certainly valid > HTTP, but I'm wondering if people consider it RESTful. It seems to me that > giving the client control over part of the URI requires them to understand how > resources are organized and forces them to construct URIs as non-opaque strings. > So I wonder if this conflicts with HATEOAS. It potentially also puts a burden on > the client to avoid namespace collisions, so that it must adopt some uniqueness > logic which again requires application state that seems problematic. >
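A minimal sketch of the aliasing scheme William describes above, in plain Python: the client PUTs under a name of its own choosing, while the server mints a canonical URI and merely remembers the client's name as an alias (all names and the in-memory store are invented for illustration, not any particular framework).

```python
import itertools

class AliasingStore:
    """Toy resource store: the server owns its namespace but remembers
    client-chosen URIs as aliases for the canonical server URI."""

    def __init__(self):
        self._counter = itertools.count(1)
        self.resources = {}   # canonical URI -> representation
        self.aliases = {}     # client-chosen URI -> canonical URI

    def put(self, client_uri, representation):
        """Create the resource under a server-minted URI; keep the
        client's URI as an alias. Returns the canonical URI."""
        canonical = "/resources/%d" % next(self._counter)
        self.resources[canonical] = representation
        self.aliases[client_uri] = canonical
        return canonical

    def get(self, uri):
        """Resolve either a canonical URI or a client alias."""
        return self.resources.get(self.aliases.get(uri, uri))

store = AliasingStore()
canonical = store.put("/my/cool/name", {"size": "large"})
# The same resource is reachable by both names.
assert store.get("/my/cool/name") == store.get(canonical)
```

Note that, as Dong Liu points out later in the thread, URI aliasing has its own costs; this only illustrates the mechanics.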
Where can I get REST stickers? Is there a logo? Regards, Will Hartung (willh@...) On Sun, Jul 25, 2010 at 7:52 PM, William Martinez Pomares < wmartinez@...> wrote: > [...]
Will: LOL! I've recently been contemplating putting this: http://amundsen.com/images/mrt.png on a mug or T-Shirt. I used to pass these out as stickers in some classes I taught some years ago: http://www.ics.uci.edu/~fielding/pubs/dissertation/null_style.gif mca http://amundsen.com/blog/ http://mamund.com/foaf.rdf#me On Mon, Jul 26, 2010 at 13:43, Will Hartung <willh@...> wrote: > Where can I get REST stickers. Is there a logo? > > Regards, > > Will Hartung > (willh@...) > [...]
> I've recently been contemplating putting this: http://amundsen.com/images/mrt.png on a mug or T-Shirt. Would you PUT it or POST it on your mug or t-shirt? ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ My example of the humor constraint in REST. It was so constrained that it didn't even seem funny....
See attached... -E
No, be safe Mark... It is funny! I GET it! :-D Giacomo On Mon, Jul 26, 2010 at 8:10 PM, Mark Wonsil <mark_wonsil@...> wrote: > > > > I've recently been contemplating putting this: > http://amundsen.com/images/mrt.png on a mug or T-Shirt. > > Would you PUT it or POST it on your mug or t-shirt? > > ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ > > My example of the humor constraint in REST. It was so constrained that > it didn't even seem funny.... > > >
I like it. It makes REST even more confusing for the uninitiated. It should be plastered on every bus bench from Santa Cruz to Cambridge. "What is REST?" "Simple! *SLAP*" Just put that sticker over the mouth of the person asking the questions. "Now you can speak REST too!" Easy :). Regards, Will Hartung (willh@...) On Mon, Jul 26, 2010 at 10:49 AM, mike amundsen <mamund@...> wrote: > Will: > > LOL! > > I've recently been contemplating putting this: > http://amundsen.com/images/mrt.png on a mug or T-Shirt. > I used to pass these out as stickers in some classes I taught some years > ago: http://www.ics.uci.edu/~fielding/pubs/dissertation/null_style.gif > > mca > http://amundsen.com/blog/ > http://mamund.com/foaf.rdf#me > > On Mon, Jul 26, 2010 at 13:43, Will Hartung <willh@...> wrote: >> [...]
Just merged CouchDB with Rest... So everyone can REST now with CouchDB.... -E
On Sun, Jul 25, 2010 at 9:12 PM, William Martinez Pomares <wmartinez@acoscomp.com> wrote: > > > > Hello Bryan. > That is actually an interesting question that is usually answered like many great guys here did. Still, let's review what's in a URI. > > 1. A resource is not a URI. The URI is the identifier-name that actually identifies a resource. A resource can be anything, and can have many names. > 2. A client may not know all the names of a resource. Actually, a resource may already exist with a different name the client ignores. > 3. The client may have control over the URIs it uses, but it should never had control over the Server URIs. > 4. I don't see why a resource cannot have a name not given by the server, but I do see that a server should not be forced to name a resource. > 5. The client should not infer nothing from a URI. No folders, no types, nothing. That is why I prevent from using templates. Too tempting. > > So, what all that means? It means you can use PUT with any name you want. That is your name, the URI from the client. But the server owns its namespace, thus the resource may be created with the name the server likes. Still, the server can note that you, as a client, gave a special, particular name to that new resource. So, whenever your client requests that URI, the server knows which resource it refers to. If someone else, even your client, requests that resource using a search or something, the URI that will be returned is that one of the server. > I don't quite agree. There is a cost for doing this. See http://www.w3.org/TR/webarch/#uri-aliases > > See? The resource in this case has two URIs. PUT has not forced the name into the server. The server keeps its autonomy. The URI can be a cool URI client side, and use templates client side, and have a structure client side. But server doesn't care. It just polite enough to remember your name for that resource. > "Cool URIs don't change" from server side to client side. > > Furthermore. 
As I said, the URI is not the resource. You can PUT a resource with a name, but if the resource already exists (the resource, NOT the name), it will fail. See? The server can check the body of the PUT and if creating a resource from that duplicates an already existing resource, and that is not permitted, it will fail even if the resource you PUT has a completely new name. This is very important. We are putting too much importance into URIs, when they are simply names to refer to the really important guy, the resource. > Checking whether a resource already exists based on the representation the client sends is costly and not reliable. What happens if the URL the client wants already exists on the server? Should it be interpreted as an update? > Cheers. > > William Martinez Pomares > > --- In rest-discuss@yahoogroups.com, Bryan Taylor <bryan_w_taylor@...> wrote: > > > > I've been discussing PUT for create with some coworkers. This is certainly valid > > HTTP, but I'm wondering if people consider it RESTful. It seems to me that > > giving the client control over part of the URI requires them to understand how > > resources are organized and forces them to construct URIs as non-opaque strings. > > So I wonder if this conflicts with HATEOAS. It potentially also puts a burden on > > the client to avoid namespace collisions, so that it must adopt some uniqueness > > logic which again requires application state that seems problematic. > > > >
"William Martinez Pomares" wrote: > > Rest is not a sticker for my car, or a badge for my uniform, not a > cool label in my t-shirt. It is a style, suitable for some problems, > not for all. When we start changing our solution into a "convoluted > REST", just for the sake of being Restful, then we are wagging the > dog. > Exactly. The broader point I'm trying to make is that I'm afraid some people are being told by their bosses to "make it REST", in which case they aren't doing it for brownie points, but to keep their jobs. Which results in REST being discussed in terms of the SOA style it has nothing to do with. -Eric
Dong Liu wrote: > > What happens if the URL the client wants already exists on the server? > Should it be interpreted as an update? > I think what William was getting at is that if your system has constrained HTTP PUT to creation semantics, and a request comes in for a URI that already exists, the response should be an error. I would assign replacement semantics to HTTP PUT, but if the system has used POST for something besides creation semantics, I'd use FTP PUT for creation. There's no reason a resource can't have two URIs, one HTTP, one FTP. -Eric
> > I would assign replacement semantics to HTTP PUT, but if the system > has used POST for something besides creation semantics, I'd use FTP > PUT for creation. > In such a system, an FTP PUT request for an existing resource should also yield an error. -Eric
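The two PUT policies being contrasted above, create-only versus replace-or-create, can be sketched as one dispatch function. This is a toy illustration in Python with a hypothetical in-memory resource table, not tied to any framework:

```python
def handle_put(resources, uri, body, create_only=False):
    """Return an HTTP-style (status, reason) tuple.

    With create_only=True, a PUT to an existing URI is an error
    (the constrained-creation policy discussed in the thread);
    otherwise PUT has the usual replace-or-create semantics."""
    exists = uri in resources
    if create_only and exists:
        return (409, "Conflict: resource already exists")
    resources[uri] = body
    return (200, "OK") if exists else (201, "Created")

db = {}
# Create-only policy: first PUT creates, second PUT to same URI fails.
assert handle_put(db, "/orders/1", "pizza", create_only=True)[0] == 201
assert handle_put(db, "/orders/1", "pizza", create_only=True)[0] == 409
# Replacement policy: PUT to an existing URI updates it.
assert handle_put(db, "/orders/1", "calzone")[0] == 200
```

409 Conflict is a plausible status for the create-only failure, though the thread itself only says "an error".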
On Sun, Jul 25, 2010 at 2:40 AM, Eric J. Bowman <eric@...> wrote: > The biggest clash between Old Testament and New right now, seems to be > the issue of media type proliferation. On that point, please refer to: > > http://roy.gbiv.com/untangled/2008/paper-tigers-and-hidden-dragons > > Notice that Roy's solution to the problem space is a sparse-bit array. > Instead of creating a new media type, Roy's thought process is to > consider what ubiquitous media type may be repurposed to this need. > His choice is image/gif. That's so REST! It seems to me the conflict is coming from two distinct visions of computing. One vision is to model the world as you see fit, and make the world work with it. The other is to take the world's models and make your software work with those. Your discussion of using HTML is a simple example. You've mentioned it before, and I never quite grokked how you went about it until recently. Effectively, what you are doing is using semantic HTML markup combined with RDFa-style annotations to augment the markup, and using that as a representation for your data. When I looked at the RDFa primer (http://www.w3.org/TR/xhtml-rdfa-primer/) it became much clearer to me. But it still prompted my confusion about identifying the data to the system, since application/xhtml+xml simply doesn't tell me, at least, enough about how to process the data. But to your point, it does tell me what it is, and if it were my standard data type, then I would proceed to mine the payload for the interesting attributes. Apparently, that's what you're doing, correct? The XML payload that happens to be XHTML is not processed in total. Rather, you dig your data out of it, guided by the XHTML and RDFa annotations. 
If it were some defined XML, I'd be tempted to take the schema, generate a bunch of JAXB-annotated classes, and have the framework marshal/unmarshal the document to internal Java objects, and manipulate those rather than, perhaps, pull chunks out of the document using a bunch of, say, XPath expressions. That's when the light hit me. Effectively, if your path of approach is using something like XPath as your accessor technique, then the difference between an XML document and an XHTML/RDFa document is the actual paths used, but really little else. The RDFa can impose enough structure that static XPath expressions are effective and precise enough to get the data you want out of the payloads. Once that decision has been made, XML vs. XHTML becomes a bike-shed color, and it's easy to see the extra value XHTML provides "for free" over XML. But I think it's clear that when you're model-making, and particularly coming from a world where binding documents to objects is common, automated, and "free", the XHTML option never comes up on the radar. Arguably, it's not even an option at that point. Who wants the complexity of a generic XHTML DOM, even if mapped to an object in the system, compared to a "simpler", specific DOM/mapping? XHTML also (potentially) loses the value that things like schema validation can bring to the table. Now, technically, you could make a "sub-schema", where your document IS XHTML, it's just a specific subset of it that you (the designer) have decided is enough to represent your data. You can schema this, potentially map this (not many mappers do well with XML attributes to specific object slots), etc. "Cake and eat it too". If the goal of XHTML is for those intermediaries (i.e. it's not for the client's benefit, nor the server's benefit), that can work. But if you go this route, you can't take "arbitrary" XHTML that happens to have your interesting data embedded within it, since the overall document may not match your subset schema. 
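The XPath-as-accessor approach described above can be sketched with Python's stdlib ElementTree. The markup, the `ex:` property names, and the pizza-order vocabulary are all invented for illustration; a real XHTML document would also carry the XHTML namespace.

```python
import xml.etree.ElementTree as ET

# A toy XHTML-like document carrying RDFa-style property attributes.
doc = """
<html>
  <body>
    <div typeof="ex:Order">
      <span property="ex:size">large</span>
      <span property="ex:topping">mushroom</span>
      <span property="ex:topping">olive</span>
    </div>
  </body>
</html>
"""

root = ET.fromstring(doc)
# Static XPath-style queries dig the data out, regardless of the
# surrounding presentational markup.
size = root.find(".//span[@property='ex:size']").text
toppings = [e.text for e in root.findall(".//span[@property='ex:topping']")]
assert size == "large"
assert toppings == ["mushroom", "olive"]
```

The point is that the queries, not the document type, carry the application's knowledge: swapping XHTML for plain XML would only change the paths.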
But I don't think this is contrary to what you've been discussing. I don't think you've ever advocated a system being able to take arbitrary documents that meet the higher-level specification of the data type you're leveraging, vs. the more specific subset that your system supports. Might be a handy feature, but it's not a requirement. However, whether you use XHTML or XML, the semantics of the payload still need to be defined. That's always hard work. In that light, though, I want to take the Roy example you cited. While using a GIF is a clever media type to use, I think for many folks interested in this data it's wrong on many levels. First, it's not a sparse array, as was suggested; it's just compact. You're still sending all 1M bits whether it's 1 user or 10,000 user changes. Yes, it compresses, but that's not relevant, as that's only a transport issue. But most importantly, many systems that happen to use the GIF media type DON'T use it at the level for which it's being suggested. Specifically, at the bit level. I don't know PHP, but is it really straightforward to get the color of pixel 100,100 of a received GIF? What about JavaScript in a browser? Now, perhaps, with the canvas element it can be done, but that's a pretty recent development. But either way, it sure is a lot of hoops to jump through to find out if bit #100100 is set. Most systems present the artifact instantiated from a GIF data type as an opaque blob with very simple properties rather than as a list of bits. I see the conflict between reusing what is and creating what one wants as the difference between the folks wanting full-boat OO systems and typing within JS, instead of just passing around hashes of hashes. Bags of hashes of bags of hashes. The conflict between the strongly typed crowd and the dynamically typed crowd (the battles between which are legion). Some make do; others want specific abstractions to work with. 
We're actually seeing the phenomenon of reusing data types, even in the SOAP world here in health care: leveraging a few "common" data formats for many uses. A common data type today is the Document Submission Set payload. It's based on ebXML, which is used by another standards committee, and was therefore adopted by yet another standards committee. Ideally, this is what standard formats are for. But, at the same time, the format is so onerous that there is already pushback from the "simpler" crowd. For a simple exchange, there is a huge amount of "boilerplate" using this format. Just like the pushback against SOAP and the boilerplate it brings with it (outside of the semantics of SOAP). "Why can't I just send a PDF?" they say. So, standards or no, they're not necessarily easy to use. Tooling made SOAP "easy to use". REST is "harder" for many to use because of the lack of tooling. Throwing an XSD against some tools and getting free Java classes is "easier" than crafting and testing DOM code or XPath queries. That's where the pressure for many media types is coming from, IMHO. They're "cheap" to make, and "easy" to use.
On Mon, Jul 26, 2010 at 7:10 PM, Mark Wonsil <mark_wonsil@...> wrote: >> I've recently been contemplating putting this: http://amundsen.com/images/mrt.png on a mug or T-Shirt. > > Would you PUT it or POST it on your mug or t-shirt? .... > > ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ > > My example of the humor constraint in REST. It was so constrained that > it didn't even seem funny.... You're right, but on a positive note - at least it's a self-descriptive message.
ooohhhhh --- corny REST jokes... I want to play! Two Restafarians are sitting in a bar talking about their sex lives. The first one says: "My sex life is terrible! I'm impotent!" The second replies: "Well I'm not much better: I'm idempotent -- it's always exactly the same old thing every single time!" --- In rest-discuss@yahoogroups.com, Mike Kelly <mike@...> wrote: > > On Mon, Jul 26, 2010 at 7:10 PM, Mark Wonsil <mark_wonsil@...> wrote: > >> I've recently been contemplating putting this: http://amundsen.com/images/mrt.png on a mug or T-Shirt. > > > > Would you PUT it or POST it on your mug or t-shirt? > > .... > > > > > ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ > > > > My example of the humor constraint in REST. It was so constrained that > > it didn't even seem funny.... > > You're right, but on a positive note - at least it's a self-descriptive message. >
i blame this thread on Will H and William M P. mca http://amundsen.com/blog/ http://mamund.com/foaf.rdf#me On Mon, Jul 26, 2010 at 16:20, EdPimentl <edpimentl@...> wrote: > Just merged CouchDB with Rest... > > So everyone can REST now with CouchDB.... > > -E > > > >
Or: 41-yr M geek seeks younger F for non-idempotent sex Maybe that's how I'll meet my soulmate? -Eric
phew! mca http://amundsen.com/blog/ http://mamund.com/foaf.rdf#me On Mon, Jul 26, 2010 at 21:38, wahbedahbe <andrew.wahbe@...> wrote: > ooohhhhh --- corny REST jokes... I want to play! > > Two Restafarians are sitting in a bar talking about their sex lives. > The first one says: "My sex life is terrible! I'm impotent!" > The second replies: "Well I'm not much better: I'm idempotent -- it's always exactly the same old thing every single time!" > [...]
Mike. I think we need to get permission from that big guy from the A-Team... I just don't want to end up like Rocky in the first round, you know... William. --- In rest-discuss@yahoogroups.com, mike amundsen <mamund@...> wrote: > > Will: > > LOL! > > I've recently been contemplating putting this: > http://amundsen.com/images/mrt.png on a mug or T-Shirt. > I used to pass these out as stickers in some classes I taught some years > ago: http://www.ics.uci.edu/~fielding/pubs/dissertation/null_style.gif > > mca > http://amundsen.com/blog/ > http://mamund.com/foaf.rdf#me > > [...]
> >> > Cancelling the order returned the customer to the blank order form; > >> > placing the order went as follows: > >> > > >> > The customer saw a 'thank you' page informing them they would receive > >> > an 'order confirmed' e-mail, and a phone call shortly before the order > >> > was ready in the case of a pickup. This page linked back to the menu. > >> > The server, upon receiving a POST of the data in hidden input fields > >> > from the confirmation page, executed a Perl script to convert the name/ > >> > value pairs into a human-readable order. > >> > > >> > This order text was then e-mailed to the pizza parlor, where it would > >> > be printed out. The incoming mailbox would only receive from the Perl > >> > script, so it would only receive orders, so the printout was automatic, > >> > as was the autoresponder confirming to the customer that the order was > >> > received by the kitchen, not just the website (by using the customer's > >> > e-mail as reply-to). The frontend customer application was REST. The > >> > backend m2m process was NOT REST. > >> > > >> > Why should it have been? The need was for reliable messaging that HTTP > >> > doesn't have (if I'd been concerned about message order, then I'd have > >> > used multipart), and a one-to-one connection rather than HTTP's one-to- > >> > many. So the correct choice of protocol, then and now, was and is SMTP > >> > which is not a RESTful protocol. REST is not the solution to all > >> > problems in Web services development. > >> > > >> > Could another protocol, perhaps HTTP, be used to make this m2m process > >> > RESTful? Sure! But it would be convoluted, so what's the point? I've > >> > never advocated REST for the sake of being RESTful, or bending over > >> > backwards to solve a problem with REST that's better and more easily > >> > solved otherwise. For the pizza problem, REST is only relevant to the > >> > customer interaction. 
Applying REST elsewhere may be possible, but if > >> > the purpose and benefit of REST is simplicity, such a complex REST > >> > solution misses the point of the style completely. > >> > > >> > I believe there's a problem out there, where systems have been designed > >> > for SOA and are now converting to ROA as if it's merely a different > >> > "serialization," if you will. So we see efforts to apply REST to > >> > problems that REST isn't meant to solve, instead of re-architecting > >> > systems in terms of problems REST can solve, while recognizing that > >> > parts of the system are better off being NOT REST. > >> > > >> > As I've said before, saying REST/NOT REST is not a value judgement. In > >> > fact, sometimes, for something to be RESTful can be a mistake. > >> > > >> > -Eric > >> > > >> > >> > > > > > > >
Ok, ok. I may need to be more careful with my sticky words... William. --- In rest-discuss@yahoogroups.com, mike amundsen <mamund@...> wrote: > > i blame this thread on Will H and William M P. > > mca > http://amundsen.com/blog/ > http://mamund.com/foaf.rdf#me > > > > > On Mon, Jul 26, 2010 at 16:20, EdPimentl <edpimentl@...> wrote: > > Just merged CouchDB with Rest... > > > > So everyone can REST now with CouchDB.... > > > > -E > > > > > > > > >
Gentlemen (I'm assuming, but you never know on the 'net)! I'd like to politely request that I continue to be able to subscribe to this list without violating my company's sexual harassment policy. Let's back away from that particular line, please! -Eric. On 07/26/2010 06:38 PM, wahbedahbe wrote: > > > ooohhhhh --- corny REST jokes... I want to play! > > Two Restafarians are sitting in a bar talking about their sex lives. > The first one says: "My sex life is terrible! I'm impotent!" > The second replies: "Well I'm not much better: I'm idempotent -- it's > always exactly the same old thing every single time!" > > --- In rest-discuss@yahoogroups.com > <mailto:rest-discuss%40yahoogroups.com>, Mike Kelly <mike@...> wrote: > > > > On Mon, Jul 26, 2010 at 7:10 PM, Mark Wonsil <mark_wonsil@...> wrote: > > >> I've recently been contemplating putting > this: http://amundsen.com/images/mrt.png on a mug or T-Shirt. > > > > > > Would you PUT it or POST it on your mug or t-shirt? > > > > .... > > > > > > > > ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ > > > > > > My example of the humor constraint in REST. It was so constrained that > > > it didn't even seem funny.... > > > > You're right, but on a positive note - at least it's a > self-descriptive message. > > > >
Will Hartung wrote:
>
> In that light, though, I want to take Roy's example you cited.
>
> While using a GIF is a clever media type to use, I think for many
> folks interested in this data it's wrong on many levels.
>
> First, it's not a sparse array, as was suggested, it's just compact.
> You're still sending all 1M bits whether it's 1 user or 10000 user
> changes. Yes, it compresses, but that's not relevant as that's only a
> transport issue.
>
I believe these shortcomings were addressed in further debate, mostly
on Joe Gregorio's weblog. For the sake of this discussion, let's assume
we are talking about a hierarchical set of data that *is* a sparse-bit
array and *is* best modeled using GIF. What the data is, is irrelevant.
>
> But most importantly, many systems that happen to use the GIF media
> type DON'T use it at the level for which it's being suggested.
>
Right. But, any component needing to render the GIF can, if that's
even called for. Intermediaries couldn't care less what any GIF represents
conceptually, only that the bag of binary bits they're seeing is a GIF. An
intermediary might care, which may be a bad thing, in fact the best
argument against GIF here is that some ISPs' accelerators might convert
to JPEG, but that's only a problem above a certain file size, and we're
avoiding that problem in my forthcoming example with a hierarchy of
GIFs.
Antivirus intermediaries don't care about GIF, at least not yet, but
they do care about other image formats which have been used as the
delivery vector for viruses. Note that a sparse-bit array could also
be modeled as an HTML table, or myriad other media types. Let's agree
to stick with GIF for the sake of argument. If GIF becomes susceptible,
instead of re-architecting the system, we can just be thankful that our
users who are worried can implement gateway antivirus.
>
> Specifically, at the bit level. I don't know PHP, but is it really
> straightforward to get the color of pixel 100,100 of a received GIF?
>
Presumably, GD is being used by the producer, to convert object data
from an array into a GIF (if PHP is a system requirement, then the
media type selected would be PNG not GIF, but I'm going to keep saying
GIF). A PHP consumer can determine the color of any pixel, yes. See
example #1:
http://php.net/manual/en/book.image.php
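Stepping outside PHP for a moment: whatever the library, "the color of pixel 100,100" in a 1-bit image is just a bit test against a packed array. A minimal stdlib-only Python sketch of that idea (the 1000x1000 geometry and row-major bit order are my assumptions, not anything from Roy's example — though conveniently they make pixel 100,100 land on bit #100100):

```python
# A 1000x1000 1-bit "image" packed 8 pixels per byte, standing in
# for the sparse-bit-array GIF discussed above.
WIDTH, HEIGHT = 1000, 1000
bitmap = bytearray((WIDTH * HEIGHT + 7) // 8)  # all bits start at 0

def set_bit(x, y):
    """Mark pixel (x, y) as set (e.g. 'this user changed')."""
    i = y * WIDTH + x
    bitmap[i // 8] |= 1 << (i % 8)

def get_bit(x, y):
    """Return True if pixel (x, y) is set."""
    i = y * WIDTH + x
    return bool(bitmap[i // 8] & (1 << (i % 8)))

set_bit(100, 100)          # bit #100100 in row-major order
print(get_bit(100, 100))   # True
print(get_bit(100, 101))   # False
```

The GD (or Pillow, or canvas) question is then only about getting at those bits through an image API, not about the data model itself.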
>
> What about Javascript in a browser. Now, perhaps, with the canvas
> element it can be done, but that's a pretty recent development. But
> either way, it sure is a lot of hoops to jump through to find out if
> bit #100100 is set. Most systems present the artifact instantiated
> from a GIF datatype as an opaque blob with very simple properties
> rather than as a list of Bits.
>
Why would the GIF images need to be displayed in a browser? If I
include some JSON in a page, do I display it as JSON? We're talking
about a system that just happens to use GIF as a data format. Whatever
interesting things we may want to know about that data, as well as how
to change that data, is a job for the XHTML which describes the GIF-
based API to that data.
True, there is no declarative way for a browser to generate a GIF
representation to PUT as a replacement. But who says we need the
browser to send a GIF to update a GIF? Or use the URI of a source
datum as the endpoint for a change to that datum? If 100,100 is an
important resource in its own right, assign it a URI. When that URI is
toggled from black to white, the client re-fetches the GIF, which GD
has regenerated and to which PHP has assigned a new ETag (if that's what
floats your boat).
But, let's bear in mind that Roy's example was specifically m2m and not
worry about how a browser would implement GD using Javascript, or
canvas, or whatever. In terms of developing and maintaining such a
system, the fact that when I check an interface it responds with a GIF
I can see, is of real benefit, even if checking the accuracy and
validity of the data contained in the GIF is a different problem.
>
> We're actually seeing the phenomenon of reusing data types, even in
> the SOAP world here in health care. Leveraging a few "common" data
> formats for many uses. A common data type today is the Document
> Submission Set payload. It's based on ebXML, which is used by another
> standards committee, and therefore adopted by yet another standards
> committee.
>
I've never actually claimed that SOAP is unRESTful. OTOH, a RESTful
SOAP system likely falls into convoluted territory, where architecture
astronuttery has failed to consider simpler solutions.
>
> Ideally this is what standard formats are for. But, at the same time,
> the format is so onerous, that there is already push back from the
> "simpler" crowd. For a simple exchange, there is a huge amount of
> "boiler plate" using this format. Just like the pushback from SOAP,
> and the boiler plate it brings with it (outside of semantics of SOAP).
> "Why can't I just send a PDF" they say.
>
Right. But a RESTful PDF-driven system, while possible, also falls
into the realm of "but why?" since PDF can always be a variant
representation of a resource's data, sans the website template of, say,
an XHTML variant of the same resource. Just because most media types
make sense to represent data, doesn't mean they're suited to driving an
API. PDF, though capable of driving a hypertext API, is still a poor
choice.
Maybe I should just come out and be more blunt about this -- there
exists today only a handful of hypertext media types suitable for
driving a REST API. This is a feature, not a bug! But there is no
limit to the number of media types that this handful can provide a
hypertext API *for*. REST requires those data types be at least
standardizable, but does not require them to be capable of driving a
hypertext API -- that's specifically what HTML (etc.) is (are) for.
>
> So, standards or no, they're not necessarily easy to use. Tooling made
> SOAP "easy to use". REST is "harder" for many to use because of the
> lack of tooling. Throwing an XSD against some tools and getting free
> Java classes is "easier" than crafting and testing DOM code or Xpath
> queries.
>
REST will never be easy to use, and makes a mockery of the very notion
of tooling. I'll get to my REST system design pattern, but for now,
I'll say that I can build a framework and tooling around this pattern,
but I can't envision such a thing as a general-purpose REST framework or
tooling. You can do REST with RESTlet, but some of the things I do
(without necessarily advocating them, like variant cookies) with REST can't be done on
RESTlet.
My demo includes a simple PHP httpd I'm going to repurpose as a
framework for the specific system design pattern I've developed for the
CMS problem space. Simpler systems (REST's goal) are more complicated
to develop. It's easier to have tooling and frameworks for "architectures"
that are merely buzzwords, where anything goes, than it will _ever_ be
to come up with some sort of "constraint validator" for REST.
A framework built around my design pattern, will always implement the
same REST constraints the same way, from one project to the next. If I
were to build such framework and tooling, I would then proceed to
specialize in marketing it to folks whose problem it solves, but then
I'd be an expert in my framework -- just as some are experts at using
Typo3, but not otherwise building websites. Although, if I go this
route (my basic plan), REST does become simple for me to use, since all
the hard work that went into the framework never needs to be repeated.
If we can agree that the best way to implement a REST shopping cart is
to use HTML, +RDFa to express the GoodRelations ontology, then it's
possible to come up with a framework and tooling to generate an endless
variety of different workflows for different needs, and be able to
validate the results against both schemas and REST constraints. But,
that isn't the only paradigm for building a RESTful shopping cart, and
I can't conceive of a tool or framework that would support all the
possibilities.
What makes WS-* tooling easy to conceive of, is the lack of constraints.
Consider: any WS-* standard is *defined* by its tooling -- whatever
someone can actually make work, becomes the standard by which all else
is judged. This reliance on coding libraries instead of media types,
is a significant difference between WS-* and REST, and is exactly what
leads some to insist that a media type is a contract, from which code
stubs may be generated.
>
> That's where the pressure for many media types is coming from, IMHO.
> They're "cheap" to make, and "easy" to use.
>
Which is designing for the short-term -- a REST antipattern. Once one
understands REST, one understands the false economy of "cheap" because
long-term maintenance requires training to a non-ubiquitous media type.
>
> One vision is to model the world as you see fit, and make the
> world work with it.
>
aka "unbounded creativity"
>
> The other is to take the world's models and make your software work
> with that.
>
aka "applying constraints"
Or, the mistake could be called, putting your system's cart before
the REST horse. REST isn't a better way to do SOA, it's an anti-
pattern of SOA, which doesn't come across when most of the chatter
about REST puts it in SOA terms.
>
> Your discussion of using HTML is a simple example. You've always
> mentioned that before, and I never quite grokked how you went about it
> until recently. Effectively what you are doing is using semantic, HTML
> markup combined with RDFa style annotations to augment the markup, and
> using that as a representation for your data.
>
Actually, I'm using that as a *variant* representation of my data --
specifically, as the variant or variants (XHTML vs. HTML) which drive
the API. Take another look at my demo, because that's the design
pattern I'm talking about (since it applies to so much more than just a
weblog, or even a wiki/weblog/forum, while of course not applying to
all problem spaces or representing a definitive example of REST):
http://charger.bisonsystems.net/xmltest/index.axm
(that's cross-browser; if you want REST you want the index.xht variant,
unless you're using IE)
The design pattern writ large, is to assign each application state a
URI and an XHTML stub file containing metadata. The XHTML calls an XSLT
processor to transform a more application-specific data format into an
XHTML API suitable for manipulating that back-end format. I've exposed
my back-end "business logic" (not rules) as a linked XSLT stylesheet.
The back-end data is hierarchical -- by date or by topic, in this
instantiation, but it could be anything. Nothing about this design
pattern requires Atom, just XHTML and XSLT, and it applies to most REST
problems I've ever considered. It may be used to implement Atom
Protocol, or whatever other protocol is needed to interact with the
back-end data. I like it so much I'm building a framework around it.
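A rough sketch of what one such stub might look like (the file names, paths, and the exact shape of the stub are my guesses for illustration; the real thing is at the demo URL above):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="xsl/weblog.xsl"?>
<!-- One stub per application state. The PI hands the client's XSLT
     processor a stylesheet that transforms the back-end Atom into
     the XHTML interface used to manipulate that Atom. -->
<html xmlns="http://www.w3.org/1999/xhtml">
  <head>
    <title>weblog index</title>
    <link rel="alternate" type="application/atom+xml" href="atom/index.atom"/>
  </head>
  <body><!-- filled in by the transform --></body>
</html>
```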
My data format is Atom. And yes, Atom is hypertext. But no, Atom is
not the sort of hypertext needed to meet the hypertext constraint
(unless all you're doing is GET) -- a collection of Atom entries and
feeds is not an engine of application state. Atom and Atom Protocol
are no more REST than HTML and HTTP. REST is about combining ubiquitous
media types in new and different ways. The power isn't in any one
media type, but in the combination of media types, just as the power
isn't in any one REST constraint, but the interdependence of the
constraints.
Any client can interact with my raw Atom in any way it wants to.
Anyone wishing to develop a client which serendipitously re-uses my
Atom content, discovers how to make it work by referring to the self-
documenting hypertext API of my (X)HTML variant(s). That's why it's
there. REST doesn't constrain clients to interact with my system via
hypertext, but it does constrain my system to provide such an interface.
The developer's manual for a REST API, _is_ the REST API.
>
> But it still prompted my confusion about identifying the data to the
> system, since application/xhtml+xml simply doesn't tell me, at least,
> enough about how to process the data.
>
Consider that anything application/xhtml+xml doesn't tell you, belongs
somewhere else, most likely as domain-specific vocabulary that is not
relevant at the protocol layer. At the protocol layer, we only care
that a GIF is a GIF -- not whether it's a picture of a dog, or even a
sparse-bit array. It really would help if Roy would at least publish
his *notes* on media type design; without that, I don't feel that I or
anyone else is capable of taking a stab at REST's missing chapter.
I think the hardest thing to explain about REST, is *why* a media type
identifier doesn't have to say anything about the version of the media
type, or any of the application-specific things the "contract crowd"
wants them to do. Some guidance from Roy here, is overdue.
>
> But to your point, it does tell me what it is, and if it were my
> standard data type, then I would proceed to mine the payload for the
> interesting attributes.
>
You're right, mining the payload for data is one possibility. But
consider another possibility, that of mining the payload for the URL of
a more application-specific media type, and mining *that* for data:
Going in, the only knowledge of my demo weblog anyone has is the link I
posted to it, above (let's assume .xht). REST doesn't eliminate the
need for a clue; going in, you also know it's a weblog which uses Atom
as its native data format. The media type advertises that a link is a
link, and some of those links may have link relations. What those link
relations are, is not a protocol-layer concern. If a component needs
to know that, then the component needs to introspect beyond the headers.
In an ideal world, the XHTML document's @profile tells us where to look
to find the meaning of the link relations used, and the DOCTYPE tells
us that we're using XHTML 1.1 extended with RDFa. In reality, my
system uses the HTML 5 DOCTYPE, because a client-side XSLT system fails
on IE, as IE thinks that means 'download the .dtd file.' I don't know
what the status is of @profile in HTML5, I just know I like it when
used properly as a mechanism for defining link relations, i.e. it's a
good place to put this URL, not the sort of thing that needs to somehow
be machine-readable or even dereferenced, just as an identifier:
http://www.w3.org/1999/xhtml/vocab
Users, human or machine, have known (and unknown) goals the system
facilitates via its REST applications. As an example, "load the most-
recent weblog entry, and its comments."
(I've just decided to change my model, such that links to comment
threads are present or absent based on comments being on or off, so
that if comments are allowed, there's still a link even if no comments
exist -- a 404. This is not reflected in the demo, where no comments
means no link, for now.)
For the human, this is easy (or rather, the ease-of-use is dependent on
visual design). Use eyeballs to determine ascending vs. descending post
order, to find the newest. Click on the link to the comment thread
(even if there aren't any comments, this is one location of several
suitable for the post-new-comment form). Human goal accomplished -- if
the application was "load the most-recent weblog entry, and its
comments" then that application has terminated successfully upon 200 OK.
>
> Apparently, that's what you're doing, correct? The XML payload that
> happens to be XHTML is not processed in total. Rather you dig your
> data out of it guided by XHTML and RDF annotations.
>
For the machine, the HTML behind the visual design needs to facilitate
the discovery of the most-recent post and its comment thread (if any).
Each post will present (I'm still working on the markup) a publication
time, marked up as both human- and machine- readable, for example <abbr
title='machine-readable'>human-readable</abbr>. RDFa allows that @title
to be defined as equivalent to atom:published, while scoping it within
the <li> for the post.
(I mark up a weblog index page semantically with an <h1> for each
day, followed by an <ol> of the posts made that day. The RDFa I'm
talking about here is in a state of flux as this is work-in-progress.)
The machine user is interested in the link to the source of the first
post, not its comment thread, because unlike the HTML where original
post and comments may appear together, they are always different
datasets in Atom. By comparing atom:published times (gleaned from
<abbr> or whatever via RDFa), a machine user now needs to know which of
the links in-scope of the <li> points to the standalone weblog entry.
A link annotated with rel='dc:source' would be nice; however, this
markup scopes to the page, not the <li> -- so the machine must traverse
a link to find the proper dc:source.
In the case of my demo, this link is marked up with property='dc:title'.
So, the processing instructions for *which* link meets the criteria,
are contained within domain-specific vocabulary (in which I've re-used
other standard vocabularies like Atom and Dublin Core). When a machine
follows this link, the XHTML representation returned has a <link rel=
'dc:source' type='application/atom+xml'/>. The machine user understands
that Atom has a 'replies' link relation. The machine user follows
dc:source to an Atom representation containing a rel='replies'. The
second 200 OK marks the successful termination of the application.
Now consider, instead of how a machine might "discover" how to
interact with my system, how someone unknown to me might develop their
own client component for my system. I've given them a
media type, Atom, for which plenty of standard libraries exist.
Knowledge of Atom and Atom Protocol alone, however, is not enough to
derive my system's API. What URIs to POST new entries or comments to,
how to tag entries with categories, how to add, edit or remove
categories -- these unknowns only need to be discovered _once_ by the
developer, who then codes against these interfaces. (Of course, his
app breaks when I change my API, so it would be better to code against
my hypertext, so his custom client also updates.)
What documentation does this developer refer to, in order to figure out
how my system works? If I've done my job correctly as a REST architect,
then I have a self-documenting API where all that knowledge is laid out
in declarative hypertext -- the ultimate DIY handbook. I read an
interesting article recently about the importance of documentation as
development tool, instead of as afterthought. With REST, you document
your system as you go, because the declarative nature of hypertext
amounts to "functional documentation," provided you're using ubiquitous
media types to encapsulate your out-of-band knowledge.
>
> Once that decision has been made, XML vs XHTML becomes a bike shed
> color, and it's easy to see the extra value XHTML provides "for free"
> over XML.
>
I don't consider it a matter of painting the bike shed. A vital value-
add from XHTML is accessibility you don't get from XML. Why *not* make
a self-documenting hypertext API useful to as many humans as possible,
instead of only those without disabilities? In fact, accessibility
markup, by its very nature of being machine-readable, provides further
standardized attributes whose use may be included in the domain-specific
vocabulary (which doesn't have to be restricted to RDFa).
>
> XHTML also (potentially) loses the value that things like Schema
> validation can bring to the table.
>
I don't see how; XHTML is XML. Maybe what you're saying is that by
transforming Atom to XHTML I lose the ability to validate the Atom-
ness of the output, if I understand you. But, any transformation of
valid Atom to XHTML, may be reversed back to valid Atom -- this, in
fact, would be how to test a domain-specific vocabulary for expressing
Atom constructs as XHTML using RDFa. A schema may then be constructed
for the XHTML output, to ensure it validates against what the input
needs to be (correctly uses the domain-specific vocabulary for
expressing Atom as XHTML) for that XSLT transformation back to Atom.
Not that I'm saying any of that has to do with REST.
>
> Now, technically, you could make a "sub schema", where your document
> IS XHTML, it's just a specific subset of it that you (the designer)
> have decided is enough to represent your data. You can schema this,
> potentially map this (not many mappers do well with XML attributes to
> specific object slots), etc. "Cake and eat it too".
>
Well, sure. In order for my XHTML output to be not just valid but
accessible, headings must be properly nested. The structure of my
output dictates that authors be restricted to <h3> - <h6>, while
accessibility requires authors to nest those properly. RELAX NG +
Schematron may be used to validate against not only the subset of
XHTML *modules* I allow authors to use, but also the "business rules"
for using that markup. Since content is wrapped in Atom, in order for
the Atom to validate, the xhtml:div's must also validate to a subset of
XHTML (assuming @type='xhtml').
If by losing capabilities of schema, you mean those capabilities of XSD
that aren't duplicated using RELAX NG + Schematron, then I counter that
those capabilities are better lost in a REST system. I've never
understood the point of having validation change the document being
validated, and can't imagine such capability making sense in a REST
system.
Not that this has anything to do with REST, I'm just pushing back
against a perceived pragmatic shortcoming of "my way."
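The heading rule alone is easy to picture as executable validation. A stdlib-only Python sketch of roughly that check — not the actual RELAX NG + Schematron, and the exact rules (h3-h6 only, no skipped levels) are my reading of the constraints described above:

```python
import re

def check_headings(xhtml_fragment):
    """Return rule violations for author-supplied markup:
    only h3-h6 allowed, and no heading level may be skipped."""
    errors = []
    levels = [int(m) for m in re.findall(r'<h([1-6])[\s>]', xhtml_fragment)]
    prev = 2  # author content hangs off the template's h2, so h3 opens a section
    for lvl in levels:
        if lvl < 3:
            errors.append(f"h{lvl} is reserved for the page template")
        elif lvl > prev + 1:
            errors.append(f"h{lvl} skips a level after h{prev}")
        if lvl >= 3:
            prev = lvl
    return errors

print(check_headings('<h3>a</h3><h4>b</h4>'))  # no violations
print(check_headings('<h3>a</h3><h6>b</h6>'))  # one violation reported
```

A schema language does this declaratively, of course; the point is only that "business rules for using that markup" are mechanical checks, not judgment calls.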
>
> If the goal of XHTML is for those intermediaries (i.e. it's not for
> the clients benefit, nor the servers benefit), that can work. But if
> you go this route, you can't take "arbitrary" XHTML that happens to
> have your interesting data embedded within it, since the overall
> document may not match your subset schema.
>
Let's not get confused here that I'm only talking XHTML. I'm making a
point about standardized (i.e. ubiquitous) media types. The REST design
pattern of my demo weblog could just as easily crank out an SVG
interface instead of an HTML interface, if that's what the system calls
for. Back to Roy's example of the sparse-bit-array GIF, he mentioned
that it could be wrapped inside something like Atom, because having a
collection of images that you "just know" you can PUT and DELETE isn't
the same thing as a hypertext API.
So, re-using the system design pattern of my weblog demo, the source Atom
documents could be Atom Media Entries linking to sparse-bit-array GIF
files, presented as a hierarchy, with XSLT transforming the Atom into
XHTML to allow for different presentations of the data. Or, the GIFs
themselves may be what's displayed, instead of an SVG graph of some set
of data from the GIFs or somesuch derived from the GIFs. Perhaps
clicking on the GIF launches an external client application. Whatever,
the point is that you wouldn't be re-inventing all those wheels which
allow a browser to present a user with a selection of GIF files from
some server.
"Robert Brewer" wrote:
>
> > The point is that you can't know, another point is that this sort
> > of thing can only work with ubiquitous media types, and another
> > point is that this is why REST says to use standard media types --
> > how _else_ are you going to achieve Web scale, if you go against
> > how the Web scales?
>
> Not to discount your points in the least, but shipping Javascript to
> the client that then knows how to interpret the custom media-type
> seems to be a very popular approach to the "how else" these days. It
> certainly doesn't promote "serendipitous reuse" for clients that
> don't do Javascript <wink>, but for those that do, it leverages one
> ubiquitous media-type (javascript) to lift another, less ubiquitous
> one.
>
The simplest REST application I can describe would be a slideshow for
image/* media types. Drop a bunch of images, all the same media type
or mixed, I don't care, into a directory. Configure an httpd to serve
those images with Link: headers using the standard link relations up,
prev, next, first and last. I don't know if Opera groks Link:, but it
does grok those link relations, and will optionally present a navbar
interface, or optionally a fast-forward button, in their presence.
Voila, REST slideshow with no hypertext in the entity bodies.
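Concretely, the httpd configuration amounts to emitting a Link: header per image. A hedged Python sketch of just the header-building half (the "/" up-target and the filenames are invented for illustration; only the five relations come from the description above):

```python
def slideshow_links(images, current):
    """Build the Link: header value for images[current], using the
    standard link relations up, prev, next, first and last."""
    i = images.index(current)
    links = ['</>; rel="up"',
             f'<{images[0]}>; rel="first"',
             f'<{images[-1]}>; rel="last"']
    if i > 0:
        links.append(f'<{images[i - 1]}>; rel="prev"')
    if i < len(images) - 1:
        links.append(f'<{images[i + 1]}>; rel="next"')
    return ', '.join(links)

imgs = ['a.gif', 'b.jpg', 'c.png']
print(slideshow_links(imgs, 'b.jpg'))
```

First and last images simply omit prev/next respectively, which is all the client needs to render (or grey out) the navbar.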
If you're meeting the hypertext constraint within hypertext content,
then you've chosen a hypertext media type. Not images, and not
Javascript. Javascript is an imperative programming language, not
declarative hypertext. There is no such thing as a "link" in
Javascript, etc., IOW Javascript is a "blackbox" whose media type
identifier says nothing more than "executable text/plain".
This is not a declarative hypertext link:
xhr.open('GET','./xsl/csi.xsl',false);xhr.send('');
This is an imperative function. While it's obviously a GET of a URL
to informed eyeballs, its purpose cannot be discerned without further
introspection of imperative code. Is it a link traversal? Is it an
asynchronous inclusion? No fair inferring that it's an XSLT stylesheet
to be loaded into the browser's XSLT processor, from the filename
extension...
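For contrast, the declarative spelling of the same stylesheet reference is a one-liner (assuming the file really is destined for the browser's XSLT processor, which is exactly what the imperative version can't tell you):

```xml
<?xml-stylesheet type="text/xsl" href="./xsl/csi.xsl"?>
```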
When we talk about serendipitous re-use in REST, we mean the API. We
do not mean mootools or Sarissa -- a reusable blackbox is still a
blackbox. Javascript is a ubiquitous media type for scripting any other
media type which has Javascript bindings. Those other media types tend
to be ubiquitous hypertext types capable of driving a REST API, like
HTML or SVG.
"wahbedahbe" wrote:
>
> Interested to hear your argument against -- I've puzzled over where
> Ajax fits into REST quite a bit. To make a bit of a devil's advocate
> argument for it I'll say the following:
>
There's no way to assert AJAX to be either REST or NOT REST. There are
RESTful uses of AJAX, sure. Examples of each coming... it's a hard
line to draw, as it's situational.
>
> The point of code-on-demand is to allow the capabilities of the UA to
> be extended. Extending it to understand a data format seems like
> quite a reasonable thing to do.
>
It depends. Modern browsers understand xml-stylesheet Processing
Instructions. I use @type='text/xsl' in the markup, to inform clients
that they may process the linked document as such, despite the fact I'm
serving it as text/xml (to make IE work). A REST solution would be to
serve as text/xsl unless IE, ie implement conneg. Thus, REST gracefully
degrading to NOT REST.
I focus my use of AJAX on providing graceful degradation, when it comes
to data formats. First, let's start with a simple use case for RESTful
AJAX. My static, demo weblog's homepage's first entry says it has 2
replies. If that changes to 3, I want the application steady-state to
change to reflect that, without changing the application state (i.e.
reloading the page).
So I will mark that up as <noscript>2</noscript>. If Javascript is
enabled, AJAX makes the <noscript> into a <span>, then calls a comet
subresource of the weblog entry which exposes @thr:count as JSON. So
as long as the homepage application state is loaded, the steady-state
changes dynamically to reflect the current data. The REST application
state is actually an unsteady-state, as it consists of multiple open
connections. See the graceful degradation, though? The worst that can
happen is the user needs to re-load a page in Javascript's absence, to
update thr:count.
The declarative way to implement my demo weblog's client-side XSLT
architecture is to use xml-stylesheet PIs; browsers that support those
PIs are the only browsers I intend to support. However, if I wanted to
support older browsers as well, I'd call the XSLT transformation using
the Sarissa library. That library checks for native XSLT support and,
if none is found, can fall back to AJAXSLT.
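Sarissa's fallback decision can be sketched roughly like this (a simplification for illustration; the real library wraps far more than a single capability check):

```javascript
// Rough sketch of Sarissa-style capability detection: prefer the
// browser's native XSLT engine, fall back to a JavaScript one
// (AJAXSLT) only when nothing native is available.
function chooseXsltEngine(global) {
  if (typeof global.XSLTProcessor !== 'undefined') {
    return 'native';  // Mozilla-style XSLTProcessor is available
  }
  if (global.ActiveXObject) {
    return 'msxml';   // IE exposes MSXML's XSLT engine via ActiveX
  }
  return 'ajaxslt';   // pure-JavaScript code-on-demand fallback
}
```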
XSLT is Code-on-Demand any way you look at it. As a declarative
hypertext format, though, it only adds a layer of indirection instead
of incurring the visibility penalty associated with REST's optional
style. The real penalty to visibility is calling the XSLT using
Sarissa, since that's an imperative blackbox, whether Sarissa is
calling an internal XSLT engine or a C-o-D XSLT engine.
Graceful degradation, though. AJAX in such a case, is being used to
extend older browsers to understand a ubiquitous media type, and it's
that ubiquitous hypertext media type exposing the API, not blackbox
Javascript (a Javascript XSLT engine isn't a blackbox any more than a
built-in XSLT processor; it's the code calling such an engine that's a
blackbox by comparison to a declarative XML PI).
Is it RESTful to only provide a hypertext interface that relies on
Javascript to call XSLT? No. Will I bitch if you call it REST? No,
not if you've recognized your kludge and documented it as such, i.e.
"At such time as built-in XSLT via XML PI is ubiquitous, this method
will be used instead of AJAX to call XSLT transformations." As I've
said before, REST is your long-term goal, sometimes the Web needs to
progress before it's realized, and sometimes your system needs to scale
before implementing a constraint becomes cost-effective.
If your system is following a RESTful course that's been charted, but
falls short of REST's ideals for pragmatic reasons, it's still REST,
provided the shortcomings are documented, like Roy says in his "REST
APIs must be hypertext driven" weblog entry.
>
> Using a base serialization format such as XML or JSON for your data
> format (as well as the appropriate mime-type) does provide a
> reasonable amount of visibility as well.
>
Sure. I'm not saying anywhere *not* to use raw XML or JSON. But in
terms of a self-documenting hypertext API, well, you can't beat the
visibility of HTML semantics. Read on... If you've loaded the
application/xml variant of my demo weblog's homepage, IE or otherwise,
the steady-state you're looking at consists of the following media
types:
application/xml
application/xhtml+xml
application/atom+xml (as identifier)
application/atom+xml; type=feed
application/atomcat+xml
application/xbel+xml (coming soon for blogroll.xml)
text/xsl (as identifier only)
text/xml (as pragmatic kludge, once blogroll.xml is XBEL)
text/css
image/jpeg
image/png
application/json (not quite yet, as described above and below)
application/javascript (embedded, atm, in two other media types)
text/html (unless application/xhtml+xml is used to call XSLT, text/
html is your post-xslt-transformation media type)
(That's a dozen media types being passed around for REST, 14 for IE
kludge, just for one steady-state.)
What I've done, see, is to combine those ubiquitous media types in a
very system-specific way (aka "my way"). Atom Protocol isn't REST,
because a REST API is more than a definition of which methods on a
single media type yield which response codes (the SOA/IDL view of REST
I'm pushing back against). OTOH, XHTML has everything I need to create
a self-documenting hypertext API which _implements_ Atom Protocol.
So as you can see, the requirement of supporting IE forces me to kludge
around REST for that browser by using application/xml and text/html,
which otherwise aren't used in my system. Note that all browsers, even
IE, receive /date service payloads as application/xhtml+xml -- in IE,
XHR treats it as application/xml instead of application/octet-stream.
Take another look at that /date service payload. Ain't it a beaut? If
you can find the development doc, it uses CDATA to present what the
JSON variant will look like. Splitting hairs, size-wise, particularly
once compressed -- the headers take up more bytes than the entity. The
JSON is definitely ugly by comparison, and directly transcribes the text
strings used in the XHTML variant, for lack of any "XHTML schema for
JSON." Opaque.
The JSON variant has no semantics. Kinda the point of JSON. The XHTML
variant's semantics are common knowledge encapsulated within the
ubiquitous media type. The appearance of the string "title" in the JSON
variant, in no way implies that the next string is the title of the
document. Without the XHTML variant as a reference, the meaning of the
JSON variant is opaque -- those text strings could mean anything,
whereas in XHTML <title> is unambiguously the title of the document.
Transparent.
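To make the opacity point concrete, here is a hypothetical pair of variants (both the JSON shape and the title string are invented for illustration):

```javascript
// Hypothetical illustration: the XHTML variant carries its semantics
// in the media type; the JSON variant's keys are just strings.
var xhtmlVariant = "<title>Eric's demo weblog</title>"; // <title> is defined by the media type
var jsonVariant  = '{"title": "Eric\'s demo weblog"}';  // "title" means nothing without out-of-band docs

// A consumer of the JSON must "just know" what the key means:
function readTitle(json) {
  return JSON.parse(json).title; // relies on private agreement, not on a media type
}
```

The code works either way; the difference is that only the XHTML variant is self-describing to a client that has never seen this particular API before.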
Once I've added conneg back in (/conneg/ instead of /xmltest/, but
otherwise the same resources sans most file extensions), I'll post my
existing XForms interface for the system. The challenge is not so much
in coding XForms as in making that variant play nice with the others...
anyway, when authoring or editing content, there are places where ISO
8601 date-string conversion falls to AJAX -- the variant JSON
serialization on the /date service isn't REST, it's a kludge: the
XHTML variant could be read, it's just oh-so-much-easier to read JSON
in the Javascript context. Pragmatism, or not bending over backwards
to be RESTful for the sole sake of calling it REST.
Even if I were only consuming the JSON variant in my application, I
would still have that XHTML variant, because REST requires a self-
documenting hypertext API. With no semantics, the JSON is a bag of
bits. With semantics, the XHTML is a *data structure*. Combined with
the Link: header and the not-yet-built XForms "service document", the
XHTML variants will represent a complete, simple RESTful Web Service.
Even with the Link: headers and an AJAX "service document", the JSON
variant (in the absence of the XHTML variant) is NOT REST. Notice I'm
not saying *don't* use JSON. Remember, updating post count reads a
simple number, exposed as a subresource -- typing isn't required,
neither is anything else fancy, this could just as easily be YAML. No
semantics needed.
>
> There is also a certain amount of native support in the UA for these
> serialization formats as well. Javascript code that understand the
> schema and semantics of your XML/JSON is not significantly unlike a
> script that understand constraints that you've put on your HTML(...)
>
Blackbox Javascript code that takes bag-of-bits JSON and converts it
into HTML is NOT REST. XSLT which declaratively converts any XML
format (like Atom, in my demo) into XHTML is REST. But Atom, like
XHTML, has semantics which allow the source documents to stand on their
own. Same with XML, there's still a generic parent/child relationship
that can be traversed with XPath according to some schema.
JSON lacks such capabilities, perhaps rightly so, as it isn't supposed
to have any semantics. Blackbox AJAX code which asynchronously loads
content from another resource may or may not be REST, read on...
>
> Where's the violation of REST's constraints? I would say you've gone
> too far only when you are using code-on-demand to implement something
> that the UA already does natively (with little or no gain in
> non-functional areas such as visible latencies or perhaps
> portability). Thoughts?
>
First, I agree entirely with Mike's response to this question in the
previous thread. If your JSON re-invents HTML for the purpose of being
inserted into HTML documents, then you're violating self-descriptive
messaging, because HTML media types are used to identify HTML content.
Converting it to JSON is just obfuscating what it really is. Same with
reserializing Atom as JSON.
Typically, AJAX violates the identification of resources, self-
descriptive messaging, and hypertext constraints. How many AJAX
libraries out there have one URI that loads the AJAX engine, then
asynchronously loads JSON documents into HTML elements, all without
changing the URL when links are followed to new application states?
Granted, those JSON documents have URIs, but are the important
application states (i.e. the HTML representations) bookmarkable? Why
don't they have URIs? If I dereference one of those JSON documents,
does it link me to its required processing engine, or do I have to
"just know" that I need to go to the homepage first, to download the
AJAX engine? Such libraries are REST anti-patterns.
Compare that to my demo weblog. The messaging is self-descriptive.
The base format is Atom. The API format is XHTML. The process of
loading and executing a transformation from Atom to XHTML is handled by
declaratively calling a document whose very media type identifier
states that its purpose is to transform XML input into the XHTML of the
calling document. No mystery there!
There is nothing self-evident about using the ubiquitous media types for
Javascript and JSON to accomplish the same thing. If you're modeling
data in JSON that is properly semantically modeled using Atom, then
converting it to HTML using ad-hoc code, then you aren't doing REST --
if you can achieve the same goals using media types suited to the task,
but choose not to, you are violating the self-descriptive messaging
constraint, and possibly the hypertext constraint, so you can't point
to C-o-D as some sort of loophole.
Which brings me to my favorite abuse of AJAX -- PUT. If the forms
language you are using is HTML 4, then you lack the declarative tools
necessary to create a message body of any media type other than
multipart, which makes no sense for PUT, and you're putting PUT where
it isn't valid. Yeah, you can hack your way around this and make it
work using AJAX, sure. But is that a RESTful approach?
No. The XHTML media type has been extended by XForms to include any
HTTP method and to define payloads. XForms isn't a media type, it's a
guest language for XML host languages (XHTML, SVG). RESTfully, you
code your PUT request properly, using standardized declarative
hypertext like XForms. User agent doesn't grok XForms? Then extend
its capabilities using AJAX. Multiple libraries exist to convert
XForms code into kludgy HTML 4 forms + Javascript. (Actually, XForms
plugins work better for extending the user agent to grok XForms
natively.)
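As a sketch of what a declarative PUT looks like in XForms (element and attribute names follow the XForms 1.1 submission vocabulary; the resource URI and instance data are invented for illustration):

```xml
<!-- Hypothetical XForms submission: a declarative PUT of an XML
     payload. method="put" and serialization are XForms 1.1 features;
     the URI and model contents here are made up. -->
<xf:model xmlns:xf="http://www.w3.org/2002/xforms">
  <xf:instance>
    <entry xmlns="">
      <title>Edited entry title</title>
    </entry>
  </xf:instance>
  <xf:submission id="save"
                 resource="http://example.org/entries/42"
                 method="put"
                 serialization="application/xml"
                 replace="none"/>
</xf:model>
```

No imperative code is needed to express the method or the payload; that is the visibility win over an HTML 4 form plus XHR.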
Again, graceful degradation. XForms clients can understand the native
code; other clients can use C-o-D to transform the native XForms into
browser-specific markup. But the goal is to extend the client to
understand something that's standardized. The converted forms code
lacks the visibility of the native code, since it's highly dependent on
imperative Javascript code. However, it's converting declarative code
that meets REST's constraints. This way, C-o-D is adding a layer of
indirection rather than decreasing visibility -- hypertext is still
the engine of application state, not Javascript, if the Javascript is
used to interpret ubiquitous hypertext. Like with XSLT.
So it comes down to doing everything you can to avoid C-o-D, instead of
using it as a starting point. If you haven't done everything you can
to avoid C-o-D (which will be even more possible with HTML 5), then ur
doin' it wrong. If you're using C-o-D to avoid re-using one of the
subset of ubiquitous media types capable of driving a hypertext API,
then ur doin' it wrong. If there's a simpler, better way to achieve
REST without using C-o-D, then your tradeoff isn't just reduced
visibility, you're obviously trading away constraints as well.
Reduced visibility should never be the first tradeoff your REST project
makes.
-Eric
Dong. Agree. Aliases have their cost. Still, we are talking about a client that uses a URI it made up. That is, it would actually be for its eyes only. Any other request that accesses that resource will get the official one. And, any request from that client will get a redirect as a best practice. See PUT:

"A single resource MAY be identified by many different URIs. For example, an article might have a URI for identifying "the current version" which is separate from the URI identifying each particular version. In this case, a PUT request on a general URI might result in several other URIs being defined by the origin server"

also

"...the URI in a PUT request identifies the entity enclosed with the request -- the user agent knows what URI is intended and the server MUST NOT attempt to apply the request to some other resource. If the server desires that the request be applied to a different URI, it MUST send a 301 (Moved Permanently) response; the user agent MAY then make its own decision regarding whether or not to redirect the request."

Now, the semantics you add to PUT should keep PUT idempotent.

"Checking if a resource already exists by the representation from client is costly and not reliable."

Not so sure about that. See, if we use the analogy of a database: if the only way to check whether a resource already exists is the URI, that URI becomes a primary key of some sort. Again, a URI is just a name, not the resource. The content is not the last word, to be replaced blindly. Depending on the data, the server should make several verifications to avoid conflicts, like dependencies, duplication, data inconsistency, etc. Remember, the client knows only the latest version of that resource; the server may control cross-resource verifications and any other change to resources that can make a request invalid.

"What happens if the URL the client wants already exists on the server? Should it be interpreted as an update?"

Well, it SHOULD. See PUT:

"If the Request-URI refers to an already existing resource, the enclosed entity SHOULD be considered as a modified version of the one residing on the origin server."

I think your case is the one where two clients choose, by coincidence, the same made-up URI. They are both posting a new resource. The server may not have a way to detect that the requests come from different clients. So the second guy is unaware he is not creating a new resource but updating it (until he gets a 200 or 204 instead of a 201). As the update is marked with a SHOULD, you can actually limit PUT to creation, so the second request will fail. If you do not, then clients should handle the possibility of such a case by checking the response codes and acting accordingly.

Cheers.

William Martinez Pomares.

--- In rest-discuss@yahoogroups.com, Dong Liu <edongliu@...> wrote:
>
> On Sun, Jul 25, 2010 at 9:12 PM, William Martinez Pomares
> wmartinez@... wrote:
> >
> > Hello Bryan.
> > That is actually an interesting question that is usually answered like many great guys here did. Still, let's review what's in a URI.
> >
> > 1. A resource is not a URI. The URI is the identifier-name that actually identifies a resource. A resource can be anything, and can have many names.
> > 2. A client may not know all the names of a resource. Actually, a resource may already exist with a different name the client ignores.
> > 3. The client may have control over the URIs it uses, but it should never have control over the server's URIs.
> > 4. I don't see why a resource cannot have a name not given by the server, but I do see that a server should not be forced to name a resource.
> > 5. The client should not infer anything from a URI. No folders, no types, nothing. That is why I warn against using templates. Too tempting.
> >
> > So, what does all that mean? It means you can use PUT with any name you want. That is your name, the URI from the client. But the server owns its namespace, so the resource may be created with the name the server likes. Still, the server can note that you, as a client, gave a special, particular name to that new resource. So, whenever your client requests that URI, the server knows which resource it refers to. If someone else, even your client, requests that resource using a search or something, the URI that will be returned is the server's one.
>
> I don't quite agree. There is a cost for doing this. See
> http://www.w3.org/TR/webarch/#uri-aliases
>
> > See? The resource in this case has two URIs. PUT has not forced the name onto the server. The server keeps its autonomy. The URI can be a cool URI client side, and use templates client side, and have a structure client side. But the server doesn't care. It is just polite enough to remember your name for that resource.
>
> "Cool URIs don't change" from server side to client side.
>
> > Furthermore. As I said, the URI is not the resource. You can PUT a resource with a name, but if the resource already exists (the resource, NOT the name), it will fail. See? The server can check the body of the PUT, and if creating a resource from it would duplicate an already existing resource, and that is not permitted, it will fail even if the resource you PUT has a completely new name. This is very important. We are putting too much importance into URIs, when they are simply names to refer to the really important guy, the resource.
>
> Checking if a resource already exists by the representation from
> client is costly and not reliable.
>
> What happens if the URL the client wants already exists on the server?
> Should it be interpreted as an update?
>
> > Cheers.
> >
> > William Martinez Pomares
> >
> > --- In rest-discuss@yahoogroups.com, Bryan Taylor bryan_w_taylor@... wrote:
> > >
> > > I've been discussing PUT for create with some coworkers. This is certainly valid
> > > HTTP, but I'm wondering if people consider it RESTful. It seems to me that
> > > giving the client control over part of the URI requires them to understand how
> > > resources are organized and forces them to construct URIs as non-opaque strings.
> > > So I wonder if this conflicts with HATEOAS. It potentially also puts a burden on
> > > the client to avoid namespace collisions, so that it must adopt some uniqueness
> > > logic which again requires application state that seems problematic.
> The design pattern writ large is to assign each application state a
> URI and an XHTML stub file containing metadata. The XHTML calls an
> XSLT processor to transform a more application-specific data format
> into an XHTML API suitable for manipulating that back-end format.

I meant, "into an XHTML (or SVG or etc.) API suitable..."

> > If the goal of XHTML is for those intermediaries (i.e. it's not for
> > the clients benefit, nor the servers benefit), that can work.

The goals of using XHTML are far more than the concerns of intermediary components. More clients have a general knowledge of your markup, which means more clients to extend with specific knowledge of your markup. Re-using HTML (or Docbook) for basic list and table structures is easier to maintain over time than re-inventing basic list and table structures in a neverending variety of XML languages. The purpose of using XHTML is to satisfy the principle of generality.

> > The point of code-on-demand is to allow the capabilities of the UA
> > to be extended. Extending it to understand a data format seems like
> > quite a reasonable thing to do.
>
> It depends(...)

What I meant was, it depends on the ubiquity of the media type. Extending the capabilities of a browser to understand XSLT using an AJAXSLT engine is one thing. Extending the browser to understand a proprietary media type may be applying REST's optional constraint, but this is not a loophole for violating REST's self-descriptive messaging constraint.

In case anyone has the patience to read my last post. ;-)

-Eric
By way of finishing a thought: My demo includes a text/xml blogroll.xml file. This is the right choice of media type to include a snippet of XML that doesn't stand alone, and is formatted such that it doesn't need transformation. But is this the RESTful choice? No.

What is a blogroll but a list of bookmarks? Is there a standalone media type for a list of bookmarks? Yes. Coming soon is blogroll.xbel as application/xbel+xml. Not only can XSLT transform it to look exactly like blogroll.xml currently looks, but as a standalone file it has a self-descriptive media type -- you can point a browser at it and import the blogroll into your bookmarks, because that's the gist of the media type.

Pointing to an included XML snippet as text/xml and calling it NOT REST is wrong -- it's not a hard-and-fast rule. But in the case of a standalone list of bookmarks, where a ubiquitous media type exists for that purpose, using text/xml for blogroll.xml as I have is NOT REST. The desirable property gained from XBEL is the serendipitous re-use of a blogroll to populate a browser's bookmarks. That's so REST!

-Eric
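For reference, a minimal XBEL document looks roughly like this (a sketch following the XBEL 1.0 vocabulary; the bookmark contents are invented):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Hypothetical minimal XBEL blogroll: a standalone, self-descriptive
     list of bookmarks that a browser can import directly. -->
<xbel version="1.0">
  <title>Blogroll</title>
  <bookmark href="http://example.org/weblog/">
    <title>An example weblog</title>
  </bookmark>
</xbel>
```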
Hi William,

I agree that the approach where the client PUTs to a client-named URL in order to create a resource would somehow work. However, I still think a better way to do that is for the server to guide the client via hypermedia.

The biggest concern with client-named URLs is that both client and server need to know how to deal with them. When the server generates a representation of a resource to respond to a GET, how to deal with the URLs named by clients is the problem.

In HTTP, URL + hash is the way to check whether a representation of the resource is already there on the server. What a client can get is always a representation of the resource, and that is not the resource. There is a chance that different resources have exactly the same representation. This is why I think it is difficult to identify a resource by its representation. In fact, a URI is there to *identify* a resource.

Go back to the case where a client C PUTs some representation R to a URL U to create a new resource N, and U is already on the server identifying a resource O. If the server interprets this as an update and it updates O, this is not what we want. If it spits 409 with the difference between R and the representation of O, then C is confused. If the server limits PUT to creation only, then we would need another method, or an overriding method, for update if the resources need that.

Cheers,
Dong

On Tue, Jul 27, 2010 at 6:19 AM, William Martinez Pomares <wmartinez@...> wrote:
> [...]
Um... I don't agree! ... [everyone in the party stops laughing and looks at me in shock]. No, I'm not an unreasonable dogmatic REST zealot, but I /am/ passionate about the possibilities for the more declarative programming and distribution style that REST or 'ROA' enables.

ROA is a 'dual' of SOA and requires a corresponding inversion in your thinking: from verbs to nouns; from actions, messages and commands to resources, open state and intentions. Obviously, when you make the jump, things that would be easy before (verbs, actions, messages, commands) have to be recast in the style (nouns, resources, state, intentions), and that may seem like 'convolution' at first, especially since 99% of our programming is imperative.

Actually, you may have picked a bad example of when REST may strain to model your application: surely Atom is great for 'reliable messaging'? I've never missed a thing in my feed reader. And you say you want timely 'one-to-one' delivery of your pizza order? No problem, use POST. In fact, create an Order Resource, stick it in an Atom list and then also POST the new list directly to the pizza parlour. If the POST fails then you could POST again, and anyway there'll be a poll along soon enough to cover your back. Not too convoluted, especially if there's library support for Atom and for, well, POSTing stuff.

Benefit: your state is now visible; you've got an Order Resource and an Orders List Resource. Notice that this illustrates how the REST style is about state transfer, /not/ messaging, reliable or otherwise. REST can do reliable or unreliable state transfer, and it can do ongoing state-changed transfer, with GET and/or POST. You've just got to think in terms of resources and open state.

If you're going for REST, for its benefits of interoperability, etc., it /is/ flexible enough to be used in /all/ of your system, even without convoluted zealotry! However, we all acknowledge that a little library support for RESTful patterns and idioms would go a long way to help.

And, yes, there may be instances where modelling verbs, actions, messages and commands is a little too declarative in an ROA, in the same way that modelling nouns, resources, state and intentions is a little too imperative, too operation-driven, in an SOA.

Duncan Cragg

--
http://duncan-cragg.org/blog/
http://twitter.com/duncancragg
Rats, I meant to title this thread "A *simple* paleo-REST app..." and stress that simplicity wins out over REST every time. I did not intend to start a debate about RESTful reliable messaging, or even reliable HTTP messaging. I meant to use the simplest automated m2m process I've ever encountered, and the solution to it, as an argument against *any* additional complexity, RESTful or not.

The pizza parlor is long gone; perhaps their marketing was ahead of its time, as the product wasn't bad. If it were still around, I'd have no reason to change the system, except to allow more complex orders with a more complex, Javascript-driven form. Where's the cost benefit of taking any time to change the cheapest possible functional solution to the problem? Which system could you or would you build for the $250 I got paid, plus $10/month to host and maintain (in addition to their modem account)?

"Duncan" wrote:
>
> Benefit: your state is now visible; you've got an Order Resource and
> an Orders List Resource.
>

Drawback: now I have those resources. :-(

My system is (was) simple. The initial application state is an HTML representation of an order-form resource. The state transition the customer chooses GETs a representation of the customer's desired state of the order-form resource (not a form, except for the confirm/cancel buttons), so the browser uses application/x-www-form-urlencoded to tack a query string onto the initial URI. The server calculates the total charge with tax. Confirming the order is the same query URL, except POST instead of GET. The response is a 200 OK with a message body telling the user to expect confirmation when the kitchen receives the order (a process you didn't explain in your counter-example). Note that the URL doesn't change -- what happens is that the state of the resource (the customer's desired order has its own, bookmarkable URL) has temporarily changed to "order received". (The response is not 201 Created, because nothing is created.)
This is the point where your solution gets vastly more complicated than mine, so no, I don't think I picked a bad example... :-) For this transaction, I don't care that HTTP is unreliable -- if the customer doesn't see the confirmation, the form may be re-submitted. I can even bookmark 'large pepperoni double cheese' for my home address, then skip the form and just hit 'confirm order'.

But the customer does need to be reliably informed that the kitchen got the order from the website. The connection between the kitchen and the website is even less reliable than HTTP. The need is for a hands-free workflow of checking for new orders, printing them, and sending confirmation that the customer's pizza is being made. But the system must recover without the help of any consultants if the network goes offline (the modem's phone line gets unplugged to use the fax machine), or the OS crashes. A script executes on startup which checks a POP3 mailbox, retrieves and prints messages, and sends an automated response to each reply-to address. That's simple.

It all gets much more complicated, and I fail to see the cost benefit, when we start talking about installing httpd on the pizza parlor's computer, or adding Atom into this mix. Same with the server -- a Perl script which drops an e-mail into a POP3 box is simple. I suppose we could create order resources, so that the kitchen can pull something in, and that action can trigger the server to change some other resource state, which the customer can then access and reload until they see their order is queued in the kitchen. In which case, sure, Atom's a good way to go. But the whole idea of creating a Web Service to solve this problem still seems really, really heavy.
Now that we've created all those resources you're suggesting, we need to have some sort of process that goes through and removes them after some period of time has passed, or have the workflow in the kitchen include something beyond passive printing of e-mail; otherwise we're archiving every order placed on the server. I figured the pizza parlor wanted a record, but it seemed much more cost-effective to make that an e-mail record that they can worry about archiving, instead of creating another API so they can access that on this website of ever-increasing complexity there's no way they'd have been able to afford to build, let alone operate...

Nope, you've not convinced me that REST wins out here over the robust simplicity of e-mail. Especially when I consider that I created a REST system where it needed to be a REST system. Had the parlor succeeded and gone national, their Web app would have scaled with them, instead of becoming a money-pit consultant trough. As it was, I built them a REST app for $250. REST's scaling benefit is that it can scale small, too.

You can't even cover the cost of tooling up to build an SOA solution to this problem, for the amount I was paid (all of it profit, since I only used a text editor to create it) to build this REST system. So I don't like the idea of REST evolving in an SOA-inspired direction; I think it will fall right into the same trap of not scaling small.

-Eric
I was wondering if there is anything in the REST architectural style or specifically HTTP that prescribes what the nature of a Resource is. For example, is it legit to say that the URL http://www.amazon.com/RESTful-Web-Services-Cookbook-Scalability/dp/0596801688/ identifies a resource that is Amazon's notion of the book RESTful Web Services Cookbook? Or that http://www.nasa.gov/ identifies a resource that is NASA? I'm not trying to start an httpRange-14 flamewar, but just wondering if REST's notion of Resource is flexible enough to typically accommodate these two use cases. //Ed
This seems to run dangerously close to a Derrida [1] style question of what is the signifier, and what is the signified? Is the distinction even meaningful? Perhaps the URL is a signifier, but who the heck knows what is actually signified? Seriously, though, if you ask an author of a book what they meant to convey on a particular page of a book, who's to say whether or not they can tell you? Same sort of problem applies equally to a URL. The URL that you give for the book from Amazon, they might define as the "customer-facing" view of a book - they might have an alternate internal view of a book that gives additional information about vendors that provide the book, which vendor is preferred, the stock, the ordering time/delay, etc. However, there's no need for their definition and your definition of what they put out there to be the same. All we can know, in the end, is that it conveys information useful to the problem domain, as that domain is currently understood & defined. -Eric. [1] http://en.wikipedia.org/wiki/Derrida On 07/28/2010 12:51 PM, Ed Summers wrote: > > > I was wondering if there is anything in the REST architectural style > or specifically HTTP that prescribes what the nature of a Resource is. > > For example, is it legit to say that the URL > http://www.amazon.com/RESTful-Web-Services-Cookbook-Scalability/dp/0596801688/ > identifies a resource that is Amazon's notion of the book RESTful Web > Services Cookbook. Or that http://www.nasa.gov/ identifies a resource > that is NASA? > > I'm not trying to start an httpRange-14 flamewar, but just wondering > if REST's notion of Resource is flexible enough to typically > accommodate these two use cases. > > //Ed > >
On Wed, Jul 28, 2010 at 3:51 PM, Ed Summers <ehs@...> wrote: > I was wondering if there is anything in the REST architectural style > or specifically HTTP that prescribes what the nature of a Resource is. > > For example, is it legit to say that the URL > http://www.amazon.com/RESTful-Web-Services-Cookbook-Scalability/dp/0596801688/ > identifies a resource that is Amazon's notion of the book RESTful Web > Services Cookbook. Or that http://www.nasa.gov/ identifies a resource > that is NASA? > > I'm not trying to start an httpRange-14 flamewar, but just wondering > if REST's notion of Resource is flexible enough to typically > accommodate these two use cases. REST doesn't say anything about that, as long as you avoid coupling based on the type (or any other metadata, for that matter). As for typing, it's used quite a bit in RDF land via rdf:type. I used it extensively on a project a few years ago, but found it inferior to duck typing; http://www.markbaker.ca/blog/2004/10/rdftype-duck/ Mark.
My understanding is that no URI can have inherent meaning (I think that's what you're suggesting), the only slight exception to that being entry points of an application. The examples you gave would be expressed to clients as link relations from within the context of an application they are pursuing i.e. each client constantly maintains their own frame of reference according to where they are in an application's "flow", applying the current frame of reference to a given link relation is what creates meaning for a URI at a particular point in time. That meaning is transient and held only by the client. Cheers, Mike On Wed, Jul 28, 2010 at 8:51 PM, Ed Summers <ehs@...> wrote: > I was wondering if there is anything in the REST architectural style > or specifically HTTP that prescribes what the nature of a Resource is. > > For example, is it legit to say that the URL > http://www.amazon.com/RESTful-Web-Services-Cookbook-Scalability/dp/0596801688/ > identifies a resource that is Amazon's notion of the book RESTful Web > Services Cookbook. Or that http://www.nasa.gov/ identifies a resource > that is NASA? > > I'm not trying to start an httpRange-14 flamewar, but just wondering > if REST's notion of Resource is flexible enough to typically > accommodate these two use cases. > > //Ed >
I think RDF (Resource Description Framework) is exactly what you are looking for.
To express the relationships you desire, you need a vocabulary of terms and some space of values to choose from. I'd suggest the Dublin Core DCMI vocabulary. See: http://dublincore.org/documents/dcmi-terms/
Let's look at your book example. DCMI defines the terms "title" and "source". The space of values for title is a string representing the title of the book, while for source we could choose to use the URN for the ISBN of the book.
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
xmlns:dc="http://purl.org/dc/elements/1.1/" >
<rdf:Description rdf:about="http://www.amazon.com/RESTful-Web-Services-Cookbook-Scalability/dp/0596801688/">
<dc:source rdf:resource="urn:isbn:0596801688" />
<dc:title>RESTful Web Services Cookbook: Solutions for Improving Scalability and Simplicity</dc:title>
</rdf:Description>
</rdf:RDF>
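Since the snippet above is plain XML (note that the closing tags must be written `</rdf:Description>` and `</rdf:RDF>`), a quick standard-library sketch can check well-formedness and read the Dublin Core statements back out:

```python
import xml.etree.ElementTree as ET

# Clark-notation prefixes for the two namespaces used in the example.
RDF = "{http://www.w3.org/1999/02/22-rdf-syntax-ns#}"
DC = "{http://purl.org/dc/elements/1.1/}"

doc = """<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
                  xmlns:dc="http://purl.org/dc/elements/1.1/">
  <rdf:Description rdf:about="http://www.amazon.com/RESTful-Web-Services-Cookbook-Scalability/dp/0596801688/">
    <dc:source rdf:resource="urn:isbn:0596801688"/>
    <dc:title>RESTful Web Services Cookbook: Solutions for Improving Scalability and Simplicity</dc:title>
  </rdf:Description>
</rdf:RDF>"""

root = ET.fromstring(doc)  # raises ParseError if the XML is malformed
desc = root.find(RDF + "Description")
subject = desc.get(RDF + "about")                       # the resource URI
title = desc.find(DC + "title").text                    # dc:title literal
source = desc.find(DC + "source").get(RDF + "resource") # dc:source URN
```

This only demonstrates the XML shape; a real RDF toolkit would parse the statements into subject/predicate/object triples rather than walking elements.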
For the NASA example, we have to pick some namespace in which NASA exists. I was able to find an OID for NASA, so I'm using that to say that NASA is the publisher (another Dublin Core term) of http://www.nasa.gov. See http://www.oid-info.com/faq.htm and http://www.ietf.org/rfc/rfc3061.txt.
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
xmlns:dc="http://purl.org/dc/elements/1.1/" >
<rdf:Description rdf:about="http://www.nasa.gov/">
<dc:publisher rdf:resource="urn:oid:1.3.6.1.4.1.71" />
</rdf:Description>
</rdf:RDF>
BTW, this isn't about REST, but more about how "the semantic web" works.
--- In rest-discuss@yahoogroups.com, Ed Summers <ehs@...> wrote:
>
> I was wondering if there is anything in the REST architectural style
> or specifically HTTP that prescribes what the nature of a Resource is.
>
> For example, is it legit to say that the URL
> http://www.amazon.com/RESTful-Web-Services-Cookbook-Scalability/dp/0596801688/
> identifies a resource that is Amazon's notion of the book RESTful Web
> Services Cookbook. Or that http://www.nasa.gov/ identifies a resource
> that is NASA?
>
> I'm not trying to start an httpRange-14 flamewar, but just wondering
> if REST's notion of Resource is flexible enough to typically
> accommodate these two use cases.
>
> //Ed
>
On Wed, Jul 28, 2010 at 7:28 PM, Mark Baker <distobj@...> wrote: > On Wed, Jul 28, 2010 at 3:51 PM, Ed Summers <ehs@...> wrote: >> I was wondering if there is anything in the REST architectural style >> or specifically HTTP that prescribes what the nature of a Resource is. >> >> For example, is it legit to say that the URL >> http://www.amazon.com/RESTful-Web-Services-Cookbook-Scalability/dp/0596801688/ >> identifies a resource that is Amazon's notion of the book RESTful Web >> Services Cookbook. Or that http://www.nasa.gov/ identifies a resource >> that is NASA? >> >> I'm not trying to start an httpRange-14 flamewar, but just wondering >> if REST's notion of Resource is flexible enough to typically >> accommodate these two use cases. > > REST doesn't say anything about that, as long as you avoid coupling > based on the type (or any other metadata, for that matter). What do you mean by 'coupling based on the type?' --Chuck
On Thu, Jul 29, 2010 at 9:57 AM, Chuck Hinson <chuck.hinson@...> wrote: > What do you mean by 'coupling based on the type?' I mean making assumptions about the resource's implementation. Mark.
Hi, I came across an old (May 2010) presentation from Google which talks about how Google designs its public APIs.

http://code.google.com/events/io/2010/sessions/how-google-builds-apis.html

Google seems to be proposing an RPC style with REST to overcome some of the limitations of REST, especially APIs for imperative statements, i.e. augmenting REST with custom verbs. One of the examples cited in the presentation is rotating an image in Flickr, and the author claims that using an RPC style is natural and works better than just using REST. Any thoughts?

Best regards, Suresh
First, Google's new API model looks like RPC over HTTP to me[1]. I don't see any REST there. Second, I don't like the WADL-like approach and my recent thoughts on this can be found on my blog [2]. [1] http://amundsen.com/blog/archives/1042 [2] http://amundsen.com/blog/archives/1067 mca http://amundsen.com/blog/ http://mamund.com/foaf.rdf#me On Thu, Jul 29, 2010 at 23:45, Suresh <sureshkk@...> wrote: > Hi, > > I came across a old (May 2010) presentation from Google which talks about how Google designs APIs for public APIs. > > http://code.google.com/events/io/2010/sessions/how-google-builds-apis.html > > Google seems to be proposing an RPC style with REST to overcome some of the limitations of REST, especially APIs for imperative statements i.e augmenting REST with custom verbs. One of the examples cited in the presentation is rotating an image in flicker and the author claims using RPC style is natural and works better than just using REST. Any thoughts?? > > Best regards, > Suresh > > > > ------------------------------------ > > Yahoo! Groups Links > > > >
No REST there no. They even fail to provide valid ATOM resources. Wonderful. -- Erlend On Fri, Jul 30, 2010 at 6:32 AM, mike amundsen <mamund@yahoo.com> wrote: > > > First, Google's new API model looks like RPC over HTTP to me[1]. I > don't see any REST there. > > Second, I don't like the WADL-like approach and my recent thoughts on > this can be found on my blog [2]. > > [1] http://amundsen.com/blog/archives/1042 > [2] http://amundsen.com/blog/archives/1067 > > mca > http://amundsen.com/blog/ > http://mamund.com/foaf.rdf#me > > > On Thu, Jul 29, 2010 at 23:45, Suresh <sureshkk@...<sureshkk%40gmail.com>> > wrote: > > Hi, > > > > I came across a old (May 2010) presentation from Google which talks about > how Google designs APIs for public APIs. > > > > > http://code.google.com/events/io/2010/sessions/how-google-builds-apis.html > > > > Google seems to be proposing an RPC style with REST to overcome some of > the limitations of REST, especially APIs for imperative statements i.e > augmenting REST with custom verbs. One of the examples cited in the > presentation is rotating an image in flicker and the author claims using RPC > style is natural and works better than just using REST. Any thoughts?? > > > > Best regards, > > Suresh > > > > > > > > ------------------------------------ > > > > Yahoo! Groups Links > > > > > > > > > >
On Fri, Jul 30, 2010 at 4:45 AM, Suresh <sureshkk@...> wrote: > Any thoughts?? Understanding and applying REST correctly is hard, many people fail at both. Coupling client and server can be used as a way to increase customer retention. Cheers, Mike
Mike Kelly <mike@...> wrote: > Understanding and applying REST correctly is hard, many people fail at both. What I find scary, though, is that the Google engineers aren't getting it right (or even half-right, perhaps more like a smidgen), and this stuff isn't *that* hard. What gives? Laziness? A few bad apples? Ignorance? Couldn't give a rats? Alex -- Project Wrangler, SOA, Information Alchemist, UX, RESTafarian, Topic Maps --- http://shelter.nu/blog/ ---------------------------------------------- ------------------ http://www.google.com/profiles/alexander.johannesen ---
"Suresh" wrote: > > One of the examples cited in the presentation is rotating an image in > flicker and the author claims using RPC style is natural and works > better than just using REST. Any thoughts?? >

First thought? Google ought to know better.

If the desire is to GET an image from the server, rotated + or - x degrees, then the proper method is GET, not an RPC POST.

GET /image.jpg?rot=90

This GET may be generated using a simple HTML form and the application/x-www-form-urlencoded media type identifier. Note that in such a case, the media type identifier isn't going over the wire; it's informing the client how to construct a query URL from a form. There's no excuse not to use GET to rotate image.jpg on the server side, or PUT to change the state of image.jpg:

PUT /image.jpg
entity = cached /image.jpg?rot=90 from previous GET

The client fetches the desired rotation from the server, and uses it to overwrite the non-rotated resource. It just doesn't get any simpler than this, certainly nowhere near as complicated as Google makes it. That's REST. Hopefully, the benefit over RPC is obvious: caching. Which is why Google ought to know better.

BTW, one of the most common misperceptions I see about REST is that it's limited to four methods which map directly to CRUD. At least Google recognizes PATCH, but the truth is you can use as many standardized methods as you need to model your application interactions, without resorting to the RPC pattern of tunneling them over POST.

(Added benefit: using ETag and conditional requests to prevent the lost-update problem, which would manifest itself here as rotating the image 180 degrees. How does Google avoid this? Not clear from the POST example; obvious using REST.)

-Eric
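Eric's GET-then-conditional-PUT exchange can be sketched as a toy in-memory interaction; the helper names, ETag scheme, and string "pixels" stand-in for real image bytes are all hypothetical, and the HTTP plumbing is omitted:

```python
import hashlib

# Toy in-memory resource state; a stand-in for the real image bytes.
store = {"/image.jpg": "pixels@0deg"}

def etag(body):
    # Hypothetical strong validator derived from the entity body.
    return '"%s"' % hashlib.sha1(body.encode()).hexdigest()[:8]

def get(path, rot=0):
    # GET /image.jpg?rot=90 serves a derived, cacheable representation;
    # the ETag identifies the stored resource the rotation came from.
    body = store[path].replace("@0deg", "@%ddeg" % rot) if rot else store[path]
    return 200, etag(store[path]), body

def put(path, body, if_match):
    # Conditional PUT: refuse the write if the resource changed since
    # the client fetched it -- the standard lost-update guard.
    if if_match != etag(store[path]):
        return 412  # Precondition Failed
    store[path] = body
    return 204  # No Content

# Client rotates the image RESTfully: GET the rotated representation,
# then PUT it back over the original, conditionally.
status, tag, rotated = get("/image.jpg", rot=90)
assert put("/image.jpg", rotated, if_match=tag) == 204

# A second client still holding the stale ETag fails safely instead of
# silently rotating the image another 90 degrees.
assert put("/image.jpg", "pixels@180deg", if_match=tag) == 412
```

The 412 at the end is the point of the argument: the lost-update problem Eric mentions is prevented by the If-Match precondition, with no custom verb involved.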
I don't see how a company like Google, the biggest company on the net, born and raised on the net, and able to hire the best minds around, can have problems in *getting* it; they have proven many times that they are neither lazy nor ignorant.

Probably they see that there is much more on the net besides making web sites, and that allowing other parties to access their vast infrastructure and services programmatically is very different from serving HTML for browser/human consumption -- which, by the way, is true not only for the net but even more so for enterprises.

Bottom line: doing things in the real world is not always compatible with theoretical purist considerations and "all-or-nothing" views of the world... And of course, when something is presented in such a way, pragmatic people have a tendency to go away. Maybe Google is a good example of how companies behave in the real world.

But this is just my opinion, of course...

On 30 July 2010 09:45, Alexander Johannesen <alexander.johannesen@...> wrote: > > > Mike Kelly <mike@... <mike%40mykanjo.co.uk>> wrote: > > Understanding and applying REST correctly is hard, many people fail at > both. > > What I find scary, though, is that the Google engineers aren't getting > it right (or even half-right, perhaps more like a smidgen), and this > stuff isn't *that* hard. What gives? Laziness? A few bad apples? > Ignorance? Couldn't give a rats? > > Alex > -- > Project Wrangler, SOA, Information Alchemist, UX, RESTafarian, Topic Maps > --- http://shelter.nu/blog/ ---------------------------------------------- > ------------------ http://www.google.com/profiles/alexander.johannesen --- > >
This is exactly what is worrying me. After looking at the presentation most of my colleagues are airing the same views as Antonio. I am in the middle of developing a RESTful system and now the detractors have got ammunition. They say I will never be able to model all of the APIs using just HTTP verbs and I will finally have to resort to custom verbs like Google has done. 2010/7/30 António Mota <amsmota@...> > I don't see a company like Google, the biggest company on the net, born and > raised on the net, and that can hire the best minds around, has problems in > *getting* it, and they were proven themselves many times not to be lazy or > ignorants. > > Probably they see there are much more things on the net besides making web > sites and that allowing other parties to access their vast infrastructure > and services in programmatically ways is very different than serving HTML > for browser/human consumption - which by the way is true not only for the > net but for even more for enterprises too. > > Bottom line, doing things in the real world is not always compatible with > theoretical purist considerations and "all-or-nothing" views of the word... > And of course, when something is presented is such a way, pragmatic people > have a tendency to go away. Maybe Google is a good example how companies > behave in the real world. > > But this is just my opinion, of course... > > On 30 July 2010 09:45, Alexander Johannesen < > alexander.johannesen@...> wrote: > >> >> >> Mike Kelly <mike@... <mike%40mykanjo.co.uk>> wrote: >> > Understanding and applying REST correctly is hard, many people fail at >> both. >> >> What I find scary, though, is that the Google engineers aren't getting >> it right (or even half-right, perhaps more like a smidgen), and this >> stuff isn't *that* hard. What gives? Laziness? A few bad apples? >> Ignorance? Couldn't give a rats? 
>> >> Alex >> -- >> Project Wrangler, SOA, Information Alchemist, UX, RESTafarian, Topic Maps >> --- http://shelter.nu/blog/---------------------------------------------- >> ------------------ http://www.google.com/profiles/alexander.johannesen--- >> >> > > -- When the facts change, I change my mind. What do you do, sir?
2010/7/30 António Mota <amsmota@...> > I don't see a company like Google, the biggest company on the net, born and > raised on the net, and that can hire the best minds around, has problems in > *getting* it, and they were proven themselves many times not to be lazy or > ignorants. > > Probably they see there are much more things on the net besides making web > sites and that allowing other parties to access their vast infrastructure > and services in programmatically ways is very different than serving HTML > for browser/human consumption > Hmm, I think the scale of their systems actually makes their consumption model similar to human/browser HTML, and therefore a good candidate for approaching with REST and m2m hypermedia. Cheers, Mike
REST is hard science. This means it is falsifiable. Roy's thesis clearly explains the benefits of a RESTful approach like:

GET /image.jpg (for ETag)
GET /image.jpg?rot=90 (cacheable)
PUT /image.jpg

The PUT entity is the cached /image.jpg?rot=90, and the request is conditional, using the ETag from the initial GET to avoid the lost-update problem. Roy's thesis explains both the tradeoffs of this approach and the drawbacks of following Google's RPC approach. When tested, the described implementations, mine and Google's, will bear out exactly what Roy's thesis predicts. This is not proof of REST, but it is further corroboration of the thesis.

The RESTful solution is capable of avoiding the lost-update problem. Google, by claiming that REST requires the image to be rotated on the client and using the word "awkward," doesn't begin to falsify the science behind REST, which has been corroborated by enough implementations over time as to make its falsification less and less likely -- certainly not by RPC. If we are to take seriously the notion that Google's solution, by virtue of being Google's solution, somehow falsifies REST, then Google will have to come up with a solution that isn't so easy to falsify against the goals of REST, particularly scalability. Believe it or not, it's possible for Google to get scaling wrong -- as proven by their RPC image-rotation example.

My REST solution *falsifies* Google's RPC approach, because beyond the lost-update problem, it uses GET to rotate images, not POST, resulting in cacheable image rotations. Google's approach only allows a rotated representation to be viewed by first changing the state of the resource. Roy wrote an entire thesis explaining the benefit of transferring representations of application state between components, which I can point to as explanation for why my solution is architecturally correct.
Where is a corresponding thesis Google can point to, which falsifies REST by explaining the benefits of remote procedure invocation over HTTP? Oh, right, it's Google, so I can just take them at their "awkward" word.

http://en.wikipedia.org/wiki/Falsifiability

REST is hard science, which is what allows me to override any logical fallacy based on arguing that surely, Google knows more than I do about API design. All Google needs to do is provide some documentation in the language of computer science which falsifies REST by proving that their RPC approach is more scalable than the solution I posit. Until then, evaluating my solution against the hard science of REST gives me solid ground to claim that it falsifies Google's approach.

Can anyone falsify my approach? I'm not interested in opinions, only facts which may be proven or disproven.

-Eric
On Thu, Jul 29, 2010 at 11:45 PM, Suresh <sureshkk@...> wrote: > Hi, > > I came across a old (May 2010) presentation from Google which talks about how Google designs APIs for public APIs. > > http://code.google.com/events/io/2010/sessions/how-google-builds-apis.html > > Google seems to be proposing an RPC style with REST to overcome some of the limitations of > REST, especially APIs for imperative statements i.e augmenting REST with custom verbs. One of > the examples cited in the presentation is rotating an image in flicker and the author claims using > RPC style is natural and works better than just using REST. Any thoughts?? I dunno, so far the changes they've made to "overcome the challenges of REST" really appear to be changes to overcome their poor initial modeling of their resources. The "cool new feature" #1 was described by Roy nearly 9 years ago[1] and summarized recently by mamund[2]. So far, unimpressed:) --tim [1] - http://www.xent.com/pipermail/fork/2001-September/004712.html [2] - http://www.amundsen.com/examples/fielding-props/
Suresh Kumar wrote: > > This is exactly what is worrying me. After looking at the > presentation most of my colleagues are airing the same views as > Antonio. I am in the middle of developing a RESTful system and now > the detractors have got ammunition. They say I will never be able to > model all of the APIs using just HTTP verbs and I will finally have > to resort to custom verbs like Google has done. > Hopefully, the ease with which I've blown a gigantic, gaping hole in Google's image-rotation solution shows that a custom verb is _not_ needed, by any stretch of the imagination. Google is in a position where their scaling problems are addressed by a colossally enormous amount of infrastructure to throw at problems, so they don't really need to care about the same things your company cares about. -Eric
I'm in the middle of a "battle" trying to push a REST model for our next project, a "battle" that I'm not only losing but am starting to be unwilling to fight. I don't even have "detractors", just pragmatic guys that don't care much about labels; they care about results -- results meaning time-to-market, keeping to budgets, and not technical things like that. They even think I am a fundamentalist regarding my preferred technologies, REST, OSGi and SDO...

Of course, if you try to push REST in an all-or-nothing view, you quickly fail. For instance, in my previous discussions with the team, we reached the conclusion that we had no need to worry about caching -- and even my saying that we would benefit in the future if we decided to open the infrastructure to the outside wasn't enough to convince them to spend 2 or 3 weeks more on the project. So we decided on no cache. Now imagine if there were SOAP proponents, or anti-REST guys, on the team. From the purist point of view, that implies we were *not* implementing REST at all. So, if we aren't implementing REST, why not just follow a well-known, market-aware solution like SOAP?

REST is not hard science, and it's not even that difficult to understand -- I'm not saying it's easy, but it's not as big a paradigm shift as coming from a procedural paradigm to an object-oriented one, or from that to a declarative paradigm, or from a waterfall process to an XP/Agile one, or from a PDCA methodology to a CMMI level 5 methodology. To present REST as hard science or an all-or-nothing approach is to wrap it in a veil of mysticism that is not compatible with the necessities of real-life companies that have to choose and make trade-offs in many areas just to stay alive. And IT is one of those areas; IT is not above companies. And I just can't fecking understand why people want to take such approaches.

2010/7/30 Suresh Kumar <sureshkk@gmail.com> > > This is exactly what is worrying me. 
After looking at the presentation most of my colleagues are airing the same views as Antonio. I am in the middle of developing a RESTful system and now the detractors have got ammunition. They say I will never be able to model all of the APIs using just HTTP verbs and I will finally have to resort to custom verbs like Google has done. > > 2010/7/30 António Mota <amsmota@...> >> >> I don't see a company like Google, the biggest company on the net, born and raised on the net, and that can hire the best minds around, has problems in *getting* it, and they were proven themselves many times not to be lazy or ignorants. >> >> Probably they see there are much more things on the net besides making web sites and that allowing other parties to access their vast infrastructure and services in programmatically ways is very different than serving HTML for browser/human consumption - which by the way is true not only for the net but for even more for enterprises too. >> >> Bottom line, doing things in the real world is not always compatible with theoretical purist considerations and "all-or-nothing" views of the word... And of course, when something is presented is such a way, pragmatic people have a tendency to go away. Maybe Google is a good example how companies behave in the real world. >> >> But this is just my opinion, of course... >> >> On 30 July 2010 09:45, Alexander Johannesen <alexander.johannesen@...> wrote: >>> >>> >>> >>> Mike Kelly <mike@...> wrote: >>> > Understanding and applying REST correctly is hard, many people fail at both. >>> >>> What I find scary, though, is that the Google engineers aren't getting >>> it right (or even half-right, perhaps more like a smidgen), and this >>> stuff isn't *that* hard. What gives? Laziness? A few bad apples? >>> Ignorance? Couldn't give a rats? 
>>> >>> Alex >>> -- >>> Project Wrangler, SOA, Information Alchemist, UX, RESTafarian, Topic Maps >>> --- http://shelter.nu/blog/ ---------------------------------------------- >>> ------------------ http://www.google.com/profiles/alexander.johannesen --- >>> > > > > -- > When the facts change, I change my mind. What do you do, sir?
I fail to see why, in a future where probably -- and this is not just my opinion, it is a generalized point of view; just search for "pervasive computing" -- the internet will consist almost exclusively of interactions between machines (because the human/computer interface will not be visible, or conscious, to the users), people will stick with a format that was clearly designed for human presentation on a screen, and that even now has lots of baggage that simply is not necessary for that kind of interaction.

The only reason I can see at the moment is the existence of more tools for developers, but that will not be so for long. You see, from my point of view, what we do now (or part of what we do now), especially in companies like Google, *is* designing for the future. If you don't invent the future now, somebody else will, and you'll be history soon enough (when I say you, it's a generalized you, not you personally :)

So HTML will work today? I agree. But why use HTML when you can use better tools, even if you have to invent them?

2010/7/30 Mike Kelly <mike@...>: > be used > > 2010/7/30 António Mota <amsmota@...> >> >> I don't see a company like Google, the biggest company on the net, born >> and raised on the net, and that can hire the best minds around, has problems >> in *getting* it, and they were proven themselves many times not to be lazy >> or ignorants. >> >> Probably they see there are much more things on the net besides making web >> sites and that allowing other parties to access their vast infrastructure >> and services in programmatically ways is very different than serving HTML >> for browser/human consumption > > > Hmm, I think the scale of their systems actually makes their consumption > model similar to human/browser HTML, and therefore good candidates for > approaching with REST and m2m hypermedia. > Cheers, > Mike
Suresh Kumar wrote: > > They say I will never be able to model all of the APIs using just > HTTP verbs and I will finally have to resort to custom verbs like > Google has done. >

REST isn't an all-or-nothing approach; nobody here even advocates that. It's the science you use to guide the long-term development of your system. I point to my online REST demo all the time, and point out the deliberate REST mismatches and the pragmatic reasons for them. The system is RESTful overall, but certain aspects aren't. Some don't need to be; others will evolve to be *over time*. REST is what allows me to perform a cost-benefit analysis of development time vs. release-cycle requirements vs. today's needs vs. tomorrow's needs. The point is that your pragmatic decisions should allow for evolution in the direction of the Platonic Ideal of REST. Otherwise you run the risk of painting yourself into a corner.

You're on the right track if you start by modeling your resources properly, such that tunneling custom verbs over POST isn't necessary. That much is all-or-nothing: RPC simply isn't RESTful, and I see no reason to hesitate in pointing out that *fact* to anyone learning REST. You can't possibly learn REST if you don't understand this fact. Starting off with RPC as Google has done paints you into just such a corner. It's a fundamentally anti-REST architectural choice. That's science speaking. Building a system which evolves towards REST is the goal, but that goal will never be reached if the fundamental decisions aren't at least informed by REST.

REST is _not_ "mysticism." Roy's thesis clearly explains all the benefits and consequences of the style. This is what allows cost-benefit analysis of a system architecture which evolves over time. An understanding of REST is what allows the comparative benefits of my approach to image rotation vs. Google's to be discussed.
Thus it becomes possible to rationally discuss any difference in development time between tunneling custom methods over POST and reusing standardized methods, weighed against the scalability of the results. Only you can know your requirements for scalability now vs. down the road, and determine which aspects of REST don't pay for themselves until later; or consider the cost of writing off your RPC system when it turns out that you do need it to scale, don't have Google's infrastructure, and must scrap it for a REST system that would have been less costly to evolve over time.

There is nothing mystical about my ability to come up with a REST answer to Google's "awkward" assertion off the top of my head, and have nobody on this list dispute it as being REST (given a hypertext API to guide the user agent through those interactions, where the human or machine goal is to rotate an image stored on a server), or as a better solution than Google's. Which tends to prove that REST is indeed a science. Following REST allows you to easily implement the standard solution to the lost-update problem. Following RPC does not. Scientific fact, not dogma. Falsifiable.

-Eric
> GET /image.jpg          (for ETag)
> GET /image.jpg?rot=90   (cacheable)
> PUT /image.jpg
>

I made this too hard. Forget the first GET. The conditional PUT uses If-None-Match with the ETag of /image.jpg?rot=90. Makes the hypertext API even easier to write.

-Eric
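The conditional-PUT idea above can be sketched as server-side decision logic (a minimal illustration, not part of the original post; the ETag values are hypothetical):

```python
from typing import Optional

def conditional_put(current_etag: str, if_none_match: Optional[str]) -> int:
    """Decide the outcome of a conditional PUT of the rotated image.

    The client sends If-None-Match carrying the ETag of the rotated
    representation (/image.jpg?rot=90). If /image.jpg already bears that
    ETag, the rotation has already been applied, so the server answers
    412 Precondition Failed instead of accepting the PUT again.
    """
    if if_none_match is not None and if_none_match == current_etag:
        return 412  # precondition failed: already rotated, retrying is safe
    return 204  # accept the PUT and store the new representation
```

This is why a lost response can be retried without producing a double rotation: the repeated request fails its precondition instead of being applied twice.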
Ok. I don't have time to listen to the complete presentation. I do know that not all people at Google are top geniuses. It is not true that they handle all the concepts with theoretical purity, simply because they use a more practical approach, doing what works. I have also noticed that they sometimes use nice words that are conceptually wrong. It happened when they published the Chrome cartoon: did anyone notice the conceptual flaw in their treatment of virtual machines?

Anyway, here are a couple of notes I took. I will answer Antonio's concerns in his thread.

"Interesting. Instead of getting the full resource, allow getting partial resources. Hmm. Is there a problem with getting on-the-fly representations that contain just those fields? Can we think of a resource with individual fields as a collection, with the actual fields being resources? Same for the update; just use POST. An XML resource? Or is it an XML representation of resources? XML and JSON? Requesting a representation in the URL? Is the idea of adding a method parameter supposed to be new? My God, Roy has been awfully angry about all those APIs that have the exact same format and call themselves REST. Please, someone invite these Google guys to this list.

Let's see some flawed concepts:
1. <Resources are XML>. A resource is anything; you work with representations of the resource.
2. <Representation selection is not in REST>. Actually, it is part of the protocol, and the protocol is not just the URL as they seem to assume (did they mention headers anywhere in the presentation?).
3. <Resources are verbose>. Aha. Of course we can have a resource representation that carries just the info we need. That is nothing new. What is wrong is thinking that resources are just XML, and thus verbose by nature.
4. <Modifying a resource requires downloading the full resource first>. Why so? To rotate an image you have to do it in the client? An image is not just pixels; it is also metadata. We can get the image metadata, change it, and post it back.
The server may update representations; the resource is the same."

And then I had to go. The idea is to state clearly that this is a pragmatic approach, based on misconceptions, that adopts an RPC style and should not be called RESTful any more. Just like many other APIs out there.

Cheers.
William Martinez

--- In rest-discuss@yahoogroups.com, "Suresh" <sureshkk@...> wrote:
>
> Hi,
>
> I came across an old (May 2010) presentation from Google which talks about how Google designs its public APIs.
>
> http://code.google.com/events/io/2010/sessions/how-google-builds-apis.html
>
> Google seems to be proposing an RPC style on top of REST to overcome some of the limitations of REST, especially in APIs for imperative statements, i.e. augmenting REST with custom verbs. One of the examples cited in the presentation is rotating an image in Flickr, and the presenter claims that using an RPC style is natural and works better than just using REST. Any thoughts?
>
> Best regards,
> Suresh
>
I am modeling a REST API, and part of the API will represent resources that are organized into an arbitrary hierarchy of nodes with values and sub-nodes (kind of like a file system or the Windows registry). I would love to create a WADL file so that I can generate the JAX-RS boilerplate for the API, but I'm not sure how to represent such a situation in WADL. I could just use a string as a template parameter, but I'm not sure how I would indicate that the string parameter is allowed to have slashes in its value, and generate the JAX-RS @Path annotation properly. Is it legitimate to create a template parameter that is repeatable and expect that sort of functionality? E.g.

HTTP GET http://my.service.com/registry/path/to/my/node

where the repeated template params are "path", "to", "my", "node"?
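On the JAX-RS side (leaving the WADL-generation question aside), a regex path template such as @Path("registry/{path: .+}") lets a single template parameter match the remaining segments, slashes included. A small Python sketch of the equivalent matching, purely illustrative (the /registry prefix is taken from the example URI above):

```python
import re

# Equivalent of the JAX-RS template @Path("registry/{path: .+}"):
# one named group captures everything after /registry/, slashes and all.
REGISTRY = re.compile(r"^/registry/(?P<path>.+)$")

def registry_path(uri: str):
    """Return the multi-segment 'path' parameter, or None if no match."""
    m = REGISTRY.match(uri)
    return m.group("path") if m else None
```

The server can then split the captured value on "/" to walk the node hierarchy, rather than declaring one template parameter per segment.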
> I made this too hard. Forget the first GET. The conditional PUT uses
> If-None-Match with the ETag of /image.jpg?rot=90. Makes the hypertext API
> even easier to write.
>

What if we don't want to transfer a representation of image.jpg to the client, or back to the server? No problem. HEAD /image.jpg?rot=90 to get the ETag (or GET), followed by a conditional POST to /image.jpg of a representation of the desired application state as application/x-www-form-urlencoded, i.e. POST /image.jpg?rot=90 with If-None-Match set to that ETag.

While this may technically solve the lost-update problem without actually transferring the image, how does the user (human or machine) know the image hasn't already been rotated by 90 degrees? If what Google meant to call awkward wasn't the notion of the client doing the rotation, but the notion that the image needs to be transferred to the client at all, then I don't see any way around it -- without viewing the image, how would a human or machine user know that it needs rotation, or by how many degrees?

So my HEAD-plus-conditional-POST solution is only RESTful in situations where the user goal is to rotate an image regardless of its current orientation. Yet this is the only use case Google's RPC supports, without solving for lost-update, making it brittle even where lost-update isn't a problem: Google's way, if the confirmation of the POST is lost and the operation is repeated as a result, the rotation is 180 degrees, not 90. My way prevents this by properly identifying resources (for starters) and using conditional requests. Not brittle.

There is nothing unRESTful about POST /image.jpg?rot=90 being interpreted by the server to mean "rotate image.jpg 90 degrees", unless such an operation isn't hypertext-driven, and provided the media type is application/x-www-form-urlencoded. In that case "rot" is not a "verb"; it is a noun identifying a stored procedure (regardless of HTTP method).
Google's POST, by way of comparison, is not a transfer of a representation of the desired resource state. It's RPC. I couldn't make this argument if Google allowed GET on the same URI -- this is the difference between identification of resources (regardless of how sloppy the URIs are) and custom verbs tunneled over POST.

-Eric
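The retry argument above can be made concrete with a toy model (illustrative Python, not from the thread; the ETag is modeled simply as the rotation angle rendered as a string):

```python
def rotate(state, degrees, if_none_match=None):
    """Toy server for POST /image.jpg?rot=<degrees>.

    `state` holds the image's current rotation; its ETag is str(rotation).
    A conditional request names the ETag of the *desired* result, so a
    repeated request is refused (412) rather than applied a second time.
    """
    etag = str(state["rot"])
    if if_none_match is not None and if_none_match == etag:
        return 412, state  # already in the desired state; no change
    return 200, {"rot": (state["rot"] + degrees) % 360}

# Unconditional RPC-style retry doubles the rotation:
_, s = rotate({"rot": 0}, 90)
_, s = rotate(s, 90)  # retry after a lost response -> rot == 180

# Conditional retry is safe:
_, c = rotate({"rot": 0}, 90, if_none_match="90")
code, c = rotate(c, 90, if_none_match="90")  # -> 412, rot stays 90
```

The unconditional path shows the brittleness Eric describes; the conditional path shows the standard lost-update protection.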
"apangus" wrote: > > I am modeling a REST API and part of the API will represent resources > that are organized into an arbitrary hierarchy of nodes with values > and sub-nodes (kind of like a file system or the windows registry). > Or an Atom Feed, or Google Sitemaps... choose the correct media type for the task at hand, rather than trying to make your chosen media type do things for which it was not designed. -Eric
> Yes. Coming soon, is blogroll.xbel as application/xbel+xml.
>

Done. Unfortunately, there are two critical problems with XBEL: it lacks both an XML namespace and a media type identifier. Namespaces are a concern external to REST, so I made one up. The fact that I'm using the unregistered application/xbel+xml on the public Web means that I'm violating REST's self-descriptive messaging constraint.

XBEL is a ubiquitous media type on the Web. Many browsers support it as their native bookmark format, and many desktop apps and Web services use XBEL as an intermediary format to exchange bookmarks between browsers that don't implement it. Drupal implements XBEL for blogrolls, just as I have (differently, though). I added XBEL and XFN to my WordPress plugin, to give WP these capabilities.

The problem is that without its own media type identifier, all this XBEL is being passed around as text/xml or application/xml. The only thing those generic media types say about linking is to look for XLink or rdf:about, neither of which is relevant to XBEL. A specific media type identifier would make it explicit that //bookmark/@href contains URIs, along with the notion that the media type is a hierarchical collection of URIs (rather than some random XML with a DTD).

So, until XBEL has a registered media type identifier (I'll get right on it), none of the ubiquitous use of the media type is RESTful, and neither is that aspect of my demo system. Which doesn't make it the wrong choice of media type, nor suggest to me that I shouldn't go ahead and call it application/xbel+xml. Pragmatism and REST can co-exist. The need to standardize application/xbel+xml is obvious, so I'm not worried that it won't ever happen, and I need change nothing for my system to evolve towards REST in this regard.

-Eric
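As a sketch of the idea (illustrative only; the bookmark URL is made up, and application/xbel+xml is, as the post says, unregistered), emitting an XBEL blogroll with an explicit media type might look like:

```python
from xml.etree import ElementTree as ET

def blogroll_response():
    """Build a minimal XBEL document plus the headers to serve it with."""
    xbel = ET.Element("xbel", version="1.0")
    bookmark = ET.SubElement(
        xbel, "bookmark", href="http://example.com/blog/")  # made-up URL
    ET.SubElement(bookmark, "title").text = "Example blog"
    body = ET.tostring(xbel, encoding="unicode")
    # Unregistered media type, per the post -- a generic application/xml
    # label would hide the fact that //bookmark/@href contains URIs.
    headers = {"Content-Type": "application/xbel+xml"}
    return headers, body
```

The point is only that the media type label travels with the representation; the XBEL structure itself (bookmark elements with href attributes and titles) is what the specific identifier would make self-descriptive.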
veering the thread slightly off to another angle...
there are lots of engineering disciplines present @ Google. one
engineering discipline that i suspect played a role in the new
"Discovery-Based API" model is _social_ engineering.
i think this is an example of a team @ Google offering their audience
(developers in this case) what Google thinks developers want; what
Google needs to do to get developers to adopt their platform.
i, personally, have seen the same thing happen at Microsoft.
these are smart people. they "get" REST. however, they suspect their
target audience does not.
Also, it's been said on this list (more than once) that one of the
prime barriers to adopting the REST style (or any new pattern, tech,
etc.) is psycho-social. people don't like change, don't find the new
thing appealing, don't want to lose something in the change, etc. and
to prevent these perceived "negative consequences" from occurring,
people will attempt to argue against the new thing using
pseudo-technical reasoning (basically unsupported assertions like "it
won't scale" or "no one will like it", and so forth). often, this kind
of arguing is effective in the social setting of the office since some
in the conversation hold power over others ("i can't convince my
boss", etc.).
Finally, the REST style is not complicated (it's one of the few
network arch styles based in clear constraint-based terms), but it is
hard work. hard work is not very appealing. in my experience the REST
style is most demanding on those building _clients_ not servers.
Google is trying to get people to build "consuming" applications; the
hardest part, IMO. i think Google has decided to not try to convince
their audience they need to adopt a state-machine style in order to
consume the data Google is offering. instead, they decided to make
consuming Google data "easy" and "familiar." hence the adoption of an
RPC over HTTP pattern.
I think Google thought about this carefully and knows exactly what
they are doing. and they'll get lots of adoption, too.
mca
http://amundsen.com/blog/
http://mamund.com/foaf.rdf#me
mike amundsen wrote:
>
> I think Google thought about this carefully and knows exactly what
> they are doing. and they'll get lots of adoption, too.
>

I think you're right, and I think co-opting the term REST to sell it is part of the grand strategy. The calling-it-REST part bugs me; the it-isn't-REST part doesn't. In fact, it ensures that anything built this way will be just as sluggish, and as prone to fantastic security failures, as Facebook -- leaving plenty of room for a competitive upstart to beat them at the social-networking game by virtue of a fast, robust and secure REST architecture. As an entrepreneur whose mantra is "Yes, you CAN compete with Google," the only thing that *would* scare me is if Google actually _were_ to adopt REST.

Same goes for Facebook. What was the latest flaw? The ability to follow the private chats of anyone who's your 'friend'? IMO, that particular flaw was the direct result of not being RESTful, because it's just not possible to foul things up that badly unless you haven't properly identified your resources to begin with. Like Fb.

-Eric
--- In rest-discuss@yahoogroups.com, mike amundsen <mamund@...> wrote:
>
> veering the thread slightly off to another angle...
>
> there are lots of engineering disciplines present @ Google. one
> engineering discipline that i suspect paid a role in the new
> "Discovery-Based API" model is _social_ engineering.
>
> i think this is an example of a team @ Google offering their audience
> (developers in this case) what Google thinks developers want; what
> Google needs to do to get developers to adopt their platform.
>
> i, personally, have seen the same thing happen at Microsoft.
>
> these are smart people. they "get" REST. however, they suspect their
> target audience does not.
I agree. There's no doubt in my mind that the folks behind the new discovery approach know exactly what they are doing. In fact, Joe Gregorio (editor of the AtomPub spec and a well-regarded RESTian), who is involved in the new (v3) discovery approach, has been promising a blog post which I suspect will be of great interest to folks here.
That has not always been the case, though. The Atom/AtomPub (v2) approach had some shortcomings that I think were hard to work around -- each service did enough "customizing" to the Atom document, that what you ended up with was essentially "semantic tunnelling" (hat-tip - Bill de hOra), a sub-optimal starting point for a RESTful system that was really in need of custom media types.
>
> Also, it's been said on this list (more than once) that one of the
> prime barriers to adopting the REST style (or any new pattern, tech,
> etc.) is psycho-social. people don't like change, don't find the new
> thing appealing, don't want to lose something in the change, etc. and
> to prevent these perceived "negative consequences" from occurring,
> people will attempt to argue against the new thing using
> pseudo-technical reasoning (basically unsupported assertions like "it
> won't scale" or "no one will like it", and so forth). often, this kind
> of arguing is effective in the social setting of the office since some
> in the conversation hold power over others ("i can't convince my
> boss", etc.).
I'm disappointed that I will not be able to point to Google as an example of "REST done right", as that makes the education (overcoming psycho-social barriers) so much easier. I'm not yet convinced that this is an engineering or design failure, though. There *is* some RESTfulness in there, and there are some interesting ideas. It will be interesting to see how it plays out. My own take is that this sort of JSON/WADL approach was an attempt to get around the need to mint new media types (or to come up with a suitably generic media type for all Google services -- a role that Atom fell short of). One of the stated goals was to make it easier to bring a new API online (interfaces, documentation, clients, etc.) when a new service was rolled out. Ideally that would entail careful consideration of (perhaps custom) media types and link relations. The approach here essentially makes those media types dynamic "runtime" artifacts described by this discovery (JSON/WADL) document.
I'm not passing judgement on it and certainly not asserting this was the best/most-RESTful approach, but I do find it worthy of study, given the needs and requirements of their particular situation.
--peter keane
Let me argue a little here. First, let me say I'm not "defending"
Google (as if they needed it...) nor their solution; I'm just
thinking out loud about it.
So, I'll argue that having what they call "Augment REST with Custom Verbs", like

POST /tasks/@me/{taskId}?method=markDone

is not necessarily non-REST. It can be RESTful or not; it depends.
Now, I'm not going to quote Roy here; there are people more qualified
than me to do it, and I don't want to take the risk of misquoting
and/or quoting out of context, as is often the case. But it is my
impression that REST doesn't mandate a limited number of verbs, much
less that REST is limited to GET, POST, PUT, DELETE, and even less
that it should be limited to CRUD verbs.
What I have seen written by Roy is that a REST-based architecture
should not be dependent on, or tied to, any particular protocol. And
yet most of the people discussing REST do so in terms of HTTP.
Now, as I have said a few times, our pro-REST (or wish-to-become-REST)
infrastructure was built from the beginning with the goal of
supporting multiple protocols. Thankfully, we started on that long
before I joined this list; otherwise I would probably have concluded
from this list that it was impossible, or not REST. Of course, the
first problem was how to have a uniform interface spanning multiple
protocols. So after some consideration we decided on GET, POST, PUT,
DELETE -- not because we wanted to work only with HTTP, but because we
knew that our HTTP connector was going to be the most used (it's the
one our fat clients use). Out of necessity we also added another
method, LISTEN. And so those 5 verbs were our uniform interface. We
were not tunnelling HTTP over the other protocols, as it may appear;
we were, in a way, tunnelling our own protocol over all of them.
So, at this point, the use of

POST /tasks//123?verb=LISTEN

does not, as far as I can see, seem to break REST.
The same goes for that Google API: if they constrain the methods in

POST /tasks/@me/{taskId}?method=XXXXXX

such that they are limited in number, always mean the same thing, and
are described in a way that both the server and the client understand,
I don't see that as unRESTful...
And trying to reach as wide an audience as possible is a legitimate
objective; there's no point in having a "perfect" solution if no one
knows about it...
Melhores cumprimentos / Beir beannacht / Best regards
_____________________________________________________________
António Manuel dos Santos Mota
Contacts: http://card.ly/amsmota
_____________________________________________________________
Disclaimer: The opinions expressed herein are just my opinions and
they are not necessarily right.
_____________________________________________________________
Peter:
good points:
<snip>
The Atom/AtomPub (v2) approach had some shortcomings that I think were
hard to work around -- each service did enough "customizing" to the
Atom document, that what you ended up with was essentially "semantic
tunnelling" (hat-tip - Bill de hOra), a sub-optimal starting point for
a RESTful system that was really in need of custom media types.
</snip>
IMO, the primary shortcoming of Atom was the commitment to the "object
transfer" pattern (writing a predefined <entry /> item) instead of the
"state bag" pattern (writing an arbitrary set of elements [name-value
pairs], or writing a base64-encoded element, etc., as needed). this
encouraged what i refer to as "payload-isms", similar to the SOAP
implementation, and resulted in several custom payloads that required
too much out-of-band knowledge to be architecturally scalable.
<snip>
My own take is that this sort of JSON/WADL approach was an attempt to
get around that need to mint new media types (or to come up w/ a
suitably generic media type for all Google services -- a role that
Atom fell short of). One of the stated goals was to make it easier to
bring a new API online (interfaces, documentation, clients, etc.) when
a new service was rolled out. Ideally that would entail careful
consideration of (perhaps custom) media type and link relations. The
approach here essentially makes those media types dynamic "runtime"
artifacts described by this discovery (JSON/WADL) document.
</snip>
WADL (and its ilk) is a great example of Bill de hOra's "semantic
tunneling" approach. since Google is already "minting" new semantics
and targeting their approach at a code-on-demand-only model (no simple
"browser" exists to "render" this content in a meaningful way), they
are - in essence - minting a new media type: one that provides almost
no hypermedia support and requires a split between the data and the
data semantics (payload + discovery doc).
To me, this looks very much like a JSON version of SOAP. or maybe a
JSON version of Atom + AtomSvc, if you drop the semantic wrapper
defined in Atom and just use the SOAP-ish semantic tunneling of WSDL.
It is, I think, a step backward, not forward, in the effort to provide
semantically rich data that can be consumed by any device.
and it's a bummer.
mca
http://amundsen.com/blog/
http://mamund.com/foaf.rdf#me
On Fri, Jul 30, 2010 at 11:31, Peter <pkeane@...> wrote:
>
>
> --- In rest-discuss@yahoogroups.com, mike amundsen <mamund@...> wrote:
>>
>> veering the thread slightly off to another angle...
>>
>> there are lots of engineering disciplines present @ Google. one
engineering discipline that i suspect played a role in the new
>> "Discovery-Based API" model is _social_ engineering.
>>
>> i think this is an example of a team @ Google offering their audience
>> (developers in this case) what Google thinks developers want; what
>> Google needs to do to get developers to adopt their platform.
>>
>> i, personally, have seen the same thing happen at Microsoft.
>>
>> these are smart people. they "get" REST. however, they suspect their
>> target audience does not.
>
> I agree. There's no doubt in my mind that the folks behind the new discovery approach know exactly what they are doing. In fact, Joe Gregorio (editor of the AtomPub spec & a well-regarded RESTian), who is involved in the new (v3) discovery approach, has been promising a blog post which I suspect will be of great interest to folks here.
>
> That has not always been the case, though. The Atom/AtomPub (v2) approach had some shortcomings that I think were hard to work around -- each service did enough "customizing" to the Atom document, that what you ended up with was essentially "semantic tunnelling" (hat-tip - Bill de hOra), a sub-optimal starting point for a RESTful system that was really in need of custom media types.
>
>
>>
>> Also, it's been said on this list (more than once) that one of the
>> prime barriers to adopting the REST style (or any new pattern, tech,
>> etc.) is psycho-social. people don't like change, don't find the new
>> thing appealing, don't want to lose something in the change, etc. and
>> to prevent these perceived "negative consequences" from occurring,
>> people will attempt to argue against the new thing using
>> pseudo-technical reasoning (basically unsupported assertions like "it
>> won't scale" or "no one will like it", and so forth). often, this kind
>> of arguing is effective in the social setting of the office since some
>> in the conversation hold power over others ("i can't convince my
>> boss", etc.).
>
> I'm disappointed that I will not be able to point to Google as an example of "REST done right" as that makes the education (overcoming psycho-social barriers) so much easier. I'm not yet convinced that this is an engineering or design failure, though. There *is* some RESTfulness in there, and there are some interesting ideas. It will be interesting to see how it plays out. My own take is that this sort of JSON/WADL approach was an attempt to get around that need to mint new media types (or to come up w/ a suitably generic media type for all Google services -- a role that Atom fell short of). One of the stated goals was to make it easier to bring a new API online (interfaces, documentation, clients, etc.) when a new service was rolled out. Ideally that would entail careful consideration of (perhaps custom) media type and link relations. The approach here essentially makes those media types dynamic "runtime" artifacts described by this discovery (JSON/WADL) document.
>
> I'm not passing judgement on it and certainly not asserting this was the best/most-RESTful approach, but I do find it worthy of study, given the needs and requirements of their particular situation.
>
> --peter keane
>
>
>>
>> Finally, the REST style is not complicated (it's one of the few
>> network arch styles based in clear constraint-based terms), but it is
>> hard work. hard work is not very appealing. in my experience the REST
>> style is most demanding on those building _clients_ not servers.
>> Google is trying to get people to build "consuming" applications; the
>> hardest part, IMO. i think Google has decided to not try to convince
>> their audience they need to adopt a state-machine style in order to
>> consume the data Google is offering. instead, they decided to make
>> consuming Google data "easy" and "familiar." hence the adoption of an
>> RPC over HTTP pattern.
>>
>> I think Google thought about this carefully and knows exactly what
>> they are doing. and they'll get lots of adoption, too.
>>
>> mca
>> http://amundsen.com/blog/
>> http://mamund.com/foaf.rdf#me
>>
>>
>>
>>
>> On Fri, Jul 30, 2010 at 09:45, Eric J. Bowman <eric@...> wrote:
>> >>
>> >> I made this too hard. Forget the first GET. The conditional PUT uses
>> >> if-none-match the Etag of /image.jpg?rot=90. Makes the hypertext API
>> >> even easier to write.
>> >>
>> >
>> > What if we don't want to transfer a representation of image.jpg to the
>> > client, or back to the server? No problem. HEAD /image.jpg?rot=90 to
>> > get the Etag (or GET), followed by a conditional POST to /image.jpg of a
>> > representation of the desired application state as application/x-www-
>> > form-urlencoded, i.e. POST /image.jpg?rot=90 if-none-match Etag.
>> >
>> > While this may technically solve the lost-update problem, without
>> > actually transferring the image, how does the user (human or machine)
>> > know the image hasn't already been rotated by 90 degrees? If what
>> > Google meant to call awkward wasn't the notion of the client doing the
>> > rotation, but the notion that the image needs to be transferred to the
>> > client at all, then I don't see any way around it -- without viewing
>> > the image, how would a human or machine user know that it needs
>> > rotation, or by how many degrees?
>> >
>> > So my HEAD-conditional POST solution is only RESTful in situations
>> > where the user goal is to rotate an image regardless of current
>> > orientation. Whereas this is the only use-case Google's RPC supports,
>> > without solving for lost-update, making it brittle even where lost-
>> > update isn't a problem: Google's way, if the confirmation of the POST
>> > is lost and the operation repeated as a result, the rotation is 180
>> > degrees not 90. My way prevents this by properly identifying
>> > resources (for starters) and using conditional requests. Not brittle.
>> >
>> > There is nothing unRESTful about POST /image.jpg?rot=90 being
>> > interpreted by the server to mean "rotate image.jpg 90 degrees" unless
>> > such an operation isn't hypertext-driven, and provided the media type is
>> > application/x-www-form-urlencoded. In which case "rot" is not a "verb",
>> > it is a noun identifying a stored procedure (regardless of HTTP method).
>> >
>> > Google's POST, by way of comparison, is not a transfer of a
>> > representation of the desired resource state. It's RPC. I couldn't
>> > make this argument if Google allowed GET on the same URI -- this is the
>> > difference between identification of resources (regardless of how sloppy
>> > the URIs) and custom verbs tunneled over POST.
>> >
>> > -Eric
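The conditional-request guard against lost updates described above can be sketched in miniature. The class and method names below are purely illustrative (not any real API), the "image" is modelled as just an orientation value, and the sketch uses If-Match-style semantics as the lost-update guard, a slight variant of the if-none-match phrasing in the quoted message:

```python
import hashlib

# Minimal in-memory model: the client presents the ETag it last saw,
# and the server refuses the operation with 412 Precondition Failed
# if the resource has changed since.
class ImageResource:
    def __init__(self, orientation=0):
        self.orientation = orientation  # degrees; stands in for the image bytes

    @property
    def etag(self):
        return hashlib.md5(str(self.orientation).encode()).hexdigest()

    def conditional_rotate(self, degrees, if_match):
        """Models POST /image.jpg?rot=<degrees> with an If-Match header."""
        if if_match != self.etag:
            return 412, self.etag  # someone else already changed the resource
        self.orientation = (self.orientation + degrees) % 360
        return 200, self.etag

img = ImageResource()
tag = img.etag                          # client: HEAD /image.jpg to learn the ETag
status, new_tag = img.conditional_rotate(90, tag)
assert status == 200

# A repeat of the same request (e.g. after a lost confirmation) is now
# rejected instead of silently rotating the image to 180 degrees:
status, _ = img.conditional_rotate(90, tag)
assert status == 412
assert img.orientation == 90
```

The repeat of the request fails the precondition, which is exactly the brittleness argument: without the conditional header, the retried POST would rotate the image a second time.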
>> >
>> >
>> > ------------------------------------
>> >
>> > Yahoo! Groups Links
>> >
>> >
>> >
>> >
>>
>
>
>
>
---------- Forwarded message ----------
From: "António Mota" <amsmota@gmail.com>
Date: 30 Jul 2010 17:44
Subject: Re: [rest-discuss] Google proposes RPC style APIs over REST
To: "Suresh Kumar" <sureshkk@...>

But there is nothing wrong, from my POV; you can have all the verbs that
you need, as REST is not tied to HTTP. Of course, those verbs should be
as few as possible, at least for maintenance reasons, and they should be
*generic* across the resources - I mean, you should not have specific
verbs for specific services. But things like LISTEN or MOVE or FINISH, I
don't see why not. Of course you'll lose some things that the native
verbs give you for free, like visibility and so on, but you gain other
things. I think if you argue like this you can have a REST approach as
close as you can get instead of dumping it altogether.

> On 30 Jul 2010 10:59, "Suresh Kumar" <sureshkk@...> wrote:
> > This is exactly what is wo...
>
> 2010/7/30 António Mota <amsmota@...>
> > I don't see a company like Google, the biggest company on the net, born and raised on the net, ...
>
> --
> When the facts change, I change my mind. What do you do, sir?
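The constraint argued for above - a small, fixed, resource-independent verb vocabulary rather than per-service custom verbs - can be sketched as a server-side dispatcher. The verb set and paths are invented for illustration:

```python
# A small, fixed vocabulary shared by both ends, applied uniformly to
# every resource. Ad-hoc, service-specific verbs are rejected outright.
UNIFORM_VERBS = {"GET", "POST", "PUT", "DELETE", "LISTEN"}

def dispatch(http_method, path, query_verb=None):
    """Resolve the effective verb: a ?verb=/?method= override if present,
    otherwise the HTTP method itself."""
    verb = query_verb or http_method
    if verb not in UNIFORM_VERBS:
        return 400, "unknown verb"   # per-service custom verbs don't get in
    return 200, f"{verb} {path}"

# POST /tasks/123?verb=LISTEN is accepted; a service-specific verb is not:
assert dispatch("POST", "/tasks/123", "LISTEN") == (200, "LISTEN /tasks/123")
assert dispatch("POST", "/tasks/123", "markDone") == (400, "unknown verb")
```

The point of the sketch is only the whitelist: the verbs stay few, generic, and identical in meaning across all resources, which is the condition under which the post argues such an interface can still be uniform.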
Long post, sorry.
A lot of people in Google understand REST and the Web, very well. You
can't build services at their scale and not understand fundamentals.
That said, what I gather they're doing seems reasonable to me and I
don't think the sky is going to fall in because they want to leverage
their internal tool chain or reach more developers. And I don't see this
as going back to RPC. For the reasons below, I suspect it's about
balancing trade-offs.
The fundamental, core problems aren't to do with REST v RPC any more. I
think as an industry we're past that thankfully, because it was hell.
I think first there's an issue with the notion of a 'Web API'. The Web 2.0
crowd co-opted the term API, but in reality these things are very
different to software APIs. Even the copyleft contingent of the FOSS
community had to mint a new licence, the AGPL, to reflect how systems
work in this century. However the basic expectation of a software API
remains, one of which is ease of use and abstraction of low-level
detail.
The approach I've found that works is to build your system substrate in
line with REST and sane HTTP practices - resources, links, media type
negotiation, uniform interface, cache tiers, so you have a semblance of
a systems design. A lot of systems and scaling issues I see are very
fixable when you apply REST, the notable exception being trying to do
connection-oriented stuff like Comet or IM over HTTP, for which there
are no easy solutions (and obviously, it's not a problem domain REST is
designed to solve).
With that in place you then layer on syntax sugar in the form of client
libraries. The downside is you are now supporting 2 "APIs" working at
two different layers, the upside is that you've obtained a decent
adoption/scaling tradeoff. Developers just don't want to deal with HTTP
and XML and JSON and sideline caching and marshalling and partial update
gorp. I'm sure the CPU and Programming Language architects aren't very
happy with developers either, but the industry is where it is.
Mike has nailed it when he said REST clients are hard to build. Ok, so
in a sense they aren't rock hard, but, speaking very roughly here, they
present the same barrier to a typical client developer that building an
event-based server might present to a typical web developer. Basically
you've lost 90% of your audience. Actually any protocol + format client
is hard to build, but REST as in HTTP presents an extra challenge as the
whole world is building lots of different applications on top of it.
If you are in the platform game, this constitutes a problem, as
platforms are all about adoption. If you are in the distributed platform
game you have to balance these systems and ease of use concerns. There's
no point having a super-scaling platform no-one uses. There's no point
having a super-popular platform that collapses.
The client sugar bit is too hard today. Speaking from experience in the
mobile world, supporting the sheer variety of clients is a very tough
ask. Frankly, it is economically simpler on the server side to just
expose data and protocols. However to manage that cost down while
growing your developer base, you will eventually want a way to help
generate clients for the developers and client platforms that you
haven't reached yet. This is subtly but importantly different to the
mindless application of RPC we've seen in the past where one tries to
apply the wrong paradigm to the wrong reality. When you're looking to
lower client development costs, the goal or solution isn't to end up
with inappropriate systems. No systems engineer worth a damn working at
scale will throw out REST quality properties. But ease of use is a
reasonable concern. It is semantic tunnelling, but it's not total idiocy
such as running deletes over get.
WADL, for example, can map a 'code method' name to an 'HTTP method', so
you get a nice domain-specific and expressive interface at the level of
software mapped onto a sensible system primitive, exactly what you want
to keep a balance between systems engineers and developers. I don't see
what's not to like about that - there's nothing that requires you to
break uniformity. Ok, so you can declare URLs upfront and maybe that's
not always ideal, but if you don't want to stand over them, document
that, and then follow through with a cache expiration on the document to
allow the client to regenerate. Worst case you're standing over
uri-template syntax and maybe you need to manage the WADL URLs with the
links in your formats. I don't see this as a huge problem or
architecture violation.
Hence the trade off leads to cherry picking tools and approaches that
assist with client generation and cleaner programming models.
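The "client sugar over a uniform interface" idea above can be sketched as a toy: a description document (a stand-in for WADL, invented for illustration) maps friendly code-level method names onto plain HTTP verbs and URI templates, so the developer-facing API is expressive while the wire stays uniform:

```python
# Hypothetical description document: method name -> (HTTP verb, URI template).
DESCRIPTION = {
    "mark_done": {"http_method": "POST", "uri": "/tasks/{owner}/{task_id}"},
    "get_task":  {"http_method": "GET",  "uri": "/tasks/{owner}/{task_id}"},
}

class GeneratedClient:
    """Builds one Python method per described operation."""
    def __init__(self, description, transport):
        self._transport = transport  # callable(verb, uri) -> response
        for name, op in description.items():
            setattr(self, name, self._make_method(op))

    def _make_method(self, op):
        def call(**params):
            uri = op["uri"].format(**params)      # expand the URI template
            return self._transport(op["http_method"], uri)
        return call

# A fake transport (just records requests) keeps the sketch self-contained:
log = []
client = GeneratedClient(DESCRIPTION, lambda verb, uri: log.append((verb, uri)))
client.mark_done(owner="@me", task_id="123")
assert log == [("POST", "/tasks/@me/123")]
```

Nothing here breaks uniformity: the developer calls `mark_done`, but the wire carries an ordinary POST to an ordinary URI, which is the balance the paragraph above argues for.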
Another serious problem is that the time and effort to mint good media
types doesn't reflect business reality - strictly speaking that's not a
problem of the REST architecture, but it is a problem for a business.
Atom and AtomPub took years to wrap up, and you know what, most
half-decent general purpose formats take a long time. Companies need to
ship and just can't wait years to cut a format and the formats we have
aren't commodity options that just work in general for most application
domains.
For this latter 'time to market' issue there just isn't a good solution,
not when the modelling primitives are just about managing structure,
such as JSON, XML and IDLs. RDF is probably the closest to something
that has general adoption promise, but ime that seems to require
significant retooling on the server, especially if you're using
relational databases and ORMs. My early background was Agents and AI, so
REST and network protocol design seems like McAgents, a dumbed-down
version of speech acts and ACLs, but at least they hang together
conceptually. I can remember ACL interlingua efforts like FIPA and KIF
that made a ton of sense fail because the existing programming paradigms
were too entrenched and the general state of the art too primitive for them to have
any chance of adoption. Things are marginally better today and we still
badly need formats that can model above the level of syntax, but I don't
see that happening immediately. A current example, if you want one, is
how tied up the Activity Streams effort is around modelling verbs; a
language with actual semantics would really help there, but then I'm
thinking, well, at least the nouns are modelled, and how likely is it
that a large number of developers today would work with an interlingua?
The same goes for the FB Graph API. It's a small step in the right
direction.
So on balance I think serving description formats that can drive
client tooling and peppering 'methods' in URLs that don't mess with the
infrastructure or uniform method assurances, represent reasonable
trade-offs and not some kind of retreat to pure RPC. Think of it this
way - all these developers are still getting numerous benefits from REST
style HTTP but it's not shoved in their faces as incidental complexity
and eat your greens - I think that is a good thing.
Bill
On Fri, 2010-07-30 at 16:43 +0100, António Mota wrote:
> Let me argue a little here. First, let me say I'm not "defending"
> Google (as if they needed...) nor their solution, just having some
> loud thoughts about it.
>
> So, I'll argue that having what they call "Augment REST with Custom Verbs" like
>
> POST /tasks/@me/{taskId}?method=markDone
>
> is not necessarily not-REST. It can be RESTful or not. It depends.
>
> Now I'm not going to quote Roy here, there are people more qualified
> than me to do it and I don't want to take the risk of misquoting
> and/or quote out of context as it is often the case. But it is my
> impression that REST doesn't advocate a limited number of verbs, or
> even less, that REST is limited to GET, POST, PUT, DELETE, or even
> less, that should be limited to CRUD verbs.
>
> And what I saw written by Roy is that a REST based architecture should
> not be dependent, or tied to, any particular protocol. And what I also
> see is most of the people discussing REST in terms of HTTP.
>
> Now as I've said a few times, our pro-REST, or wish-to-become-REST,
> infrastructure was built from the beginning with the goal of
> supporting multiple protocols. Thankfully, we started on that before I
> was on this list, otherwise I'd probably have
> concluded from this list that that was impossible, or not REST. But of
> course, the first problem was how to have a Uniform Interface spanning
> multiple protocols. So after some consideration we decided to have
> GET, POST, PUT, DELETE - not because we wanted to work only to HTTP
> but because we knew that our HTTP connector was going to be the most
> used (it's the one our fat clients use). And out of necessity we also
> added another method, LISTEN. And so those 5 verbs were our uniform
> interface. We were not tunnelling HTTP over the other protocols as it
> may appear, we were kind of tunnelling our "personal" protocol over
> all the protocols.
>
> So, at this point, the use of
>
> POST /tasks//123?verb=LISTEN
>
> at least as far as I can see, doesn't seem to me to break REST.
>
> So the same for that Google API, if they constrain the methods in
>
> POST /tasks/@me/{taskId}?method=XXXXXX
>
> in a way they are limited in number, they always mean the same, and
> they are described in a way that both the server and the client
> understand their meaning, I don't see that as unRESTful...
>
> And trying to reach as large an audience as possible is a legitimate
> objective; there's no point in having a "perfect" solution if no one
> knows about it...
>
>
>
>
> Melhores cumprimentos / Beir beannacht / Best regards
> _____________________________________________________________
> António Manuel dos Santos Mota
> Contacts: http://card.ly/amsmota
> _____________________________________________________________
> _____________________________________________________________
> Disclaimer: The opinions expressed herein are just my opinions and
> they are not necessarily right.
> _____________________________________________________________
>
>
> On 30 July 2010 15:47, mike amundsen <mamund@...> wrote:
> >
> >
> >
> > veering the thread slightly off to another angle...
> >
> > there are lots of engineering disciplines present @ Google. one
> > engineering discipline that i suspect played a role in the new
> > "Discovery-Based API" model is _social_ engineering.
> >
> > i think this is an example of a team @ Google offering their audience
> > (developers in this case) what Google thinks developers want; what
> > Google needs to do to get developers to adopt their platform.
> >
> > i, personally, have seen the same thing happen at Microsoft.
> >
> > these are smart people. they "get" REST. however, they suspect their
> > target audience does not.
> >
> > Also, it's been said on this list (more than once) that one of the
> > prime barriers to adopting the REST style (or any new pattern, tech,
> > etc.) is psycho-social. people don't like change, don't find the new
> > thing appealing, don't want to lose something in the change, etc. and
> > to prevent these perceived "negative consequences" from occurring,
> > people will attempt to argue against the new thing using
> > pseudo-technical reasoning (basically unsupported assertions like "it
> > won't scale" or "no one will like it", and so forth). often, this kind
> > of arguing is effective in the social setting of the office since some
> > in the conversation hold power over others ("i can't convince my
> > boss", etc.).
> >
> > Finally, the REST style is not complicated (it's one of the few
> > network arch styles based in clear constraint-based terms), but it is
> > hard work. hard work is not very appealing. in my experience the REST
> > style is most demanding on those building _clients_ not servers.
> > Google is trying to get people to build "consuming" applications; the
> > hardest part, IMO. i think Google has decided to not try to convince
> > their audience they need to adopt a state-machine style in order to
> > consume the data Google is offering. instead, they decided to make
> > consuming Google data "easy" and "familiar." hence the adoption of an
> > RPC over HTTP pattern.
> >
> > I think Google thought about this carefully and knows exactly what
> > they are doing. and they'll get lots of adoption, too.
> >
> > mca
> > http://amundsen.com/blog/
> > http://mamund.com/foaf.rdf#me
> >
> > On Fri, Jul 30, 2010 at 09:45, Eric J. Bowman <eric@...> wrote:
> > >>
> > >> I made this too hard. Forget the first GET. The conditional PUT uses
> > >> if-none-match the Etag of /image.jpg?rot=90. Makes the hypertext API
> > >> even easier to write.
> > >>
> > >
> > > What if we don't want to transfer a representation of image.jpg to the
> > > client, or back to the server? No problem. HEAD /image.jpg?rot=90 to
> > > get the Etag (or GET), followed by a conditional POST to /image.jpg of a
> > > representation of the desired application state as application/x-www-
> > > form-urlencoded, i.e. POST /image.jpg?rot=90 if-none-match Etag.
> > >
> > > While this may technically solve the lost-update problem, without
> > > actually transferring the image, how does the user (human or machine)
> > > know the image hasn't already been rotated by 90 degrees? If what
> > > Google meant to call awkward wasn't the notion of the client doing the
> > > rotation, but the notion that the image needs to be transferred to the
> > > client at all, then I don't see any way around it -- without viewing
> > > the image, how would a human or machine user know that it needs
> > > rotation, or by how many degrees?
> > >
> > > So my HEAD-conditional POST solution is only RESTful in situations
> > > where the user goal is to rotate an image regardless of current
> > > orientation. Whereas this is the only use-case Google's RPC supports,
> > > without solving for lost-update, making it brittle even where lost-
> > > update isn't a problem: Google's way, if the confirmation of the POST
> > > is lost and the operation repeated as a result, the rotation is 180
> > > degrees not 90. My way prevents this by properly identifying
> > > resources (for starters) and using conditional requests. Not brittle.
> > >
> > > There is nothing unRESTful about POST /image.jpg?rot=90 being
> > > interpreted by the server to mean "rotate image.jpg 90 degrees" unless
> > > such an operation isn't hypertext-driven, and provided the media type is
> > > application/x-www-form-urlencoded. In which case "rot" is not a "verb",
> > > it is a noun identifying a stored procedure (regardless of HTTP method).
> > >
> > > Google's POST, by way of comparison, is not a transfer of a
> > > representation of the desired resource state. It's RPC. I couldn't
> > > make this argument if Google allowed GET on the same URI -- this is the
> > > difference between identification of resources (regardless of how sloppy
> > > the URIs) and custom verbs tunneled over POST.
> > >
> > > -Eric
> > >
> > >
> >
>
>
On Fri, 2010-07-30 at 03:09 -0600, Eric J. Bowman wrote:
> > "Suresh" wrote:
> > >
> > > One of the examples cited in the presentation is rotating an image in
> > > flicker and the author claims using RPC style is natural and works
> > > better than just using REST. Any thoughts??
> >
> > First thought? Google ought to know better. If the desire is to GET
> > an image from the server, rotated + or - x degrees, then the proper
> > method is GET, not an RPC POST.
> >
> > GET /image.jpg?rot=90

I'd agree with PUT over POST because you get a caching option (which is
usually what you want for image operations). Whether to use GET seems
like an "it depends" thing - if a resource was created as a side effect
of that GET, then GET was not the right thing.

It's Friday so I'd like to avoid a discussion on whether the resources
"/image.jpg?rot=90" exist to be discovered by people (therefore GET) or
whether they are man made inventions (therefore POST/PUT) ;)

Bill
Bill de hÓra wrote:
>
> I'd agree with PUT over POST because you get a caching option (which
> is usually what you want for image operations). Whether to use GET
> seems like an it depends thing - if a resource was created as a side
> effect of that GET, then GET was not the right thing.
>

There's no rule against GET creating a resource; GET just means the
client didn't request it and can't be held to account for it.

>
> It's Friday so I'd like to avoid a discussion on whether the resources
> "/image.jpg?rot=90" exist to be discovered by people (therefore GET)
> or whether they are man made inventions (therefore POST/PUT)
>

Um, wow, Bill. It must be Friday or something, this is one of those
fundamental REST design patterns that's simply beyond dispute. If I can
create form markup that builds that query, and return 400 for any value
over 360 or under -360, well, that's about as simple as hypertext REST
APIs get.

-Eric
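The hypertext pattern Eric describes - form markup that builds the ?rot= query, with the server returning 400 for out-of-range values - can be sketched as follows. The form markup and the handler are illustrative, not anyone's actual implementation:

```python
# The hypermedia control the server would serve: the client learns how
# to construct the query from the form, not from out-of-band docs.
FORM = """<form action="/image.jpg" method="get">
  <input type="number" name="rot" min="-360" max="360"/>
  <input type="submit" value="Rotate"/>
</form>"""

def handle_get(rot):
    """Models GET /image.jpg?rot=<rot>: validate the query parameter."""
    try:
        degrees = int(rot)
    except (TypeError, ValueError):
        return 400                  # non-numeric rotation
    if not -360 <= degrees <= 360:
        return 400                  # out-of-range rotation
    return 200                      # would serve the rotated representation

assert handle_get("90") == 200
assert handle_get("540") == 400
```

The point is that the interaction stays hypertext-driven: the client fills in a form the server handed it, and the server polices the value space with a plain 400.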
> > value over 360 or under -360
>
Must be Friday or something. I meant 0-360, or -180 to +180. /nitpick

-Eric
On Fri, 2010-07-30 at 13:48 -0600, Eric J. Bowman wrote:
> Bill de hÓra wrote:
> >
> > I'd agree with PUT over POST because you get a caching option (which
> > is usually what you want for image operations). Whether to use GET
> > seems like an it depends thing - if a resource was created as a side
> > effect of that GET, then GET was not the right thing.
> >
>
> There's no rule against GET creating a resource; GET just means the
> client didn't request it and can't be held to account for it.

No argument. But that doesn't explain why POST is incorrect.

> > It's Friday so I'd like to avoid a discussion on whether the resources
> > "/image.jpg?rot=90" exist to be discovered by people (therefore GET)
> > or whether they are man made inventions (therefore POST/PUT)
> >
>
> Um, wow, Bill. It must be Friday or something, this is one of those
> fundamental REST design patterns that's simply beyond dispute. If I
> can create form markup that builds that query, and return 400 for any
> value over 360 or under -360, well, that's about as simple as hypertext
> REST APIs get.

I don't follow your argument, as POST and GET are equally valid for a
form. The decision point seems to be whether a resource was created or
not - only the server can know, and the server therefore can dictate the
appropriate method (especially if it's forms-driven). So either method
may be appropriate.

Bill
On Mon, 2010-07-26 at 15:52 -0700, Will Hartung wrote:
> On Sun, Jul 25, 2010 at 2:40 AM, Eric J. Bowman <eric@...> wrote:
> > The biggest clash between Old Testament and New right now seems to be
> > the issue of media type proliferation. On that point, please refer to:
> >
> > http://roy.gbiv.com/untangled/2008/paper-tigers-and-hidden-dragons
> >
> > Notice that Roy's solution to the problem space is a sparse-bit array.
> > Instead of creating a new media type, Roy's thought process is to
> > consider what ubiquitous media type may be repurposed to this need.
> > His choice is image/gif. That's so REST!
>
> It seems to me the conflict is coming from two distinct visions of
> computing.
>
> One vision is to model the world as you see fit, and make the world
> work with it. The other is to take the world's models and make your
> software work with those.
>
> Your discussion of using HTML is a simple example. You've always
> mentioned that before, and I never quite grokked how you went about it
> until recently. Effectively what you are doing is using semantic HTML
> markup combined with RDFa-style annotations to augment the markup, and
> using that as a representation for your data.
>
> When I look at the RDFa primer
> (http://www.w3.org/TR/xhtml-rdfa-primer/) it became much clearer to me.
>
> But it still prompted my confusion about identifying the data to the
> system, since application/xhtml+xml simply doesn't tell me, at least,
> enough about how to process the data. But to your point, it does tell
> me what it is, and if it were my standard data type, then I would
> proceed to mine the payload for the interesting attributes.
>
> Apparently, that's what you're doing, correct? The XML payload that
> happens to be XHTML is not processed in total. Rather you dig your
> data out of it guided by XHTML and RDF annotations.
> If it were some defined XML, I'd be tempted to take the schema,
> generate a bunch of JAXB annotations, and have the framework
> marshal/unmarshal the document to internal Java objects, and
> manipulate those rather than, perhaps, pull chunks out of the document
> using a bunch of, say, XPath expressions.
>
> That's when the light hit me. Effectively, if your path of approach is
> using something like XPath as your accessor technique, then the
> difference between an XML document and an XHTML/RDFa document is the
> actual paths used, but really little else. The RDFa can impose enough
> structure that static XPath expressions are effective and precise
> enough to get the data you want out of the payloads. Once that
> decision has been made, XML vs XHTML becomes a bike-shed color, and
> it's easy to see the extra value XHTML provides "for free" over XML.
>
> But I think it's clear when you're model-making, and particularly from
> a world where binding documents to objects is common, automated, and
> "free", the XHTML option never comes on the radar. Arguably, it's not
> even an option at that point. Who wants the complexity of a generic
> XHTML DOM, even if mapped to an Object in the system, compared to a
> "simpler", specific DOM/mapping?
>
> XHTML also (potentially) loses the value that things like Schema
> validation can bring to the table.
>
> Now, technically, you could make a "sub schema", where your document
> IS XHTML, it's just a specific subset of it that you (the designer)
> have decided is enough to represent your data. You can schema this,
> potentially map this (not many mappers do well with XML attributes to
> specific object slots), etc. "Cake and eat it too". If the goal of
> XHTML is for those intermediaries (i.e. it's not for the client's
> benefit, nor the server's benefit), that can work.
> But if you go this route, you can't take "arbitrary" XHTML that
> happens to have your interesting data embedded within it, since the
> overall document may not match your subset schema.
>
> But I don't think this is contrary to what you've been discussing. I
> don't think you've ever advocated a system being able to take
> arbitrary documents that meet the higher-level specification of the
> data type you're leveraging, vs the more specific subset that your
> system supports. Might be a handy feature, but it's not a requirement.
>
> However, whether you use XHTML or XML, the semantics of the payload
> still need to be defined. That's always hard work.
>
> In that light, though, I want to take Roy's example you cited.
>
> While using a GIF is a clever media type to use, I think for many
> folks interested in this data it's wrong on many levels.
>
> First, it's not a sparse array, as was suggested, it's just compact.
> You're still sending all 1M bits whether it's 1 user or 10000 user
> changes. Yes, it compresses, but that's not relevant as that's only a
> transport issue.
>
> But most importantly, many systems that happen to use the GIF media
> type DON'T use it at the level for which it's being suggested.
> Specifically, at the bit level. I don't know PHP, but is it really
> straightforward to get the color of pixel 100,100 of a received GIF?
> What about Javascript in a browser? Now, perhaps, with the canvas
> element it can be done, but that's a pretty recent development. But
> either way, it sure is a lot of hoops to jump through to find out if
> bit #100100 is set. Most systems present the artifact instantiated
> from a GIF datatype as an opaque blob with very simple properties
> rather than as a list of bits.
>
> I see the conflict between the reuse of what is, vs the creation of
> what one wants, as the difference between the folks wanting full-boat
> OO systems and typing within JS instead of just passing around hashes
> of hashes.
> Bags of hashes of bags of hashes. The conflict between the strongly typed crowd and the dynamically typed crowd (the battles between which are legion). Some make do, others want specific abstractions to work with.
>
> We're actually seeing the phenomenon of reusing data types, even in the SOAP world, here in health care: leveraging a few "common" data formats for many uses. A common data type today is the Document Submission Set payload. It's based on ebXML, which is used by another standards committee, and therefore adopted by yet another standards committee.
>
> Ideally this is what standard formats are for. But, at the same time, the format is so onerous that there is already pushback from the "simpler" crowd. For a simple exchange, there is a huge amount of "boilerplate" using this format. Just like the pushback from SOAP, and the boilerplate it brings with it (outside of the semantics of SOAP). "Why can't I just send a PDF?" they say.
>
> So, standards or no, they're not necessarily easy to use. Tooling made SOAP "easy to use". REST is "harder" for many to use because of the lack of tooling. Throwing an XSD against some tools and getting free Java classes is "easier" than crafting and testing DOM code or XPath queries.
>
> That's where the pressure for many media types is coming from, IMHO. They're "cheap" to make, and "easy" to use.

Very good analysis. I wish it were 2022. HTML5 would be finished, and maybe the world would have moved off media types for APIs in favour of Higher Order HTML, which allowed you to express your data clearly and specifically in a single interlingua. Imagine being able to describe the domain, range and cardinality of your data. That would be mappable to code. Life would be grand :)

Bill
Bill de hÓra <bill@...> wrote:

> I don't follow your argument, as POST and GET are equally valid for a form. The decision point seems to be whether a resource was created or not - only the server can know, and the server therefore can dictate the appropriate method (especially if it's forms-driven). So either method may be appropriate.

And I don't understand what you guys are even arguing about. GET can create as many resources as it likes, just like POST; it's just that GET only returns what the URI specifies. And heck, even there it's *still* up to the server to deliver whatever it feels like back. /image.jpg?rot=90 might return any random picture, and not rotated at all, if the server wanted to be a dick. It's up to the server to determine caching of its resources, and how it deals with that behind the scenes should have *no* impact on the client. If that means the server deals with 360 different versions of that image in the back, then fine. I'm sure we all agree with that, no?

Whatever is represented by /image.jpg is *not* the same thing as /image.jpg?rot=90. If you want to update /image.jpg permanently, use a PUT*, but I'm already getting a bad taste in my mouth from thinking about this stuff in terms of caching. Eliminate POST, and already this is a cleaner API.

*Oh god, I hope this isn't going to fall back to browser support for anything but GET/POST?

Regards,

Alex
--
Project Wrangler, SOA, Information Alchemist, UX, RESTafarian, Topic Maps
--- http://shelter.nu/blog/ ----------------------------------------------
------------------ http://www.google.com/profiles/alexander.johannesen ---
Bill de hÓra wrote:

> > > I'd agree with PUT over POST because you get a caching option (which is usually what you want for image operations). Whether to use GET seems like an it-depends thing - if a resource was created as a side effect of that GET, then GET was not the right thing.
> >
> > There's no rule against GET creating a resource; GET just means the client didn't request it and can't be held to account for it.
>
> No argument. But that doesn't explain why POST is incorrect.

I defined HEAD, GET and POST for the same example URI. POST is only incorrect if the request has retrieval semantics, because those semantics are defined for GET. So it's a violation of the self-descriptive messaging constraint, as surely as if Google's search interface were to make un-cacheable POST requests. What's really incorrect is this notion that rotate is a verb, not a parameter.

If GET /image.jpg is your only dereferenceable resource, then the only way you're going to avoid the lost-update problem is to have the client rotate the image and PUT it back, as per slide 53. OTOH, if you treat rotation as a parameter, you create a finite set of subresources which can respond to GET or HEAD requests with ETags representing each possible state of the parent resource. This is the only way to avoid lost update, and quite elegantly at that. Each time the parent image is rotated, the set of subresources changes, generating new ETags. The ETag of /image.jpg doesn't factor in, except on PUT, if POSTs all have query strings (i.e. we don't define an action for POST /image.jpg without a query). So my way's robust if a 200 response is dropped, in addition to avoiding lost update.

Something about "allow[ing] the forces that influence system behavior to flow naturally, in harmony with the system" seems to apply here.
Using POST to toggle a rotation in an RPC fashion is not in harmony with the REST paradigm of transferring representations of application state.

> > If I can create form markup that builds that query, and return 400 for any value over 360 or under -360, well, that's about as simple as hypertext REST APIs get.
>
> I don't follow your argument, as POST and GET are equally valid for a form.

I deliberately didn't specify method. Forms define a set of resources on which any method may be called, provided it's supported by the forms language. GET dereferences a rotated representation of the parent resource. PUT or POST executes a rotation of the parent resource, depending on whether an entity body is sent, using the replacement semantics of PUT or the process-this semantics of POST. GET isn't doing any rotating; it's dereferencing subresources identified by parameter. REST examples don't come any simpler.

> The decision point seems to be whether a resource was created or not - only the server can know, and the server therefore can dictate the appropriate method (especially if it's forms-driven). So either method may be appropriate.

I think that's a paper tiger. It only matters whether a resource was created in those cases where a 201 response needs to be generated, i.e. the user instructed the user agent to create a new resource (like posting the image in the first place). The decision point is RPC vs. REST. Do you have POST-only endpoints, or can you also GET them? Granted, RPC typically means you have some /service endpoint that can't be dereferenced, and everything beyond it is a query-string POST. In this case, Google allows /service (as /image.jpg) to be dereferenced, but this only makes it less RPC-ish, not more RESTful, since the key constraint being violated is identification of resources. The result is the same: everything beyond the endpoint is a query-string POST.
Treating rotation as a parameter is a lot less dangerous than treating it as a tunneled method. If it's a parameter, it's easy to define which resources of interest need their own identifier -- one URI for each of 360 degrees of rotation, or one URI for each of four 90-degree rotations, and 4xx the rest. (I should have used /image.jpg;rot=90 as my example, to make it more clear that rotation is just a parameter, not a method or a query.) -Eric
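Eric's ETag-per-subresource scheme can be sketched in a few lines. This is not his implementation, just an illustration under assumed names (`ImageResource`, `etag`, `put` are all invented): each rotation subresource carries an ETag bound to the parent's state, so rotating the parent invalidates every cached validator and a stale `If-Match` PUT fails with 412 instead of silently losing an update.

```python
# Sketch of parameterised rotation subresources with ETags derived from
# the parent image's state. All class and method names are hypothetical.
import hashlib

class ImageResource:
    def __init__(self, data):
        self.data = data

    def etag(self, rot=0):
        # Each subresource /image.jpg;rot=N gets an ETag bound to the
        # parent state, so any change regenerates the whole set.
        return hashlib.sha1(self.data + b";rot=%d" % rot).hexdigest()

    def put(self, new_data, if_match, rot=0):
        if if_match != self.etag(rot):
            return 412   # Precondition Failed: lost update avoided
        self.data = new_data
        return 200

img = ImageResource(b"pixels-v1")
tag = img.etag(rot=90)
assert img.put(b"pixels-v2", if_match=tag, rot=90) == 200
# The old ETag no longer matches any subresource after the update:
assert img.put(b"pixels-v3", if_match=tag, rot=90) == 412
```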
Alexander Johannesen wrote:

> And I don't understand what you guys are even arguing about. GET can create as many resources as it likes, just like POST; it's just that GET only returns what the URI specifies. And heck, even there it's *still* up to the server to deliver whatever it feels like back. /image.jpg?rot=90 might return any random picture and not rotated at all, if the server wanted to be a dick.

Yes, URIs are opaque. But for the sake of sanity, can we agree that I defined a system whereby ;rot=90 actually does do as I describe, in lieu of my having to actually code such a system as an example? ;-) PUT /image.jpg;rot=90 is also perfectly valid, assuming there is no automated server-side process and the subresources must be manually created.

> *Oh god, I hope this isn't going to fall back to browser support for anything but GET/POST?

Let alone fall back to an assumption that we're even talking about browsers. I've been assuming curl.

-Eric
2010/7/30 António Mota <amsmota@...>
> Let me argue a little here. First, let me say I'm not "defending"
> Google (as if they needed...) nor their solution, just having some
> loud thoughts about it.
>
> So, I'll argue that having what they call "Augment REST with Custom Verbs"
I think you are correct...
> like
>
> POST /tasks/@me/{taskId}?method=markDone
>
but not with this example. Where's the custom method? You're still using
POST and tacking on RPC inside the query string. On the Richardson Maturity
Model, you're still at Level 0 for this part of the API.
> is not necessarily not-REST. It can be RESTful or not; it depends.
>
> Now I'm not going to quote Roy here, there are people more qualified
> than me to do it and I don't want to take the risk of misquoting
> and/or quote out of context as it is often the case. But it is my
> impression that REST doesn't advocate a limited number of verbs, or
> even less, that REST is limited to GET, POST, PUT, DELETE, or even
> less, that should be limited to CRUD verbs.
>
> And what I saw written by Roy is that a REST based architecture should
> not be dependent, or tied to, any particular protocol. And what I also
> see is most of the people discussing REST in terms of HTTP.
>
Again, I agree with you. We aren't limited to using the standard HTTP
methods. HTTP itself is extensible and allows for the addition of methods.
Doing so shouldn't be done arbitrarily, but you are correct.
So, at this point, the use of
>
> POST /tasks/123?verb=LISTEN
>
> at least as far as I can see, doesn't seem to me to break REST.
>
> So the same for that Google API, if they constraint the methods in
>
> POST /tasks/@me/{taskId}?method=XXXXXX
>
> in a way they are limited in number, they always mean the same, and
> they are described in a way that both the server and the client
> understand their meaning, I don't see that as unRESTful...
Once again, your examples don't align with what you stated earlier. You are
embedding RPC into your URLs, which is RMM Level 0 (maybe -1, since so many
people seem to miss this). Why is this RPC and not REST? method is not
filtering your request; it's telling the server what to perform. Unless I'm
grossly mistaken, that defines RPC, does it not? If you had defined LISTEN
in a custom HTTP server and called
LISTEN /tasks/123
then you would have a new HTTP method and could abide by the REST
constraints.
Cheers,
Ryan Riley
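The contrast Ryan draws can be put in code. A minimal sketch, with invented handler names, task store, and URIs (this is not Google's API): the tunnelled-RPC handler has to route on a query parameter, while the uniform-interface alternative simply transfers the new representation of the task with PUT.

```python
# Sketch: verb tunnelled in the query string (RMM Level 0) vs. transferring
# state with a uniform method. All names here are hypothetical.
from urllib.parse import urlparse, parse_qs

tasks = {"123": {"status": "open"}}

def handle_rpc(method, uri):
    # Level 0: the real verb hides inside the query string.
    q = parse_qs(urlparse(uri).query)
    if method == "POST" and q.get("method") == ["markDone"]:
        tasks["123"]["status"] = "done"
        return 200
    return 400

def handle_rest(method, path, body=None):
    # Uniform interface: PUT replaces the task's representation.
    if method == "PUT" and path == "/tasks/123":
        tasks["123"] = body
        return 200
    return 405

assert handle_rpc("POST", "/tasks/@me/123?method=markDone") == 200
assert handle_rest("PUT", "/tasks/123", {"status": "done"}) == 200
```

Note how an intermediary (cache, ACL, log analyser) can understand the second request from its method and path alone, which is the self-descriptive-messaging argument made elsewhere in the thread.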
Bill de hÓra wrote:

> I think first there's an issue with the notion of a 'Web API'. The Web 2.0 crowd co-opted the term API, but in reality these things are very different to Software APIs.

I don't see it. Wikipedia isn't a normative reference, but I like their wording: "An API is an abstraction that describes an interface for the interaction with a set of functions used by components of a software system." Functions = resources, in REST, and the described interface must consist of a hypertext control interface. I don't care what the back-end implementation is like; if I'm to interact with it over the Web (via browser, curl, what-have-you) then I need documentation. With hypertext, that documentation happens to be a functional interface. Those controls tell me exactly how to interact with the underlying resources, by manipulating their representations.

It takes all of one HTML form to create a distributed API for a server-side image rotation system. REST doesn't require that clients follow the hypertext, but it does require the system to have hypertext controls. I can't understand why I wouldn't call an interactive hypertext control interface distributed over HTTP a Web or REST API.

-Eric
2010/7/31 Ryan Riley <ryan.riley@...>

> Unless I'm grossly mistaken, that defines RPC, does it not?

First, I don't think we disagree. Second, I don't think there's a definition of RPC that's going to get too far beyond simplified API calls wrapped in some accessible way, but I don't even think this debate is at all about RPC vs. REST vs. HTTP vs. anything else. This is all about semantics and *where* in your technology of choice you choose to put what, and what those mean in that spot. Google has chosen to put semantics in a certain spot that rubs against the way we do it in a more RESTful manner.

> If you had defined LISTEN in a custom HTTP server and called
> LISTEN /tasks/123

There are so many ways to slice and dice this whole debate (and I'm not picking on LISTEN here :), and to be honest, since a *lot* of the semantics these examples try to explain are simple structures, have we all forgotten to use the magic powers of URIs?

GET /tasks/123/activity
GET /tasks/123/events/today
GET /tasks/123/events/since?timestamp=4573984579348573

Each URI may or may not be related to any other given resource; it's up to the server. But the point here is that we *can* define new methods, however with a bit of smart placing of semantics you don't have to; return new resources that represent whatever needs representing. We try to cram too much meaning into API calls, forgetting that REST can give us a complete model back and not just simple resources. Even for the part of the debate about caching, persistence and other bits of API design, it should *all* come down to URIs.

The main deal with REST is the hyperlink state machine. It's even easy! I feel people are forgetting hyperlinks in the confusion when we think about "API" design. We don't use hyperlinks to create an API that deals with resources; the hyperlinks *are* the API and the resources. Are we confused by thinking resources have to be objects or things or results, and that we need an API on top to deal with them?
Because down that path lies distribution- and scalability-hell. Regards, Alex -- Project Wrangler, SOA, Information Alchemist, UX, RESTafarian, Topic Maps --- http://shelter.nu/blog/ ---------------------------------------------- ------------------ http://www.google.com/profiles/alexander.johannesen ---
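Alexander's "the hyperlinks *are* the API" point in miniature: a representation carries its own links, and the client drives itself by link relation rather than by constructing URIs or inventing verbs. The representation shape, link relations, and URIs below are made up for the sketch:

```python
# Sketch: a hypermedia representation whose links are the interface.
# The client only knows link-relation names, never URI structure.
representation = {
    "task": {"id": "123", "status": "open"},
    "links": {
        "activity": "/tasks/123/activity",
        "events-today": "/tasks/123/events/today",
        "events-since": "/tasks/123/events/since?timestamp=4573984579348573",
    },
}

def follow(rep, rel):
    # Dereference by relation; the server is free to change URIs at will.
    return rep["links"][rel]

print(follow(representation, "activity"))
```

Because the client couples only to relation names, the server can restructure its URI space without breaking anything, which is the independent-evolution benefit argued for throughout the thread.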
Alexander Johannesen wrote:

> Google has chosen to put semantics in a certain spot that rubs against the way we do it in a more RESTful manner.

That's a polite way of putting it. To be more blunt, if you have a use case for a method, then the method needs to be a method, i.e. not tunneled over POST or tacked onto a query string. Why? Because of the Web's security model. I secure HTTP servers by *method* and path. You may be allowed to GET or HEAD but nothing else. Blatantly bypassing the Web's security model should be an obvious matter of poor design, without bringing REST into it at all. I really don't want to think about the fiasco that would surely result from trying to implement a security model based on query strings.

> But the point here is that we *can* define new methods, however with a bit of smart placing of semantics you don't have to;

Being polite again. Folks should make no mistake that inventing your own HTTP methods goes against the whole purpose of the Uniform Interface. Again, like with registered media types, not a hard-and-fast rule, because evolution must be allowed. But for all intents and purposes, when you invent your own HTTP method instead of re-using one of the dozens already in existence, you've coupled your client to your server, in that both must share uncommon knowledge of the method. The point of the Uniform Interface is to re-use standard methods such that components are decoupled, to allow for independent evolution.

-Eric
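The securing-by-method-and-path argument reduces to very little code, which is part of its appeal. A minimal sketch with an invented ACL format: the rule engine sees only the method and the path, so a verb tunnelled in the query string sails straight past it.

```python
# Sketch: per-method, per-path authorization, as typical HTTP servers do
# it. The ACL structure and paths are hypothetical.
acl = {
    "/image.jpg": {"GET", "HEAD"},   # read-only to this client
    "/tasks/123": {"GET", "POST"},
}

def allowed(method, uri):
    path = uri.split("?", 1)[0]      # the query string plays no part
    return method in acl.get(path, set())

assert allowed("GET", "/image.jpg")
assert not allowed("PUT", "/image.jpg")
# A tunnelled verb is invisible: this is judged purely as POST /tasks/123,
# whatever destructive "method=..." the query string smuggles in.
assert allowed("POST", "/tasks/123?method=markDone")
```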
"Peter" wrote: > > My own take is that this sort of JSON/WADL approach was an attempt to > get around that need to mint new media types (or to come up w/ a > suitably generic media type for all Google services -- a role that > Atom fell short of). > How does this make it any different than using SOAP/WSDL? Haven't we been down the road of stating that methods and media types need not be protocol-layer concerns, before? Looks like a new instantiation of SOA to me, more than REST. > > One of the stated goals was to make it easier to bring a new API > online (interfaces, documentation, clients, etc.) when a new service > was rolled out. > Assuming you're right in your assessment, haven't we heard this promise before, from SOA/WS-*? All this tooling needed for code generation seems to me like its costs outweigh its benefits compared with REST, where even if it's less convenient to build, a system may still be rapidly developed with a text editor and knowledge of URI + HTTP + HTML that "just works" without all the fancy enterprise doo-dads. Incessantly banging my anti-corporatized-REST drum, Eric
--- In rest-discuss@yahoogroups.com, "Eric J. Bowman" <eric@...> wrote:

> "Peter" wrote:
> > My own take is that this sort of JSON/WADL approach was an attempt to get around that need to mint new media types (or to come up w/ a suitably generic media type for all Google services -- a role that Atom fell short of).
>
> How does this make it any different than using SOAP/WSDL? Haven't we been down the road of stating that methods and media types need not be protocol-layer concerns, before? Looks like a new instantiation of SOA to me, more than REST.

One difference is that the JSON/WADL is a run-time, *not* compile-time, approach. It's essentially just code-on-demand. And as such, it suffers drawbacks compared to a standardized media type with embedded application flow controls (e.g. reduced visibility/uniformity, etc.). But I'm not ready to say it is completely anti-REST, and I would certainly argue it is *not* an engineering failure or design failure (even if I would have hoped to see a different approach). In fact, it is kind of a fascinating study of "principled design" meant to optimize for a set of facts-on-the-ground with which Google is faced (spelled out nicely by Bill de hÓra's message on this thread). I'm pretty sure they know exactly what they are doing, and I would predict that they'll achieve better success with this approach than they did with the attempt at Atom/AtomPub. OTOH, as a model of "how to do REST" it could prove to be a v. bad influence indeed.

> > One of the stated goals was to make it easier to bring a new API online (interfaces, documentation, clients, etc.) when a new service was rolled out.
>
> Assuming you're right in your assessment, haven't we heard this promise before, from SOA/WS-*?
> All this tooling needed for code generation seems to me like its costs outweigh its benefits compared with REST, where even if it's less convenient to build, a system may still be rapidly developed with a text editor and knowledge of URI + HTTP + HTML that "just works" without all the fancy enterprise doo-dads.

But the question is "just works" for whom? SOA/WS-* promised as much for consuming applications. Google is trying to make their *own* life easier by not having to do the hard work of 1. creating appropriate media types and 2. creating client code libraries for each new service they roll out.

My current opinion on the matter (subject to change, of course :-)) is that we have HTML as a real success story, and Atom/AtomPub as a good idea that is too easily misused and is not the raging success I and others predicted. XHTML+RDFa is a real rabbit hole (for RESTful systems specifically) as far as I am concerned -- that's not the way we'll build the RESTful web out (same goes for RDF). HTML5 as a move *away* from XML is significant (I realize there is a serialization in XHTML, but that's an also-ran). It's notable that Google's v3 discovery approach will work perfectly well in a completely non-XML (read: primarily JSON) world.

Two other forces/trends that I see as significant are 1. the desire to pass *data* around (as opposed to largely textual/presentational HTML), and 2. clients that have no real need for HTML nor adequate processing capabilities for HTML (I'm thinking of mobile-based "apps").

So if indeed we see less and less XML, more and more JSON, and about the same amount of HTML, how will we be building RESTful systems in, say, 5 years? Google's is one answer to that question. Many people here (me included!) would like to see more attention paid to creating good standardized media types, but I don't think making everything a "+xml" media type is going to fly. I have no idea what the answer is.
I cannot help but think that "layered" application controls (a la Google v3 discovery approach) or honest-to-goodness JSON hypermedia types (i.e. link semantics, "form-like" controls in "+json" media types) are the only two obvious choices. --peter keane > Incessantly banging my anti-corporatized-REST drum, > Eric >
"Peter" wrote: > > XHTML+RDFa is a real rabbit hole (for RESTful systems specifically) > as far as I am concerned -- that's not the way we'll build the > RESTful web out (same goes for RDF). HTML5 as a move *away* from XML > is significant (I realize there is a serialization in XHTML, but > that's an also-ran). > It is a shame that HTML5 is speccing imperative solutions to problems that XSLT solves with declarative code; that all the major browsers finally, as of this year, handle XSLT 1 competently seems to have happened despite themselves. Had this state of affairs come about earlier, the HTML5 conversation would be totally different; as it is, I wish it would slow down and consider the advantages of hypertext over blackbox code. Regardless of serialization, HTML5 won't be changing my architectural pattern one whit. XSLT support in browsers isn't going away any time soon. When the time comes, I don't even need to change the logic within the XSLT, only the actual output markup. Same goes with supporting mobile devices, client or server can generate some sort of tiny app with XSLT, in fact Xforms has come out of nowhere to start looking like a viable mobile markup, and I'm a big fan of Xforms. I don't see REST becoming less relevant as mobile proliferates or the Web evolves towards being more m2m. I guess I'll keep talking until I'm blue in the face about it, but there's no sweeter way to communicate a distributed interface to a human than Xforms, which of course is perfectly viable as an m2m format (in an XML world), and the hypertext constraint is all about communicating a Uniform Interface to both, which I see as an absolute necessity -- humans will (hopefully) always develop and maintain the code. (Although I note that Asimov's First Law of Robotics was the first one to go down to "pragmatism," as robotics nowadays is driven not by the manufacturing sector, but by the human-killing sector. 
We ought to know better than to let them network, but as long as it's inevitable we should give them human-readable interfaces, rather than binary ones, so they can't lock us out for our own "protection"...) XHTML+RDFa is the most viable answer at the present time. At such time as something better comes along, ditching the old in favor of the new on my system involves editing some XSLT, then modernizing the CSS/JS to keep up with the new markup. REST is a long-term solution; in this instance, keeping up with the future is precisely the goal of my use of REST. Not that I'm planning on it for a really long time, but the backend data format is just as decoupled and easy to replace, without needing to re-think the architecture, as the front-end data format. So we'll see. I hope my business partner doesn't read your post. ;-) -Eric
Regardless of how smart an organization is upon its inception, the larger it gets, the less "smart" it becomes. As it gets larger and larger this continues until the organization is not very "smart" at all. This seems to always be the case, as far as I can tell. In a sense, the organization simply succumbs to the law of averages. 2010/7/30 António Mota <amsmota@gmail.com> > > > I don't see a company like Google, the biggest company on the net, born and > raised on the net, and that can hire the best minds around, has problems in > *getting* it, and they were proven themselves many times not to be lazy or > ignorants. > > Probably they see there are much more things on the net besides making web > sites and that allowing other parties to access their vast infrastructure > and services in programmatically ways is very different than serving HTML > for browser/human consumption - which by the way is true not only for the > net but for even more for enterprises too. > > Bottom line, doing things in the real world is not always compatible with > theoretical purist considerations and "all-or-nothing" views of the word... > And of course, when something is presented is such a way, pragmatic people > have a tendency to go away. Maybe Google is a good example how companies > behave in the real world. > > But this is just my opinion, of course... > > > On 30 July 2010 09:45, Alexander Johannesen < > alexander.johannesen@...> wrote: > >> >> >> Mike Kelly <mike@mykanjo.co.uk <mike%40mykanjo.co.uk>> wrote: >> > Understanding and applying REST correctly is hard, many people fail at >> both. >> >> What I find scary, though, is that the Google engineers aren't getting >> it right (or even half-right, perhaps more like a smidgen), and this >> stuff isn't *that* hard. What gives? Laziness? A few bad apples? >> Ignorance? Couldn't give a rats? 
>>
>> Alex
>> --
>> Project Wrangler, SOA, Information Alchemist, UX, RESTafarian, Topic Maps
>> --- http://shelter.nu/blog/ ----------------------------------------------
>> ------------------ http://www.google.com/profiles/alexander.johannesen ---

--
Bediako George
Partner - Lucid Technics, LLC
Think Clearly, Think Lucid
www.lucidtechnics.com
(p) 202.683.7486 (f) 703.563.6279
So what are you saying? That that is the case with Google?

On 31 Jul 2010 08:17, "Bediako George" <bediakogeorge@...> wrote:

> Regardless of how smart an organization is upon its inception, the larger it gets, the less "smart" it becomes. As it gets larger and larger this continues until the organization is not very "smart" at all. This seems to always be the case, as far as I can tell. In a sense, the organization simply succumbs to the law of averages.
>
> 2010/7/30 António Mota <amsmota@...>
>
> > I don't see a company like Google, the biggest company on the net, born and raised on the net, and that can hire the best minds around, has problems in *getting* it, and they were proven themselves many times not to be lazy or ignorants.
> >
> > Probably they see there are much more things on the net besides making web sites and that allow...
> >
> > On 30 July 2010 09:45, Alexander Johannesen <alexander.johannesen@gmail.com> wrote:
> >> ...

--
Bediako George
Partner - Lucid Technics, LLC
Think Clearly, Think Lucid
www.lucidtechnics.com
(p) 202.683.7486 (f) 703.563.6279
On Fri, 2010-07-30 at 20:31 -0600, Eric J. Bowman wrote:

> Bill de hÓra wrote:
> > I think first there's an issue with the notion of a 'Web API'. The Web 2.0 crowd co-opted the term API, but in reality these things are very different to Software APIs.
>
> I don't see it. Wikipedia isn't a normative reference, but I like their wording: "An API is an abstraction that describes an interface for the interaction with a set of functions used by components of a software system." Functions = resources, in REST, and the described interface must consist of a hypertext control interface.

Their wording also makes a distinction for 'Web API', which in encyclopedic terms implies there is a class of abstractions that are APIs, and another that are Web APIs. Perhaps the entry is nonsense. What you've done, in effect, is define an API to mean abstracting over protocols, software systems, resources and gateways. Interesting.

Bill
On Fri, 2010-07-30 at 23:57 -0600, Eric J. Bowman wrote:

> Regardless of serialization, HTML5 won't be changing my architectural pattern one whit. XSLT support in browsers isn't going away any time soon. When the time comes, I don't even need to change the logic within the XSLT, only the actual output markup. Same goes with supporting mobile devices, client or server can generate some sort of tiny app with XSLT, in fact Xforms has come out of nowhere to start looking like a viable mobile markup, and I'm a big fan of Xforms.
>
> I don't see REST becoming less relevant as mobile proliferates or the Web evolves towards being more m2m.

REST is useful for mobile; a couple of on-the-ground issues come to mind:

- Fat web formats consume energy and bandwidth.
- Hypertext traversal increases latency and network hops over constrained radio networks (even the packet-switched ones defined by LTE).

The former is mitigated by allowing partial data or less fat formats (decompression is not cheap for a mobile device, and there is no law of battery improvement developers can free-ride on). The latter is mitigated by allowing clients to consume a 'discovery' document that describes the resources available and their URIs. I note that performance is not an explicit goal of REST, but as things stand today I would prefer the option of discovery to throwing out the system benefits of REST. Fwiw, a forms document fits in my notion of discovery.

Finally, push is very useful for mobile, but that is outside REST's design goals. I suspect there are some concepts from ARRESTED or P2P that may be useful in mobile systems.

Bill
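Bill's discovery-document mitigation can be sketched quickly. The document shape below is invented for illustration (it is not Google's v3 format): one fetch gives a constrained client the whole resource map up front, trading a little hypertext visibility for fewer round trips on a high-latency radio link.

```python
# Sketch: a 'discovery' document mapping named resources to URI templates,
# consumed once by the client. The JSON shape and names are hypothetical.
import json

discovery = json.loads("""{
  "resources": {
    "tasks": {"uri": "/tasks",       "methods": ["GET", "POST"]},
    "task":  {"uri": "/tasks/{id}",  "methods": ["GET", "PUT", "DELETE"]}
  }
}""")

def uri_for(name, **params):
    # Expand a template locally instead of traversing links per request.
    template = discovery["resources"][name]["uri"]
    return template.format(**params)

print(uri_for("task", id="123"))
```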
> > XHTML+RDFa is the most viable answer at the present time. > Here's an interesting read, particularly this quote: "Jay Myer of BestBuy described how the BestBuy sales went up 30% when they added RDFa tags to their product pages. Although many search engines are not transparent about their use of RDFa tags in page rankings, Jay’s results should make it clear that this strategy works." http://planetxforms.org/node/1392 -Eric
Bill de hÓra wrote:

> REST is useful for mobile, a couple of on-the-ground issues come to mind

Have you tried Xforms? Client-side MVC architecture mitigates bandwidth issues, which is why this is a hopeful sign for mobile development:

http://lists.w3.org/Archives/Public/www-forms/2010Jun/0007.html

Xforms makes for thinner REST apps.

-Eric
On 30 Jul 2010, at 11:09, Eric J. Bowman wrote:

> There's no excuse not to use GET to rotate image.jpg on the server side, or PUT to change the state of image.jpg:
>
> PUT /image.jpg
> entity = cached /image.jpg?rot=90 from previous GET

But isn't the fact that this requires transferring the image to the client and then back to the server a good excuse, performance-wise (several orders of magnitude at stake here)? What about:

POST /image.jpg

with a body such as: <rotation angle="90"/>

- Philippe Mougin
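Whatever one thinks of the posting-metadata approach, Philippe's body is at least trivial for a server to handle. A sketch of parsing it with the standard library (the element and attribute come from his example, quoted to make it well-formed XML; the handler, status codes, and angle bounds are invented, the bounds echoing the 400-over-360 idea from earlier in the thread):

```python
# Sketch: validating a <rotation angle="..."/> request body server-side.
# The handle_post function and its return convention are hypothetical.
import xml.etree.ElementTree as ET

def handle_post(body):
    try:
        el = ET.fromstring(body)
    except ET.ParseError:
        return 400, None
    if el.tag != "rotation":
        return 400, None
    angle = int(el.get("angle", "0"))
    if not -360 <= angle <= 360:
        return 400, None       # reject out-of-range rotations
    return 200, angle

print(handle_post('<rotation angle="90"/>'))
```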
It seems that it is the case for all organizations that grow into behemoths. I don't think Google is an exception. Eventually, this begins to show in the quality of the wares they produce. I would admit that Google has done quite a good job so far. But I think it really is only a matter of time before they are caught by this snare. If they continue to grow in size, wealth, and power apace, then more likely than not an overall "desmarting" will occur.

Just an observation of mine. Not a law or anything. :)

2010/7/31 António Mota <amsmota@...>

> So what are you saying? That that is the case with Google?
>
> On 31 Jul 2010 08:17, "Bediako George" <bediakogeorge@...> wrote:
>
> Regardless of how smart an organization is upon its inception, the larger it gets, the less "smart" it becomes. As it gets larger and larger this continues until the organization is not very "smart" at all.
>
> This seems to always be the case, as far as I can tell. In a sense, the organization simply succumbs to the law of averages.
>
> 2010/7/30 António Mota <amsmota@...>
>
>> I don't see a company like Google, the biggest company on the net, born and raised on the net, and that can hire the best minds around, has problems in *getting* it, and they were proven themselves many times not to be lazy or ignorants.
>>
>> > Probably they see there are much more things on the net besides making web sites and that allow...
>>
>> > On 30 July 2010 09:45, Alexander Johannesen <alexander.johannesen@...> wrote:
>> >> ...

--
Bediako George
Partner - Lucid Technics, LLC
Think Clearly, Think Lucid
www.lucidtechnics.com
(p) 202.683.7486 (f) 703.563.6279
<snip> > Xforms makes for thinner REST apps. </snip> I like XForms because the plug-in makes such an improvement for the Web browser itself. And it makes REST implementations much easier on the client side. Baffles me that XForms support is not native in all browsers. Bummer. mca http://amundsen.com/blog/ http://mamund.com/foaf.rdf#me On Sat, Jul 31, 2010 at 16:45, Eric J. Bowman <eric@...> wrote: > Bill de hÓra wrote: >> >> REST is useful for mobile, a couple of on the ground issues come to >> mind >> > > Have you tried Xforms? Client-side MVC architecture mitigates > bandwidth issues, which is why this is a hopeful sign for mobile > development: > > http://lists.w3.org/Archives/Public/www-forms/2010Jun/0007.html > > Xforms makes for thinner REST apps. > > -Eric > > > ------------------------------------ > > Yahoo! Groups Links > > > >
Philippe Mougin wrote: > > > PUT /image.jpg > > entity = cached /image.jpg?rot=90 from previous GET > > But isn't the fact that this requires transferring the image to the > client and then back to the server a good excuse, performance-wise > (several orders of magnitude at stake here)? > > What about: > > POST /image.jpg > > With a body such as: <rotation angle=90/> > I'm not a big fan of the posting-metadata approach; that's still RPC if you think about it. My solution is here: http://tech.groups.yahoo.com/group/rest-discuss/message/16033 -Eric
> > I'm not a big fan of the posting-metadata approach; that's still RPC > if you think about it. > Unless we're talking about SVG, in which case using PATCH to change the nature of the image by posting a snippet of XML falls within REST. It's interesting to note that SVG considers image rotation a client-side task -- if I'm not mistaken this is done using URI fragments, which results in the same number of URIs describing the rotational state of the image. -Eric
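If rotation is indeed a client-side concern in SVG, each rotational state gets its own URI via an SVG view fragment (assuming the `svgView(transform(...))` fragment syntax from SVG 1.1 linking; the helper itself is hypothetical):

```python
def rotated_uri(base, angle):
    """Mint one URI per rotational state of an SVG image, with the
    rotation performed client-side via an SVG view fragment.
    Sketch only: assumes SVG 1.1 'svgView' fragment support."""
    return "%s#svgView(transform(rotate(%d)))" % (base, angle % 360)

print(rotated_uri("/image.svg", 90))
# /image.svg#svgView(transform(rotate(90)))
```

The interesting property is Eric's point: the server holds a single representation, yet every rotational state is still addressable.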
Has anybody written or come across a documented RESTful Reference Architecture?
I'm looking for something comparable to The OASIS RA for SOA, but obviously more
specific to a RESTful architecture. See:
http://docs.oasis-open.org/soa-rm/soa-ra/v1.0/soa-ra.html
Specifically, I'm looking for something that follows the viewpoint/view models
that [ANSI/IEEE 1471, ISO/IEC 42010] "Recommended Practice for Architectural
Description of Software-Intensive Systems" defines.
Obviously REST is an architectural style and so there can be as many RESTful
reference architectures as there are RESTful systems. I'm just looking for
one that is expressed well in terms of an architectural system description.
Bryan, On Aug 1, 2010, at 12:27 AM, Bryan Taylor wrote: > Has anybody written or come across a documented RESTful Reference Architecture? > I'm looking for something comparable to The OASIS RA for SOA, but obviously more > > specific to a RESTful architecture. See: > http://docs.oasis-open.org/soa-rm/soa-ra/v1.0/soa-ra.html > > Specifically, I'm looking for something that follows the viewpoint/view models > that [ANSI/IEEE 1471, ISO/IEC 42010] "Recommended Practice for Architectural > Description of Software-Intensive Systems" defines. > > Obviously REST is an architectural style and so there can be as many RESTful > reference architectures as there are many RESTful systems. I'm just looking for > one that is expressed well in terms of an architectural system description. What about http://www.w3.org/TR/webarch/ ? Jan ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
I have a question about using PUT with resources that have hypertext representations. In this system I would like the client to be able to set the state of a particular resource, but if the normal representation of that resource has handy hypertext links - wouldn't the client need to send content with those exact hypertext links? If so, that seems fragile, as the links are typically determined by the server, and this implies the client also needs to be able to determine them. Specifically, I'm working on a system to manage two-party contracts (really, just orders) and would like to support individual 'contributions' from each party and a master 'reconciled' contract that is composed from the two contributions based on business rules. I would like the master 'reconciled' contract to link to the composite contracts and also for the composite contracts to have a link back to the final reconciled contract that results from rules being enforced. How would a client use PUT to modify a 'contribution' resource if that resource is supposed to have a link somewhere that the client doesn't know about? I know I can just use POST to modify resources so that the entity submitted doesn't have to be exactly the same as the hyperlink-annotated representation of that resource, but I was wondering if there's a common or better way to do this?
--- In rest-discuss@yahoogroups.com, Bill de hÓra <bill@...> wrote: > > > > I don't see REST becoming less relevant as mobile proliferates or the > > Web evolves towards being more m2m. > > REST is useful for mobile, a couple of on the ground issues come to mind > > - Fat web formats consume energy and bandwidth. > > - Hypertext traversal increases latency and network hops over > constrained radio networks (even the packet switched ones defined by > LTE). > > The former is mitigated by allowing partial data or less fat formats > (decompression is not free for a mobile device and there is no law for > battery improvement developers can free ride on). The latter is > mitigated by allowing clients to consume a 'discovery' document that > describes the resources available and their URIs. I note that performance > is not an explicit goal of REST, but as things stand today I would > prefer the option of discovery to throwing out the system benefits of > REST. Fwiw, a forms document fits in my notion of discovery. > On fat formats -- I hear you but at the same time, the SIP/IMS standards that are poised for LTE make use of some fairly fat formats (e.g. PIDF for presence etc). The solution there seems to be compression (e.g. compressed bodies and SigComp, etc.) and there's a lot of momentum there -- it doesn't seem to be holding things up. Is this really a barrier to REST? On the discovery doc -- are non-HTML REST approaches really wasting that much on traversal? e.g. There's not a lot of traversal in AtomPub compared to an HTML-based web site. It seems that the wastefulness here is more about UX than something fundamental to HATEOAS. I would imagine that for m2m REST you could optimize the traversal to the minimum interactions required for the workflow, no? Can an up-front discovery doc really make it better? Even if you had, say, a URI template up front, you'd likely need to "traverse" a few resources to get the parameter values needed to make a request. 
If not, then I'd imagine the first page of the service would normally offer the link/form anyways. What am I missing? > Finally push is very useful for mobile but that is outside REST's design > goals. I suspect there are some concepts from ARRESTED or P2P that may > be useful in mobile systems. > > Bill > Push is a big issue -- polling of any sort (even long polling or COMET) is problematic in mobile. I am hopeful that this ID https://wiki.tools.ietf.org/html/draft-roach-sip-http-subscribe-07 becomes a good way to receive notifications of HTTP resource changes over an LTE network. While the SIP side isn't exactly REST, bridging the two worlds with a simple Link header integrates well with the web I think. Regards, Andrew
I hadn't actually seen this before. It will definitely be useful.
In terms of the ANSI/IEEE 1471 approach, the document doesn't seem to
differentiate architectural views from the different viewpoints that it
enumerates in the intended audience section and it doesn't express a distinct
model for each view. To get a sense of what I'm looking for, the wikipedia
article on 1471 may be helpful, especially the entity model diagram. See:
http://en.wikipedia.org/wiki/IEEE_1471 and
http://en.wikipedia.org/wiki/File:IEEEConceptualFramework4ArchitectureDescription.png
I did find one of the references cited was a little closer to what I was looking
for, a paper by Fielding and Taylor:
http://www.ics.uci.edu/~fielding/pubs/webarch_icse2000.pdf
This expresses three views of a RESTful system: process, connector, and data.
This caused me to relook at Fielding's dissertation, which has the same
content. It's interesting to compare these to the OASIS reference architecture
for SOA (link in the thread root), which provides models for architectural views
from three viewpoints: that of the service ecosystem (the "business" view), that
of realizing the SOA (ie the implementer's viewpoint), and that of the SOA owner
(ie IT management).
I suspect that the difference in viewpoints is the reason that people versed in
REST and people versed in SOA seem so often to talk past each other. REST as
an architectural style is simply not expressed in terms remotely close to those
that describe the service ecosystem viewpoint and IT ownership viewpoint that
OASIS used to express a SOA reference architecture.
________________________________
From: Jan Algermissen <algermissen1971@...>
To: Bryan Taylor <bryan_w_taylor@...>
Cc: rest-discuss@yahoogroups.com
Sent: Sat, July 31, 2010 5:30:44 PM
Subject: Re: [rest-discuss] RESTful Reference Architectures?
Bryan,
On Aug 1, 2010, at 12:27 AM, Bryan Taylor wrote:
> Has anybody written or come across a documented RESTful Reference Architecture?
>
>
> I'm looking for something comparable to The OASIS RA for SOA, but obviously
> more specific to a RESTful architecture. See:
> http://docs.oasis-open.org/soa-rm/soa-ra/v1.0/soa-ra.html
> Specifically, I'm looking for something that follows the viewpoint/view models
> that [ANSI/IEEE 1471, ISO/IEC 42010] "Recommended Practice for Architectural
> Description of Software-Intensive Systems" defines.
>
> Obviously REST is an architectural style and so there can be as many RESTful
> reference architectures as there are RESTful systems. I'm just looking for
>
> one that is expressed well in terms of an architectural system description.
What about http://www.w3.org/TR/webarch/ ?
Jan
-----------------------------------
Jan Algermissen, Consultant
NORD Software Consulting
Mail: algermissen@...
Blog: http://www.nordsc.com/blog/
Work: http://www.nordsc.com/
-----------------------------------
Bryan Taylor wrote: > > I suspect that the difference in viewpoints is the reason that > people versed in REST and people versed in SOA seem so often to talk > past each other. REST as an architectural style is simply not > expressed in terms remotely close to those that describe the service > ecosystem viewpoint and IT ownership viewpoint that OASIS used to > express a SOA reference architecture. > You may have a point there. REST has no "reference architecture"; it's a style, whose reference instantiation is the Web, but is not limited to the Web. The Web is an enabler of a variety of system architectures. Each system is an instantiation of some architectural style, most of which are undefined. IEEE 1471 is a means for describing such architectures. The RA for SOA is a good example. But I don't think it applies to defining anything but instantiations of the Web or of REST (or of some other style). SOA is an instantiation of the Web, but not of REST. My demo website is in no way meant as a reference architecture for REST. When all is said and done, I may very well use IEEE 1471 to describe it as a reference architecture for CMS/wiki/weblog/forum operations and integration, as a RESTful alternative to CMIS. Such a reference architecture would describe an instantiation of both REST and Web styles. But it would only be one possible REST or Web instantiation of CMS/wiki/weblog/forum operations and integration, not the be-all and end-all. That's all IEEE 1471 can do -- describe reference architectures which may be instantiations of REST, Web or other styles. I don't believe it capable of describing architectural styles, i.e. as a replacement for the WebArch document Jan linked to, or Roy's thesis. -Eric
On Sun, 2010-08-01 at 04:10 +0000, wahbedahbe wrote: > > On fat formats -- I hear you but at the same time, the SIP/IMS > standards that are poised for LTE make use of some fairly fat formats > (e.g. PIDF for presence etc). The solution there seems to be > compression (e.g. compressed bodies and SigComp, etc.) and there's a > lot of momentum there -- it doesn't seem to be holding things up. Is > this really a barrier to REST? It's not so much a barrier to REST as inclining people towards leveraging relevant aspects of RPC technology, RPC having a history of compact wire formats usually in conjunction with IDLs. As per some other mails I've sent, I think it's possible to obtain reasonable tradeoffs here and it's worth investigating. I agree with your observation on the IMS/SIP standards family btw, except I'm not sure industry momentum always translates into usefulness. They've been around a long time now. > On the discovery doc -- are non-HTML REST approaches really wasting > that much on traversal? e.g. There's not a lot of traversal in AtomPub > compared to an HTML-based web site. It seems that the wastefulness > here is more about UX than something fundamental to HATEOAS. I would > imagine that for m2m REST you could optimize the traversal to the > minimum interactions required for the workflow, no? Can an up-front > discovery doc really make it better? Even if you had, say, a URI > template up front, you'd likely need to "traverse" a few resources to > get the parameter values needed to make a request. If not, then I'd > imagine the first page of the service would normally offer the > link/form anyways. What am I missing? Suppose you wanted to show a user's status as the common case for an application UX. The URL for that status is obtained from a link in the user's profile. It's not in the profile because it changes more frequently than the other data and so would make the profile document non-cacheable etc. By design you traverse from the profile to the status. 
This decouples client and server assumptions at the cost of an HTTP req/res cycle. So there's a tradeoff. Some developers would like to go direct to the status to avoid the hop. One way to do this is to have the URLs prepared in advance. The argument is that a way to balance these concerns is to allow the server to publish a document that the client can cache and from which the client can pull the status URL directly, and so short-circuit the traversal without being very strongly coupled to the server's URI space. This kind of tradeoff seems reasonable to me, hence I don't understand the level of objection in some quarters to approaches like WADL. Bill
Hi, I'm new to the REST architectural style. As I was studying Roy's dissertation and various other articles about REST, I realized that one of the areas where the REST architectural style is relevant is the design of information services in a distributed environment. I understood to a large extent the benefits of the various architectural constraints and why they are relevant to those use cases. Are there any use cases where applying REST architectural principles is not ideal? I understand that a generic answer may be that it's not ideal when the desired properties of the target system are not met by applying REST constraints. But are there any specific types of software systems where this is true? Thanks, Viswanath
--- In rest-discuss@yahoogroups.com, Bill de hÓra <bill@...> wrote: > > On Sun, 2010-08-01 at 04:10 +0000, wahbedahbe wrote: > > > > On fat formats -- I hear you but at the same time, the SIP/IMS > > standards that are poised for LTE make use of some fairly fat formats > > (e.g. PIDF for presence etc). The solution there seems to be > > compression (e.g. compressed bodies and SigComp, etc.) and there's a > > lot of momentum there -- it doesn't seem to be holding things up. Is > > this really a barrier to REST? > > It's not so much a barrier to REST as inclining people towards > leveraging relevant aspects of RPC technology, RPC having a history of > compact wire formats usually in conjunction with IDLs. As per some other > mails I've sent, I think it's possible to obtain reasonable tradeoffs > here and it's worth investigating. > > I agree with your observation on the IMS/SIP standards family btw, > except I'm not sure industry momentum always translates into usefulness. > They've been around a long time now. > > > On the discovery doc -- are non-HTML REST approaches really wasting > > that much on traversal? e.g. There's not a lot of traversal in AtomPub > > compared to an HTML-based web site. It seems that the wastefulness > > here is more about UX than something fundamental to HATEOAS. I would > > imagine that for m2m REST you could optimize the traversal to the > > minimum interactions required for the workflow, no? Can an up-front > > discovery doc really make it better? Even if you had, say, a URI > > template up front, you'd likely need to "traverse" a few resources to > > get the parameter values needed to make a request. If not, then I'd > > imagine the first page of the service would normally offer the > > link/form anyways. What am I missing? > > Suppose you wanted to show a user's status as the common case for an > application UX. The URL for that status is obtained from a link in the > user's profile. 
It's not in the profile because it changes more > frequently than the other > data and so would make the profile document > non-cacheable etc. By design you traverse from the profile to the > status. This decouples client and server assumptions at the cost of an > HTTP req/res cycle. So there's a tradeoff. Some developers would like to > go direct to the status to avoid the hop. One way to do this is to have the > URLs prepared in advance. The argument is that a way to balance these > concerns is to allow the server to publish a document that the client can > cache and from which the client can pull the status URL directly and so > short-circuit the traversal without being very strongly coupled to the > server's URI space. This kind of tradeoff seems reasonable to me, hence > I don't understand the level of objection in some quarters to approaches > like WADL. I agree. I'm not sure I'd even concede that this approach is in any way "unRESTful." As long as the document(s) the server publishes (the WADL *and* the status document) are visible/cacheable, etc., it's in line w/ REST principles. I *do* agree that it could be a slippery slope towards too-tight coupling, but that's not what we are talking about here. I particularly like a message from Roy F. on a closely related issue: On Dec 19, 2007, at 3:52 AM, [...] wrote: > URLs are passed in > hypertext. URL construction from algorithms or other non-hypertext > information like cookies is non-RESTful. > That's not even remotely true. If anything, REST encourages the creation of URIs by construction. Forms, server-side imagemaps, isindex, and any form of code-on-demand all construct URIs through algorithms. The important bit is that the algorithm is defined by the server and the resource remains accessible regardless of how the URI was calculated (i.e., the result of the algorithm is bookmarkable). ....Roy --peter keane > > Bill >
On Sun, Aug 1, 2010 at 6:51 PM, Bill de hÓra <bill@...> wrote: > So there's a tradeoff. Some developers would like to > go direct to the status to avoid the hop. One way to do this is to have the > URLs prepared in advance. The argument is that a way to balance these > concerns is to allow the server to publish a document that the client can > cache and from which the client can pull the status URL directly and so > short-circuit the traversal without being very strongly coupled to the > server's URI space. This kind of tradeoff seems reasonable to me, hence > I don't understand the level of objection in some quarters to approaches > like WADL. Why use WADL for that? Seems unnecessary when you can achieve the same thing with just a Link header. Cheers, Mike
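Either way the client ends up mining link metadata. A minimal sketch of pulling a rel="status" target out of a Link header, as Mike suggests (a small RFC 5988-style subset; the rel name and URIs are invented for illustration):

```python
import re

def parse_link_header(value):
    """Very small Link-header parser: handles only the
    <target>; rel="name" form and returns {rel: target}.
    Sketch only -- real Link headers allow more parameters."""
    links = {}
    for part in value.split(","):
        m = re.match(r'\s*<([^>]*)>\s*;\s*rel="([^"]*)"', part)
        if m:
            links[m.group(2)] = m.group(1)
    return links

# The client caches the profile response and short-circuits
# straight to the status URL on subsequent requests.
hdr = '</users/42/status>; rel="status", </users/42>; rel="self"'
print(parse_link_header(hdr)["status"])  # /users/42/status
```

The coupling is the same as with a discovery document: the client knows only the rel name, not the server's URI space.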
Mike Kelly wrote: > > Why use WADL for that? Seems unnecessary when can achieve the same > thing with just a Link header. > Or headers in general. In a uniform interface, there's no need to detail what methods are available for a resource -- the Allow: header does this -- or what responses will result; the user agent must be ready to deal with any response code. Perhaps, if Accept: defaults to PUT and there's already Accept-Patch:, there's room to introduce an Accept-Post: header. Accept: and Accept-*: headers, in combination with the Allow: header, detail the interface for any media type on my demo system, for example. I mentioned using WADL as an OPTIONS response on my system; this would be generated by reading the actual headers the server sends for the resource, instead of the other way around. The limitations of HTTP in describing a REST system, for example the limitation of just having Accept:, are cause to extend HTTP to compensate; not ditch the whole concept of headers in favor of IDLs and RPC interfaces. -Eric
> > Accept: and Accept-*: headers, in combination with the Allow: header, > detail the interface for any media type on my demo system, for > example. > Maybe a visible example will help. Where I have different resource "types" of the same media type, I've used special characters to differentiate them internally to the httpd. The server replaces the special characters as a last step, after the headers are otherwise set. Without the hypertext interface, including HTTP headers, what isn't clear is which resources are which. But the gist of the system is right there in this fragment of my httpd config file:

Filter = text/html addheader Allow: TRACE, HEAD, GET
Filter = text/html addheader Cache-Control: must-revalidate, max-age=31536000
Filter = application/xhtml+xml addheader Allow: TRACE, HEAD, GET
Filter = application/xhtml+xml addheader Cache-Control: must-revalidate, max-age=31536000
Filter = application/xhtml*xml addheader Allow: TRACE, HEAD, GET
Filter = application/xhtml*xml addheader Cache-Control: public, max-age=31536000
Filter = application/xhtml$xml addheader Allow: TRACE, HEAD, GET, PUT, DELETE
Filter = application/xhtml$xml addheader Accept: application/xhtml+xml
Filter = application/xhtml$xml addheader Cache-Control: must-revalidate, public, max-age=31536000
Filter = application/atom+xml addheader Allow: TRACE, HEAD, GET, PUT, PATCH, DELETE
Filter = application/atom+xml addheader Accept: application/atom+xml
Filter = application/atom+xml addheader Accept-Patch: application/atomcat+xml
Filter = application/atom+xml addheader Cache-Control: public, max-age=31536000
Filter = application/atomcat+xml addheader Allow: TRACE, HEAD, GET, PUT
Filter = application/atomcat+xml addheader Accept: application/atomcat+xml
Filter = application/atomcat+xml addheader Cache-Control: public, max-age=31536000
Filter = application/xml addheader Allow: TRACE, HEAD, GET
Filter = application/xml addheader Cache-Control: public, max-age=31536000
Filter = text/xml addheader Allow: TRACE, HEAD, GET
Filter = text/xml addheader Cache-Control: public, max-age=31536000
Filter = text/plain addheader Allow: TRACE, HEAD, GET
Filter = text/plain addheader Cache-Control: public, max-age=31536000
Filter = text/plain% addheader Allow: TRACE, HEAD, GET, PUT, DELETE
Filter = text/plain% addheader Accept: text/plain; charset=utf-8
Filter = text/plain% addheader Cache-Control: public, max-age=31536000
Filter = application/atom+xml@type=feed addheader Allow: TRACE, HEAD, GET, POST
Filter = application/atom+xml@type=feed addheader Accept: application/atom+xml, application/x-www-form-urlencoded
Filter = application/atom+xml@type=feed addheader Cache-Control: public, max-age=31536000
Filter = application/xbel+xml addheader Allow: TRACE, HEAD, GET, PUT, DELETE
Filter = application/xbel+xml addheader Accept: application/xbel+xml
Filter = application/xbel+xml addheader Cache-Control: public, max-age=31536000
Filter = text/css addheader Allow: TRACE, HEAD, GET, PUT, DELETE
Filter = text/css addheader Accept: text/css; charset=utf-8
Filter = text/css addheader Cache-Control: public, max-age=31536000
Filter = text/xsl addheader Allow: TRACE, HEAD, GET, PUT, DELETE
Filter = text/xsl addheader Accept: text/xsl; charset=utf-8
Filter = text/xsl addheader Cache-Control: public, max-age=31536000
Filter = application/json addheader Allow: TRACE, HEAD, GET
Filter = application/json addheader Cache-Control: public, max-age=31536000
Filter = application/javascript addheader Allow: TRACE, HEAD, GET, PUT, DELETE
Filter = application/javascript addheader Accept: application/javascript
Filter = application/javascript addheader Cache-Control: public, max-age=31536000
Filter = image/gif addheader Allow: TRACE, HEAD, GET, PUT, DELETE
Filter = image/gif addheader Accept: image/gif
Filter = image/gif addheader Cache-Control: public, max-age=31536000
Filter = image/jpeg addheader Allow: TRACE, HEAD, GET, PUT, DELETE
Filter = image/jpeg addheader Accept: image/jpeg
Filter = image/jpeg addheader Cache-Control: public, max-age=31536000
Filter = image/png addheader Allow: TRACE, HEAD, GET, PUT, DELETE
Filter = image/png addheader Accept: image/png
Filter = image/png addheader Cache-Control: public, max-age=31536000

If the hypertext is instructing the user agent to POST application/x-www-form-urlencoded data to an Atom Feed, the user agent can always make a HEAD request to the Atom Feed, and determine that it will also accept application/atom+xml, and formulate its request that way. By following its nose. I'm sure you can mimic all this with WADL, I just question, why not use headers? Does Google's new stack allow different resource types of the same media type, to have different headers? Or has this simplicity been abstracted away for the sake of ease-of-use, to the point where all resources sharing a media type also share the same headers and methods, which goes beyond the REST constraint that they share the same method semantics? -Eric
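Following its nose, a client could decide from a HEAD response's Allow: and Accept: headers whether a given media type may be POSTed. A sketch, using header values like those in the config fragment above (the function name and parsing are hypothetical, not part of Eric's system):

```python
def can_post(allow_header, accept_header, media_type):
    """Given Allow: and Accept: header values from a HEAD response,
    decide whether the given media type may be POSTed.
    Sketch: ignores Accept q-values and parameters."""
    methods = [m.strip() for m in allow_header.split(",")]
    accepted = [t.strip().split(";")[0] for t in accept_header.split(",")]
    return "POST" in methods and media_type in accepted

# The Atom Feed resource from the config advertises both body types:
print(can_post("TRACE, HEAD, GET, POST",
               "application/atom+xml, application/x-www-form-urlencoded",
               "application/atom+xml"))  # True
```

No out-of-band description is consulted: the interface is read off the wire at run time.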
On Sun, Aug 1, 2010 at 2:36 PM, Viswanath Durbha <viswanath.durbha@... > wrote: > > > Hi, > > I'm new to REST architecture style. As I was studying Roy's dissertation > and various other articles about REST, I realized that one of the areas > where REST architecture style is relevant is in the design of information > services in a distributed environment. I understood to a large extent the > benefits of various architectural constraints and why they are relevant to > those use cases. > > Are there any use cases where applying REST architectural principles is not > ideal? I understand that a generic answer may be that it's not ideal when > the desired properties of the target system are not met by applying REST > constraints. But are there any specific type of software systems where this > is true? > > Thanks, > Viswanath > In the space I work in, I don't know that I would use "not ideal" but would say that there are cases where the benefits of the style would not be taken advantage of. For example, when building internal apps where you (or a small set of people) are in control of everything (clients, servers, intermediaries etc etc), then applying all the REST principles may not provide significant value and an RPC-based (or anything else) solution may work just fine. Eb
On Sat, Jul 31, 2010 at 10:59 PM, mdierken <dierken@...> wrote: > I have a question about using PUT with resources that have hypertext representations. In this system I would like the client to be able to set the state of a particular resource, but if the normal representation of that resource has handy hypertext links - wouldn't the client need to send content with those exact hypertext links? If so that seems fragile as the links typically are determined by the server and this implies the client also needs to be able to determine those. The server is free to validate them and either throw an error if they're changed or (what I usually end up doing) just ignore the change. > Specifically, I'm working on system to manage two-party contracts (really, just orders) and would like to support individual 'contributions' from each party and a master 'reconciled' contract that is composed from the two contributions based on business rules. I would like the master 'reconciled' contract to link to the composite contracts and also for the composite contracts to have a link back to the final reconciled contract that results from rules being enforced. How would a client use PUT to modify a 'contribution' resource if that resource is supposed to have a link somewhere that the client doesn't know about? Easily. Have the server add the link. Mark.
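A sketch of the "just ignore the change" approach Mark describes, on the server side: the PUT body is accepted, but any links the client sent are dropped and the server-controlled ones reattached. All names, fields, and URIs here are hypothetical illustrations, not from the original system:

```python
# Server-owned hypertext: the client never controls these.
SERVER_LINKS = {"reconciled": "/contracts/7/reconciled"}

def put_contribution(stored, client_doc):
    """Handle a PUT of a 'contribution' representation. Client-supplied
    links are silently ignored; the server reattaches its own links
    (the validate-and-reject alternative would raise instead)."""
    doc = {k: v for k, v in client_doc.items() if k != "links"}
    doc["links"] = dict(SERVER_LINKS)
    stored.clear()
    stored.update(doc)
    return stored

store = {}
put_contribution(store, {"party": "buyer", "qty": 10,
                         "links": {"reconciled": "/bogus"}})
print(store["links"]["reconciled"])  # /contracts/7/reconciled
```

This keeps PUT's replace semantics for the client-editable fields while leaving the hypertext entirely under server control, which is why the client never needs to know the link targets in advance.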
One broad way of thinking about an answer to this question is to look at protocols. An alternate protocol probably suggests a different set of trade-offs with different characteristics for different situations: Example: Mail - POP, SMTP, IMAP Example: Instant messaging - Jabber, BEEP Example: High-speed broadcast - UDP, JMS*, AMQP -Eric. * Yes, I realize that JMS, technically, is not a protocol, but it has been mapped onto several protocols, including AMQP. On 08/01/2010 11:36 AM, Viswanath Durbha wrote: > > > Hi, > > I'm new to REST architecture style. As I was studying Roy's > dissertation and various other articles about REST, I realized that > one of the areas where REST architecture style is relevant is in the > design of information services in a distributed environment. I > understood to a large extent the benefits of various architectural > constraints and why they are relevant to those use cases. > > Are there any use cases where applying REST architectural principles > is not ideal? I understand that a generic answer may be that it's not > ideal when the desired properties of the target system are not met by > applying REST constraints. But are there any specific type of software > systems where this is true? > > Thanks, > Viswanath > >
Thanks. I was under the impression that the content sent by the client would be cached by intermediaries - but I suppose the server can just respond with "don't cache this" response headers (or should it respond with the augmented content?). I guess I keep thinking that the content of a PUT request must be the same as the content of a GET request to that same resource. Mike --- In rest-discuss@yahoogroups.com, Mark Baker <distobj@...> wrote: > > On Sat, Jul 31, 2010 at 10:59 PM, mdierken <dierken@...> wrote: > > I have a question about using PUT with resources that have hypertext representations. In this system I would like the client to be able to set the state of a particular resource, but if the normal representation of that resource has handy hypertext links - wouldn't the client need to send content with those exact hypertext links? If so that seems fragile as the links typically are determined by the server and this implies the client also needs to be able to determine those. > > The server is free to validate them and either throw an error if > they're changed or (what I usually end up doing) just ignore the > change. > > > Specifically, I'm working on system to manage two-party contracts (really, just orders) and would like to support individual 'contributions' from each party and a master 'reconciled' contract that is composed from the two contributions based on business rules. I would like the master 'reconciled' contract to link to the composite contracts and also for the composite contracts to have a link back to the final reconciled contract that results from rules being enforced. How would a client use PUT to modify a 'contribution' resource if that resource is supposed to have a link somewhere that the client doesn't know about? > > Easily. Have the server add the link. > > Mark. >
Mike, On Aug 3, 2010, at 5:47 AM, mdierken wrote: > Thanks. I was under the impression that the content sent by the client would be cached by intermediaries - but I suppose the server can just respond with "don't cache this" response headers (or should it respond with the augmented content?). PUT invalidates caches, but responses are not cacheable: "If the request passes through a cache and the Request-URI identifies one or more currently cached entities, those entries SHOULD be treated as stale. Responses to this method are not cacheable." <http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html#sec9.6> > > I guess I keep thinking that the content of a PUT request must be the same as the content of a GET request to that same resource. No, it need not be. (I lack a pointer for this - can anyone supply one?) Jan > > Mike > > > --- In rest-discuss@yahoogroups.com, Mark Baker <distobj@...> wrote: >> >> On Sat, Jul 31, 2010 at 10:59 PM, mdierken <dierken@...> wrote: >>> I have a question about using PUT with resources that have hypertext representations. In this system I would like the client to be able to set the state of a particular resource, but if the normal representation of that resource has handy hypertext links - wouldn't the client need to send content with those exact hypertext links? If so that seems fragile as the links typically are determined by the server and this implies the client also needs to be able to determine those. >> >> The server is free to validate them and either throw an error if >> they're changed or (what I usually end up doing) just ignore the >> change. >> >>> Specifically, I'm working on system to manage two-party contracts (really, just orders) and would like to support individual 'contributions' from each party and a master 'reconciled' contract that is composed from the two contributions based on business rules. 
I would like the master 'reconciled' contract to link to the composite contracts and also for the composite contracts to have a link back to the final reconciled contract that results from rules being enforced. How would a client use PUT to modify a 'contribution' resource if that resource is supposed to have a link somewhere that the client doesn't know about? >> >> Easily. Have the server add the link. >> >> Mark. >> > > > > > ------------------------------------ > > Yahoo! Groups Links > > > ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
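Mark's "have the server add the link" suggestion boils down to treating server-managed fields as read-only on PUT: the server either rejects a mismatch or silently ignores the client's copy and re-adds its own links. A minimal sketch of that merge policy (all names here are illustrative, not from any real framework):

```python
# Sketch of the server-side PUT merge Mark describes: hypertext links
# are server-managed, so the client's copy is either rejected (strict)
# or simply ignored (lenient). Field names are hypothetical.

SERVER_MANAGED = {"links"}

def apply_put(stored: dict, incoming: dict, strict: bool = False) -> dict:
    """Merge a PUT body into the stored resource state."""
    for field in SERVER_MANAGED:
        if strict and field in incoming and incoming[field] != stored.get(field):
            raise ValueError(f"409 Conflict: '{field}' is server-managed")
        incoming.pop(field, None)  # lenient option: just ignore the change
    updated = dict(incoming)
    updated["links"] = stored.get("links", {})  # server re-adds its links
    return updated
```

So a client PUTting a contribution with stale or wrong links still gets a coherent resource back, with the server's own reconciled-contract link intact.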
Given my /date service, and knowing it's served by a PHP httpd: http://charger.bisonsystems.net/date?iso=2010-08-04 How would you create a REST API to remotely invoke the following PHP function, as a method of /date ? mcal_is_leap_year() Keep it simple, 'cuz yeah, it's a trick question, the sort I might use on a job application to quickly separate the wheat from the chaff. -Eric
one approach i've used in similar situations is:

=> GET /isleapyear?2010
<= 404 Not Found

=> GET /isleapyear?2012
<= 200 OK

how you model the _representation_ can depend on accept headers:
- image/* might return green check for 200, red x for 404
- text/plain might return "This is a leap year" or "This is not a leap year"
- text/html can return both
- etc.

mca http://amundsen.com/blog/ http://mamund.com/foaf.rdf#me

On Wed, Aug 4, 2010 at 18:40, Eric J. Bowman <eric@...> wrote: > Given my /date service, and knowing it's served by a PHP httpd: > > http://charger.bisonsystems.net/date?iso=2010-08-04 > > How would you create a REST API to remotely invoke the following PHP > function, as a method of /date ? > > mcal_is_leap_year() > > Keep it simple, 'cuz yeah, it's a trick question, the sort I might use > on a job application to quickly separate the wheat from the chaff. > > -Eric > > > ------------------------------------ > > Yahoo! Groups Links > > > >
Don't know much about PHP, but what about exposing leap years as resources?

GET http://charger.bisonsystems.net/date/leap-years/2012 --> 200
GET http://charger.bisonsystems.net/date/leap-years/2010 --> 404

-Philippe

On 5 Aug 2010, at 00:40, Eric J. Bowman wrote: > > Given my /date service, and knowing it's served by a PHP httpd: > > http://charger.bisonsystems.net/date?iso=2010-08-04 > > How would you create a REST API to remotely invoke the following PHP > function, as a method of /date ? > > mcal_is_leap_year() > > Keep it simple, 'cuz yeah, it's a trick question, the sort I might use > on a job application to quickly separate the wheat from the chaff. > > -Eric
No change needed other than a little HATEOAS. Just use the existing call with Feb 29 for the year in consideration. 404 means it's not a leap year. Back it with the specified function or not as you see fit, as long as it works. http://charger.bisonsystems.net/date?iso=2010-02-29 HATEOASify the whole thing by adding a link header for the above "is leap year" URI to related calls. This would be assembled based on the year in the related call. On 04-Aug-2010, at 9:00 PM, Philippe Mougin wrote: > Don't know much about PHP, but what about exposing leap years as resources? > > GET http://charger.bisonsystems.net/date/leap-years/2012 --> 200 > > GET http://charger.bisonsystems.net/date/leap-years/2010 --> 404 > > -Philippe > > On 5 Aug 2010, at 00:40, Eric J. Bowman wrote: > > > > > Given my /date service, and knowing it's served by a PHP httpd: > > > > http://charger.bisonsystems.net/date?iso=2010-08-04 > > > > How would you create a REST API to remotely invoke the following PHP > > function, as a method of /date ? > > > > mcal_is_leap_year() > > > > Keep it simple, 'cuz yeah, it's a trick question, the sort I might use > > on a job application to quickly separate the wheat from the chaff. > > > > -Eric > >
Gregory Berezowsky wrote: > > No change needed other than a little HATEOAS. > > Just use the existing call with Feb 29 for the year in > consideration. 404 means its not a leap year. Back it with the > specified function or not as you see fit, as long as it works. > > http://charger.bisonsystems.net/date?iso=2010-02-29 > > HATEOASify the whole thing by adding a link header for the above "is > leap year" URI to related calls. This would be assembled based on > the year in the related call. > On the A - F grading scale, this answer rates a solid B. What I was looking for was: Some sort of hypertext control accepting YYYY as input instructs the user agent to HEAD /date?iso=YYYY-02-29 and display 'YYYY is a leap year' if and only if response code = (200|304). Glad someone posted that answer, because my purpose was to offer a tip from hard experience -- base system logic on checking for success codes, not failure codes, as much as possible. Note the above URI returns 500. Don't forget that 304 also indicates success. More job-app pop-quiz questions to come over time, as I think of 'em. -Eric
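Eric's HEAD-probe idea can be exercised end-to-end with the standard library alone. The handler below is a stand-in for the /date service, not Eric's actual implementation: it is assumed to answer 200 when ?iso= parses as a real calendar date and 404 otherwise (Eric notes his real service returns 500 for an invalid date, which is exactly why he advises keying off success codes rather than any particular failure code). Feb 29 then only resolves in leap years:

```python
import http.server
import threading
import urllib.parse
from datetime import date
from http.client import HTTPConnection

class DateHandler(http.server.BaseHTTPRequestHandler):
    # Stand-in /date service: 200 if ?iso= is a real calendar date,
    # 404 otherwise, so Feb 29 only exists in leap years.
    def do_HEAD(self):
        qs = urllib.parse.urlparse(self.path).query
        iso = urllib.parse.parse_qs(qs).get("iso", [""])[0]
        try:
            date.fromisoformat(iso)
            self.send_response(200)
        except ValueError:
            self.send_response(404)
        self.end_headers()

    def log_message(self, *args):  # keep the demo quiet
        pass

def is_leap(year: int, host: str, port: int) -> bool:
    conn = HTTPConnection(host, port)
    conn.request("HEAD", f"/date?iso={year}-02-29")
    status = conn.getresponse().status
    conn.close()
    return status in (200, 304)  # base logic on success codes, per Eric

if __name__ == "__main__":
    srv = http.server.HTTPServer(("127.0.0.1", 0), DateHandler)
    threading.Thread(target=srv.serve_forever, daemon=True).start()
    port = srv.server_address[1]
    print(is_leap(2012, "127.0.0.1", port), is_leap(2010, "127.0.0.1", port))
    srv.shutdown()
```

Note the client never parses a body; the entire boolean lives in the status line, which is the point of the B-graded answer.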
<snip> > On the A - F grading scale, this answer rates a solid B. What I was > looking for was: Some sort of hypertext control accepting YYYY as > input instructs the user agent to HEAD /date?iso=YYYY-02-29 and display > 'YYYY is a leap year' if and only if response code = (200|304). > > Glad someone posted that answer, because my purpose was to offer a tip > from hard experience -- base system logic on checking for success codes, > not failure codes, as much as possible. Note the above URI returns 500. > Don't forget that 304 also indicates success. </snip> meh. mca http://amundsen.com/blog/ http://mamund.com/foaf.rdf#me On Wed, Aug 4, 2010 at 22:02, Eric J. Bowman <eric@...> wrote: > Gregory Berezowsky wrote: >> >> No change needed other than a little HATEOAS. >> >> Just use the existing call with Feb 29 for the year in >> consideration. 404 means its not a leap year. Back it with the >> specified function or not as you see fit, as long as it works. >> >> http://charger.bisonsystems.net/date?iso=2010-02-29 >> >> HATEOASify the whole thing by adding a link header for the above "is >> leap year" URI to related calls. This would be assembled based on >> the year in the related call. >> > > On the A - F grading scale, this answer rates a solid B. What I was > looking for was: Some sort of hypertext control accepting YYYY as > input instructs the user agent to HEAD /date?iso=YYYY-02-29 and display > 'YYYY is a leap year' if and only if response code = (200|304). > > Glad someone posted that answer, because my purpose was to offer a tip > from hard experience -- base system logic on checking for success codes, > not failure codes, as much as possible. Note the above URI returns 500. > Don't forget that 304 also indicates success. > > More job-app pop-quiz questions to come over time, as I think of 'em. > > -Eric > > > ------------------------------------ > > Yahoo! Groups Links > > > >
What I meant was, as opposed to looking for 4xx responses, try this logic pattern: if (200|304) = true then !(200|304) = false. You'll get more robust application code that way. -Eric
Philippe Mougin wrote: > > Don't know much about PHP... > The A+ answer to the question would note that implementation details are opaque behind the uniform interface, therefore PHP and mcal_is_leap_year() are red herrings. HEAD for the true/false query on the 29th is the REST knowledge being tested for, not the applicant's knowledge of any particular coding language. That was the 'trick' part of the question, sorry Philippe! :-) I hope to come up with about a half-dozen of these so I can interview via e-mail. The story problems I use "for real" won't be published here, but they'll be in the same vein, and test for the same knowledge -- overall, not the same knowledge in the same questions, i.e. this isn't a cheat sheet. So please feel free to comment on the fairness and veracity of my questions. I've decided to pursue development of a framework/httpd for implementing my architectural style, using standalone FastCGI apps configured per resource "type," following the design pattern laid out here: http://www.nongnu.org/fastcgi/ Marshalled by a non-blocking I/O core server component with HTTP cache connector, so all output is held in RAM as compressed streams. In order to farm out the development of standalone FastCGI modules, I need to be able to determine whether applicants possess enough of a REST skillset to grok how their assigned module fits into the overall system. -Eric
mike amundsen wrote: > > meh. > Thanks for playing, though, sport! :-) -Eric
On 5 Aug 2010, at 04:46, Eric J. Bowman wrote: > Philippe Mougin wrote: >> >> Don't know much about PHP... > The A+ answer to the question would note that implementation details > are opaque behind the uniform interface, therefore PHP and > mcal_is_leap_year() are red herrings. > > HEAD for the true/false query on the 29th is the REST knowledge being > tested for, not the applicant's knowledge of any particular coding > language. > > That was the 'trick' part of the question, sorry Philippe! > > :-) D'oh! -Philippe
On Aug 5, 2010, at 12:40 AM, Eric J. Bowman wrote: > Given my /date service, and knowing it's served by a PHP httpd: > > http://charger.bisonsystems.net/date?iso=2010-08-04 > > How would you create a REST API to remotely invoke the following PHP > function, as a method of /date ? > > mcal_is_leap_year() > > Keep it simple, 'cuz yeah, it's a trick question, the sort I might use > on a job application to quickly separate the wheat from the chaff.

GET /date?iso=2010-08-04
Accept: text/date

200 Ok
Content-Type: text/date

iso8601: 2010-08-04
gov: 08/04/2010
unix: 6615155515151
is-leap: false

(just example values, sorry. But you get the idea...)

Hmm....wondering whether that can be turned into a URI scheme of some sort: date:2010-08-04

Jan

> > -Eric > > > ------------------------------------ > > Yahoo! Groups Links > > > ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
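Jan's text/date idea is easy to prototype. The field names are his illustrative ones; the exact rendering below (and the UTC basis for the unix field) is my guess at the intended format, not a defined media type:

```python
import calendar
from datetime import date, datetime, timezone

def text_date(iso: str) -> str:
    """Render a hypothetical text/date representation for one day,
    carrying is-leap in the payload instead of in the API."""
    d = date.fromisoformat(iso)
    # Assumption: the unix field is midnight UTC of that day.
    unix = int(datetime(d.year, d.month, d.day, tzinfo=timezone.utc).timestamp())
    return "\n".join([
        f"iso8601: {d.isoformat()}",
        f"gov: {d.strftime('%m/%d/%Y')}",
        f"unix: {unix}",
        f"is-leap: {'true' if calendar.isleap(d.year) else 'false'}",
    ])
```

The design point is Jan's: any client that can parse the representation gets the leap-year fact for free, with no extra resource or special status-code convention.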
Jan Algermissen wrote: > > Hmm....wondering whether that can be turned into a URI scheme of some > sort: > No opinion on that... So your solution is the same as Captain Kirk's solution to the Kobayashi Maru problem -- change the parameters of the test? Interesting... ;-) -Eric
On Aug 5, 2010, at 11:36 AM, Eric J. Bowman wrote: > Jan Algermissen wrote: >> >> Hmm....wondering whether that can be turned into a URI scheme of some >> sort: >> > > No opinion on that... > > So your solution is the same as Captain Kirk's solution to the Kobayashi > Maru problem -- change the parameters of the test? Hmm... I applied the 'principle' of designing for serendipity, and for me that means putting the information in the representation and letting the client decide how best to make use of 'the date'. Re-use is maximized when we put information into the payload instead of into the 'API'. Jan > > Interesting... ;-) > > -Eric > > > ------------------------------------ > > Yahoo! Groups Links > > > ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
Jan Algermissen wrote: > > Re-use is maximized when we put information into the payload instead > into the 'API'. > "Rigidity is the enemy of long-lived systems." -Roy Is there a more efficient remote boolean check than the response code from a HEAD request? Payload isn't required. Heck, neither are headers, if the request is absolute. Simplicity itself. -Eric
On Aug 5, 2010, at 12:16 PM, Eric J. Bowman wrote: > Jan Algermissen wrote: >> >> Re-use is maximized when we put information into the payload instead >> into the 'API'. >> > > "Rigidity is the enemy of long-lived systems." -Roy > > Is there a more efficient remote boolean check than the response code > from a HEAD request? Yes... a highly specialized, binary RPC protocol :-) > Payload isn't required. Right - but at the cost of the client having to know the semantic of the resource. IOW, you need an additional link rel to tell the client where the resource is to check for leap years. Also, the 404 response does not mean "XY is no leap year" it only means "there is currently no representation available". Effectively you couple client and server around the additional semantic of the special interpretation of the 404 *for that resource*. A representation for dates that is maximized for re-use (and extensibility) does not have these disadvantages. Jan > Heck, neither are > headers, if the request is absolute. Simplicity itself. > > -Eric ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
API designers, please note the following rules before calling your creation a REST API: A REST API should not be dependent on any single communication protocol (...) (from Roy Fielding's blog) On 5 Aug 2010 11:30, "Jan Algermissen" <algermissen1971@...> wrote: On Aug 5, 2010, at 12:16 PM, Eric J. Bowman wrote: > Jan Algermissen wrote: >> >> Re-use is max... Yes... a highly specialized, binary RPC protocol :-) > Payload isn't required. Right - but at the cost of the client having to know the semantic of the resource. IOW, you need an additional link rel to tell the client where the resource is to check for leap years. Also, the 404 response does not mean "XY is no leap year" it only means "there is currently no representation available". Effectively you couple client and server around the additional semantic of the special interpretation of the 404 *for that resource*. A representation for dates that is maximized for re-use (and extensibility) does not have these disadvantages. Jan > Heck, neither are > headers, if the request is absolute. Simplicity itself. > > -Eric ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: alge...
I like Eric's point about checking for !(success) vs. checking for explicit failure. In defence of a better grade than B, how to code the client wasn't the question, though :P As creator of a public API (my assumption), I have little to no control over the client implementation (although I think !success will become part of my boilerplate docs). The best I can do is make the URI opaque, provide some variations to get at that URI (link header, form taking the year and returning a location, etc.), and clearly indicate the expected behaviour when that URI is called. You'll notice that in all of the suggestions, including Eric's, you have to communicate to the client creator what behaviour to code against. I think it matters less which specific choice is made than that such choices are consistent across the API and are communicated clearly. Having said that, using forums like this to debate implementation will hopefully lead us towards some standard approaches to things such as boolean responses. Food for thought: I inferred that the intent of the exercise was to provide 'is leap year' functionality and that whether it's actually backed by PHP's mcal_is_leap_year() function is not that important. In fulfilling this contract, I really needed to sit down with my customer, Eric. He didn't say why he wanted a URI to invoke the function. What if the rationale for exposing mcal_is_leap_year() is actually part of an oddball distributed test harness for PHP? In that case, it makes perfect sense to expose the function as a resource. You'd always get a 200 if the function is actually available, with some part of the data in the response indicating true|false. On 05-Aug-2010, at 6:29 AM, Jan Algermissen wrote: > > On Aug 5, 2010, at 12:16 PM, Eric J. Bowman wrote: > > > Jan Algermissen wrote: > >> > >> Re-use is maximized when we put information into the payload instead > >> into the 'API'. > >> > > > > "Rigidity is the enemy of long-lived systems." 
-Roy > > > > Is there a more efficient remote boolean check than the response code > > from a HEAD request? > > Yes... a highly specialized, binary RPC protocol :-) > > > Payload isn't required. > > Right - but at the cost of the client having to know the semantic of the resource. IOW, you need an additional link rel to tell the client where the resource is to check for leap years. > > Also, the 404 response does not mean "XY is no leap year" it only means "there is currently no representation available". Effectively you couple client and server around the additional semantic of the special interpretation of the 404 *for that resource*. > > A representation for dates that is maximized for re-use (and extensibility) does not have these disadvantages. > > Jan > > > Heck, neither are > > headers, if the request is absolute. Simplicity itself. > > > > > -Eric > > ----------------------------------- > Jan Algermissen, Consultant > NORD Software Consulting > > Mail: algermissen@... > Blog: http://www.nordsc.com/blog/ > Work: http://www.nordsc.com/ > ----------------------------------- > >
On Aug 5, 2010, at 1:18 PM, Gregory Berezowsky wrote: > I like Eric's point about checking for !(success) vs. checking for explicit failure. In defence of a better grade than B, how to code the client wasn't the question, though :P As creator of a public API (my assumption), I have little to no control over the client implementation (although I think !success will become part of my boilerplate docs). Actually, REST aims to prevent 'service functionality driven design' and instead forces us to standardize the payload (including link rel) semantics first[1]. In a RESTful system you cannot sit down, discuss some desired functions exposed by a service and then provide some service-specific payload semantics. Services can only be designed around already standardized hypermedia semantics. That is the driving force that prevents client implementations from being coupled to the semantics of a single service. Jan [1] Even if that will often be an iterative process, of course. > > The best I can do is make the URI opaque, provide some variations to get at that URI (link header, form taking the year and returning a location, etc.), and clearly indicate the expected behaviour when that URI is called. You'll notice that in all of the suggestions, including Eric's, you have to communicate to the client creator what behaviour to code against. I think it less important specifically which choice is made, but rather that such choices are consistent across the API and are communicated clearly. > > Having said that, using forums like this to debate implementation will hopefully lead us towards some standard approaches to things such as boolean responses. > > Food for thought: > > I inferred that the intent of the exercise was to provide 'is leap year' functionality and that whether it's actually backed by PHP's mcal_is_leap_year() function is not that important. In fulfilling this contract, I really needed to sit down with my customer, Eric. 
He didn't say why he wanted a URI to invoke the function. What if the rationale to exposing mcal_is_leap_year() is actually part of an oddball distributed test harness for php? In that case, it makes perfect sense to expose the function as a resource. You'd always get a 200 if the function is actually available with some sort part of the data in the response indicating true|false. > > On 05-Aug-2010, at 6:29 AM, Jan Algermissen wrote: > >> >> On Aug 5, 2010, at 12:16 PM, Eric J. Bowman wrote: >> >> > Jan Algermissen wrote: >> >> >> >> Re-use is maximized when we put information into the payload instead >> >> into the 'API'. >> >> >> > >> > "Rigidity is the enemy of long-lived systems." -Roy >> > >> > Is there a more efficient remote boolean check than the response code >> > from a HEAD request? >> >> Yes... a highly specialized, binary RPC protocol :-) >> >> > Payload isn't required. >> >> Right - but at the cost of the client having to know the semantic of the resource. IOW, you need an additional link rel to tell the client where the resource is to check for leap years. >> >> Also, the 404 response does not mean "XY is no leap year" it only means "there is currently no representation available". Effectively you couple client and server around the additional semantic of the special interpretation of the 404 *for that resource*. >> >> A representation for dates that is maximized for re-use (and extensibility) does not have these disadvantages. >> >> Jan >> >> > Heck, neither are >> > headers, if the request is absolute. Simplicity itself. >> >> > >> > -Eric >> >> ----------------------------------- >> Jan Algermissen, Consultant >> NORD Software Consulting >> >> Mail: algermissen@... >> Blog: http://www.nordsc.com/blog/ >> Work: http://www.nordsc.com/ >> ----------------------------------- >> >> > ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... 
Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
Hi,
Eric J. Bowman <eric@...> wrote:
> Is there a more efficient remote boolean check than the response code
> from a HEAD request? Payload isn't required. Heck, neither are
> headers, if the request is absolute. Simplicity itself.
Well, yes, but my spidey senses are tingling with semantics, but
not quite knowing what the answer is. I get the feeling that if you do
;
HEAD /date?iso=YYYY-02-29
there's a compartmentalization going on here which feels, well, not
directly wrong, but not quite right either. For me, the resource for
the above is really ;
/date
And any HTTP status code for me is about that resource without
parameters. I'm making a clearer distinction between a base URI and
its parameters, and perhaps that's wrong of me, but I'd *much* rather
prefer ;
/date/leap-year/{*}
And where HATEOAS is derived from /date, then /date/leap-year, to the
resources in question, probably using a simple ontology wrapped in
forms. Either that year is there (200 or 304), or it is not (404). The
semantics are on the year as a resource, not on a service-type
resource which if 404 indicates the *service* is not found as opposed
to the functionality given by the service. And this still won't break
testing for success, nor does it stop you getting HEAD.
Alex
--
Project Wrangler, SOA, Information Alchemist, UX, RESTafarian, Topic Maps
--- http://shelter.nu/blog/ ----------------------------------------------
------------------ http://www.google.com/profiles/alexander.johannesen ---
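Alex's year-as-resource layout can be modelled as a tiny status resolver. The URI space and routing here are assumptions based on his sketch (and Philippe's earlier variant used a pluralized path), not an existing service:

```python
import calendar
import re

def leap_year_status(path: str) -> int:
    """Toy resolver for a hypothetical /date/leap-year/{year} space.

    /date/leap-year/2012 -> 200 (the year exists in the collection)
    /date/leap-year/2010 -> 404 (it does not)
    anything else        -> 404 (no such resource)
    """
    m = re.fullmatch(r"/date/leap-year/(\d{4})", path)
    if m and calendar.isleap(int(m.group(1))):
        return 200
    return 404
```

This keeps the semantics on the year as a resource, as Alex argues: a 404 for /date/leap-year/2010 is simply "no such member of the collection", not an overloaded service-level failure.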
I really like that solution, although I'd pluralize it.
It's simply a collection of the leap years.
On 2010-08-05, at 7:44 AM, Alexander Johannesen <alexander.johannesen@...> wrote:
> Hi,
>
> Eric J. Bowman <eric@bisonsystems.net> wrote:
> > Is there a more efficient remote boolean check than the response code
> > from a HEAD request? Payload isn't required. Heck, neither are
> > headers, if the request is absolute. Simplicity itself.
>
> Well, yes, but I'm my spidey senses are tingling with semantics, but
> not quite knowing what the answer is. I get the feeling that if you do
> ;
>
> HEAD /date?iso=YYYY-02-29
>
> there's a compartmentalization going on here which feels, well, not
> directly wrong, but not quite right either. For me, the resource for
> the above is really ;
>
> /date
>
> And any HTTP status code for me is about that resource without
> parameters. I'm making a clearer distinction between a base URI and
> its parameters, and perhaps that's wrong of me, but I'd *much* rather
> prefer ;
>
> /date/leap-year/{*}
>
> And where HATEOAS is derived from /date, then /date/leap-year, to the
> resources in question, probably using a simple ontology wrapped in
> forms. Either that year is there (200 or 304), or it is not (404). The
> semantics are on the year as a resource, not on a service-type
> resource which if 404 indicates the *service* is not found as opposed
> to the functionality given by the service. And this still won't break
> testing for success, nor does it stop you getting HEAD.
>
> Alex
> --
> Project Wrangler, SOA, Information Alchemist, UX, RESTafarian, Topic Maps
> --- http://shelter.nu/blog/ ----------------------------------------------
> ------------------ http://www.google.com/profiles/alexander.johannesen ---
>
The question was, how to code a boolean REST API (all tricks aside). Harken back to the definition of API from Wikipedia: "An API is an abstraction that describes an interface for the interaction with a set of functions used by components of a software system." I call those functions, "resources." You may consider them objects with methods and properties, if that helps. The domain doesn't matter, so: GET http://example.org/is_leap_year Returns an XForms document. This interactive form has a field to enter YYYY. User hits 'enter'. Hypertext instructs the user agent to make a HEAD request to /date?iso=YYYY-02-29. If response=(200|304), then append "is a leap year" after the user-entered YYYY; else append "is not a leap year". These are declarative hypertext instructions; (200|304) would manifest itself in XPath. It doesn't matter whether or not /date is RESTful in such a case. A REST API can wrap any number of unRESTful services with a hypertext interface. This is called "applying the layered system constraint." So let's change the example URI from PHP to V8+node.js: http://en.wiski.org/date?iso=2010-02-28 http://en.wiski.org/date?iso=2010-03-01 Another 'A' answer would be to query a known date on either side of February 29th, and check the Link header for rel=(next|prev) == 29. -Eric
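Eric's alternative 'A' answer (query a date on either side of February 29th and inspect the rel="next"/rel="prev" links) can be sketched with a naive Link-header parse. The header format used below is an assumption about what such a date service might emit:

```python
import re

def link_rels(header: str) -> dict:
    """Naive parse of an HTTP Link header into {rel: target-URI}.
    Not a full RFC-grade parser; good enough for this sketch."""
    rels = {}
    for target, rel in re.findall(r'<([^>]+)>\s*;\s*rel="([^"]+)"', header):
        rels[rel] = target
    return rels

def feb29_exists(link_header_for_feb28: str) -> bool:
    # If the day after Feb 28 is Feb 29, the year is a leap year;
    # if rel="next" jumps straight to Mar 1, it is not.
    nxt = link_rels(link_header_for_feb28).get("next", "")
    return nxt.endswith("-02-29")
```

For example, a Feb 28 response carrying `</date?iso=2012-02-29>; rel="next"` would indicate a leap year, while `</date?iso=2010-03-01>; rel="next"` would not (again, these header values are assumed, not captured from a real service).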
Apologies to anyone receiving multiple posts, yahoogroups issues. António Mota wrote: > > API designers, please note the following rules before calling your > creation a REST API: A REST API should not be dependent on any single > communication protocol (...) > You're misunderstanding that, and btw taking it out of context, to be some rigid rule against building systems using only HTTP. You can, in general, replace my HTTP URIs with FTP URIs and the overlapping standardized methods will function the same way. IOW, it is an error if your custom media type assigns partial-update semantics to PUT, and such. -Eric
Silly me. I missed philippe's earlier response.
On 2010-08-05, at 7:54 AM, Gregory Berezowsky <greg.berezowsky@...> wrote:
> I really like that solution, although I'd pluralize it.
>
> Its simply a collection of the leap years.
>
> On 2010-08-05, at 7:44 AM, Alexander Johannesen <alexander.johannesen@...> wrote:
>
>> Hi,
>>
>> Eric J. Bowman <eric@bisonsystems.net> wrote:
>> > Is there a more efficient remote boolean check than the response code
>> > from a HEAD request? Payload isn't required. Heck, neither are
>> > headers, if the request is absolute. Simplicity itself.
>>
>> Well, yes, but I'm my spidey senses are tingling with semantics, but
>> not quite knowing what the answer is. I get the feeling that if you do
>> ;
>>
>> HEAD /date?iso=YYYY-02-29
>>
>> there's a compartmentalization going on here which feels, well, not
>> directly wrong, but not quite right either. For me, the resource for
>> the above is really ;
>>
>> /date
>>
>> And any HTTP status code for me is about that resource without
>> parameters. I'm making a clearer distinction between a base URI and
>> its parameters, and perhaps that's wrong of me, but I'd *much* rather
>> prefer ;
>>
>> /date/leap-year/{*}
>>
>> And where HATEOAS is derived from /date, then /date/leap-year, to the
>> resources in question, probably using a simple ontology wrapped in
>> forms. Either that year is there (200 or 304), or it is not (404). The
>> semantics are on the year as a resource, not on a service-type
>> resource which if 404 indicates the *service* is not found as opposed
>> to the functionality given by the service. And this still won't break
>> testing for success, nor does it stop you getting HEAD.
>>
>> Alex
>> --
>> Project Wrangler, SOA, Information Alchemist, UX, RESTafarian, Topic Maps
>> --- http://shelter.nu/blog/ ----------------------------------------------
>> ------------------ http://www.google.com/profiles/alexander.johannesen ---
>>
REST Fest 2010 and Hypermedia Workshop Friday, September 17, 2010 at 8:00 AM - Saturday, September 18, 2010 at 6:00 PM (ET) Greenville, SC Co-Chairs: Mike Amundsen & Benjamin Young REST Fest 2010 (Sep 17th & 18th) REST Fest is a community unconference event focused on the REST architectural style and implementations. This year, REST Fest will encourage developers who have direct experience building RESTful applications for the World Wide Web to share their successes and their frustrations in an informal atmosphere. REST Fest will also maintain a "Hack Room" open throughout the two-day event where attendees can get together and work on any project they like. http://restfest.org Call for Presenters Presenters are encouraged to submit a title, short abstract (250 or less), and an indication of the "level" of the talk (beginner, intermediate, advanced). "How To..." talks are encouraged as well as "How Do I?" talks. In the spirit of the "Unconference" model, all talks are automatically accepted as a "Lightning Talk" (Five Slides in Five Minutes). A small number of talks will be chosen as "Selected Talks" with a format of 30+ minutes. Break out sessions will be added as desired by the attendees. http://restfest2010.eventbrite.com/ Workshop: Hypermedia Hacking with Mike Amundsen (Sep 17th) In this one-day pre-event workshop, attendees will learn how to implement an alternative to one-off Web APIs by using Hypermedia Engines. The all-day session includes a mix of presentation, discussion, and hands-on implementation. Attendees are encouraged to bring laptops and "code-along" with supplied examples throughout the day. http://www.restfest.org/schedule/workshop mca http://amundsen.com/blog/ http://mamund.com/foaf.rdf#me
On 05-Aug-2010, at 8:01 AM, Eric J. Bowman wrote: > > GET http://example.org/is_leap_year > > Returns an Xforms document. This interactive form has a field to enter > YYYY. User hits 'enter'. Hypertext instructs the user agent to make a > HEAD request to /date?iso=YYYY-02-29. If response=(200|304), then > append "is a leap year" after the user-entered YYYY; else append "is > not a leap year". These are declarative hypertext instructions; (200| > 304) would manifest itself in Xpath. > I've been mulling this on the subway in to work. Small nit and not at the heart of what you are pointing out here, but I think you'd need another condition to catch 5xx errors (and some of the 4xx codes). In these cases, I don't think you can make assumptions either way about whether the indicated year is a leap year. I point it out because it is a potential gap in a blanket !(success) check to support a boolean check. Or have I missed something obvious? (very new to REST, somewhat new to the nuances of http response codes if that hasn't been apparent)
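Gregory's concern about 5xx responses can be sketched in a few lines of Python. This is not from the thread: the function name and the exact status-code groupings are assumptions for illustration.

```python
# Sketch: interpreting the status from HEAD /date?iso=YYYY-02-29 as a
# tri-state result rather than a plain boolean, so server errors don't
# get silently read as "is not a leap year".

def interpret_leap_check(status: int) -> str:
    """Map an HTTP status code to an application state. The groupings
    below (404/410 as 'absent', 5xx as 'unknown') are illustrative."""
    if status in (200, 304):            # resource exists -> leap year
        return "is a leap year"
    if 500 <= status <= 599:            # server failed -> no assertion possible
        return "result unknown (service error)"
    if status in (404, 410):            # resource absent -> not a leap year
        return "is not a leap year"
    return "result unknown (unexpected response)"
```

The point is simply that a 5xx falls into neither the "leap year" nor the "not a leap year" bucket.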
---------- Forwarded message ---------- From: "António Mota" <amsmota@gmail.com> Date: 5 Aug 2010 13:34 Subject: Re: [rest-discuss] REST pop quiz To: "Eric J. Bowman" <eric@...> No, as usual you're misunderstanding me by giving my posts meanings that aren't there. Or otherwise it's because I don't explain myself correctly. Nevertheless, in this situation I haven't used a single word of my own, so it can't be that... How can I take it out of context if the original context is precisely about what it takes, what "rules of thumb" should be followed, to design a REST API? And that your current example does not follow. Bottom line, I was illustrating that using HTTP-specific codes and giving them application-context meaning, or business meaning, is not RESTful, because it makes the application dependent on the protocol and is a clear coupling between client, server and implementation, by binding out-of-band protocol meaning to your application-specific meaning, which should be defined at the media-type level and not at the protocol level. In my humble, not expert opinion, of course... In other words, use the response body for that. BTW, what has your last paragraph to do with my quotation of Roy? It seems like a "ready-made" phrase... > > On 5 Aug 2010 12:40, "Eric J. Bowman" <eric@...> wrote: > > António Mota wrote: > > > > API designers, please note the following rules before calling your > creatio... > > You're misunderstanding that, and btw taking it out of context, to be > some rigid rule against...
On Aug 5, 2010, at 2:01 PM, Eric J. Bowman wrote: > If response=(200|304), then > append "is a leap year" Well,... 200 and 304 are defined by RFC 2616. They do not mean anything besides what is specified there. Same to 404. Jan ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
On Aug 5, 2010, at 2:34 PM, António Mota wrote: > using HTTP specific codes and giving them application-context meaning, or business meaning, is not RESTful Bingo! Jan ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
Jan Algermissen wrote: > > > using HTTP specific codes and giving them application-context > > meaning, or business meaning, is not RESTfull > > Bingo! > Example: There's no HEAD equivalent in FTP, so (not a live example): GET ftp://charger.bisonsystems.net/date?iso=2010-02-29 If you get a 'success' response, then bool=true. If you don't, then bool=false. But, you're being rigid if you believe that, since they aren't in FTP, I shouldn't use HEAD or conditional requests. I deliberately chose HTTP for this resource identifier, because HTTP is a REST application protocol. FTP is not. This is NOT coupling. God help us if it is, as REST would be unattainable. There is a resource that only exists in leap years, ?iso=YYYY-02-29. It is dereferenceable. Its existence is proof of a leap year. Protocol is irrelevant. What matters is the hypertext constraint. A hypertext interface may be crafted to do a boolean check based on the presence/absence of a given resource, regardless of the protocol chosen for that resource, and change its steady-state as a result. I promise. Why do folks insist on making REST so damn complicated? Good grief, people, you can curl the resource (http|ftp not relevant, it's up to me when I assign the identifier, though) and answer the question inside half a second. What on Earth REST constraint is being violated by boolean checking if a resource exists (regardless of the meaning the system assigns to the resource, this is irrelevant to the user agent, coupling would be if it was important to the user agent) based on response code=(200|304) with HEAD? -Eric
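Eric's "resource that only exists in leap years" can be sketched server-side in a few lines. This is a hypothetical implementation of the /date resource, not code from the thread; the 404 for the missing-resource case is an assumption (Eric argues the exact failure code shouldn't matter to the client).

```python
import datetime

def date_resource_exists(iso: str) -> bool:
    """The /date?iso=YYYY-MM-DD resource 'exists' exactly when the date
    is valid, so YYYY-02-29 exists only in leap years."""
    try:
        year, month, day = (int(part) for part in iso.split("-"))
        datetime.date(year, month, day)  # raises ValueError for Feb 29 in a non-leap year
        return True
    except ValueError:
        return False

def status_for(iso: str) -> int:
    # Response-code choice for the 'absent' case is illustrative.
    return 200 if date_resource_exists(iso) else 404
```

Dereferencing ?iso=2008-02-29 then succeeds, while ?iso=2010-02-29 does not, which is the whole boolean check.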
Gregory Berezowsky wrote: > > Or have I missed something obvious? (very new to REST, somewhat new > to the nuances of http response codes if that hasn't been apparent) > No, you've only missed my point about fragility. If you're requiring a 404 and the server config slips to 400, your client breaks. User agents must be able to deal with any response code they come across. It's simply more robust to deal with knowns, than unknowns. In my case, when I screwed up the /date code on charger yesterday, it went from 400 to 500. This broke my code until I changed my pattern to !(200|304). Why bother catching all the possible errors, when the only thing that matters is success? OK, you can catch 500 if you want, and report "service down" to the user instead of "is not a leap year". The important takeaway I was trying to pass on here, is to not check for success by checking for the absence of failure. Take it or leave it at that, please. -Eric
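Eric's "check for success, then handle exceptions" pattern might look like the following sketch (names and the optional 5xx trap are illustrative, not from the thread):

```python
def leap_year_bool(status: int, report_outage=lambda: None) -> bool:
    """Check for success explicitly; default everything else to False
    rather than enumerating failure codes."""
    if status in (200, 304):     # the only codes asserting "is a leap year"
        return True
    if 500 <= status < 600:      # optional extra trap: report "service down"
        report_outage()
    return False                 # default for anything else, known or unknown
```

The 'else' can grow as complicated as needed, but the success test comes first and the default is False, so a surprise code (400 slipping to 500, say) can't break the client.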
On 05-Aug-2010, at 10:19 AM, Eric J. Bowman wrote: > Gregory Berezowsky wrote: >> >> Or have I missed something obvious? (very new to REST, somewhat new >> to the nuances of http response codes if that hasn't been apparent) >> > > No, you've only missed my point about fragility. If you're requiring a > 404 and the server config slips to 400, your client breaks. User > agents must be able to deal with any response code they come across. > It's simply more robust to deal with knowns, than unknowns. > > In my case, when I screwed up the /date code on charger yesterday, it > went from 400 to 500. This broke my code until I changed my pattern > to !(200|304). Why bother catching all the possible errors, when the > only thing that matters is success? > > OK, you can catch 500 if you want, and report "service down" to the > user instead of "is not a leap year". The important takeaway I was > trying to pass on here, is to not check for success by checking for the > absence of failure. Take it or leave it at that, please. That's what I figured. I just wanted to clarify my understanding. > > -Eric
<snip> > If you get a 'success' response, then bool=true. If you don't, then bool=false. </snip> 1 - if you write client code that assumes bool=false when the server returns 5xx, you're history in my shop. 2 - if you write server code that does not allow clients to negotiate for response bodies (e.g. only supports HTTP HEAD), you're out. mca http://amundsen.com/blog/ http://mamund.com/foaf.rdf#me On Thu, Aug 5, 2010 at 10:13, Eric J. Bowman <eric@...> wrote: > Jan Algermissen wrote: >> >> > using HTTP specific codes and giving them application-context >> > meaning, or business meaning, is not RESTfull >> >> Bingo! >> > > Example: There's no HEAD equivalent in FTP, so (not a live example): > > GET ftp://charger.bisonsystems.net/date?iso=2010-02-29 > > If you get a 'success' response, then bool=true. If you don't, then > bool=false. > > But, you're being rigid if you believe that, since they aren't in FTP, > I shouldn't use HEAD or conditional requests. > > I deliberately chose HTTP for this resource identifier, because HTTP is > a REST application protocol. FTP is not. This is NOT coupling. God > help us if it is, as REST would be unattainable. > > There is a resource that only exists in leap years, ?iso=YYYY-02-29. It > is dereferenceable. Its existence is proof of a leap year. Protocol > is irrelevant. What matters is the hypertext constraint. > > A hypertext interface may be crafted to do a boolean check based on the > presence/absence of a given resource, regardless of the protocol chosen > for that resource, and change its steady-state as a result. I promise. > > Why do folks insist on making REST so damn complicated? Good grief, > people, you can curl the resource (http|ftp not relevant, it's up to me > when I assign the identifier, though) and answer the question inside > half a second. 
> > What on Earth REST constraint is being violated by boolean checking if > a resource exists (regardless of the meaning the system assigns to the > resource, this is irrelevant to the user agent, coupling would be if it > was important to the user agent) based on response code=(200|304) with > HEAD? > > -Eric > > > ------------------------------------ > > Yahoo! Groups Links > > > >
Jan Algermissen wrote: > > > If response=(200|304), then > > append "is a leap year" > > Well,... 200 and 304 are defined by RFC 2616. They do not mean > anything besides what is specified there. Same to 404. > The hypertext constraint is about self-documenting an API which translates responses (codes, headers, entities) into application states. How you can expect to have REST without imparting meaning to response codes, baffles me. If, on response=(200|304), my hypertext changes the application state to "is a leap year" then I've just defined those response codes to mean bool=true within the context of my application. If, on HEAD ?iso=2010-02-29, rel=next equals 2010-03-01, and my hypertext changes the application state to "is a leap year" on that basis, then I've just defined *that* to mean bool=true within the context of my application. The success/fail cases for the application located at: http://example.org/is_leap_year *Are whatever the hypertext representation _says_ they are!* -Eric
mike amundsen wrote: > > 1 - if you write client code like that assumes bool=false when server > returns 5xx you're history in my shop. > When the client is dealing with the response code, it should first check for (200|304). The 'else' can be as complicated as you need it to be, but I recommend defaulting to mean false, if your specific traps for, say, 5xx fail. Check for success, then handle exceptions. Don't check for failure, then assume !failure = success. I rarely give better, or more hard- earned, advice. Roy has given the same advice about coding robust clients -- too late to save me from myself, though. > > 2 - if you write server code that does not allow clients to negotiate > for response bodies (e.g. only support HTTP HEAD), you're out. > Who says you can't negotiate? If you'll be patient until I post my next example, you can plainly see that /date connegs between application/xhtml+xml and application/json. HEAD requests are just the results of the negotiation, sans body. But, why on Earth would anyone want to bring conneg into the equation to do a boolean check based on whether a resource exists? Just do a HEAD with Accept: */* and be done with it. -Eric
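Eric's "just do a HEAD with Accept: */*" could be built like this with the Python standard library (built but deliberately not sent here; the example URI is the one from the thread):

```python
import urllib.request

# A bare existence check: HEAD request, sidestepping content
# negotiation by accepting any media type.
req = urllib.request.Request(
    "http://example.org/date?iso=2008-02-29",
    method="HEAD",
    headers={"Accept": "*/*"},
)
```

Sending it (e.g. via `urllib.request.urlopen(req)`) would return headers and a status code but no body, which is all the boolean check needs.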
As a newbie and lurker on this list, I have found this whole thread really illuminating. And it doesn't seem to me to be a REST issue in some sense. Specifically, the assumption made in the question about the client behavior seems to be that all it cares about is whether it can unequivocally identify the year as a leap year. If it can't- for any reason- it doesn't care. Doesn't matter whether it ISN'T a leap year or that the server is unavailable. For myself, I find it unlikely that I will ever write a useful client that will only care about being able to definitively say it is a leap year (as opposed to definitively say that it is OR that it isn't, or to be uncertain and aware of server errors). Most useful clients tend to want to know whether or not something failed because it is unavailable. Myself, I would likely not model "leap year" itself as a resource. Resources, I think (and I am a bit new here, so tolerance please!), have attributes. Seems to me it would be at least as valid to treat the year as a resource, with leap year as an implicit boolean attribute. There's probably all sorts of other things I might want to know about a year- like, what date Thanksgiving fell on or whether the 12th of April was a Wednesday. Would those also be treated as resources? And what are the implications if they are? Am probably managing to miss everyone's point- guidance appreciated. Have enjoyed the thread a lot. Linda Grimaldi On Aug 5, 2010, at 8:25 AM, mike amundsen wrote: > <snip> >> If you get a 'success' response, then bool=true. If you don't, then bool=false. > </snip> > 1 - if you write client code like that assumes bool=false when server > returns 5xx you're history in my shop. > 2 - if you write server code that does not allow clients to negotiate > for response bodies (e.g. only support HTTP HEAD), you're out. > > mca > http://amundsen.com/blog/ > http://mamund.com/foaf.rdf#me > > > > > On Thu, Aug 5, 2010 at 10:13, Eric J. 
Bowman <eric@...> wrote: >> Jan Algermissen wrote: >>> >>>> using HTTP specific codes and giving them application-context >>>> meaning, or business meaning, is not RESTfull >>> >>> Bingo! >>> >> >> Example: There's no HEAD equivalent in FTP, so (not a live example): >> >> GET ftp://charger.bisonsystems.net/date?iso=2010-02-29 >> >> If you get a 'success' response, then bool=true. If you don't, then >> bool=false. >> >> But, you're being rigid if you believe that, since they aren't in FTP, >> I shouldn't use HEAD or conditional requests. >> >> I deliberately chose HTTP for this resource identifier, because HTTP is >> a REST application protocol. FTP is not. This is NOT coupling. God >> help us if it is, as REST would be unattainable. >> >> There is a resource that only exists in leap years, ?iso=YYYY-02-29. It >> is dereferenceable. Its existence is proof of a leap year. Protocol >> is irrelevant. What matters is the hypertext constraint. >> >> A hypertext interface may be crafted to do a boolean check based on the >> presence/absence of a given resource, regardless of the protocol chosen >> for that resource, and change its steady-state as a result. I promise. >> >> Why do folks insist on making REST so damn complicated? Good grief, >> people, you can curl the resource (http|ftp not relevant, it's up to me >> when I assign the identifier, though) and answer the question inside >> half a second. >> >> What on Earth REST constraint is being violated by boolean checking if >> a resource exists (regardless of the meaning the system assigns to the >> resource, this is irrelevant to the user agent, coupling would be if it >> was important to the user agent) based on response code=(200|304) with >> HEAD? >> >> -Eric
On Aug 5, 2010, at 4:13 PM, Eric J. Bowman wrote: > What on Earth REST constraint is being violated by boolean checking if > a resource exists You cannot boolean-check whether the resource exists. (404 does not mean that the resource does not exist) Jan ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
But will you assume !(200|304) to be an exception or a false value? For instance, I'm a big fan of OSGi, so let's say I have my site or service developed in that. I have
/mysite/date?iso=2008-02-29 -> 200
/mysite/date?iso=2010-02-29 -> 404 [or !(200|304)]
so your application assumes true for "2008 is leap year" and false for "2010 is leap year". Now tomorrow, I shut down the module that corresponds to the resource /mysite/date, while the rest of /mysite is working as usual.
/mysite/date?iso=2008-02-29 -> 404
/mysite/date?iso=2010-02-29 -> 404
What do you make of that? More generally, how do you distinguish a 404's meaning between its significance in HTTP - not found - and your specific application-assigned meaning - not a leap year? Isn't it much simpler, and unambiguous, just to return a body with a true/false, leave !(200|304) for what they really mean, and let the client decide what to do with those exceptions? (try again later, change service provider, etc...) With the advantage that you're not binding your implementation to protocol-specific codes... On 5 Aug 2010 15:41, "Eric J. Bowman" <eric@...> wrote: mike amundsen wrote: > > 1 - if you write client code like that assumes bool=false when server > ret... When the client is dealing with the response code, it should first check for (200|304). The 'else' can be as complicated as you need it to be, but I recommend defaulting to mean false, if your specific traps for, say, 5xx fail. Check for success, then handle exceptions. Don't check for failure, then assume !failure = success. I rarely give better, or more hard- earned, advice. Roy has given the same advice about coding robust clients -- too late to save me from myself, though. > > 2 - if you write server code that does not allow clients to negotiate > for response bodies (e.... Who says you can't negotiate? If you'll be patient until I post my next example, you can plainly see that /date connegs between application/xhtml+xml and application/json. 
HEAD requests are just the results of the negotiation, sans body. But, why on Earth would anyone want to bring conneg into the equation to do a boolean check based on whether a resource exists? Just do a HEAD with Accept: */* and be done with it. -Eric
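António's alternative (an explicit boolean in the body, leaving !(200|304) to mean genuine protocol failures) might look like this on the server side. The JSON field name is taken from his example; the rest is an illustrative sketch, not code from the thread:

```python
import calendar
import json

def leap_year_body(year: int) -> str:
    """Always answer 200 with an explicit boolean in the body; 4xx/5xx
    stay reserved for real protocol-level problems (module shut down,
    bad request, ...), so the client never confuses the two."""
    return json.dumps({"year": year, "isleapyear": calendar.isleap(year)})
```

With this design, a 404 unambiguously means "resource not found" (e.g. the /mysite/date module is down), never "not a leap year".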
Linda Grimaldi wrote: > > If it can't- for any reason- it doesn't care. Doesn't matter > whether it ISN'T a leap year or that the server is unavailable. > My thinking exactly. The service document for /date will eventually be an Xforms interface. Use slider control to select century, numeric input 00-99, dropdown box for month, and numeric input 01-?? to select day-of-month. That day-of-month control is pretty easy to code, as to which months have 30 days vs. 31. February's a real bitch, though -- does the range go to 28 or 29? Better do a boolean check: on success, range=01-29. Again, we can avoid the entire issue of failure response codes by instead analyzing the Link: rel=prev|next -- anything *but* 200 OK is a critical failure, obviously the service is malfunctioning. -Eric
On Aug 5, 2010, at 4:27 PM, Eric J. Bowman wrote: > Jan Algermissen wrote: >> >>> If response=(200|304), then >>> append "is a leap year" >> >> Well,... 200 and 304 are defined by RFC 2616. They do not mean >> anything besides what is specified there. Same to 404. >> > > The hypertext constraint is about self-documenting an API which > translates responses (codes, headers, entities) into application states. No. > How you can expect to have REST without imparting meaning to response > codes, baffles me. The meaning of the response codes are all well defined by RFC2616. > > If, on response=(200|304), my hypertext changes the application state > to "is a leap year" then I've just defined those response codes to mean > bool=true within the context of my application. Then you are tunneling through HTTP. > > If, on HEAD ?iso=2010-02-29, rel=next equals 2010-03-01, and my > hypertext changes the application state to "is a leap year" on that > basis, then I've just defined *that* to mean bool=true within the > context of my application. Again, that is tunneling semantics through HTTP. > > The success/fail cases for the application located at: > > http://example.org/is_leap_year You seem to be mixing 'service (or Web application)' with REST's meaning of 'application'. The 'application' is not defined by the service, nor by any hypermedia specification. The application is the involved components (servers, intermediaries, user agents) and data at runtime. Jan > > *Are whatever the hypertext representation _says_ they are!* > > -Eric ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
Jan Algermissen wrote: > > You cannot boolean-check whether the resource exists. > Sure I can. How can I get a 200 OK for a nonexistent resource? > > (404 does not mean that the resource does not exist) > Or 400, whatever. It doesn't have to. Assuming the service works, everything I need to know is revealed by the fact that the response code is 200 OK. Assuming the service doesn't work, and we always must, what's the failure mode? False negatives, not false positives. We may rely on the service to reliably declare "is a leap year" if it's running at all. That's all I need -- temporarily not showing Feb 29th on my form isn't a mission-critical failure. YMMV. If we wish to avoid the possibility of an incorrect assertion due to a system malfunction, then we check ?iso=YYYY-02-28's rel=next. If the system doesn't respond 200 OK, then it's malfunctioning. When it is functioning, we may rely on the service to reliably assert leap years. Again, what matters is, does your hypertext API lead the user agent from one application state to the next, based on success/fail/ unavailable/what-have-you? -Eric
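Eric's fallback check via ?iso=YYYY-02-28's rel=next could be sketched as below. The Link-header syntax shown is an assumption (RFC 5988-style web links are one plausible encoding); the function names are invented:

```python
import re

def next_date_from_link(link_header: str) -> str:
    """Extract the rel=next target date from a Link header such as
    '</date?iso=2010-03-01>; rel="next"' (format assumed for illustration)."""
    m = re.search(r'<[^>]*iso=(\d{4}-\d{2}-\d{2})[^>]*>\s*;\s*rel="?next"?',
                  link_header)
    if m is None:
        raise ValueError("no rel=next link found")
    return m.group(1)

def is_leap_via_feb28(link_header: str) -> bool:
    """HEAD /date?iso=YYYY-02-28 responded 200 OK; the year is a leap
    year iff rel=next points at YYYY-02-29 rather than YYYY-03-01."""
    return next_date_from_link(link_header).endswith("-02-29")
```

Here anything but 200 OK on the Feb 28th resource signals a malfunctioning service, so the failure-code ambiguity never arises.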
On Aug 5, 2010, at 5:24 PM, Eric J. Bowman wrote: > Jan Algermissen wrote: >> >> You cannot boolean-check whether the resource exists. >> > > Sure I can. How can I get a 200 OK for a nonexistent resource? ---> http://www.infoq.com/author/Darth-Vader Jan
So between
GET /date?iso=2008-02-29 ----> means TRUE
200 isleapyear=true
GET /date?iso=2008-02-29
200 isleapyear=false ----> means FALSE
and
GET /date?iso=2008-02-29 ----> means TRUE
200
GET /date?iso=2010-02-29 ----> means, well, maybe FALSE maybe the service
is down
404
GET /date?iso=2010-02-28 ----> let's make sure
200 rel=next....
GET {next} ----> means TRUE
200
Well, now I have to quote you - "Why do folks insist on making REST so damn
complicated?"...
And it beats me why it should be the responsibility of the client application
to distinguish between protocol meanings and application meanings; isn't
separation of concerns a REST constraint?
_____________________________________________________________
Disclaimer: The opinions expressed herein are just my opinions and they
are not necessarily right.
_____________________________________________________________
On 5 August 2010 16:24, Eric J. Bowman <eric@...> wrote:
> Jan Algermissen wrote:
> >
> > You cannot boolean-check whether the resource exists.
> >
>
> Sure I can. How can I get a 200 OK for a nonexistent resource?
>
> >
> > (404 does not mean that the resource does not exist)
> >
>
> Or 400, whatever. It doesn't have to. Assuming the service works,
> everything I need to know is revealed by the fact that the response
> code is 200 OK.
>
> Assuming the service doesn't work, and we always must, what's the
> failure mode? False negatives, not false positives. We may rely on
> the service to reliably declare "is a leap year" if it's running at all.
> That's all I need -- temporarily not showing Feb 29th on my form isn't
> a mission-critical failure. YMMV.
>
> If we wish to avoid the possibility of an incorrect assertion due to a
> system malfunction, then we check ?iso=YYYY-02-28's rel=next. If the
> system doesn't respond 200 OK, then it's malfunctioning. When it is
> functioning, we may rely on the service to reliably assert leap years.
>
> Again, what matters is, does your hypertext API lead the user agent
> from one application state to the next, based on success/fail/
> unavailable/what-have-you?
>
> -Eric
>
Jan Algermissen wrote: > > > The hypertext constraint is about self-documenting an API which > > translates responses (codes, headers, entities) into application > > states. > > No. > Yes. Absolutely yes. REST applications are driven from one state to the next via hypertext. The user is presented with a selection of state transitions to choose from (YYYY). The user agent follows the instructions in the hypertext to formulate a request, then reports the results of the response back to the user as a new application steady- state. Hypertext, supplied by some representation of /is_leap_year, instructs the user agent how to carry out the user's request to check if YYYY is a leap year. That request is a state transition, to a new steady-state of "is not a leap year," "is a leap year," or "server malfunction." That is a self-documenting hypertext API which, when the user requests a state transition, instructs the user agent how to transition to that next state, i.e. by making a HEAD request and interpreting the response. The interpreted response is the user's selected state transition. I can't for the life of me figure out what else the hypertext constraint could possibly mean. > > > How you can expect to have REST without imparting meaning to > > response codes, baffles me. > > The meaning of the response codes are all well defined by RFC2616. > HTTP is not a transfer protocol, it is an application protocol. HTTP provides generic semantics for methods and responses. Putting those together into a REST system is the architect's job. This may mean constraining PUT to only have replacement semantics -- both creation and replacement semantics are allowed by HTTP, but a REST API must choose one or the other. It is well understood that 304 means unchanged in HTTP. But what does this mean to your application? Well, whatever you need it to mean. 
If you make a PUT request, followed by a conditional HEAD, and receive a 304 response, then for that application interaction 304 means failure. Or the hypertext could key off of the Etag in the 200 response to the PUT to determine success or failure. > > > If, on response=(200|304), my hypertext changes the application > > state to "is a leap year" then I've just defined those response > > codes to mean bool=true within the context of my application. > > Then you are tunneling through HTTP. > Tunneling what through HTTP? If ?iso=2008-02-29 responds 200 OK, why on Earth would I *not* interpret a 200 OK response to mean that 2008 is a leap year? That I have hypertext that took as input '2008' and returned "is a leap year" based on _whatever_ factors is all that matters -- can the interface be expressed over the network using hypertext? Yes. Couldn't be easier, unless you insist on making it hard. > > > If, on HEAD ?iso=2010-02-29, rel=next equals 2010-03-01, and my > > hypertext changes the application state to "is a leap year" on that > > basis, then I've just defined *that* to mean bool=true within the > > context of my application. > > Again, that is tunneling semantics through HTTP. > Tunneling what semantics? I defined a resource, /is_leap_year, and a hypertext representation of that resource which takes as input YYYY and appends to that "is a leap year" or "is not a leap year". That's my resource definition, plain and simple as you like, and fully declarative. > > > The success/fail cases for the application located at: > > > > http://example.org/is_leap_year > > You seem to be mixing 'service (or Web application)' with REST's > meaning of 'application'. The 'application' is not defined by the > service, nor by any hypermedia specification. The application is the > involved components (servers, intermediaries, user agents) and data > at runtime. 
> No, as I've explained (paraphrasing Roy) plenty of times, a REST application may be defined as "what the user is trying to do". The REST application here is simple: determine if YYYY is a leap year. The user is presented a hypertext control which is used to select the next application state -- whatever year is being boolean-queried. The nature of that application state depends on the year queried. If the selected transition is '2008' then the next steady-state is "2008 is a leap year." Application terminates. REST applications are defined by their steady-states. Those steady- state transitions must be driven by hypertext. -Eric
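One way to picture "hypertext instructs the user agent how to transition" is a declarative instruction block interpreted by a generic client. Everything here is an illustrative sketch: the field names are invented, and the transport is injected so nothing network-specific is baked into the interpreter.

```python
# Declarative hypertext instructions: which request to make, and which
# next application state each outcome maps to.
INSTRUCTION = {
    "method": "HEAD",
    "href_template": "/date?iso={year}-02-29",
    "on_status": {(200, 304): "{year} is a leap year"},
    "otherwise": "{year} is not a leap year",
}

def transition(instruction, year, send):
    """Drive one state transition. `send(method, href) -> status` is the
    user agent's transport, supplied by the caller."""
    status = send(instruction["method"],
                  instruction["href_template"].format(year=year))
    for codes, state in instruction["on_status"].items():
        if status in codes:
            return state.format(year=year)
    return instruction["otherwise"].format(year=year)
```

The user agent never needs to know why 200|304 maps to "is a leap year"; it just follows the instructions and presents the resulting steady-state.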
Jan Algermissen wrote: > > Eric J. Bowman wrote: > > > Jan Algermissen wrote: > >> > >> You cannot boolean-check whether the resource exists. > >> > > > > Sure I can. How can I get a 200 OK for a nonexistent resource? > > > ---> http://www.infoq.com/author/Darth-Vader > That's a URI. Therefore it must identify a resource. If dereferencing that resource responds 200 OK, then the resource must, by definition, exist. Reading the response entity tells me that the resource is defined to be a list of "All of Darth Vader's Content on InfoQ" and contains no links. So the resource is exactly what its hypertext representation says it is. If I want to know whether "Darth Vader is alive" then why would I query this resource? It isn't defined to be about that, is it? -Eric
On Aug 5, 2010, at 5:51 PM, Eric J. Bowman wrote: > This may mean > constraining PUT to only have replacement semantics -- both creation > and replacement semantics are allowed by HTTP, but a REST API must > choose one or the other. That seems to be your misunderstanding. PUT is what HTTP defines it to be. Hypermedia specification can not and need not re-define HTTP semantics. (That is the reason why you have to provide the information about leap-year or not in the representation, using standardized representation semantics. And that in turn is why re-use is facilitated by REST). Jan ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
On Thu, 5 Aug 2010 16:49:38 +0100
António Mota wrote:
>
> So between
>
> GET /date?iso=2008-02-29 ----> means TRUE
> 200 isleapyear=true
> GET /date?iso=2008-02-29
> 200 isleapyear=false ----> means FALSE
>
Huh? That bears not even a passing resemblance to the examples I gave.
>
> GET /date?iso=2008-02-29 200 means TRUE
>
If that's all the information your app is looking for, then that's all
the information your app needs, so why make it more complicated?
>
> GET /date?iso=2010-02-29 4xx means, well, maybe FALSE maybe the
> service is down
>
The fact that it is !(200|304) simply means you can't assert 2010 to be
a leap year. As Linda wrote, you can't declare this "NOT REST" because
you have no clue whether this pattern meets the needs of whatever app
implements it.
>
> GET /date?iso=2010-02-28 ----> let's make sure
> 200 rel=next....
>
Yes, if rel=next equals 2010-03-01, then it's quite damn obvious that
2010 isn't a leap year, otherwise 2010-02-29 would be rel=next in the
collection, so why on Earth would you go complicating this with
another...
>
> GET {next} ----> means TRUE
> 200
>
>
> Well, now I have to quote you - "Why do folks insist on making REST
> so damn complicated?"...
>
Because REST APIs just don't get any simpler than I make them. If the
initial pattern I suggested doesn't work for your app, then check the
link relations. Do you seriously mean to call that complicated?
Compared to the other solutions offered in this thread?
I can't make REST any easier. It doesn't get any easier. If you can't
accept that, then stop responding to my posts with your absurd
assertions that what I posit is somehow complicated, that's just being
argumentative again, and your double-secret-probation will end
unfavorably, in terms of me responding to you.
>
> And it beats me why it should be the responsibility of the client
> application to distinguish between protocol meanings and
> application meanings; isn't separation of concerns a REST
> constraint?
>
Yes, exactly. That's why these concerns are addressed in _hypertext_
which _instructs_ user agents _how_ to perform state transitions. Why
would a user agent give a damn that my hypertext interprets 304 one way
or the other, provided it loads the representation from cache when
requested?
This REST stuff is nowhere near as difficult as folks make it out to be.
-Eric
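For what it's worth, the status-code interpretation Eric describes above can be sketched as a tiny client-side helper. This is a hypothetical illustration (the function name and tri-state return are mine, not from the thread); the mapping follows his rule that 200/304 assert the claim, 404 denies it, and anything else permits no assertion:

```python
def leap_year_claim(status_code):
    """Map the HTTP status of GET /date?iso=YYYY-02-29 to a tri-state answer.

    Sketch of the pattern discussed in this thread: the resource's
    existence, not a body flag, carries the answer.
    """
    if status_code in (200, 304):
        return True    # resource exists (or cache is fresh): leap year
    if status_code == 404:
        return False   # resource doesn't exist: not a leap year
    return None        # 5xx, timeout, etc.: can't assert anything

# Example: only a definite 404 lets the client conclude "not a leap year";
# a 503 leaves the question open.
```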
On 5 August 2010 17:07, Jan Algermissen <algermissen1971@...> wrote:
>
> That seems to be your misunderstanding. PUT is what HTTP defines it to
> be. Hypermedia specification can not and need not re-define HTTP
> semantics.
>
Right, or at least that is what I understand by reading "A REST API
should not contain any changes to the communication protocols aside
from filling-out or fixing the details of underspecified bits of
standard protocols, such as HTTP's PATCH method or Link header field"
and "Failure here implies that the resource interfaces are
object-specific, not generic."
2010/8/5 Eric J. Bowman <eric@...>:
> On Thu, 5 Aug 2010 16:49:38 +0100
> António Mota wrote:
>>
>> So between
>>
>> GET /date?iso=2008-02-29 ----> means TRUE
>> 200 isleapyear=true
>> GET /date?iso=2008-02-29
>> 200 isleapyear=false ----> means FALSE
>>
>
> Huh? That bears not even a passing resemblance to the examples I gave.
>
No, of course not. That is *my* example of the way that to me looks
more logical and simple: a GET to a resource with a true/false in the
body. Everything else is an exception that the client can deal with or
not (as she wishes), like retrying later or changing to a more reliable
service provider.
>>
>> GET /date?iso=2008-02-29 200 means TRUE
>>
>
> If that's all the information your app is looking for, then that's all
> the information your app needs, so why make it more complicated?
>
Because the opposite can have two meanings, "not a leap year" or
"exception", which any application should be capable of distinguishing.
>>
>> GET /date?iso=2010-02-29 4xx means, well, maybe FALSE maybe the
>> service is down
>>
>
> The fact that it is !(200|304) simply means you can't assert 2010 to be
> a leap year. As Linda wrote, you can't declare this "NOT REST" because
> you have no clue whether this pattern meets the needs of whatever app
> implements it.
>
>>
>> GET /date?iso=2010-02-28 ----> let's make sure
>> 200 rel=next....
>>
>
> Yes, if rel=next equals 2010-03-01, then it's quite damn obvious that
> 2010 isn't a leap year, otherwise 2010-02-29 would be rel=next in the
> collection, so why on Earth would you go complicating this with
> another...
>
>>
>> GET {next} ----> means TRUE
>> 200
>>
>>
>> Well, now I have to quote you - "Why do folks insist on making REST
>> so damn complicated?"...
>>
>
> Because REST APIs just don't get any simpler than I make them. If the
> initial pattern I suggested doesn't work for your app, then check the
> link relations. Do you seriously mean to call that complicated?
> Compared to the other solutions offered in this thread?
>
> I can't make REST any easier. It doesn't get any easier. If you can't
> accept that, then stop responding to my posts with your absurd
> assertions that what I posit is somehow complicated, that's just being
> argumentative again, and your double-secret-probation will end
> unfavorably, in terms of me responding to you.
>
>>
>> And it beats me how it should be the responsibility of the client
>> application to distinguish between protocol meanings and
>> application meanings; isn't separation of concerns a REST
>> constraint?
>>
>
> Yes, exactly. That's why these concerns are addressed in _hypertext_
> which _instructs_ user agents _how_ to perform state transitions. Why
> would a user agent give a damn that my hypertext interprets 304 one way
> or the other, provided it loads the representation from cache when
> requested?
>
> This REST stuff is nowhere near as difficult as folks make it out to be.
>
> -Eric
>
So, let's assume the case of 2010, which is not a leap year, and
compare my suggestion with yours; let's call them A and B:
A)
GET /date?iso=2010-02-29
200 isleapyear=false
B)
GET /date?iso=2010-02-29
404
GET /date?iso=2010-02-28
200 rel=next....
GET {next}
200
are you frankly saying that B is simpler than A???
Oh, I'm sorry, I misread a part of your post:

2010/8/5 Eric J. Bowman <eric@...>:
> Yes, if rel=next equals 2010-03-01, then it's quite damn obvious that
> 2010 isn't a leap year, otherwise 2010-02-29 would be rel=next in the
> collection, so why on Earth would you go complicating this with
> another...
>
So my example in my previous post should read

A)
GET /date?iso=2010-02-29
200 isleapyear=false

B)
GET /date?iso=2010-02-29
404
GET /date?iso=2010-02-28
200 rel=next....
[extract value from "next" and compare it to "2010-03-01"]

So, are you frankly saying that B is simpler than A???
On Aug 4, 2010, at 7:02 PM, Eric J. Bowman wrote:
> Glad someone posted that answer, because my purpose was to offer a tip
> from hard experience -- base system logic on checking for success
> codes, not failure codes, as much as possible. Note the above URI
> returns 500. Don't forget that 304 also indicates success.

+1. This is like using exceptions for control flow. Causes indigestion.

The example corresponds to what I call processing functions.

begin promotion
see http://my.safaribooksonline.com/9780596809140/recipe-treat-processing-functions-as-resources
end

A simple GET with a response explaining the answer would do. Hypermedia
of some form (HTML, links, URI templates etc.) can be thrown in when it
makes sense.

Subbu
This is the problem with random quizzes, e-mail, very little context
and quite a few bright minds! Depending on what the resource truly is,
200 (or 400) takes on a different meaning.

If the date is the actual resource, then yes, 200 (I believe) would
indicate that the resource (date) does exist (and is a leap year). This
is where GET is probably not needed and HEAD could be used solely.

However, if the resource in question is "testing that a date is a leap
year", then 200 takes on a very different meaning, as it communicates
whether the resource that tests for leap years is/was available. The
response would still have to be inspected.

My $0.02.

Ebenezer
Jan Algermissen wrote:
>
> > This may mean constraining PUT to only have replacement semantics
> > -- both creation and replacement semantics are allowed by HTTP, but
> > a REST API must choose one or the other.
>
> That seems to be your misunderstanding. PUT is what HTTP defines it
> to be. Hypermedia specification can not and need not re-define HTTP
> semantics.
>
No, you don't get to re-define method semantics; where did I say that?
PUT has two possible interpretations. In a REST system, you must pick
one and only one of those interpretations. It is your hypertext which
explains that PUT either means creation or replacement, depending on
the application interaction responsible for the PUT request. If your
hypertext didn't describe your API's method semantics, then how would
anyone know what POST means?

-Eric
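The "pick one interpretation of PUT" point can be illustrated with a small sketch. This is a hypothetical illustration (the function names and the dict standing in for a resource store are mine): one handler implements the replacement-only choice, the other the creation-only choice; an API would advertise exactly one of them in its hypertext.

```python
def handle_put_replace(store, uri, body):
    """Replacement-only PUT: the API never creates via PUT."""
    if uri not in store:
        return 404   # unknown URI: refuse to create
    store[uri] = body
    return 200       # existing representation replaced

def handle_put_create(store, uri, body):
    """Creation-only PUT: the API never overwrites via PUT."""
    if uri in store:
        return 409   # existing URI: refuse to overwrite
    store[uri] = body
    return 201       # new resource created at the client-chosen URI
```

Both are legal under HTTP; the constraint discussed here is that one API's hypertext should commit to a single one so clients know what a PUT link means.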
António Mota wrote:
>
> "A REST API should not contain any changes to the communication
> protocols aside from filling-out or fixing the details of
> underspecified bits of standard protocols, such as HTTP's PATCH
> method or Link header field"
>
What protocol have I redefined the semantics of? My GET is a GET.

-Eric
António Mota wrote:
>
> No, of course not. That is *my* example of the way that to me looks
> more logical and simple: a GET to a resource with a true/false in the
> body. Everything else is an exception that the client can deal with
> or not (as she wishes), like retrying later or changing to a more
> reliable service provider.
>
Whether a resource exists or not is a function of its response code
when dereferenced. This has *nothing* to do with the response body, and
it's totally redundant to express something so obvious there.

-Eric
António Mota wrote:
>
> 200 isleapyear=false
>
What on Earth does this tell you, that can't be inferred from the fact
that this...
>
> GET /date?iso=2010-02-29
>
...returns 200 OK? Not that the above resource would exist, but if for
some reason it did, then it would obviously mean 2010 "is a leap year."
>
> B)
> GET /date?iso=2010-02-29
> 404
>
Why bother with that step?
>
> GET /date?iso=2010-02-28
> 200 rel=next....
> [extract value from "next" and compare it to "2010-03-01"]
>
> So, are you frankly saying that B is simpler than A???
>
No, I'm saying that if A is too simple to fill your needs, then use B.

-Eric
>
> In order to farm out the development of standalone FastCGI modules,
> I need to be able to determine whether applicants possess enough of a
> REST skillset to grok how their assigned module fits into the overall
> system.
>
Or, given that my framework-to-be will be an open-source project, I can
use a quiz to at least determine who's on the same page as I am, or at
least willing to acknowledge that my architecture is a legitimate
instantiation of the REST style, before assigning commit privileges.
Sure, there are other instantiations which are equally legitimate, but
clarity of purpose at the top is essential.

http://charger.bisonsystems.net/config/

That's my take on what an httpd config file ought to look like, a rough
first draft I came up with overnight. Configuration by resource type.
So far I've only drafted the config for /date; this should give more
insight into how the service works, except for generating the response.

-Eric
I was referring to Jan's answer to you, but since you ask, what are you
doing with

(200;304) -> is leap year
!(200;304) -> is not a leap year

And BTW, why do you only answer questions that interest you or that fit
your POV, and conveniently ignore the others?

On 5 Aug 2010 18:59, "Eric J. Bowman" <eric@...> wrote:

António Mota wrote:
>
> "A REST API should not contain any changes to the communication
> protocols...

What protocol have I redefined the semantics of? My GET is a GET.

-Eric
António Mota wrote:
>
> And BTW, why do you only answer questions that interest you or that
> fit your POV, and conveniently ignore the others?
>
If you have a problem with my answers, then I'll simply stop giving
them to you. Back on ignore.

-Eric
I don't have problems, I just note your *lack* of answers, even when a
yes or no would be sufficient, when they don't suit you.

I also don't have a problem with you trying to flame me when it is
convenient for you to ignore my questions - albeit sometimes you
succeed in flaming me - or turning things personal when they are not -
I didn't say anything that others aren't saying, give or take, and none
of it was personal.

But if it's convenient for you to ignore me, well, good for you. C'est
la vie, as they say in English...

On 5 Aug 2010 19:11, "Eric J. Bowman" <eric@...> wrote:

António Mota wrote:
>
> And BTW why you only answer questions that interest you or that fits
> in y...

If you have a problem with my answers, then I'll simply stop giving
them to you. Back on ignore.

-Eric
Can you guys stop? Please? I beg you!

2010/8/5 António Mota <amsmota@gmail.com>
>
> I don't have problems, I just note your *lack* of answers, even when
> a yes or no would be sufficient, when they don't suit you.
>
> I also don't have a problem with you trying to flame me when it is
> convenient for you to ignore my questions - albeit sometimes you
> succeed in flaming me - or turning things personal when they are not
> - I didn't say anything that others aren't saying, give or take, and
> none of it was personal.
>
> But if it's convenient for you to ignore me, well, good for you.
> C'est la vie, as they say in English...
>
> On 5 Aug 2010 19:11, "Eric J. Bowman" <eric@...> wrote:
>
> António Mota wrote:
> >
> > And BTW why you only answer questions that interest you or that
> > fits in y...
>
> If you have a problem with my answers, then I'll simply stop giving
> them to you. Back on ignore.
>
> -Eric
<snip> Can you guys stop? Please? I beg you! </snip>

+1

mca
http://amundsen.com/blog/
http://mamund.com/foaf.rdf#me

2010/8/5 Eb <amaeze@gmail.com>
> Can you guys stop? Please? I beg you!
I can't stop what I didn't start. But I do suggest to Eric that he
ignore me instead of publicizing it...

On 5 Aug 2010 19:31, "mike amundsen" <mamund@...> wrote:

<snip> Can you guys stop? Please? I beg you! </snip>

+1

mca
http://amundsen.com/blog/
http://mamund.com/foaf.rdf#me
Hello!

Let's say I have a queue resource: /foo

I can POST new entries into the queue. I can even refer to individual
entries within the queue: /foo/<id>

But how do I pop the next entry? How do I construct a single request
that gets me the next/first entry but also removes the entry at the
same time?

Maybe I can implement a special resource /foo/next, which always refers
to the next entry in the queue. But clearly, I can't use GET to pop the
entry, since that would not be idempotent.

The queue has multiple consumers, so the 'pop' operation should be
atomic. This seems to rule out the possibility of doing a GET to
retrieve the latest element, followed by a DELETE to remove it. Someone
else could have gotten the 'latest' element in the meantime, thus
causing the same element to be consumed twice.

Maybe I can cause a 'move', where a single request causes the next
element to be renamed to a unique ID, which is then returned to the
client, who then is the only one who has a handle on that object. The
client can then work with the resource. But the question now is:

a) What happens when the client fails before it can delete the resource?
b) What is the best way to 'move' an item in that way?

Juergen

--
Juergen Brendel
http://restx.mulesoft.org
Your scenario posits multiple consumers. How does the server-side
distinguish those consumers?

Are you doing bi-directional SSL authentication, HTTP-Basic, or just
giving each consumer their own URL?

In any case, "GET" can be idempotent on a particular URL if you define
the URL as: "get the resource already assigned to me, or if none, the
next queued resource available." Until the particular client then
deletes it (or updates state to reflect that it has been consumed), the
server will simply give the same response each time the consumer asks.

-Eric.

On 08/05/2010 11:59 AM, Juergen Brendel wrote:
<snip> Let's say I have a queue resource: /foo ... But how do I pop the
next entry? </snip>
There is no concept of ownership of the items. Assume a bunch of
'clients' that pop the next item to be processed. It doesn't matter
which of them gets which item; it's just that one item must be seen by
exactly one client only.

Items are also added to the queue on the other end, so when an item is
given to a client, it should actually be removed from the queue at that
moment.

Authentication is not an issue for this.

On Thu, 2010-08-05 at 12:11 -0700, Eric Johnson wrote:
<snip> Your scenario posits multiple consumers. How does the
server-side distinguish those consumers? </snip>

--
Juergen Brendel
Architect, MuleSoft Inc.
http://mulesoft.com
Well, you did ask, in your scenario, what happens if the client fails.
If you introduce the notion of temporary ownership, then the "GET"
operation can be idempotent, and further you don't have to wonder what
happens if the client (temporarily) fails. In case of a permanent
failure, you can simply set a timeout value. That does get tricky,
though.

Without more detail on your scenario - what end is the queue serving:
assigning a work task to the next available person, or queuing up
documents for an automated process to consume? What's the cost of
processing the same resource twice? What's the cost of dropping it? How
quickly must it be processed? Without those answers, it seems like it
will be hard to reflect the right "state" on the server.

-Eric.

On 08/05/2010 12:27 PM, Juergen Brendel wrote:
<snip> There is no concept of ownership of the items. </snip>
Juergen: here are two possibilities, one I use and one from Microsoft's
Azure Queue implementation.

I've done this in the past:
- POST to the LIST-URI to add an item
- POST to the ITEM-URI to "take ownership" (server marks the item as
  "taken", any lists show your workstation has that item, no other user
  can claim it, etc.)
- DELETE to the ITEM-URI (w/ etag) when you are done working on it
  (e.g. it's been successfully completed) (server takes it out of the
  active list, possibly puts it in archives, etc.)

NOTE: If "you" don't perform a DELETE on the item in X minutes, the
item is placed back into the "active" list where some other user can
claim it.

Advantages of this model are:
- GET to ITEM-URI and LIST-URI is safe and can be used for read-only
  cases
- POST to ITEM-URI can create log entries for historical review
- POST to ITEM-URI returns an etag that must be used when DELETEing it;
  if your etag doesn't match, it's not yours to delete, etc.
- POST to an ITEM-URI that has been claimed will return 4xx
- POST to an ITEM-URI that "you" already "own" will return 200 OK (this
  allows repeats in case "you" never got the earlier POST response or
  in case "you" need more time to process the item, etc.)
- DELETE to an ITEM-URI w/out the proper etag, or that "you" do not
  own, fails w/ 4xx

Downsides:
- must do DELETE to clear completed items.

Microsoft Azure Queue details here:
http://msdn.microsoft.com/en-us/library/dd179363.aspx

mca
http://amundsen.com/blog/
http://mamund.com/foaf.rdf#me

On Thu, Aug 5, 2010 at 15:27, Juergen Brendel <juergen.brendel@...>
wrote:
<snip> There is no concept of ownership of the items. </snip>
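Mike's claim/complete pattern can be sketched in-memory to make the state transitions concrete. This is a hypothetical illustration, not Azure's or anyone's actual implementation; the class and method names are mine, and the integer returns stand in for HTTP status codes (POST-to-claim, DELETE-with-etag-to-complete, lease expiry returning the item to the pool):

```python
import time
import uuid

class LeaseQueue:
    """In-memory sketch of a claim/complete work queue with lease expiry."""

    def __init__(self, lease_seconds=600):
        self.items = {}           # item_id -> payload
        self.claims = {}          # item_id -> (owner, etag, expires_at)
        self.lease_seconds = lease_seconds

    def add(self, payload):
        """POST to the LIST-URI: enqueue an item, return its id."""
        item_id = str(uuid.uuid4())
        self.items[item_id] = payload
        return item_id

    def claim(self, item_id, owner):
        """POST to the ITEM-URI: take ownership, get an etag back.

        Returns None (think 4xx) if someone else holds a live claim;
        returns the same etag (think 200) on a repeat claim by the owner.
        """
        now = time.time()
        claim = self.claims.get(item_id)
        if claim and claim[2] > now and claim[0] != owner:
            return None                          # claimed by someone else
        if claim and claim[0] == owner and claim[2] > now:
            return claim[1]                      # repeat claim: same etag
        etag = str(uuid.uuid4())                 # fresh or expired: re-lease
        self.claims[item_id] = (owner, etag, now + self.lease_seconds)
        return etag

    def complete(self, item_id, owner, etag):
        """DELETE the ITEM-URI w/ etag: clear a finished item."""
        claim = self.claims.get(item_id)
        if not claim or claim[0] != owner or claim[1] != etag:
            return False                         # not yours to delete (4xx)
        del self.claims[item_id]
        del self.items[item_id]
        return True
```

An expired lease simply falls through the first two checks in `claim`, so the item becomes claimable again without any background reaper, matching the "placed back into the active list after X minutes" behavior described above.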
On Thu, Aug 5, 2010 at 2:59 PM, Juergen Brendel
<juergen.brendel@...> wrote:
<snip> But how do I pop the next entry? How do I construct a single
request that gets me the next/first entry but also removes the entry
at the same time? </snip>

Does DELETE /foo/top suffice?

The resource at the top of the queue is removed and another item
replaces it.
Alexander Johannesen wrote:
>
> Well, yes, but my spidey senses are tingling with semantics, but not
> quite knowing what the answer is. I get the feeling that if you do:
>
> HEAD /date?iso=YYYY-02-29
>
> there's a compartmentalization going on here which feels, well, not
> directly wrong, but not quite right either. For me, the resource for
> the above is really:
>
> /date
>
> And any HTTP status code for me is about that resource without
> parameters.
>
URIs are opaque. The URI allocation scheme could use random number
generation. If the link relations tie them together into collections,
and a form can correlate them to YYYY-MM-DD somehow, well that's all
that really matters.

That being said, I do understand your point. Which is why the example
config file for my forthcoming REST framework binds '/date' to
different virtual hosts, and configures it differently depending on
whether the query is relevant to the operation or not:

http://charger.bisonsystems.net/config/

So I have a resource, /date, which returns a service document, which
describes the URI allocation scheme using the
application/x-www-form-urlencoded media type to indicate that /date has
query parameters which identify different states of the /date resource.

This is as standard a design pattern as there is in REST. I re-use it
all the time, as in last week's example of image.jpg?rot=90 for
server-side image rotation. But it's really neither here nor there,
given URI opacity.

-Eric
On Thu, Aug 5, 2010 at 3:53 PM, Eb <amaeze@...> wrote:
> Does DELETE /foo/top suffice?
>
> The resource at the top of the queue is removed and another item
> replaces it.
>
No. Because DELETE is defined to be idempotent, i.e. one call should
have the same effect as N calls.

--
Nick
Juergen Brendel wrote:
>
> Let's say I have a queue resource: /foo
>
> I can POST new entries into the queue. I can even refer to individual
> entries within the queue: /foo/<id>
>
Good so far.
>
> But how do I pop the next entry? How do I construct a single request
> that gets me the next/first entry but also removes the entry at the
> same time?
>
Define /foo/most-recent or somesuch.
>
> Maybe I can implement a special resource /foo/next, which always
> refers to the next entry in the queue.
>
Make the resource you define (i.e. /foo/most-recent) the target of your
operation.
>
> But clearly, I can't use GET to pop the entry, since that would not
> be idempotent.
>
So assign your 'pop' semantics to a non-idempotent method like POST.
GET /foo/most-recent returns a representation of the top of your
stack. Or use HEAD, either method harvests the Etag you're after.
Round-trip the Etag on POST using If-Match, the 200 OK response can
contain the message body you're consuming.
>
> The queue has multiple consumers, so the 'pop' operation should be
> atomic. This seems to rule out the possibility of doing a GET to
> retrieve the latest element, followed by a DELETE to remove it.
> Someone else could have gotten the 'latest' element in the meantime,
> thus causing the same element to be consumed twice.
>
Why ruled out? That's exactly how REST works -- say multiple users
HEAD /foo/most-recent and get the Etag. The If-Match on POST prevents
any but the first-received POST request from popping. The POST removes
the corresponding /foo/{id} from your stack resource.
Subsequent GET or HEAD requests yield the Etag for the next pop.
>
> Maybe I can cause a 'move', where a single request causes the next
> element to be renamed to a unique ID, which is then returned to the
> client, who then is the only one who has a handle on that object. The
> client can then work with the resource. But the question now is:
>
I think you're making this too hard on yourself, please consider using
conditional requests...
>
> a) What happens when the client fails before it can delete the
> resource? b) What is the best way to 'move' an item in that way?
>
Then don't leave it up to the client to make a DELETE request, make the
deletion a side-effect of some other action like a conditional POST.
-Eric
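Eric's recipe above (HEAD or GET /foo/most-recent to harvest the Etag,
then a conditional POST with If-Match to pop) can be sketched as a toy
in-memory model. This is only an illustration of the mechanics; the
class and method names are made up, not any real framework's API:

```python
import hashlib

class MostRecent:
    """Toy in-memory model of the conditional-POST pop. Index 0 of the
    list plays the role of /foo/most-recent."""

    def __init__(self, items):
        self._items = list(items)

    def _etag(self):
        """Strong ETag derived from the current top item."""
        if not self._items:
            return None
        return '"%s"' % hashlib.sha1(self._items[0].encode()).hexdigest()

    def head(self):
        """HEAD /foo/most-recent -- harvest the ETag, no body."""
        if not self._items:
            return {"status": 404, "etag": None}
        return {"status": 200, "etag": self._etag()}

    def post_pop(self, if_match):
        """POST /foo/most-recent with If-Match. Only the first request
        carrying the current ETag pops; everyone else gets 412."""
        if self._etag() is None or if_match != self._etag():
            return {"status": 412, "body": None}  # Precondition Failed
        return {"status": 200, "body": self._items.pop(0)}
```

If several clients HEAD the same Etag, the first POST to arrive wins
and every other POST carrying the now-stale tag fails with 412, which
is exactly the atomicity Juergen asked for.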
On 2010-08-04 07:01, Jan Algermissen wrote:
>> I guess I keep thinking that the content of a PUT request must be
>> the same as the content of a GET request to that same resource.
>
> No, it need not be.
>
> (I lack a pointer for this - can anyone supply one?)

It's entailed by the fact of content negotiation. If a resource can be
represented by more than one entity, then it follows logically that a
given entity representing it in a PUT operation would not necessarily
be the same as that representing it in a particular GET operation. By
extension, it may not be the same as any GET operation at all.
On Thu, Aug 5, 2010 at 4:26 PM, Nick Gall <nick.gall@...> wrote:
> On Thu, Aug 5, 2010 at 3:53 PM, Eb <amaeze@...> wrote:
>
>> Does DELETE /foo/top suffice?
>>
>> The resource at the top of the queue is removed and another item
>> replaces it.
>>
> No. Because DELETE is defined to be idempotent, i.e. one call should
> have the same effect as N calls.
>
> -- Nick
>
What do you consider "same effect" here? As far as it's removing the
topmost element, isn't it the same? Or does it have to remove the
"same" resource every single time? The URI points to the resource at
the top of the queue.

If yes, is GET /currentTime idempotent given that the currentTime
changes continually?

Curious to hear your thoughts.

Eb
What's important to remember here is the importance of the initial GET
or HEAD request. "REST" APIs which "just know" how to pop a stack are
not hypertext driven. Whereas, say, XForms allows you to define a
button, let's label it 'pop'. When the user selects 'pop' as the state
transition, hypertext informs the user agent to fetch an Etag with
HEAD, then uses that Etag to make a conditional request.

There's nothing wrong with a user pressing a form button and having it
trigger a series of requests, instead of requiring the user to first
select the item to pop, then confirm the pop. The user expressed a
desire to initiate the pop, hypertext made it happen, the next
application steady-state is just a successful retrieval operation with
side effects... erm... oops!!!

Which, come to think of it, suggests that GET is the proper method;
again, make it If-Match. Making n If-Match GET requests will always
yield the same result: either no match, or a one-time pop. The one
thing we can't have is subsequent GET requests activating the pop --
that violates the idempotency of GET, but only on unconditional GET,
so don't do the pop unless the Etag matches.

There still needs to be an initial GET or HEAD request for the Etag.

There is nothing wrong, IMO, with a GET causing a resource to be
removed. The client can't be held accountable because the client
didn't request such removal -- the user requested a representation of
some other resource, and that's exactly what the user agent retrieved.
Allowing unconditional requests would yield undesirable results.

Those caveats aside, go for it. Unless I'm wrong -- I'm not fully
confident of either the POST or GET approach, and suspect that
modelling the resources differently might yield more elegant results
if I were to think it through more carefully.

-Eric
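The "conditional GET pop" variant Eric sketches above can be modelled
the same way: an unconditional GET stays a plain, safe read, and only a
matching If-Match triggers the one-time pop. A toy sketch (names and
data are made up for illustration):

```python
import hashlib

# Front of the list plays /foo/most-recent.
queue = ["job-1", "job-2"]

def current_etag():
    """Strong ETag derived from the current top item."""
    return '"%s"' % hashlib.sha1(queue[0].encode()).hexdigest() if queue else None

def get_most_recent(if_match=None):
    """GET /foo/most-recent, optionally conditional on If-Match."""
    if not queue:
        return (404, None)
    if if_match is None:
        return (200, queue[0])   # plain GET: retrieval only, no pop
    if if_match != current_etag():
        return (412, None)       # stale tag: no match, no pop
    return (200, queue.pop(0))   # matching tag: the one-time pop
```

Repeating the same If-Match GET n times has the side effects of a
single request: one pop, then a string of 412s, which is the "either no
match, or a one-time pop" behaviour described above.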
Hello!

I think the issue is that GET has to be 'side effect free', not that
it always has to return the same data. For example, if you view the
current time as a resource, then GETting it will always give you
different values, and that's fine, since 'current time' in itself is a
changing resource. Nothing you (the client) do changes its value. The
value changes whether you make the request or not.

However, DELETEing the top of a queue that's stored on a server
actively changes stuff on the server, which means that you making the
request has a real effect. Thus, it's not idempotent.

Juergen

On Thu, 2010-08-05 at 16:51 -0400, Eb wrote:
>
> On Thu, Aug 5, 2010 at 4:26 PM, Nick Gall <nick.gall@...> wrote:
>
> What do you consider "same effect" here? As far as it's removing the
> topmost element, isn't it the same? Or does it have to remove the
> "same" resource every single time? The URI points to the resource at
> the top of the queue.
>
> If yes, is GET /currentTime idempotent given that the currentTime
> changes continually?
>
> Curious to hear your thoughts.
>
> Eb

--
Juergen Brendel
http://restx.mulesoft.org
Juergen Brendel wrote:
>
> I think the issue is that GET has to be 'side effect free', not that
> it always has to return the same data.
>
No, there is no requirement that GET be free from side effects. GET is
a "safe" method in that users know they can't be held to account for
any side effects.

Think about page-hit counters, which increment for every GET, thus
altering the state of the resource, as reflected on a reload. Nothing
wrong with that, really.

-Eric
Hi!

DELETE cannot have side effects? The point with idempotency (to me) is
that the side effect is the same, and in this case it would/could be
that the topmost element is removed from the queue. If the DELETE is
done 1 or 20 times, the topmost element is popped.

(I think another mistake is that we interpret the verbs to mean how
the server MUST behave, whereas the verbs are for the client to
communicate their expectation. The server can ultimately do what it
wants, only that the client is not responsible.)

On Thu, Aug 5, 2010 at 4:56 PM, Juergen Brendel
<juergen.brendel@...> wrote:
>
> Hello!
>
> I think the issue is that GET has to be 'side effect free', not that
> it always has to return the same data. For example, if you view the
> current time as a resource then GETing it will give you always
> different values, but those may well change, since 'current time' in
> itself is a changing resource. Nothing you (the client) does changes
> its value. The value changes whether you make the request or not.
>
> However, DELETEing the top of a queue that's stored on a server
> actively changes stuff on the server, which means that you making
> the request has a real effect. Thus, it's not idempotent.
>
> Juergen
Hello!

On Thu, 2010-08-05 at 14:54 -0600, Eric J. Bowman wrote:
> What's important to remember here, is the importance of the initial
> GET or HEAD request. "REST" APIs which "just know" how to pop a
> stack are not hypertext driven. Whereas, say, Xforms allows you to
> define a button, let's label it 'pop'. When the user selects 'pop'
> as the state transition, hypertext informs the user agent to fetch
> an Etag with HEAD, then uses that Etag to make a conditional
> request.

Out of curiosity (and since I'm not familiar with Xforms): How does
the hypertext inform the user agent to fetch an Etag with HEAD?

> Which, come to think of it, suggests that GET is the proper method;
> again, make it If-Match. Making n If-Match GET requests will always
> yield the same result: either no match, or a one-time pop. The one
> thing we can't have, is subsequent GET requests activating the pop
> -- that violates the idempotency of GET, but only on unconditional
> GET, so don't do the pop unless the Etag matches.
>
> There still needs to be an initial GET or HEAD request for the Etag.
>
> There is nothing wrong, IMO, with a GET causing a resource to be
> removed. The client can't be held accountable because the client
> didn't request such removal -- the user requested a representation
> of some other resource, and that's exactly what the user agent
> retrieved.

Well, but hold on. How can you say that GET is idempotent if it has
the effect of removing something from the queue?

I understand that issuing it twice won't have an effect. However, one
nice way of describing the need for GET to be idempotent was to say:
"If a search spider hits your API, nothing bad will happen". I always
liked that way of describing it. Aren't you opening yourself up for
the consequences of an accidental GET?

> Allowing unconditional requests would yield undesirable results.
> Those caveats aside, go for it. Unless I'm wrong, I'm not fully
> confident of either the POST or GET approach, and suspect that
> modelling the resources differently might yield more elegant results
> if I were to think it through more carefully.
>
> -Eric

--
Juergen Brendel
http://restx.mulesoft.org
Hello!

On Thu, 2010-08-05 at 14:59 -0600, Eric J. Bowman wrote:
> > I think the issue is that GET has to be 'side effect free', not
> > that it always has to return the same data.
> >
> No, there is no requirement that GET be free from side effects. GET
> is a "safe" method in that users know they can't be held to account
> for any side effects.
>
> Think about page-hit counters, which increment for every GET, thus
> altering the state of the resource, as reflected on a reload.
> Nothing wrong with that, really.

I know that's how they work, but who says that page counters are
RESTful? I mean, incrementing on GET is the easiest way for them to
get their job done, but a particularly RESTful architecture it is not,
is it?

Juergen

--
Juergen Brendel
http://restx.mulesoft.org
Juergen Brendel wrote:
>
> For example, if you view the current time as a resource then GETing
> it will give you always different values, but those may well change,
> since 'current time' in itself is a changing resource. Nothing you
> (the client) does changes its value. The value changes whether you
> make the request or not.
>
That's an idempotent request -- the resource is defined as "current
time" (not "an instant in time") and that's what's returned, each and
every time its GET method is invoked -- a representation of the
current time.

If the GET is conditional, then it will return the same error code
each and every time it's invoked, once the clock increments. Until the
clock increments, the current time is returned each and every time, as
the conditional request matches.

So I believe it can be idempotent to use GET to pop a stack.

-Eric
Hello!

On Thu, 2010-08-05 at 17:02 -0400, Eb wrote:
> DELETE cannot have side effects?

Who says that? I surely did not...

> The point with idempotency (to me) is that the side effect is the
> same and in this case, it would/could be in that the topmost element
> is removed from the queue. If the DELETE is done 1 or 20 times, the
> topmost element is popped.

I think the point with idempotency is that you can, if necessary,
retransmit the request without any ill effect. In our case, issuing
DELETE /foo/top multiple times (because maybe the connection was
interrupted on the first try and you never received a response code)
will not have the desired effect at all.

> (I think another mistake is that we interpret the verbs to mean how
> the server MUST behave whereas the verbs are for the client to
> communicate their expectation. The server can ultimately do what it
> wants, only that the client is not responsible.)

Of course the server can do what it wants. If it does offer this
option and the client chooses to use it, then the consequences may be
very much not what either side expected.

I think the better idea is to find out the ID of the top element
(using HEAD or GET on /foo/top) and then doing a DELETE only on
/foo/<id>. It would be safest if the server wouldn't allow DELETE on
/foo/top.

Juergen

--
Juergen Brendel
Architect, MuleSoft Inc.
http://mulesoft.com
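Juergen's "look up the ID, then DELETE the specific item" pattern can
be sketched in a few lines. DELETE /foo/<id> is genuinely idempotent:
a blind retry can never remove a *different* element, it can only tell
you the first DELETE already landed. (The data structures and names
below are illustrative, not any real API.)

```python
# Toy server state: queue order plus the items themselves.
ids = ["42", "43"]                    # front of the queue first
items = {"42": "first job", "43": "second job"}

def get_top_id():
    """GET /foo/top -- reports which /foo/<id> is currently on top."""
    return ids[0] if ids else None

def delete_item(item_id):
    """DELETE /foo/<id> -- removes that exact item; retries are no-ops."""
    if item_id in items:
        del items[item_id]
        ids.remove(item_id)
        return 204
    return 404  # already gone: a retry's 404 means the first DELETE landed
```

Compare this with DELETE /foo/top, where a retried request after a lost
response would silently remove the *next*, still-unprocessed element.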
> I think the point with idempotency is that you can if necessary
> retransmit the request without any ill-effect. In our case, issuing
> DELETE /foo/top multiple times (because maybe the connection was
> interrupted on the first try and you never received a response code)
> will not have the desired effect at all.
>
Why will it not? What am I missing?
Juergen Brendel wrote:
>
> I know that's how they work, but, who says that page counters are
> RESTful? I mean, incrementing on GET is the easiest way for them to
> get their job done, but a particularly RESTful architecture it is
> not, is it?
>
Sure it is. Hit-counters can, of course, be implemented in many
different ways. One way would be to make it a cached-separately AJAX
call, such that the steady-state changes on each hit, without each hit
causing cache expiration. There is nothing conceptually unRESTful
about it.

-Eric
Hello!

On Thu, 2010-08-05 at 17:13 -0400, Eb wrote:
> > I think the point with idempotency is that you can if necessary
> > retransmit the request without any ill-effect. In our case,
> > issuing DELETE /foo/top multiple times (because maybe the
> > connection was interrupted on the first try and you never received
> > a response code) will not have the desired effect at all.
>
> Why will it not? What am I missing?

Maybe nothing! I could be the one missing something. :-)

Assume this: I'm a client, I know what the top item is in the queue
and now want to delete it, because I'm done processing it, for
example. I issue DELETE /foo/top. Obviously, I need to either use
Etags or I need to know that I'm the only client to even consider
doing it this way.

Let's assume I know I'm the only client, so I don't use Etags. I send
the DELETE request, but my network connection drops, and I don't get
the response code to see if the request was processed by the server.

So, now I reissue the DELETE request. However, I don't know whether
the first request was really dropped, so I don't know if this is going
to delete the element I was working with or if it's going to delete
the next element.

I know that purely technically speaking, DELETE /foo/top means just
that: delete the top element. But practically, if I use /foo/top to
DELETE then I can't just retry without wondering whether I really
ended up just retrying or whether I'm deleting an as-of-yet
unprocessed element.

Juergen

--
Juergen Brendel
http://restx.mulesoft.org
At Thu, 5 Aug 2010 15:08:50 -0600, Eric J. Bowman wrote:
> That's an idempotent request -- the resource is defined as "current
> time" (not "an instance in time") and that's what's returned, each
> and every time its GET method is invoked -- a representation of the
> current time.
>
> If the GET is conditional, then it will return the same error code
> each and every time it's invoked, once the clock increments. Until
> the clock increments, the current time is returned each and every
> time, as the conditional request matches.
>
> So I believe it can be idempotent to use GET to pop a stack.

Idempotence, in the HTTP sense, has to do with side-effects, not what
the response to a request is.

| Methods can also have the property of "idempotence" in that (aside
| from error or expiration issues) the side-effects of N > 0 identical
| requests is the same as for a single request.

best, Erik Hetzner
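The quoted definition is easy to misread: idempotence constrains the
server-side *effect* of N identical requests, not the responses, which
may differ. A minimal illustration (all names are made up for the
example):

```python
# Toy server state.
resources = {"/doc": "hello"}

def put(uri, body):
    """PUT: N identical requests leave the same state as one."""
    resources[uri] = body
    return 200

def delete(uri):
    """DELETE: the retry's response differs (404), the effect does not."""
    existed = resources.pop(uri, None) is not None
    return 204 if existed else 404

put("/doc", "bye"); put("/doc", "bye")      # state: {"/doc": "bye"}
first, second = delete("/doc"), delete("/doc")
# first is 204, second is 404, yet the net effect equals one DELETE.
```

This is why "same response every time" (the /currentTime confusion) is
the wrong test for idempotence, and also distinct from "safe", which
forbids the side effect altogether.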
>>>>> "Eb" == Eb <amaeze@...> writes:
Eb> DELETE cannot have side effects? The point with idempotency
Eb> (to me) is that the side effect is the same and in this case,
Eb> it would/could be in that the topmost element is removed from
Eb> the queue. If the DELETE is done 1 or 20 times, the topmost
Eb> element is popped.
DELETE is meant to delete a URL.
Your use case is a bit weird, I would say: DELETE /stack/top doesn't
make much sense if you don't really delete top. Looks more like POST
to me.
--
Cheers,
Berend de Boer
Hello!
This is an interesting discussion, and I have the feeling I'm going to
learn a lot from it. I appreciate you taking the time to respond.
On Thu, 2010-08-05 at 15:08 -0600, Eric J. Bowman wrote:
> So I believe it can be idempotent to use GET to pop a stack.
>
> -Eric
Just reminding myself to distinguish between 'safe' and 'idempotent'.
Maybe in this context it is more important to talk about safety?
While the GET doesn't change the state of the individual item resource,
it does seem to change the state of the queue resource, though, if you
use it to pop an element.
Looking at texts like these: http://www.packetizer.com/ws/rest.html
I find this:
The word "safe" means that if a given HTTP method is invoked,
the resource state on the server remains unchanged. ... In
theory, GET is always safe. No matter how many times you
download this web page, the contents of it will not change due
to your repeated downloads, since you cannot change the web page
in that way. That sounds obvious, but if you build a RESTful web
service that uses GET in such a way as to modify any state
contained within a resource, then you have violated the rules.
If this carries any weight then I would still say that a GET
for /foo/top, which actually pops the item, is changing resource state
on
the server and thus is not safe.
What am I missing?
--
Juergen Brendel
http://restx.mulesoft.org
Juergen Brendel wrote:
>
> > What's important to remember here, is the importance of the
> > initial GET or HEAD request. "REST" APIs which "just know" how to
> > pop a stack are not hypertext driven. Whereas, say, Xforms allows
> > you to define a button, let's label it 'pop'. When the user
> > selects 'pop' as the state transition, hypertext informs the user
> > agent to fetch an Etag with HEAD, then uses that Etag to make a
> > conditional request.
>
> Out of curiosity (and since I'm not familiar with Xforms): How does
> the hypertext inform the user agent to fetch an Etag with HEAD?
>
By specifying the HEAD method of a target URI, and using some
Javascript (a blackbox incurring a visibility penalty) to write that
Etag into a <header> element of the next submission, then calling that
submission. IOW, by applying the optional Code on Demand constraint.

> Well, but hold on. How can you say that GET is idempotent if it has
> the effect of removing something from the queue?
>
Side effects have nothing to do with the idempotency of the request
method.

> I understand that issuing it twice won't have an effect. However,
> one nice way of describing the need for GET to be idempotent was to
> say: "If a search spider hits your API, nothing bad will happen". I
> always like that way of describing it. Aren't you opening yourself
> up for the consequences of an accidental GET?
>
Not if you're prepared for it, no. A search spider may be able to make
a conditional GET to some resource where I've deliberately defined
side effects, sure. Which is why I'd respond 403 Forbidden to any
client that hasn't authenticated. Forcing users to log in to complete
such operations tends to inform them that consequences exist,
regardless of method used, which the user never sees anyway.

-Eric
Hello!

On Thu, 2010-08-05 at 14:25 -0700, Erik Hetzner wrote:
> Idempotence, in the HTTP sense, has to do with side-effects, not
> what the response to a request is.
>
> | Methods can also have the property of "idempotence" in that (aside
> | from error or expiration issues) the side-effects of N > 0
> | identical requests is the same as for a single request.
>
> best, Erik Hetzner

Right, I get that. I think.

But idempotent is different than 'safe' (no resource state change at
all). And it was my understanding that GET is always supposed to be
'safe'. No?

--
Juergen Brendel
http://restx.mulesoft.org
On Thu, Aug 5, 2010 at 5:23 PM, Juergen Brendel
<juergen.brendel@...> wrote:
>
> Maybe nothing! I could be the one missing something. :-)
>
> Assume this: I'm a client, I know what the top item is in the queue
> and now want to delete it, because I'm done processing it, for
> example. I issue DELETE /foo/top. Obviously, I need to either use
> Etags or I need to know that I'm the only client to even consider
> doing it this way.
>
> Let's assume I know I'm the only client, so I don't use Etags. I
> send the DELETE request, but my network connection drops, I don't
> get the response code to see if the request was processed by the
> server.
>
> So, now I reissue the DELETE request. However, I don't know whether
> the first request was really dropped, so I don't know if this is
> going to delete the element I was working with or if it's going to
> delete the next element.
>
> I know that purely technically speaking, DELETE /foo/top means just
> that: delete the top element. But practically, if I use /foo/top to
> DELETE then I can't just retry without wondering whether I really
> ended up just retrying or whether I'm deleting an as-of-yet
> unprocessed element.
>
> Juergen
>
Well, if you know the top item in the queue then why is "popping" that
important? If you do have this knowledge beforehand, then just DELETE
/foo/[item].

I was working from the premise that you just want to pop off the top
element (and you have no idea what it is), and in that case DELETE
/foo/top does exactly what it needs to do, as you observe above. Using
conditional GET/DELETE also works. But I would suggest there are
multiple ways to model 'pop' based on what it is you are really
shooting for.

Eb
Juergen Brendel wrote:
>
> So, now I reissue the DELETE request. However, I don't know whether
> the first request was really dropped, so I don't know if this is
> going to delete the element I was working with or if it's going to
> delete the next element.
>
Sure you do. That's what Etag is for. Nothing prohibits conditional
DELETE requests, i.e. use If-Match. If the response was dropped the
first time, n subsequent requests will fail, indicating the DELETE
succeeded.

-Eric
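The retry-safe conditional DELETE Eric describes can be sketched like
this (illustrative names only): the client captures the Etag once; if
the response to the DELETE is lost, re-sending the same If-Match yields
412, which tells the client the earlier DELETE already succeeded,
rather than silently deleting the next element.

```python
import hashlib

# Front of the list plays /foo/top.
queue = ["top-item", "next-item"]

def etag():
    """Strong ETag for the current top element."""
    return '"%s"' % hashlib.sha1(queue[0].encode()).hexdigest() if queue else None

def conditional_delete(if_match):
    """DELETE /foo/top with If-Match."""
    if queue and if_match == etag():
        queue.pop(0)
        return 204
    return 412  # stale tag: the element we meant to delete is already gone
```

A blind retry with the same tag can never over-delete; a 412 on retry
is the confirmation the first request got through.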
On Thu, Aug 5, 2010 at 5:25 PM, Berend de Boer <berend@...> wrote:
> >>>>> "Eb" == Eb <amaeze@...> writes:
>
> Eb> DELETE cannot have side effects? The point with idempotency
> Eb> (to me) is that the side effect is the same and in this case,
> Eb> it would/could be in that the topmost element is removed from
> Eb> the queue. If the DELETE is done 1 or 20 times, the topmost
> Eb> element is popped.
>
> DELETE is meant to delete a URL.
>
> It's a bit weird to have your use case I would say: DELETE
> /stack/top doesn't make much sense if you don't really delete top.
> Looks more like POST to me.
>
> --
> Cheers,
>
> Berend de Boer
>
Really? DELETE deletes a URL? I beg to differ. :)
Hello!

On Thu, 2010-08-05 at 15:31 -0600, Eric J. Bowman wrote:
> > Out of curiosity (and since I'm not familiar with Xforms): How
> > does the hypertext inform the user agent to fetch an Etag with
> > HEAD?
> >
> By specifying the HEAD method of a target URI, and using some
> Javascript (a blackbox incurring a visibility penalty) to write that
> Etag into a <header> element of the next submission, then calling
> that submission. IOW, by applying the optional Code on Demand
> constraint.

Uh. That sounds terribly complicated. I guess I'm not a big fan of
COD, no matter how much it is 'allowed'.

> Side effects have nothing to do with the idempotency of the request
> method.

Yes, I'm realizing that (now). :-) 'safe' != 'idempotent'

> > I understand that issuing it twice won't have an effect. However,
> > one nice way of describing the need for GET to be idempotent was
> > to say: "If a search spider hits your API, nothing bad will
> > happen". I always like that way of describing it. Aren't you
> > opening yourself up for the consequences of an accidental GET?
> >
> Not if you're prepared for it, no. A search spider may be able to
> make a conditional GET to some resource where I've deliberately
> defined side effects, sure. Which is why I'd respond 403 Forbidden
> to any client that hasn't authenticated. Forcing users to log in to
> complete such operations tends to inform them that consequences
> exist, regardless of method used, which the user never sees anyway.

I guess if that works in your scenario that's great. But I don't know
if you can make 'logging in' and 'tends to inform them' part of a more
generally applicable solution.

--
Juergen Brendel
http://restx.mulesoft.org
Erik Hetzner wrote:
>
> Idempotence, in the HTTP sense, has to do with side-effects, not
> what the response to a request is.
>
> | Methods can also have the property of "idempotence" in that (aside
> | from error or expiration issues) the side-effects of N > 0
> | identical requests is the same as for a single request.
>
I stand corrected, use-of-vocabulary-wise. The side-effects of N > 0
identical If-Match GET requests will always be the same -- "If-Match
then remove," "If-None-Match respond fail." If this appears
non-idempotent, perhaps you're not reading "aside from error or
expiration issues" right. Or I'm not.

-Eric
At Thu, 5 Aug 2010 15:41:14 -0600, Eric J. Bowman wrote:
> I stand corrected, use-of-vocabulary-wise. The side-effects of N > 0
> identical If-Match GET requests will always be the same -- "If-Match
> then remove," "If-None-Match respond fail." If this appears
> non-idempotent, perhaps you're not reading "aside from error or
> expiration issues" right. Or I'm not.

I think you’re right. But I’ve seen a lot of confusion about this use
of idempotence in the past, so I wanted to intervene.

best, Erik
Hello!
On Thu, 2010-08-05 at 15:35 -0600, Eric J. Bowman wrote:
> Juergen Brendel wrote:
> >
> > So, now I reissue the DELETE request. However, I don't know whether
> > the first request was really dropped, so I don't know if this is
> > going to delete the element I was working with or if it's going to
> > delete the next element.
> >
>
> Sure you do. That's what Etag is for. Nothing prohibits conditional
> DELETE requests, i.e. use If-Match. If the response was dropped the
> first time, n subsequent requests will fail, indicating the DELETE
> succeeded.
Ok. Makes sense. You've broken through my wall of density. :-)
Back to the Etag, though, and how to get it in the first place.
I found this here: http://www.xml.com/pub/a/2004/12/01/restful-web.html
Where Joe Gregorio writes:
Make sure your GETs are side-effect free. This is a biggie, the
one where many services get it wrong. GETs must be both safe and
idempotent.
Even with an If-match and all the Etags in the world, don't you go
against this if you actually pop an element from the queue with a GET?
I mean, I get what you are doing and that technically your solution
works. But why would others place such a warning label on GET: "Has to
be safe and idempotent and side-effect free."?
--
Juergen Brendel
http://restx.mulesoft.org
At Fri, 06 Aug 2010 09:32:31 +1200, Juergen Brendel wrote:
> Right, I get that. I think.
>
> But idempotent is different than 'safe' (no resource state change at
> all). And it was my understanding that GET always is supposed to be
> 'safe'. No?

Safe implies idempotent. And yes, GET is supposed to always be
side-effect free. (But you didn’t need me to tell you that, did
you. :)

best, Erik
Juergen Brendel wrote:
>
> I find this:
>
> The word "safe" means that if a given HTTP method is invoked,
> the resource state on the server remains unchanged. ... In
> theory, GET is always safe. No matter how many times you
> download this web page, the contents of it will not change due
> to your repeated downloads, since you cannot change the web
> page in that way. That sounds obvious, but if you build a RESTful web
> service that uses GET in such a way as to modify any state
> contained within a resource, then you have violated the rules.
>
> If this carries any weight...
>
It doesn't. A resource is a temporally-varying membership function,
suggesting that it doesn't change across repeated downloads is just not
correct. What matters is the semantics of the resource mapping. A
page about dogs which contains a hit counter, is always a page about
dogs, no matter if the hit counter changes over time, or what causes it
to change.
If the user intent is to change the information about dogs which
appears in the page, then GET is wrong, because the user intent is to
replace the existing page with another, and replacement semantics map
to PUT. If the resource is a collection of pages about dogs of
different breeds, and the user intent is to create a new subordinate
resource, then the creation semantics should map to PUT.
If you GET /foo/most-recent, and that causes /foo/{id} to disappear,
have you changed the meaning of /foo/most-recent? No, the semantics of
the mapping stay the same -- 'most-recent datum.' The user didn't
request a change in the state of /foo/most-recent, the state of that
resource just happened to change.
What would be totally wrong, would be to remove /foo/{id} by making a
GET request to /foo/{id}. As it is, /foo/{id} can be removed by the
server at any time for any reason, as a response to a user action or
not, and this may (if it's most-recent) change the state of /foo/most-
recent. That would just be "how your system works", not using GET to
DELETE.
-Eric
Juergen Brendel wrote:
>
> Even with an If-match and all the Etags in the world, don't you go
> against this if you actually pop an element from the queue with a
> GET?
>
If I assign deletion semantics to GET, yes, that's a REST violation.
But I'm not, I'm making a GET request to /foo/most-recent, which has
no effect on the semantics of /foo/most-recent's mapping --
/foo/most-recent doesn't go anywhere. Sure, its representation
changes, but its representation has the same semantics -- most-recent
datum.

> I mean, I get what you are doing and that technically your solution
> works. But why would others place such a warning label on GET: "Has
> to be safe and idempotent and side-effect free."?
>
Like I said earlier, if you've come to this, it likely means that
you've erred in modeling your resources (painted yourself into a
corner). Just because something's technically correct doesn't make it
the best solution, the right solution, or even a good solution. If you
really, really need to model this as popping a stack, then you'll get
an awkward solution, whereas re-architecting can avoid the issue
entirely.

In general, it's more robust and maintainable to take direct action on
the resources you're trying to change, rather than coding side
effects, but there's nothing wrong with GET having side effects. So
not an error, more of an early-warning sign...

-Eric
Erik Hetzner wrote: > > I think you’re right. But I’ve seen a lot of confusion about this use > of idempotence in the past, so I wanted to intervene. > Probably because none of us manly men around here enjoy pondering a word that rhymes with "impotent"... -Eric
> > If the resource is a collection of pages about dogs of > different breeds, and the user intent is to create a new subordinate > resource, then the creation semantics should map to PUT. > Erm, I meant POST. -Eric
Hello!
On Thu, 2010-08-05 at 15:52 -0600, Eric J. Bowman wrote:
> Juergen Brendel wrote:
> >
> > I find this:
> >
> > The word "safe" means that if a given HTTP method is invoked,
> > the resource state on the server remains unchanged. ... In
> > theory, GET is always safe. No matter how many times you
> > download this web page, the contents of it will not change due
> > to your repeated downloads, since you cannot change the web
> > page in that way. That sounds obvious, but if you build a RESTful web
> > service that uses GET in such a way as to modify any state
> > contained within a resource, then you have violated the rules.
> >
> > If this carries any weight...
> >
>
> It doesn't. A resource is a temporally-varying membership function,
> suggesting that it doesn't change across repeated downloads is just not
> correct.
But that's not what's said in the quote. A current-time resource changes
constantly, for example, and that's fine, of course. What it says is
that the act of downloading it shouldn't change state anywhere.
> If you GET /foo/most-recent, and that causes /foo/{id} to disappear,
> have you changed the meaning of /foo/most-recent? No, the semantics of
> the mapping stay the same -- 'most-recent datum.' The user didn't
> request a change in the state of /foo/most-recent, the state of that
> resource just happened to change.
But /foo itself is also a resource. And a GET to /foo/most-recent
changes the state of /foo (number of elements that it contains, etc.).
So, in that respect, GET doesn't appear to be safe.
--
Juergen Brendel
http://restx.mulesoft.org
On Thu, Aug 5, 2010 at 6:08 PM, Juergen Brendel <
juergen.brendel@...> wrote:
>
> Hello!
>
> On Thu, 2010-08-05 at 15:52 -0600, Eric J. Bowman wrote:
> > Juergen Brendel wrote:
> > >
> > > I find this:
> > >
> > > The word "safe" means that if a given HTTP method is invoked,
> > > the resource state on the server remains unchanged. ... In
> > > theory, GET is always safe. No matter how many times you
> > > download this web page, the contents of it will not change due
> > > to your repeated downloads, since you cannot change the web
> > > page in that way. That sounds obvious, but if you build a RESTful web
> > > service that uses GET in such a way as to modify any state
> > > contained within a resource, then you have violated the rules.
> > >
> > > If this carries any weight...
> > >
> >
> > It doesn't. A resource is a temporally-varying membership function,
> > suggesting that it doesn't change across repeated downloads is just not
> > correct.
>
> But that's not what's said in the quote. A current-time resource changes
> constantly, for example, and that's fine, of course. What it says is
> that the act of downloading it shouldn't change state anywhere.
>
>
> > If you GET /foo/most-recent, and that causes /foo/{id} to disappear,
> > have you changed the meaning of /foo/most-recent? No, the semantics of
> > the mapping stay the same -- 'most-recent datum.' The user didn't
> > request a change in the state of /foo/most-recent, the state of that
> > resource just happened to change.
>
> But /foo itself is also a resource. And a GET to /foo/most-recent
> changes the state of /foo (number of elements that it contains, etc.).
> So, in that respect, GET doesn't appear to be safe.
>
>
>
> --
> Juergen Brendel
> http://restx.mulesoft.org
>
>
>
So how do we explain this quote:
> Naturally, it is not possible to ensure that the server does not generate
> side-effects as a result of performing a GET request; in fact, some dynamic
> resources consider that a feature. The important distinction here is that
> the user did not request the side-effects, so therefore cannot be held
> accountable for them. [1]
>
What qualifies as a "dynamic" resource? Could a queue fall in that
category? To me, the key point as mentioned above is accountability.
1. http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html
Actually, you can. They teach two-year-olds to do it all the time. Don't call people out. Or at least do it offline. +1 to taming the call outs. It's getting old. ~ Ryan From: rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com] On Behalf Of António Mota Sent: Thursday, August 05, 2010 11:38 AM To: mike amundsen Cc: rest-discuss@...m; Jan Algermissen; Eb; Gregory Berezowsky; Eric J. Bowman Subject: Re: [rest-discuss] REST pop quiz I can't stop what I didn't start. But I do suggest to Eric that he ignore me instead of publicizing it... On 5 Aug 2010 19:31, "mike amundsen" <mamund@...> wrote: <snip> Can you guys stop? Please? I beg you! </snip> +1 mca http://amundsen.com/blog/ http://mamund.com/foaf.rdf#me 2010/8/5 Eb <amaeze@...> > > > > Can you guys stop? Please? I beg you! > > 2010/8/5 António Mota <amsmota@...> > >> ...
> > +1 to taming the call outs. It's getting old. > Can we put this in perspective, please? Whatever happened to the folks here I learned REST from? I can name those left using only my fingers to count, and most of those are like Roy, Julian, Mark, Jon, Subbu, Bill etc. who are ghosts of their former selves here, with only Jan and mca still posting with any regularity. That list is dwarfed by those who are just gone -- where's Aristotle, or Nic for example? Trying to help folks with REST takes a toll, and I won't hesitate to chalk that up to ungrateful, unruly, argumentative and flamey replies. Yeah, I'm as guilty as the next guy at times. Why? Maybe because I'm the only person here who's taken the time to distill out and post working examples of my code instead of always relying on theoretical request/response patterns, such that they're available for public scrutiny. Today, I posted two slightly-different versions of the same service and got bitched at for not posting only one solution. WTF?!? Just what exactly is it you people expect? If you want in-depth answers to every single question you constantly ask, then you can pay me my $100/hr consulting fee. As it is, this list is a damn bargain. I could just as easily keep my work to myself, instead of opening myself up to attacks from all sides that I don't know what I'm talking about, in public for all to see. In fact, it would be way the hell *easier* to NOT post my work here, or take the time to participate in discussions. Talk about looking a gift horse in the mouth. It is not enough for some people to just disagree with my answers, they take it further and bitch and moan about how inadequate those answers are to their needs. That is what I have such a strong negative reaction to. That's what I'm so tired of. That is what will lead me to follow everyone else here who's simply had enough of that ridiculous attitude towards _free_ help _willingly_ given, and just NOT any more. 
You don't have to kiss my ass, but if beyond disagreeing, you just don't like the answers I give for whatever reason, I consider myself well within my rights to tell you to please STFU. Because I've had enough of that crap, and not just when it's directed my way. What started as a fun thread yesterday, turned into a disaster. If that's the price I must pay to participate, it is too steep. Since I'm obviously at that breaking point, it would probably be in my best interests to just not bother trying to teach REST. Mark, I feel your pain. -Eric
Eric J. Bowman <eric@...> wrote: > WTF?!? Just what exactly is it you people expect? Hang on. Who's "you people"? I think the people involved in the "flame" (one of the lamest ones in known Internet history) is a *very* small crowd that does not include "you people" on this list. Perspective, please. :) > It is not enough for some people to just disagree > with my answers, they take it further and bitch and moan about how > inadequate those answers are to their needs. I don't actually know what happened, I didn't pay attention. I think we had a pretty good discussion about the usual semantics in REST and what to emphasise and whatnot (like forgetting that REST without HATEOAS isn't LOVEYEAH!), but looking through the emails I *suspect* some language barriers and words going into semantic territory that were new to them, and they got a bit lost. As to regulars, well, I've been on this list for quite some time and am just as guilty as the next person in both flames, passionate rhetoric and blatant ignorance of some black art. However, I have been more active in periods, and I suspect that applies to all of us; we come in and out of periods of being busy or having had enough. Sometimes you will interact with people who you think will have your babies, and other times you're interacting with people who'd make poor parents. These things come and go in leaps and bounds. The rough guide is: 1. Don't take it personally 2. Don't be so cocksure 3. Patience will put out fires I think the people involved failed a bit on all of these, in no particular order or targets. > That is what I have such a strong negative reaction to. That's what > I'm so tired of. That is what will lead me to follow everyone else > here who's simply had enough of that ridiculous attitude towards _free_ > help _willingly_ given, and just NOT any more. 
You don't have to kiss > my ass, but if beyond disagreeing, you just don't like the answers I > give for whatever reason, I consider myself well within my rights to > tell you to please STFU. I actually agree with you, although I believe STFU should read PDBAD. But some responsibility you have to take yourself for engaging in that very same open process. If you state the goodness of A, someone will call out that B is a better answer, and STFU. It's a human thing to expect that your free time and patience will be appreciated; however, other people's opinions render that process null and void, justified or not. In other words, the ability to ignore the tiniest hint of crap thrown your way (or better yet, to address it back in the most polite and humble way possible) can be a healthy undertaking for the value of the list. I have to admit to learning this the hard way (and Eric might know of the NGC4LIB mailing-list on which I've pissed off many people over the years). Don't give up, just change strategy. It's easier for you to change strategy than it is to teach the Internet how to behave. > Because I've had enough of that crap, and not just when it's directed > my way. What started as a fun thread yesterday, turned into a > disaster. That's not true; a lot of that stuff is really helpful for a lot of people. For every flame between 5 people there's 500 watching, and you simply don't know what they take away from it. This is all about very subtle parts of REST (to a degree), and not a lot of people are willing to engage and pipe up as this is, well, flame-territory. As with all things, this, too, shall pass. Regards, Alex -- Project Wrangler, SOA, Information Alchemist, UX, RESTafarian, Topic Maps --- http://shelter.nu/blog/ ---------------------------------------------- ------------------ http://www.google.com/profiles/alexander.johannesen ---
Alexander Johannesen wrote: > > Hang on. Who's "you people"? I think the people involved in the > "flame" (one of the lamest ones in know Internet history) is a *very* > small crowd that do not include "you people" on this list. > Perspective, please. :) > By "you people" I mean anyone who bitches and moans about having two working examples of a service to dereference, when that attitude leads to the alternative of zero. Nobody posted working examples of code to answer my questions when I was first learning REST. I can only imagine how many less years it would have taken me to learn, if they had. Then again, we didn't have the tools a few years ago that we have today. But that's what I mean by "looking a gift horse in the mouth" which is a phrase anyone can google. > > It is not enough for some people to just disagree > > with my answers, they take it further and bitch and moan about how > > inadequate those answers are to their needs. > > I don't actually know what happened, I didn't pay attention. > Goes back aways. But then I start lashing out at others for disagreeing with me, which is not like me although I am a tenacious debater. That's when it's time to quit -- when my love/hate relationship with teaching REST tilts more towards the latter. Ah, to be young and teaching swimming lessons in the summer again. There was no kid I couldn't teach to swim, once they were at least five. > > As to regulars, well I've been on this list for quite some time > Yes, that's the danger of listing names and forgetting to say "and anyone else who's been here a while" which includes you. Learning REST is a group effort, but there's a point of diminishing returns when it comes time to pass it along by trying to teach it. 
The greater point I was making was the folks like Nic or Aristotle who reach a certain level of proficiency and disappear, which happens regularly and without a comment of the sort Mark Baker left, who did considerably more to earn the right to such a gripe than I ever have. -Eric
I was not expecting to get back to this, but it seems to me some people are seeing this for what it's not. First, let me say that I don't think either me or Eric crossed any line here. He just said he'll ignore my questions and I said ok, just do it, no need to publicize that. He could very easily have said that in a mail to me instead of to the list, since it's such a personal statement. Now let me say, to make it clear, that I do have respect for Eric, and I said before that I think it is stimulating to debate with him. I do think that he thinks too much of himself and has a big ego, but that's not a problem; everybody, including me, has his or her own particular personality with good things or bad things. But the fact that I respect him doesn't mean I think he is some kind of authoritative source whose opinions have to be taken "prima facie". I think his opinions, like probably everybody else's, sometimes are right, sometimes are arguable, and sometimes are wrong. Now, I'm guilty of not jumping into discussions when I think he is right, just to say that. But I do it when I think (let me say again, the *I think* part) he is wrong, because either he is indeed wrong, and that prevents other people from being misguided by his opinions (as happened to me, although I gladly admit the contrary is more frequent), or it is I who am wrong, and it is a good opportunity for me to stand corrected. That is the way I see learning in lists of professional people that see themselves as peers. The simple fact of someone subscribing to this list is a signal of some knowledge in IT and intention to learn; no one here was forced to subscribe or forced to learn REST. This is what is called dialectic; it is how I've learned things during my professional life, after I had a "formal" teacher-student course that I took *before* becoming a professional. 
I think Eric has a different approach; he seems to follow that more "formal" teacher-student approach, and that is probably why we tend to clash in our discussions. Because I have nothing to learn that way, but probably much to learn otherwise. If I ask a question I kind of expect a response, but Eric thinks I'm just asking for the sake of asking, or to annoy him. It's not the case. My fault here? In this thread, I said that *in my opinion* (as I carefully said several times) Eric's approach was wrong. I quoted Roy to that purpose. I then explained my POV in my own words. I then presented a different approach, and specifically asked why he thought his was simpler than the one I presented. How can this be considered bitching? What the heck am I supposed to do? At least Jan agreed with my POV, and I don't see Eric bashing him because of that. And the fact that Jan agreed with me means I'm not completely out of order with my thinking, and Jan is even one of the persons whose opinions I most admire on this list. I have no problem with Eric ignoring me, no problem at all; I just think it is intellectually wrong if Eric answers some questions, picked by him, and ignores others where I'm trying to make my point. I wish he would take the "all-or-nothing" approach: answer all or answer none. Nevertheless, I wasn't even considering this a "flame war", so I hope it won't turn into one. On 6 August 2010 08:13, Eric J. Bowman <eric@...> wrote: > Alexander Johannesen wrote: >> >> Hang on. Who's "you people"? I think the people involved in the >> "flame" (one of the lamest ones in know Internet history) is a *very* >> small crowd that do not include "you people" on this list. >> Perspective, please. :) >> > > By "you people" I mean anyone who bitches and moans about having two > working examples of a service to dereference, when that attitude leads > to the alternative of zero. Nobody posted working examples of code to > answer my questions when I was first learning REST. 
I can only imagine > how many less years it would have taken me to learn, if they had. > > Then again, we didn't have the tools a few years ago that we have today. > But that's what I mean by "looking a gift horse in the mouth" which is > a phrase anyone can google. > > >> > It is not enough for some people to just disagree >> > with my answers, they take it further and bitch and moan about how >> > inadequate those answers are to their needs. >> >> I don't actually know what happened, I didn't pay attention. >> > > Goes back aways. But then I start lashing out at others for > disagreeing with me, which is not like me although I am a tenacious > debater. That's when it's time to quit -- when my love/hate > relationship with teaching REST tilts more towards the latter. > > Ah, to be young and teaching swimming lessons in the summer again. > There was no kid I couldn't teach to swim, once they were at least five. > >> >> As to regulars, well I've been on this list for quite some time >> > > Yes, that's the danger of listing names and forgetting to say "and > anyone else who's been here a while" which includes you. Learning REST > is a group effort, but there's a point of diminishing returns when it > comes time to pass it along by trying to teach it. The greater point I > was making was the folks like Nic or Aristotle who reach a certain > level of proficiency and disappear, which happens regularly and without > a comment of the sort Mark Baker left, who did considerably more to > earn the right to such gripe than I ever have. > > -Eric >
Something I have been trying to wrap my head around:
Suppose we are dealing with the procurement domain. Also suppose we plan on dealing with lists of orders (e.g. maybe there is a system that manages orders and exposes the new ones, the processed ones, or the ones being shipped). There will be clients that do something with these order lists, such as compiling a report.
Also suppose we have defined a link semantic that allows a server to point a client to, for example, the list of new orders.
It is not important how that link semantic looks, but it could be <newOrders href="/foo/bar" /> or <link rel="new-orders" href="/foo/bar"/> or an AtomPub collection with a special category: <collection href="/foo/bar"><category term="new-orders" scheme=".."/></collection>.
I personally 'call' any of those 'link semantics' and for the purpose of my question it only matters that the user agent ends up knowing that
/foo/bar is the URI of a resource that represents the list of new orders.
An equivalent from the HTML world would be that <img src="/baz.gif"/> tells the client that
/baz.gif is a resource that is 'an image'[1]
The issue I am dealing with is this: What is the appropriate degree of specificity of the media type for lists of orders. Especially I am wondering whether it is enough for the user agent to say
Accept: application/atom+xml;type=feed
or whether the Accept header should include the user agent capabilities regarding the individual order entries, e.g.
Accept: application/orderlist
Take a step back and let's think about what is happening here. At one level, the server informs the client about the nature of a resource, and at another (lower) level the client informs the server about its technical capabilities that allow it to process responses for a request to the given resource.
I think it is important to distinguish these levels because the actual request the client makes does not express any assumptions about the nature of the resource, only about the technical capability.
The assumption (e.g. that the requested resource is 'an image') happens before that.
Browsers are implemented to follow <img src=""/> links and process the response by inlining the received images into the rendered page. Other HTML-aware clients might be implemented to produce a fine-printed book of all images found via <img src=""/> links.
The actual request will (usually) contain an Accept header of the form:
Accept: image/gif,image/jpeg,image/png,image/*
What this accept header is saying is *not*
"I expect that the requested resource is 'an image'"
but
"I can process a response to this request if you give me any of the accepted formats"
IOW: "I can do whatever I want to do if the response comes in any of these formats"
Before this gets boring, let's shift to the example of the list of new orders. Suppose I am implementing a user agent that compiles a list of all items ordered in the list of new orders.
Such a user agent would be implemented to find (or just be given or have bookmarked) the URI of the resource that represents the list of new orders (in the same sense as browsers get hold of the URI of 'an image').
How do I have to implement the user agent's construction of the GET request to /foo/bar?
Suppose we are using a media type application/order for order representations and have also decided to build upon Atom for dealing with lists of stuff in our domain. We might construct the request as:
GET /foo/bar
Accept: application/atom+xml;type=feed
and the server might send something like (excuse flaws in the XML, pls)
200 Ok
Content-Type: application/atom+xml [2]
<feed>
<entry>
<content type="application/order">
<order>....</order>
</content>
</entry>
<entry>
<content type="application/order">
<order>....</order>
</content>
</entry>
</feed>
Is that sufficient? Does the Accept header sufficiently express the user agent's processing capabilities? Can the server know that the user agent wants to receive the entries as application/order? Is it ok to just program the user agent to ignore the entries whose type it does not understand?
Would we end up with the correct list of ordered items if all entries come back as HTML and the user agent ignores them?
I think that there is a great danger of creating a nightmare of hidden coupling, because in my opinion the user agent simply can *not* fulfil its processing goal given simply 'an atom feed'. An Atom feed reader *can* do that (because it has a different goal), but a newly-ordered-items-list-compiling user agent cannot; it must express that in the Accept header.
I'd rather define a media type application/orderlist (defined as an Atom feed containing entries of application/order) and have the user agent be explicit:
GET /foo/bar
Accept: application/orderlist
200 Ok
Content-Type: application/orderlist
<feed>
<entry>
<content type="application/order">
<order>....</order>
</content>
</entry>
<entry>
<content type="application/order">
<order>....</order>
</content>
</entry>
</feed>
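Either way, the user agent ultimately has to walk the feed and pick out the order entries. A minimal sketch of that behaviour, with hypothetical payloads and namespace-free XML matching the simplified examples above (a real Atom feed would use the Atom namespace):

```python
# Sketch of Jan's compiling user agent: keep only entries whose content
# is application/order, and count what gets silently ignored -- the
# "hidden coupling" worry if everything came back as, say, HTML.
import xml.etree.ElementTree as ET

FEED = """<feed>
  <entry><content type="application/order"><order>widget x2</order></content></entry>
  <entry><content type="text/html"><p>not an order</p></content></entry>
  <entry><content type="application/order"><order>gadget x1</order></content></entry>
</feed>"""

def extract_orders(feed_xml):
    root = ET.fromstring(feed_xml)
    orders, ignored = [], 0
    for content in root.iter("content"):
        if content.get("type") == "application/order":
            orders.append(content.find("order").text)
        else:
            ignored += 1   # dropped without error -- is that acceptable?
    return orders, ignored

orders, ignored = extract_orders(FEED)
assert orders == ["widget x2", "gadget x1"]
assert ignored == 1
```

If `ignored` is non-zero the agent has silently failed its goal, which is exactly the case the more specific application/orderlist media type would turn into an up-front 406.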
What do others think?
(See also [3])
Jan
[1] 'An image' is as good as it gets in terms of definitions, BTW.
<http://www.w3.org/TR/html401/struct/objects.html#edef-IMG>
Note that the HTML spec also provides some sort of hint what media types are involved when dealing with images.
[2] conneged on the type param already, so no need to repeat it in the Content-Type header
[3] There is also the issue of returning a feed that consists of references to entries that the user agent can then GET individually as Accept: application/order. Certainly we would not want to define a list format that constrains the references to only application/order resources. The user agent would basically have to report an error if the referenced order is not available as application/order (that is, upon a 406 on a GET sub-request).
An alternative would be to have the user agent Accept: application/atom+xml;type=feed but report an error if an entry in the feed is not provided as application/order (be it inline or via a sub-request).
-----------------------------------
Jan Algermissen, Consultant
NORD Software Consulting
Mail: algermissen@...
Blog: http://www.nordsc.com/blog/
Work: http://www.nordsc.com/
-----------------------------------
On Thu, Aug 5, 2010 at 4:39 PM, Jon Hanna <jon@...> wrote: > On 2010-08-04 07:01, Jan Algermissen wrote: >>> I guess I keep thinking that the content of a PUT request must be the same as the content of a GET request to that same resource. >> >> No, it need not be. >> >> (I lack a pointer for this - can anyone supply one?) > > It's entailed by the fact of content-negotiation. If a resource can be > represented by more than one entity, then it follows logically that a > given entity representing it in a PUT operation would not necessarily be > the same as that representing it in a particular GET operation. By > extension, it may not be the same as any GET operation at all. IMO, that's not the reason. Even without conneg, you'd still have this issue. You could also imagine a distributed file system *with* conneg where "store" request entities could be used to respond to "retrieve" requests for the same media type. Really, when you think about it, the question is a bit silly; nowhere in the protocol is it specified that there's any relationship between a PUT request and a GET response, and that means there isn't one. The *reason* for this, I believe, is due to the generality of the interface; PUT means the minimum it needs to mean to be a state-setting method, and that minimum provides no expectation about what future GET requests might look like... unlike in the distributed file system example. Mark.
<snip>
> Really, when you think about it, the question is a bit silly; no where
> in the protocol is it specified that there's any relationship between
> a PUT request and a GET response, and that means there isn't one. The
> *reason* for this, I believe, is due to the generality of the
> interface; PUT means the minimum it needs to mean to be a
> state-setting method, and that minimum provides no expectation about
> what future GET requests might look like... unlike in the distributed
> file system example.
</snip>
Agreed. This is something I often see when talking to others about
HTTP. There is very often an *assumption* that PUT and GET are
symmetrical. I think this comes about because, so often, PUT is
described as having "replace" semantics ("the PUT message body
replaces the content of the target resource", etc.). It also happens
quite often when folks are using/building servers that do not do much
content negotiation at the data format level (accept: text/html,
text/plain, application/atom+xml, etc.).
My most successful way to help people avoid this mistake is to remind
them that we only pass *representations* around, not resources. And
that, just like IRL (in real life), a representation is bound to leave
out various details; a representation is transient, there are lots of
possible representations of the same "thing", etc.
Thus, "the representation of the PUT message is used to replace the
content of the target resource..." (or something more accurately
worded) is a better way to talk about it.
mca
http://amundsen.com/blog/
http://mamund.com/foaf.rdf#me
On Fri, Aug 6, 2010 at 09:21, Mark Baker <distobj@...> wrote:
> On Thu, Aug 5, 2010 at 4:39 PM, Jon Hanna <jon@hackcraft.net> wrote:
>> On 2010-08-04 07:01, Jan Algermissen wrote:
>>>> I guess I keep thinking that the content of a PUT request must be the same as the content of a GET request to that same resource.
>>>
>>> No, it need not be.
>>>
>>> (I lack a pointer for this - can anyone supply one?)
>>
>> It's entailed by the fact of content-negotiation. If a resource can be
>> represented by more than one entity, then it follows logically that a
>> given entity representing it in a PUT operation would not necessarily be
>> the same as that representing it in a particular GET operation. By
>> extension, it may not be the same as any GET operation at all.
>
> IMO, that's not the reason. Even without conneg, you'd still have
> this issue. You could also imagine a distributed file system *with*
> conneg where "store" request entities could be used to respond to
> "retrieve" requests for the same media type.
>
> Really, when you think about it, the question is a bit silly; no where
> in the protocol is it specified that there's any relationship between
> a PUT request and a GET response, and that means there isn't one. The
> *reason* for this, I believe, is due to the generality of the
> interface; PUT means the minimum it needs to mean to be a
> state-setting method, and that minimum provides no expectation about
> what future GET requests might look like... unlike in the distributed
> file system example.
>
> Mark.
>
>
> ------------------------------------
>
> Yahoo! Groups Links
>
>
>
>
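The PUT/GET asymmetry discussed above can be sketched in a few lines. This is a hypothetical resource class, not anyone's actual API: PUT accepts one representation (JSON), GET returns another (an HTML fragment), and the two share state but not syntax.

```python
# Sketch of "PUT and GET are not symmetrical": the representation sent
# in a PUT sets resource state; the representation returned by GET is
# derived from that state but need not resemble the PUT entity at all.
import json

class Resource:
    def __init__(self):
        self._state = {}

    def put(self, body):
        """Accepts application/json; replaces the resource's state."""
        self._state = json.loads(body)

    def get(self):
        """Returns text/html -- not the bytes that were PUT."""
        items = "".join(f"<li>{k}: {v}</li>" for k, v in self._state.items())
        return f"<ul>{items}</ul>"

r = Resource()
r.put('{"status": "shipped", "qty": 2}')
assert r.get() == "<ul><li>status: shipped</li><li>qty: 2</li></ul>"
```

As mca puts it, only *representations* move across the wire; nothing here obliges the server to hand the PUT entity back.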
On Thu, Aug 5, 2010 at 6:08 PM, Juergen Brendel < juergen.brendel@...> wrote: > > > It doesn't. A resource is a temporally-varying membership function, > > suggesting that it doesn't change across repeated downloads is just not > > correct. > > But that's not what's said in the quote. A current-time resource changes > constantly, for example, and that's fine, of course. What it says is > that the act of downloading it shouldn't change state anywhere. > > The most obvious side effect of a GET is that an entry is made in an access or error log. I've also worked on a financial planning application where the act of GETting a resource would update the resource's last-viewed property and add a record indicating which user accessed the data. -- David blog: http://www.traceback.org twitter: http://twitter.com/dstanek
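David's last-viewed example can be sketched like this (hypothetical names, a toy model rather than a real server): the GET handler performs bookkeeping the client never asked for, while the representation it hands back is unchanged, which is why such side effects don't make GET unsafe in the accountability sense of the RFC 2616 quote earlier in the thread.

```python
# Server-initiated side effects of a GET: access logging and a
# last-viewed timestamp. The user did not request these changes,
# so cannot be held accountable for them.
import time

class PlanResource:
    def __init__(self, body):
        self.body = body
        self.last_viewed = None
        self.access_log = []

    def get(self, user):
        self.last_viewed = time.time()   # side effect, server's doing
        self.access_log.append(user)     # audit trail, also server's doing
        return self.body                 # representation is unchanged

plan = PlanResource("<plan>...</plan>")
assert plan.get("alice") == plan.get("bob")   # same representation each time
assert plan.access_log == ["alice", "bob"]
```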
Jan Algermissen wrote:
> ...suppose we have defined a link semantic that allows a server to
> point a client to, for example, the list of new orders...it could
> be <link rel="new-orders" href="/foo/bar"/>
> ...
> The issue I am dealing with is this: What is the appropriate degree of
> specificity of the media type for lists of orders. Especially I am
> wondering whether it is enough for the user agent to say
>
> Accept: application/atom+xml;type=feed
>
> or whether the Accept header should include the user agent capabilities
> regarding the individual order entries, e.g.
>
> Accept: application/orderlist
I consider media types as syntax, not semantics. If
application/orderlist really has different syntax than
application/atom+xml, then, OK I guess. But it probably shouldn't.
Instead, the semantics of the resource are described by the @rel
attribute the user-agent discovered and followed (in combination with
data and its arrangement in the response, including further links in the
response). Your resource could just as easily return "text/html" with
the same semantics and, if the client understood HTML, could be
processed meaningfully, in which case the client would emit "Accept:
application/atom+xml, text/html".
In other words, the Accept header says, "these are the representation
formats I am prepared to parse for a resource of the 'new-orders'
relation". A media type of "application/orderlist" *can* be used in that
way, but couples too tightly, IMO. There are plenty of "list-y" media
types out there already--does a list of orders really differ
significantly in structure (not just differ in @rel's) from a list of,
say, sale items? Atom has succeeded, IMO, because it is specific in
syntax ("feed" = list of items in order, with some fixed fields) and
generic in semantics (doesn't matter what you're listing). I wrote Shoji
[1] because I wanted to hit that sweet spot for a "catalog" syntax
(entities and overlapping lists of entity URI's) that was independent of
the semantic--I wrote it for Etsy procurement, but it could equally be
used for representing scientific lab work. If the syntax fits, wear it.
Robert Brewer
fumanchu@...
[1] http://www.aminus.org/rbre/shoji/shoji-draft-01.txt
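Robert's reading of Accept can be sketched in code. The following is a
minimal, hypothetical illustration (none of these names come from the
thread): the client has already learned the *meaning* of the link from the
rel="new-orders" it followed, and dispatches the response purely on the
media type, which only tells it how to parse.

```python
from email.message import Message  # stdlib helper for media-type parameters


def parse_media_type(content_type: str) -> str:
    """Strip parameters like ';type=feed' to get the bare media type."""
    msg = Message()
    msg["Content-Type"] = content_type
    return msg.get_content_type()


# Parsers keyed by media type: generic *syntax* handlers, one per format the
# client is prepared to parse (i.e. what it would list in its Accept header).
PARSERS = {
    "application/atom+xml": lambda body: ("atom-feed", body),
    "text/html": lambda body: ("html-page", body),
}


def handle_new_orders_response(content_type: str, body: str):
    """Dispatch a response for a rel='new-orders' link by its media type.

    The semantics ("this is the list of new orders") came from the link
    relation the client followed, not from the media type.
    """
    media_type = parse_media_type(content_type)
    parser = PARSERS.get(media_type)
    if parser is None:
        raise ValueError("unparseable media type: " + media_type)
    return parser(body)


kind, _ = handle_new_orders_response("application/atom+xml;type=feed", "<feed/>")
print(kind)  # -> atom-feed
```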
Hi,

in his dissertation, Roy explicitly cites the "Timeless way of building" and
applies this approach of design in the fifth chapter. I was wondering if any
of you encountered other examples of this approach in software architecture?

Thanks a lot,
Benoit.
On Fri, Aug 6, 2010 at 10:44 AM, David Stanek <dstanek@...> wrote:
> On Thu, Aug 5, 2010 at 6:08 PM, Juergen Brendel <juergen.brendel@...> wrote:
>>
>> > It doesn't. A resource is a temporally-varying membership function,
>> > suggesting that it doesn't change across repeated downloads is just not
>> > correct.
>>
>> But that's not what's said in the quote. A current-time resource changes
>> constantly, for example, and that's fine, of course. What it says is
>> that the act of downloading it shouldn't change state anywhere.
>>
> The most obvious side effect of a GET is that an entry is made in an access
> or error log. I've also worked on a financial planning application where the
> act of GETting a resource would update the resource's last-viewed property
> and add a record indicating which user accessed the data.
>
> --
> David
> blog: http://www.traceback.org
> twitter: http://twitter.com/dstanek

and I personally don't think that's "bad" as long as the consumer understands
this and is not surprised by this.
Robert - formats (such as XML and JSON) usually describe the representation syntax, whereas media types such as application/atom+xml specify the representation semantics. The @rel of a link specifies the semantics of an association between two resources, and not the "semantics of the resource". In other words, @rel describes the semantics of a resource in a "particular" context and not in "any" context.
Subbu
On Aug 6, 2010, at 9:22 AM, Robert Brewer wrote:
> Jan Algermissen wrote:
>> ...suppose we have defined a link semantic that allows a server to
>> point a client to, for example, the list of new orders...it could
>> be <link rel="new-orders" href="/foo/bar"/>
>> ...
>> The issue I am dealing with is this: What is the appropriate degree of
>> specificity of the media type for lists of orders. Especially I am
>> wondering whether it is enough for the user agent to say
>>
>> Accept: application/atom+xml;type=feed
>>
>> or whether the Accept header should include the user agent capabilities
>> regarding the individual order entries, e.g.
>>
>> Accept: application/orderlist
>
> I consider media types as syntax, not semantics. If
> application/orderlist really has different syntax than
> application/atom+xml, then, OK I guess. But it probably shouldn't.
> Instead, the semantics of the resource are described by the @rel
> attribute the user-agent discovered and followed (in combination with
> data and its arrangement in the response, including further links in the
> response). Your resource could just as easily return "text/html" with
> the same semantics and, if the client understood HTML, could be
> processed meaningfully, in which case the client would emit "Accept:
> application/atom+xml, text/html".
>
> In other words, the Accept header says, "these are the representation
> formats I am prepared to parse for a resource of the 'new-orders'
> relation". A media type of "application/orderlist" *can* be used in that
> way, but couples too tightly, IMO. There are plenty of "list-y" media
> types out there already--does a list of orders really differ
> significantly in structure (not just differ in @rel's) from a list of,
> say, sale items? Atom has succeeded, IMO, because it is specific in
> syntax ("feed" = list of items in order, with some fixed fields) and
> generic in semantics (doesn't matter what you're listing). I wrote Shoji
> [1] because I wanted to hit that sweet spot for a "catalog" syntax
> (entities and overlapping lists of entity URI's) that was independent of
> the semantic--I wrote it for Etsy procurement, but it could equally be
> used for representing scientific lab work. If the syntax fits, wear it.
>
>
> Robert Brewer
> fumanchu@...
>
> [1] http://www.aminus.org/rbre/shoji/shoji-draft-01.txt
>
>
I don't have this all figured out by any means. :) Corrections and
fine-tuning welcome.
Subbu Allamaraju wrote:
> Robert - formats (such as XML and JSON) usually describe the
> representation syntax, whereas media types such as
> application/atom+xml specify the representation semantics.
Well, there are semantics and then there are semantics.
"application/atom+xml" certainly tells you something about the
arrangement of syntactic elements and how to parse them in the sense of
operational semantics. But I meant "semantics" in the more linguistic
sense--if XML is syntax, then Atom is at best grammar, but neither is
real interpretation. You don't end up with a chocolate cake just by
having a general notion of "recipe". My point was, in my experience,
it's useful to make a new media type for "recipe" (grammar) but not for
"chocolate cake" (idea).
> The @rel of a link specifies the semantics of an association between two
> resources, and not "semantics of the resource". In other words, @rel
> describes the semantics of a resource in a "particular" context and not
> in "any" context.
Sort of. One page's "prev" is another one's "next". But link relations
have varying degrees of how context-bound or -free they might be. If you
have a website where every response representation includes a "home"
link, those links possess an interpretation (and probably even an
operational semantic) that's not very particular to the immediate
context.
Formalizing the operational semantics of link relations (and other
elements) via media types is great. But I'm not convinced that trying to
formalize the *interpretation* of representations via new media types
(like 'application/orderlist') is appropriate, especially when existing
media types already express operational semantics so well. I sent an
email to a friend yesterday asking, "can we go camping next weekend?". I
didn't have to replace the question mark with a camping-specific symbol.
Robert Brewer
fumanchu@...
> On Aug 6, 2010, at 9:22 AM, Robert Brewer wrote:
>
> > Jan Algermissen wrote:
> >> ...suppose we have defined a link semantic that allows a server to
> >> point a client to, for example, the list of new orders...it could
> >> be <link rel="new-orders" href="/foo/bar"/>
> >> ...
> >> The issue I am dealing with is this: What is the appropriate degree of
> >> specificity of the media type for lists of orders. Especially I am
> >> wondering whether it is enough for the user agent to say
> >>
> >> Accept: application/atom+xml;type=feed
> >>
> >> or whether the Accept header should include the user agent capabilities
> >> regarding the individual order entries, e.g.
> >>
> >> Accept: application/orderlist
> >
> > I consider media types as syntax, not semantics. If
> > application/orderlist really has different syntax than
> > application/atom+xml, then, OK I guess. But it probably shouldn't.
> > Instead, the semantics of the resource are described by the @rel
> > attribute the user-agent discovered and followed (in combination with
> > data and its arrangement in the response, including further links in the
> > response). Your resource could just as easily return "text/html" with
> > the same semantics and, if the client understood HTML, could be
> > processed meaningfully, in which case the client would emit "Accept:
> > application/atom+xml, text/html".
> >
> > In other words, the Accept header says, "these are the representation
> > formats I am prepared to parse for a resource of the 'new-orders'
> > relation". A media type of "application/orderlist" *can* be used in that
> > way, but couples too tightly, IMO. There are plenty of "list-y" media
> > types out there already--does a list of orders really differ
> > significantly in structure (not just differ in @rel's) from a list of,
> > say, sale items? Atom has succeeded, IMO, because it is specific in
> > syntax ("feed" = list of items in order, with some fixed fields) and
> > generic in semantics (doesn't matter what you're listing). I wrote Shoji
> > [1] because I wanted to hit that sweet spot for a "catalog" syntax
> > (entities and overlapping lists of entity URI's) that was independent of
> > the semantic--I wrote it for Etsy procurement, but it could equally be
> > used for representing scientific lab work. If the syntax fits, wear it.
> >
> >
> > Robert Brewer
> > fumanchu@...
> >
> > [1] http://www.aminus.org/rbre/shoji/shoji-draft-01.txt
> >
> >
<snip>
> The issue I am dealing with is this: What is the appropriate degree of
> specificity of the media type for lists of orders. Especially I am
> wondering whether it is enough for the user agent to say
>
> Accept: application/atom+xml;type=feed
>
> or whether the Accept header should include the user agent capabilities
> regarding the individual order entries, e.g.
>
> Accept: application/orderlist
</snip>

If this is about crafting media types and at which level to apply them,
here's how I go about it:

DESIGNING MEDIA TYPES

First, when crafting a media type (which is rare, but it happens), I aim for
the "application level" or higher. IOW, I plan on being able to use it for
most all representation transfers within some defined application space.
Usually that application space applies to a commonly-understood boundary
such as "the order management application" or "my twitter-clone", etc.

If possible, I try to craft my media type to be useful _across_ application
boundaries. IOW, I aim for a commonly-understood "process" or "workflow"
that appears within multiple application boundaries such as "manage my
shopping cart" or "manage a photo gallery", etc.

DESIGNING REPRESENTATIONS

Second, when determining how the application controls appear within a
representation (e.g. <img />-style, <a rel="..." />-style,
<form enctype="..." />-style, etc.), I use the following rules of thumb.

If the representation of the target URI will _always_ be of the same media
type (e.g. always application/shopping+xml), then I favor the <img />-style
approach (e.g. <shopping ... />). This reduces "noise" in the representation
and makes it easy for clients to parse out the details.

If, however, the representation of the target URI can be one of multiple
formats (e.g. application/shopping+xml, application/shopping+json, etc.),
then I favor the <a />-style approach (e.g.
<shopping accept="application/atom+xml" />). This gives the server a chance
to give the client hints in the representation and also allows the client a
chance to handle data-format negotiation if that's appropriate.

I use the same general approach for "outbound" representations (e.g.
FORM-type elements). I allow the server to send an @enctype attribute in the
representation, and its presence gives the client a chance at modifying it
as needed.

REL TAGS

Finally, I do not use the @rel attribute to act as a stand-in for media
types or semantics. IOW, in my implementations, rel="shopping" does not tell
the client anything about the media type in use. This allows both servers
and clients to keep the @rel semantics (what should I expect) separate from
the data formats (the representation format I can handle for that @rel).

mca
http://amundsen.com/blog/
http://mamund.com/foaf.rdf#me

On Fri, Aug 6, 2010 at 07:05, Jan Algermissen <algermissen1971@...> wrote:
> Something I have been trying to wrap my head around:
>
> Suppose we are dealing with the procurement domain. Also suppose we plan
> on dealing with lists of orders (e.g. maybe there is a system that manages
> orders and exposes the new ones, the processed ones, or the ones being
> shipped). There will be clients that do something with these order lists,
> such as compiling a report.
>
> Also suppose we have defined a link semantic that allows a server to point
> a client to, for example, the list of new orders.
>
> It is not important how that link semantic looks, but it could be
> <newOrders href="/foo/bar" /> or <link rel="new-orders" href="/foo/bar"/>
> or an AtomPub collection with a special category:
> <collection href="/foo/bar"><category term="new-orders" scheme=".."/></collection>.
>
> I personally 'call' any of those 'link semantics', and for the purpose of
> my question it only matters that the user agent ends up knowing that
>
>   /foo/bar is the URI of a resource that represents the list of new orders.
>
> An equivalent from the HTML world would be that <img src="/baz.gif"/>
> tells the client that
>
>   /baz.gif is a resource that is 'an image' [1]
>
> The issue I am dealing with is this: What is the appropriate degree of
> specificity of the media type for lists of orders. Especially I am
> wondering whether it is enough for the user agent to say
>
>   Accept: application/atom+xml;type=feed
>
> or whether the Accept header should include the user agent capabilities
> regarding the individual order entries, e.g.
>
>   Accept: application/orderlist
>
> Take a step back and let's think about what is happening here. At one
> level, the server informs the client about the nature of a resource, and
> at another (lower) level the client informs the server about its technical
> capabilities that allow it to process responses for a request to the given
> resource.
>
> I think it is important to distinguish these levels because the actual
> request the client makes does not express any assumptions about the nature
> of the resource, only about the technical capability.
>
> The assumption (e.g. that the requested resource is 'an image') happens
> before that.
>
> Browsers are implemented to follow <img src=""/> links and process the
> response by inlining the received images into the rendered page. Other
> HTML-aware clients might be implemented to produce a fine-printed book of
> all images found via <img src=""/> links.
>
> The actual request will (usually) contain an Accept header of the form:
>
>   Accept: image/gif,image/jpeg,image/png,image/*
>
> What this Accept header is saying is *not*
>
>   "I expect that the requested resource is 'an image'"
>
> but
>
>   "I can process a response to this request if you give me any of the
>   accepted formats"
>
> IOW: "I can do whatever I want to do if the response comes in any of these
> formats."
>
> Before this gets boring, let's shift to the example of the list of new
> orders. Suppose I am implementing a user agent that compiles a list of all
> items ordered in the list of new orders.
>
> Such a user agent would be implemented to find (or just be given, or have
> bookmarked) the URI of the resource that represents the list of new orders
> (in the same sense as browsers get hold of the URI of 'an image').
>
> How do I have to implement the user agent's construction of the GET
> request to /foo/bar?
>
> Suppose we are using a media type application/order for order
> representations and have also decided to build upon Atom for dealing with
> lists of stuff in our domain. We might construct the request as:
>
>   GET /foo/bar
>   Accept: application/atom+xml;type=feed
>
> and the server might send something like (excuse flaws in the XML, pls):
>
>   200 OK
>   Content-Type: application/atom+xml [2]
>
>   <feed>
>     <entry>
>       <content type="application/order">
>         <order>....</order>
>       </content>
>     </entry>
>     <entry>
>       <content type="application/order">
>         <order>....</order>
>       </content>
>     </entry>
>   </feed>
>
> Is that sufficient? Does the Accept header sufficiently express the
> processing capabilities? Can the server know that the user agent wants to
> receive the entries as application/order? Is it OK to just program the
> user agent to ignore the entries whose type it does not understand?
>
> Would we end up with the correct list of ordered items if all entries come
> back as HTML and the user agent ignores them?
>
> I think that there is a great danger of creating a nightmare of hidden
> coupling because, in my opinion, the user agent simply can *not* fulfil
> its processing goal given simply 'an Atom feed'. An Atom feed reader *can*
> do that (because it has a different goal), but a newly-ordered-items-list
> compiling user agent cannot; it must express that in the Accept header.
>
> I'd rather define a media type application/orderlist (defined as an Atom
> feed containing entries of application/order) and have the user agent be
> explicit:
>
>   GET /foo/bar
>   Accept: application/orderlist
>
>   200 OK
>   Content-Type: application/orderlist
>
>   <feed>
>     <entry>
>       <content type="application/order">
>         <order>....</order>
>       </content>
>     </entry>
>     <entry>
>       <content type="application/order">
>         <order>....</order>
>       </content>
>     </entry>
>   </feed>
>
> What do others think?
>
> (See also [3])
>
> Jan
>
> [1] 'An image' is as good as it gets in terms of definitions, BTW.
>     <http://www.w3.org/TR/html401/struct/objects.html#edef-IMG>
>     Note that the HTML spec also provides some sort of hint what media
>     types are involved when dealing with images.
>
> [2] Conneg'd on the type param already, so no need to repeat it in the
>     Content-Type header.
>
> [3] There is also the issue of returning a feed that consists of
>     references to entries that the user agent can then GET as
>     Accept: application/order individually. Certainly we would not want to
>     define a list format that constrains the references to only
>     application/order resources. The user agent would basically have to
>     report an error if the referenced order is not available as
>     application/order (that is, upon a 406 on a GET sub-request).
>
>     An alternative would be to have the user agent Accept:
>     application/atom+xml;type=feed but report an error if an entry in the
>     feed is not provided as application/order (be it inline or via a
>     sub-request).
>
> -----------------------------------
> Jan Algermissen, Consultant
> NORD Software Consulting
>
> Mail: algermissen@...
> Blog: http://www.nordsc.com/blog/
> Work: http://www.nordsc.com/
> -----------------------------------
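Whichever side of Jan's question one takes, the mechanics on the server are
the same: match the representations on offer against what the client said it
can parse, or answer 406. A minimal sketch of that choice (hypothetical code,
assuming no q-values and only simple wildcard ranges):

```python
def choose_representation(accept_header, offered):
    """Return the first offered media type acceptable per the Accept header.

    Simplifying assumptions: no q-value weighting, first match wins.
    Returns None when nothing matches (caller should answer 406).
    """
    accepted = []
    for item in accept_header.split(","):
        # Drop parameters such as ';type=feed' before comparing.
        accepted.append(item.split(";")[0].strip().lower())

    for offer in offered:
        for acc in accepted:
            if acc == offer or acc == "*/*":
                return offer
            # Handle 'application/*'-style media ranges.
            if acc.endswith("/*") and offer.startswith(acc[:-1]):
                return offer
    return None


# Jan's explicit user agent: only application/orderlist will do.
print(choose_representation(
    "application/orderlist",
    ["application/orderlist", "application/atom+xml"],
))  # -> application/orderlist
```

Under this sketch, a server that only offers text/html would return None to
Jan's order-compiling agent, which is exactly the "fail loudly instead of
guessing" behavior the application/orderlist proposal is after.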
Benoît,

On Aug 6, 2010, at 6:38 PM, Benoît Fleury wrote:
>
> Hi,
>
> in his dissertation, Roy explicitly cites the "Timeless way of building"
> and applies this approach of design in the fifth chapter. I was wondering
> if any of you encountered other examples of this approach in software
> architecture?

I think it might help if you provide some more context on what you are
looking for, or a quote of the dissertation that illustrates your point.

Jan

>
> Thanks a lot,
> Benoit.

-----------------------------------
Jan Algermissen, Consultant
NORD Software Consulting

Mail: algermissen@...
Blog: http://www.nordsc.com/blog/
Work: http://www.nordsc.com/
-----------------------------------
On Aug 6, 2010, at 6:22 PM, Robert Brewer wrote:
> Jan Algermissen wrote:
>> ...suppose we have defined a link semantic that allows a server to
>> point a client to, for example, the list of new orders...it could
>> be <link rel="new-orders" href="/foo/bar"/>
>> ...
>> The issue I am dealing with is this: What is the appropriate degree of
>> specificity of the media type for lists of orders. Especially I am
>> wondering whether it is enough for the user agent to say
>>
>> Accept: application/atom+xml;type=feed
>>
>> or whether the Accept header should include the user agent
> capabilities
>> regarding the individual order entries, e.g.
>>
>> Accept: application/orderlist
>
> I consider media types as syntax, not semantics.
Media types are a lot more than syntax: media types provide intended processing semantics.
Is this <html> ... </html> an HTML document or an XSLT stylesheet? Only the media type provided by the sender can tell you that.
Jan
> If
> application/orderlist really has different syntax than
> application/atom+xml, then, OK I guess. But it probably shouldn't.
> Instead, the semantics of the resource are described by the @rel
> attribute the user-agent discovered and followed (in combination with
> data and its arrangement in the response, including further links in the
> response). Your resource could just as easily return "text/html" with
> the same semantics and, if the client understood HTML, could be
> processed meaningfully, in which case the client would emit "Accept:
> application/atom+xml, text/html".
>
> In other words, the Accept header says, "these are the representation
> formats I am prepared to parse for a resource of the 'new-orders'
> relation". A media type of "application/orderlist" *can* be used in that
> way, but couples too tightly, IMO. There are plenty of "list-y" media
> types out there already--does a list of orders really differ
> significantly in structure (not just differ in @rel's) from a list of,
> say, sale items? Atom has succeeded, IMO, because it is specific in
> syntax ("feed" = list of items in order, with some fixed fields) and
> generic in semantics (doesn't matter what you're listing). I wrote Shoji
> [1] because I wanted to hit that sweet spot for a "catalog" syntax
> (entities and overlapping lists of entity URI's) that was independent of
> the semantic--I wrote it for Etsy procurement, but it could equally be
> used for representing scientific lab work. If the syntax fits, wear it.
>
>
> Robert Brewer
> fumanchu@...
>
> [1] http://www.aminus.org/rbre/shoji/shoji-draft-01.txt
>
>
-----------------------------------
Jan Algermissen, Consultant
NORD Software Consulting
Mail: algermissen@...
Blog: http://www.nordsc.com/blog/
Work: http://www.nordsc.com/
-----------------------------------
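Jan's HTML-or-XSLT point above can be sketched in a few lines. This is a
hypothetical illustration (the function and return strings are invented):
the exact same bytes are processed differently depending solely on the media
type the sender declared; nothing in the payload itself decides.

```python
# The same bytes could be an HTML page or an XSLT stylesheet whose literal
# result element happens to be <html>.
PAYLOAD = b"<html><body>hello</body></html>"


def process(media_type, body):
    """Pick the intended processing purely from the declared media type.

    Note: 'body' is deliberately unused in the decision; that is the point.
    """
    if media_type == "text/html":
        return "render as a web page"
    if media_type == "application/xslt+xml":
        return "compile as a stylesheet"
    return "treat as opaque data"


print(process("text/html", PAYLOAD))             # -> render as a web page
print(process("application/xslt+xml", PAYLOAD))  # -> compile as a stylesheet
```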
On Aug 6, 2010, at 6:22 PM, Robert Brewer wrote:
> Jan Algermissen wrote:
>> ...suppose we have defined a link semantic that allows a server to
>> point a client to, for example, the list of new orders...it could
>> be <link rel="new-orders" href="/foo/bar"/>
>> ...
>> The issue I am dealing with is this: What is the appropriate degree of
>> specificity of the media type for lists of orders. Especially I am
>> wondering whether it is enough for the user agent to say
>>
>> Accept: application/atom+xml;type=feed
>>
>> or whether the Accept header should include the user agent
> capabilities
>> regarding the individual order entries, e.g.
>>
>> Accept: application/orderlist
>
> I consider media types as syntax, not semantics. If
> application/orderlist really has different syntax than
> application/atom+xml, then, OK I guess. But it probably shouldn't.
> Instead, the semantics of the resource are described by the @rel
> attribute the user-agent discovered and followed (in combination with
> data and its arrangement in the response, including further links in the
> response). Your resource could just as easily return "text/html" with
> the same semantics and, if the client understood HTML, could be
> processed meaningfully, in which case the client would emit "Accept:
> application/atom+xml, text/html".
The question is whether Accept: text/html is indeed sufficient. Is it true
that the user agent can pursue its implemented goal of compiling a list of
all newly ordered items from any HTML document?

Suppose the server provides both application/order and text/html as
representations of the list of new orders. When a user agent comes along
that says Accept: text/html, the server can freely assume browser-like
capabilities of the user agent (any HTML will do; even a <ul> with items
referring to scanned PNGs of the orders). IOW, the owner of the server is
free to change the implementation for text/html as long as a) the resource
semantics remain stable (list of new orders) and b) valid HTML is returned.

How would the user agent implementation deal with HTML? Special syntactic
assumptions are not allowed (because of Accept: text/html) or would mean a
hidden coupling. How would a user agent distinguish between an HTML document
it does not understand but that contains orders (e.g. the list of scanned
order images) and an empty list of orders that is augmented with some HTML
it does not (and need not) understand?

IMO that is impossible, and hence Accept: text/html does not cut it.
Jan
>
> In other words, the Accept header says, "these are the representation
> formats I am prepared to parse for a resource of the 'new-orders'
> relation". A media type of "application/orderlist" *can* be used in that
> way, but couples too tightly, IMO. There are plenty of "list-y" media
> types out there already--does a list of orders really differ
> significantly in structure (not just differ in @rel's) from a list of,
> say, sale items? Atom has succeeded, IMO, because it is specific in
> syntax ("feed" = list of items in order, with some fixed fields) and
> generic in semantics (doesn't matter what you're listing). I wrote Shoji
> [1] because I wanted to hit that sweet spot for a "catalog" syntax
> (entities and overlapping lists of entity URI's) that was independent of
> the semantic--I wrote it for Etsy procurement, but it could equally be
> used for representing scientific lab work. If the syntax fits, wear it.
>
>
> Robert Brewer
> fumanchu@...
>
> [1] http://www.aminus.org/rbre/shoji/shoji-draft-01.txt
>
>
-----------------------------------
Jan Algermissen, Consultant
NORD Software Consulting
Mail: algermissen@...
Blog: http://www.nordsc.com/blog/
Work: http://www.nordsc.com/
-----------------------------------
Sorry if I wasn't clear. I'm talking about the design process described here:
http://www.ics.uci.edu/~fielding/pubs/dissertation/software_arch.htm#sec_1_6
and used in chapter 5. Starting with the null style and adding constraints
one after the other to let the desired properties emerge.

2010/8/6 Jan Algermissen <algermissen1971@...>
> Benoît,
>
> On Aug 6, 2010, at 6:38 PM, Benoît Fleury wrote:
>
> > Hi,
> >
> > in his dissertation, Roy explicitly cites the "Timeless way of building"
> > and applies this approach of design in the fifth chapter. I was
> > wondering if any of you encountered other examples of this approach in
> > software architecture?
>
> I think it might help if you provide some more context on what you are
> looking for, or a quote of the dissertation that illustrates your point.
>
> Jan
>
> > Thanks a lot,
> > Benoit.
>
> -----------------------------------
> Jan Algermissen, Consultant
> NORD Software Consulting
>
> Mail: algermissen@...
> Blog: http://www.nordsc.com/blog/
> Work: http://www.nordsc.com/
> -----------------------------------
On Aug 6, 2010, at 11:45 PM, Benoît Fleury wrote:
>
> Sorry if I wasn't clear. I'm talking about the design process described
> here:
> http://www.ics.uci.edu/~fielding/pubs/dissertation/software_arch.htm#sec_1_6
> and used in chapter 5. Starting with the null style and adding constraints
> one after the other to let the desired properties emerge.

Rohit Khare built on top of Roy's work in his ARRESTED thesis
(http://www.ics.uci.edu/~rohit/ARRESTED-ICSE.pdf [Huge download!]) and you
will find something in the book referenced in [1]. Plough through the
references of Roy's thesis, especially around Garlan and Shaw's work.

IIRC Mark has also worked on defining the Semantic Web as REST plus one
other constraint (explicit data semantics). He mentioned that in his blog
in the early days. Maybe ask him.

HTH,
Jan

[1] http://www.nordsc.com/blog/?p=11

-----------------------------------
Jan Algermissen, Consultant
NORD Software Consulting

Mail: algermissen@...
Blog: http://www.nordsc.com/blog/
Work: http://www.nordsc.com/
-----------------------------------
Jan Algermissen wrote:
> On Aug 6, 2010, at 6:22 PM, Robert Brewer wrote:
> [...]
>
> The question is whether Accept: text/html is indeed sufficient. Is it
> true that the user agent can pursue its implemented goal of compiling a
> list of all newly ordered items from any HTML document?
>
> Suppose the server provides both application/order and text/html as
> representations of the list of new orders. When a user agent comes along
> that says Accept: text/html, the server can freely assume browser-like
> capabilities of the user agent (any HTML will do; even a <ul> with items
> referring to scanned PNGs of the orders). IOW, the owner of the server is
> free to change the implementation for text/html as long as a) the
> resource semantics remain stable (list of new orders) and b) valid HTML
> is returned.
>
> How would the user agent implementation deal with HTML? Special syntactic
> assumptions are not allowed (because of Accept: text/html) or would mean
> a hidden coupling. How would a user agent distinguish between an HTML
> document it does not understand but that contains orders (e.g. the list
> of scanned order images) and an empty list of orders that is augmented
> with some HTML it does not (and need not) understand?
>
> IMO that is impossible, and hence Accept: text/html does not cut it.

I think all that demonstrates is that HTML is too generic to be useful for
your particular task, not that all media types require "special syntactic
assumptions" (whether implicit or explicit). The fact that you can make a
"list" in HTML using any of a hundred types of tags doesn't mean Atom, for
example, also suffers from the same inappropriateness to your task.

Robert Brewer
fumanchu@...
On Fri, 2010-08-06 at 23:11 +0200, Jan Algermissen wrote:
> On Aug 6, 2010, at 6:22 PM, Robert Brewer wrote:
>> I consider media types as syntax, not semantics.
>
> Media types are a lot more than syntax: media types provide intended processing semantics.
>
> Is this <html> ... </html> an HTML document or an XSLT stylesheet? Only the media type provided by the sender can tell you that.

I agree with the sentiment, Jan, but I don't believe this is what actually happens. More and more, media types seem broken as designed to me. That is, they are semantic nonsense that don't hold up to scrutiny.

Bill
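Jan's HTML-versus-XSLT point can be sketched as a dispatch on the sender-supplied media type. This is an illustrative sketch, not code from the thread; the "render"/"transform" processor names are invented, though application/xslt+xml is the registered XSLT media type:

```java
// Sketch: the same bytes (e.g. an <html>...</html> document that is
// really an XSLT stylesheet) get routed to different processing purely
// on the Content-Type the sender declares.
public class MediaTypeDispatch {

    public static String processingFor(String contentType) {
        switch (contentType) {
            case "text/html":
                return "render";      // treat the bytes as a web page
            case "application/xslt+xml":
                return "transform";   // treat the same bytes as a stylesheet
            default:
                return "unknown";     // no intended processing communicated
        }
    }

    public static void main(String[] args) {
        System.out.println(processingFor("text/html"));          // render
        System.out.println(processingFor("application/xslt+xml")); // transform
    }
}
```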
<snip>
More and more, media types seem broken as designed to me. That is, they are semantic nonsense that don't hold up to scrutiny.
</snip>

Bill: can you elaborate on this?

mca
http://amundsen.com/blog/
http://mamund.com/foaf.rdf#me

On Fri, Aug 6, 2010 at 18:28, Bill de hÓra <bill@dehora.net> wrote:
> On Fri, 2010-08-06 at 23:11 +0200, Jan Algermissen wrote:
>> On Aug 6, 2010, at 6:22 PM, Robert Brewer wrote:
>>> I consider media types as syntax, not semantics.
>>
>> Media types are a lot more than syntax: media types provide intended processing semantics.
>>
>> Is this <html> ... </html> an HTML document or an XSLT stylesheet? Only the media type provided by the sender can tell you that.
>
> I agree with the sentiment, Jan, but I don't believe this is what actually happens. More and more, media types seem broken as designed to me. That is, they are semantic nonsense that don't hold up to scrutiny.
>
> Bill
Hi,

thank you for your answer and pointers. I was more interested in the design process in general: I am wondering whether this design process has been used and documented in other software architectures. That's why I titled my mail "Off topic" :)

Thanks again,
Benoit.

2010/8/6 Jan Algermissen <algermissen1971@...>
> Rohit Khare built on top of Roy's work in his ARRESTED thesis (http://www.ics.uci.edu/~rohit/ARRESTED-ICSE.pdf [huge download!]), and you will find something in the book referenced in [1]. Plough through the references of Roy's thesis, especially around Garlan/Shaw's work.
>
> [1] http://www.nordsc.com/blog/?p=11
On Mon, 2010-08-02 at 09:22 +0100, Mike Kelly wrote:
> On Sun, Aug 1, 2010 at 6:51 PM, Bill de hÓra <bill@...> wrote:
>> So there's a tradeoff. Some developers would like to go direct to the status to avoid the hop. One way to do this is to have the URLs prepared in advance. The argument is that a way to balance these concerns is to allow the server to publish a document that the client can cache and from which the client can pull the status URL directly, and so short circuit the traversal without being very strongly coupled to the server's URI space. This kind of tradeoff seems reasonable to me, hence I don't understand the level of objection in some quarters to approaches like WADL.
>
> Why use WADL for that? Seems unnecessary when you can achieve the same thing with just a Link header.

How?

Bill
On Sat, Aug 7, 2010 at 12:52 AM, Bill de hÓra <bill@...> wrote:
> On Mon, 2010-08-02 at 09:22 +0100, Mike Kelly wrote:
>> On Sun, Aug 1, 2010 at 6:51 PM, Bill de hÓra <bill@...> wrote:
>>> So there's a tradeoff. Some developers would like to go direct to the status to avoid the hop. One way to do this is to have the URLs prepared in advance. The argument is that a way to balance these concerns is to allow the server to publish a document that the client can cache and from which the client can pull the status URL directly, and so short circuit the traversal without being very strongly coupled to the server's URI space. This kind of tradeoff seems reasonable to me, hence I don't understand the level of objection in some quarters to approaches like WADL.
>>
>> Why use WADL for that? Seems unnecessary when you can achieve the same thing with just a Link header.
>
> How?
>
> Bill

By serving a cacheable representation that includes the appropriate 'short circuit' link (and relation) in its Link header.

Cheers,
Mike
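Mike's suggestion can be sketched roughly as follows: the client reads the Link header of a cached representation and jumps straight to the URI advertised under a known relation. The header value and the "status" relation name are assumptions for illustration, and the parser only handles the simple `<uri>; rel="name"` form, not the full RFC 5988 grammar:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch (assumed header shape, not a full RFC 5988 parser): find the
// target URI for a given link relation so a client can "short circuit"
// straight to, e.g., a status resource.
public class LinkHeaderSketch {

    public static String findRel(String linkHeader, String rel) {
        // Matches entries of the form  <URI>; rel="name"
        Pattern p = Pattern.compile("<([^>]+)>\\s*;\\s*rel=\"([^\"]+)\"");
        Matcher m = p.matcher(linkHeader);
        while (m.find()) {
            if (m.group(2).equals(rel)) {
                return m.group(1);
            }
        }
        return null; // relation not advertised
    }

    public static void main(String[] args) {
        String header = "</orders/42/status>; rel=\"status\", </orders/42>; rel=\"self\"";
        System.out.println(findRel(header, "status")); // /orders/42/status
    }
}
```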
On Aug 7, 2010, at 12:25 AM, Robert Brewer wrote:
>> Jan Algermissen wrote:
>> The question is whether Accept: text/html is indeed sufficient. Is it true that the user agent can pursue its implemented goal of compiling a list of all newly ordered items from any HTML document?
>>
>> Suppose the server provides both application/order and text/html as representations of the list of new orders. When a user agent comes along that says Accept: text/html, the server can freely assume browser-like capabilities of the user agent (any HTML will do; even an <ul> with items referring to scanned PNGs of the orders). IOW, the owner of the server is free to change the implementation for text/html as long as a) the resource semantics remain stable (list of new orders) and valid HTML is returned.
>>
>> How would the user agent implementation deal with HTML? Special syntactic assumptions are not allowed (because of Accept: text/html) or would mean a hidden coupling. How would a user agent distinguish between an HTML document it does not understand but that contains orders (e.g. the list of scanned order images) and an empty list of orders that is augmented with some HTML it does not (and need not) understand?
>>
>> IMO that is impossible and hence Accept: text/html does not cut it.
>
> I think all that demonstrates is that HTML is too generic to be useful
> for your particular task, not that all media types require "special
> syntactic assumptions" (whether implicit or explicit). The fact that you
> can make a "list" in HTML using any of a hundred types of tags doesn't
> mean Atom, for example, also suffers from the same inappropriateness to
> your task.
>
>
I knew you were going to say that :-)
Let's see:
The implementor of the server side chooses to expose the order list as HTML and Atom. In the Atom case, she would write something like this (in JAX-RS):
@Path("/new-orders")
class NewOrders {

    @GET
    @Produces("text/html")
    public Response newOrdersAsHTML() {
        // ...
    }

    @GET
    @Produces("application/atom+xml")
    public Response newOrdersAsAtomFeed() {
        // ...
    }
}
When it comes to implementing (or changing) the Atom-producing method, the server developer need not (must not) be concerned with any client expectations. All that matters is to produce any valid Atom feed[1].
Given that, it would be a perfectly fine implementation to produce an Atom feed such as this:
<feed>
  <entry>
    <summary type="xhtml">
      <xhtml:div xmlns:xhtml="http://www.w3.org/1999/xhtml">
        <xhtml:h1>Order 551-A-1272</xhtml:h1>
        <xhtml:ul>
          <xhtml:li>Device Foo, Item Price: ... </xhtml:li>
          <xhtml:li>Screw Bar, Item Price: ... </xhtml:li>
        </xhtml:ul>
        <xhtml:b>Total: 600 EUR</xhtml:b>
      </xhtml:div>
    </summary>
    <content type="image/png" src="/scan-archive/orders/551-A-1272.png"/>
  </entry>
  <entry>
    <summary type="xhtml">
      <xhtml:div xmlns:xhtml="http://www.w3.org/1999/xhtml">
        <xhtml:h1>Order 551-A-1273</xhtml:h1>
        <xhtml:ul>
          <xhtml:li>Device Foo, Item Price: ... </xhtml:li>
          <xhtml:li>Screw Bar, Item Price: ... </xhtml:li>
        </xhtml:ul>
        <xhtml:b>Total: 600 EUR</xhtml:b>
      </xhtml:div>
    </summary>
    <content type="image/png" src="/scan-archive/orders/551-A-1273.png"/>
  </entry>
</feed>
If you develop a user agent that says Accept: application/atom+xml;type=feed you must be prepared to receive the above feed.
While a usual feed reader (e.g. Apple Mail) would be able to perform its implemented goal based on that feed, our compile-list-of-newly-ordered-items user agent would definitely not be able to do what it is implemented to do.
Two questions arise:

1. How does the user agent detect that it cannot perform its task (despite having received a perfectly valid answer)?
2. What to do about that?
1.:
Given the feed above, how do we need to implement the client to report to the user (e.g. someone that at some point looks in a log file or someone that uses the business intelligence application that uses the compiled reports about newly ordered items) that a correct answer was received, that it did indeed contain orders but that the list could not be processed as intended?
First of all, the client trusts the higher level assumption that the resource indeed provides the list of new orders. This is the same kind of trust that any browser has when it follows an <img src=""/> hypermedia control. The server told the user agent something about the referenced resource and the client can reasonably expect that to be true (otherwise we would deal with a broken server and that is not the issue here).
Since the client expects the feed to represent the list of new orders, it is IMHO reasonable to assume that any entry in that feed points to a new order. No entries would mean 'no new orders'. This is IMHO not semantic tunneling through the Atom feed because the assumption is backed by the semantics of the resource as advertised by the server.
The feed apparently contains two entries, hence the user agent can be programmed to understand that there are two new orders to process. When it comes to processing the orders, the user agent will have to realize that neither the summary nor the referenced content is available in a format that is sufficient to extract the ordered items automatically. Hence the user agent has to report an error, eventually leading to some human action to fix the situation.
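The detection step described above can be sketched like this: the client trusts the feed-level semantics (every entry is a new order) but checks each entry's content type against the formats it was programmed to parse. The media type application/order+xml is an invented placeholder, not one proposed in the thread:

```java
import java.util.List;

// Sketch of a client that counts feed entries as new orders, then
// flags entries whose content format it cannot extract items from.
public class OrderFeedClient {

    // Invented placeholder for "a format this agent can parse".
    private static final List<String> PARSABLE = List.of("application/order+xml");

    // entryContentTypes: the declared type of each entry's content.
    // Returns how many orders must be reported as unprocessable.
    public static int countUnprocessable(List<String> entryContentTypes) {
        int unprocessable = 0;
        for (String type : entryContentTypes) {
            if (!PARSABLE.contains(type)) {
                unprocessable++; // a valid order, but opaque to this agent
            }
        }
        return unprocessable;
    }

    public static void main(String[] args) {
        // The example feed: two entries, both pointing at scanned PNGs.
        List<String> feed = List.of("image/png", "image/png");
        System.out.println(feed.size() + " new orders, "
                + countUnprocessable(feed) + " unprocessable");
    }
}
```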
2.:
We reach question #2 once the fact that a problem exists for the user agent has reached a human. What is he supposed to do? There are three options:
a) call the server developer and negotiate a certain format for the Atom feed
b) adjust the user agent implementation to handle the format received (e.g. parse out the HTML from the summary or OCR the scanned orders)
c) do nothing except mark the compiled report as 'wrong' or 'unusable'. IOW, accept the fact that the user goal cannot be satisfied
a) Leads to coupling (if it is at all possible/desirable to call the server implementor)
b) Does not improve the situation because the format can just change again tomorrow
c) is the honest option but provides no business value
In my opinion, the only thing to really improve the situation is to standardize a format that allows the server developer to actually determine the user agent expectations (capabilities) from the Accept header. If we had application/orderlist (or at least application/atom+xml;profile=orderlist) the server developer would need to either add a new response-producing method or send a 406.
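The server side of this proposal might look like the following sketch. The profile parameter (application/atom+xml;profile=orderlist) is Jan's hypothetical media type, not an existing registration, and the matching logic here is deliberately simplistic:

```java
// Sketch of the proposal above: only serve the order list when the
// client's Accept header names the specific (hypothetical)
// application/atom+xml;profile=orderlist variant; otherwise answer
// 406 Not Acceptable rather than a feed the client cannot use.
public class OrderListNegotiation {

    public static int statusFor(String acceptHeader) {
        for (String range : acceptHeader.split(",")) {
            // Normalize whitespace around the media range's parameters.
            String normalized = range.trim().replaceAll("\\s*;\\s*", ";");
            if (normalized.equals("application/atom+xml;profile=orderlist")) {
                return 200; // client declared the capability we need
            }
        }
        return 406; // client capabilities unknown or insufficient
    }

    public static void main(String[] args) {
        System.out.println(statusFor("application/atom+xml;type=feed"));      // 406
        System.out.println(statusFor("application/atom+xml; profile=orderlist")); // 200
    }
}
```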
Does that sufficiently illustrate the point?
Jan
[1] and of course be true to the server's own statement about what the resource represents: the new orders. It would be bad to serve a list of shipped orders, for example.
> Robert Brewer
> fumanchu@...
>
-----------------------------------
Jan Algermissen, Consultant
NORD Software Consulting
Mail: algermissen@...
Blog: http://www.nordsc.com/blog/
Work: http://www.nordsc.com/
-----------------------------------
<snip>
While a usual feed reader (e.g. Apple Mail) would be able to perform
it's implemented goal based on that feed our
compile-list-of-newly-ordered-items user agent would definitely not be
able to do what it is implemented to do.
</snip>
QUESTION:
how do you "know" this to be true? IOW, what is it about the
representation example you provided that leads you to believe your
"compile-list-of-newly-ordered-items user agent" cannot "do what it is
implemented to do"?
ASSUMPTION:
I think I hear you talking about the need for clients to know ahead of
time whether the representation returned is something they can
process. If that's the case, that means there must be some information
baked into the client that is used to "check" the representation
returned. The Accept header is one of these methods ("I am a client
that will only be able to understand the following representation
formats").
Is that what this is about?
mca
http://amundsen.com/blog/
http://mamund.com/foaf.rdf#me
On Sat, Aug 7, 2010 at 06:47, Jan Algermissen <algermissen1971@...> wrote:
On Aug 7, 2010, at 5:17 PM, mike amundsen wrote:
> <snip>
> While a usual feed reader (e.g. Apple Mail) would be able to perform
> it's implemented goal based on that feed our
> compile-list-of-newly-ordered-items user agent would definitely not be
> able to do what it is implemented to do.
> </snip>
>
> QUESTION:
> how do you "know" this to be true? IOW, what is it about the
> representation example you provided that leads you to believe your
> "compile-list-of-newly-ordered-items user agent" cannot "do what it is
> implemented to do"?
Gee - every sentence I leave out leads to confusion. Sorry. What I meant to say was:
We know the user agent cannot handle the HTML-containing/scan-referencing feed because we did not program it that way. And besides that: both HTML and the scan make poor candidates for machine processing without a hidden contract regarding the structure.
>
> ASSUMPTION:
> I think I hear you talking about the need for clients to know ahead of
> time whether the representation returned is something they can
> process.
Yes. That is: meaningfully process according to their implemented goals. If the goal is to turn the controls contained in the representation into something the user can activate (e.g. as browsers or feed readers do), then fine. If the implemented goal is to compile a list of ordered items from each order, the user agent needs to be able to parse the order representation. If it understands that there are orders at all (feed not empty) but does not understand the individual order syntax, it needs to report an error somehow. (Which might just be OK, depending on the goal implementation.)
But the problem is really that of formats embedded in formats, because we cannot implement the client without making assumptions about the possible sub-formats. If these assumptions cannot be stated in the Accept header, the situation I am dealing with exists.
> If that's the case, that means there must be some information
> baked into the client that is used to "check" the representation
> returned. The Accept header is one of these methods ("I am a client
> that will only be able to understand the following representation
> formats").
>
> I that what this is about?
Yes. It is the question of how specific the Accept header needs to be without causing hidden coupling. (Or whether we should just live with the uncertainty on the client side, which I think we should not.)
Jan
>
> mca
> http://amundsen.com/blog/
> http://mamund.com/foaf.rdf#me
>
>
>
>
> On Sat, Aug 7, 2010 at 06:47, Jan Algermissen <algermissen1971@...> wrote:
>>
>> On Aug 7, 2010, at 12:25 AM, Robert Brewer wrote:
>>
>>>> Jan Algermissen wrote:
>>
>>>> The question is whether Accept: text/html is indeed sufficient. Is it
>>>> true that the user agent can persue its implemented goal of compiling
>>> a
>>>> list of all newly ordered items from any HTML document?
>>>>
>>>> Suppose the server provides both, application/order and text/html as
>>>> representations of the list of new orders. When a user agent comes
>>>> along that says Accept: text/html the server can freely assume
>>> browser-
>>>> like capabilities of the user agent (any HTML will do; even an <ul>
>>>> with items referring to scanned PNGs of the orders). IOW, the owner of
>>>> the server is free to change the implementation for text/html as long
>>>> as a) the resource semantics remain stable (list of new orders) and
>>>> valid HTML is returned.
>>>>
>>>> How would the user agent implementation deal with HTML? Special
>>>> syntactic assumptions are not allowed (because of Accept: text/html)
>>> or
>>>> would mean a hidden coupling. How would a user agent distinguish
>>>> between an HTML it does not understand but that contains orders (e.g.
>>>> the list of scanned order images) and an empty list of orders that is
>>>> augmented with some HTML it does not (and need not) understand?
>>>>
>>>> IMO that is impossible and hence Accept: text/html does not cut it.
>>>
>>> I think all that demonstrates is that HTML is too generic to be useful
>>> for your particular task, not that all media types require "special
>>> syntactic assumptions" (whether implicit or explicit). The fact that you
>>> can make a "list" in HTML using any of a hundred types of tags doesn't
>>> mean Atom, for example, also suffers from the same inappropriateness to
>>> your task.
>>>
>>>
>>
>> I knew you were going to say that :-)
>>
>> Let's see:
>>
>> The implementor of the server side chooses to expose the order list as HTML and Atom. In the Atom case, she would write sth like this (in JAX-RS):
>>
>> @Path("/new-orders")
>> class NewOrders {
>>
>> @GET
>> @Produces("text/html")
>> public Response newOrdersAsHTML() {
>> // ...
>> }
>>
>> @GET
>> @Produces("application/atom+xml")
>> public Response newOrdersAsAtomFeed() {
>>
>> }
>> }
>>
>>
>> When it comes to implementing (or changing) the Atom-producing method, the server developer need not (must not) be concerned with any client expectations. All that matters is to produce any valid Atom feed[1].
>>
>> Given that, it would be a perfectly fine implementation to produce an Atom feed such as this:
>>
>> <feed>
>> <entry>
>> <summary type="xhtml">
>> <xhtml:div xmlns:xhtml="http://www.w3.org/1999/xhtml">
>> <xhtml:h1>Order 551-A-1272</xhtml:h1>
>> <xhtml:ul>
>> <xhtml:li>Device Foo, Item Price: ... </xhtml:li>
>> <xhtml:li>Screw Bar, Item Price: ... </xhtml:li>
>> </xhtml:ul>
>> <xhtml:b>Total: 600 EUR</xhtml:b>
>> </xhtml:div>
>> </summary>
>> <content type="image/png" src="/scan-archive/orders/551-A-1272.png"/>
>> </entry>
>> <entry>
>> <summary type="xhtml">
>> <xhtml:div xmlns:xhtml="http://www.w3.org/1999/xhtml">
>> <xhtml:h1>Order 551-A-1273</xhtml:h1>
>> <xhtml:ul>
>> <xhtml:li>Device Foo, Item Price: ... </xhtml:li>
>> <xhtml:li>Screw Bar, Item Price: ... </xhtml:li>
>> </xhtml:ul>
>> <xhtml:b>Total: 600 EUR</xhtml:b>
>> </xhtml:div>
>> </summary>
>> <content type="image/png" src="/scan-archive/orders/551-A-1273.png"/>
>> </entry>
>> </feed>
>>
>>
>> If you develop a user agent that says Accept: application/atom+xml;type=feed you must be prepared to receive the above feed.
>>
>> While a usual feed reader (e.g. Apple Mail) would be able to perform it's implemented goal based on that feed our compile-list-of-newly-ordered-items user agent would definitely not be able to do what it is implemented to do.
>>
>> Two questions arise:
>>
>> 1. How does the user agent detect that it cannot perform its task but (despite having a perfectly valid answer)
>> 2. What to do about that
>>
>>
>> 1.:
>> Given the feed above, how do we need to implement the client to report to the user (e.g. someone that at some point looks in a log file or someone that uses the business intelligence application that uses the compiled reports about newly ordered items) that a correct answer was received, that it did indeed contain orders but that the list could not be processed as intended?
>>
>> First of all, the client trusts the higher level assumption that the resource indeed provides the list of new orders. This is the same kind of trust that any browser has when it follows an <img src=""/> hypermedia control. The server told the user agent something about the referenced resource and the client can reasonably expect that to be true (otherwise we would deal with a broken server and that is not the issue here).
>>
>> Since the client expects the feed to represent the list of new orders, it is IMHO reasonable to assume that any entry in that feed points to a new order. No entries would mean 'no new orders'. This is IMHO not semantic tunneling through the Atom feed because the assumption is backed by the semantics of the resource as advertised by the server.
>>
>> The feed appearently contains two entries, hence the user agent can be programmed to understand that there are two new orders to process. When it comes to processing the orders the user agent will have to realize that neither the summary nor the referenced content is available in a format that is sufficient to extract the ordered items automatically. Hence the user agent has to report an error eventually leading to some human action to fix the situation:
>>
>> 2.:
>> We reach question #2 once the fact that a problem exists for the user agent has reached a human. What is he supposed to do? There are three options:
>>
>> a) call the server developer and negotiate a certain format for the Atom feed
>> b) adjust the user agent implementation to handle the format received (e.g. parse out the HTML from the summary or OCR the scanned orders)
>> c) do nothing except mark the compiled report as 'wrong' or 'unusable'. IWO, accept the fact that the user goal cannot be satisfied
>>
>> a) Leads to coupling (if it is at all possible/desireable to call the server implementor)
>> b) Does not improve the situation because the format can just change again tomorrow
>> c) is the honest option but provides no business value
>>
>> In my opinion, the only thing to really improve the situation is to standardize a format that allows the server developer to actually determine the user agent expectations (capabilities) from the Accept header. If we had application/orderlist (or at least application/atom+xml;profile=orderlist) the server developer would need to either add a new response-producing method or send a 406.
>>
>> Does that sufficiently illustrate the point?
>>
>> Jan
>>
>>
>> [1] and of course be true to be true to the server's own statement that the resource represents
>> the new orders. It would be bad to serve a list of shipped orders, for example.
>>
>>
>>
>>
>>
>>
>>> Robert Brewer
>>> fumanchu@...
>>>
>>
>> -----------------------------------
>> Jan Algermissen, Consultant
>> NORD Software Consulting
>>
>> Mail: algermissen@...
>> Blog: http://www.nordsc.com/blog/
>> Work: http://www.nordsc.com/
>> -----------------------------------
>>
>>
>>
>>
>>
>>
>> ------------------------------------
>>
>> Yahoo! Groups Links
>>
>>
>>
>>
-----------------------------------
Jan Algermissen, Consultant
NORD Software Consulting
Mail: algermissen@...
Blog: http://www.nordsc.com/blog/
Work: http://www.nordsc.com/
-----------------------------------
On Aug 7, 2010, at 5:17 PM, mike amundsen wrote:
> <snip>
> While a usual feed reader (e.g. Apple Mail) would be able to perform
> its implemented goal based on that feed our
> compile-list-of-newly-ordered-items user agent would definitely not be
> able to do what it is implemented to do.
> </snip>
>
> QUESTION:
> how do you "know" this to be true? IOW, what is it about the
> representation example you provided that leads you to believe your
> "compile-list-of-newly-ordered-items user agent" cannot "do what it is
> implemented to do"?
>
> ASSUMPTION:
> I think I hear you talking about the need for clients to know ahead of
> time whether the representation returned is something they can
> process. If that's the case, that means there must be some information
> baked into the client that is used to "check" the representation
> returned. The Accept header is one of these methods ("I am a client
> that will only be able to understand the following representation
> formats").
I think it is important to be explicit about what "understand" means.
I'd rather say that Accept means: "I am a user agent that will only be able to sensibly perform its implemented goal if the representation has one of these media types".
Note that it all depends on the implemented goal. If that goal is to "compile a list of newly ordered items from those orders that I happen to be able to parse and report the number of unparsable orders" then that would work just fine with Accept: application/atom+xml;type=feed.
However, we must then understand that the eventual application state exposed to the user (the compiled list/report, maybe stuffed into some database) can only reflect what the user agent was able to make of the feed. IOW, the report might look like this:
New Orders as of date foo: 201
Processable Orders: 11
Summary of items in those 11: [some list of items here]
Unprocessable orders: 190 [Reference to temporary filesystem where they can be reviewed]
(This might, BTW, be just what we want.)
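That tolerant goal can be sketched in a few lines of Java (the thread's JAX-RS example language). Everything here is an assumption for illustration: the media type application/order+xml is invented, and a real client would of course read the feed from an HTTP response, not a string.

```java
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;

public class OrderReport {
    static final String ATOM_NS = "http://www.w3.org/2005/Atom";

    // Compile the report described above: every entry is a new order
    // (backed by the advertised resource semantics), but only entries
    // whose content is in a structured order format count as processable.
    public static String compile(String feedXml) {
        try {
            DocumentBuilderFactory f = DocumentBuilderFactory.newInstance();
            f.setNamespaceAware(true);
            Document doc = f.newDocumentBuilder().parse(
                    new ByteArrayInputStream(feedXml.getBytes(StandardCharsets.UTF_8)));
            NodeList entries = doc.getElementsByTagNameNS(ATOM_NS, "entry");
            int processable = 0, unprocessable = 0;
            for (int i = 0; i < entries.getLength(); i++) {
                Element entry = (Element) entries.item(i);
                NodeList contents = entry.getElementsByTagNameNS(ATOM_NS, "content");
                String type = contents.getLength() > 0
                        ? ((Element) contents.item(0)).getAttribute("type") : "";
                // "application/order+xml" is a hypothetical structured format;
                // HTML summaries and PNG scans land in the unprocessable bucket.
                if ("application/order+xml".equals(type)) processable++;
                else unprocessable++;
            }
            return "processable=" + processable + " unprocessable=" + unprocessable;
        } catch (Exception e) {
            return "error: " + e.getMessage();
        }
    }
}
```

A client built this way never fails outright on the feed above; it just reports that all entries ended up unprocessable.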
You can turn all this around and say:
When the server developer implements for application/atom+xml; type=feed it simply has no idea what special assumptions some client will make. As long as the service returns valid Atom it will be a correct implementation. Any side-agreements between client and server violate what REST tries to achieve.
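Seen from the server side, the only commitment is to the negotiated media types. A minimal sketch of that negotiation (ignoring q-values and other Accept subtleties, and with an invented list of produced types):

```java
import java.util.List;

public class NewOrdersNegotiation {
    // The media types this resource actually produces (illustrative).
    private static final List<String> PRODUCED =
            List.of("application/atom+xml", "text/html");

    // Returns the media type to produce, or null, meaning the caller
    // should answer 406 Not Acceptable. Media type parameters such as
    // type=feed and q-values are deliberately ignored in this sketch.
    public static String negotiate(String acceptHeader) {
        for (String range : acceptHeader.split(",")) {
            String type = range.split(";")[0].trim();
            if (type.equals("*/*")) return PRODUCED.get(0);
            if (PRODUCED.contains(type)) return type;
        }
        return null;
    }
}
```

A client asking only for a hypothetical application/orderlist would get null here, i.e. a 406, which is exactly the honest signal discussed earlier in the thread.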
Jan
>
> Is that what this is about?
>
> mca
> http://amundsen.com/blog/
> http://mamund.com/foaf.rdf#me
>
>
>
>
> On Sat, Aug 7, 2010 at 06:47, Jan Algermissen <algermissen1971@...> wrote:
>>
>> On Aug 7, 2010, at 12:25 AM, Robert Brewer wrote:
>>
>>>> Jan Algermissen wrote:
>>
>>>> The question is whether Accept: text/html is indeed sufficient. Is it
>>>> true that the user agent can pursue its implemented goal of compiling
>>> a
>>>> list of all newly ordered items from any HTML document?
>>>>
>>>> Suppose the server provides both application/order and text/html as
>>>> representations of the list of new orders. When a user agent comes
>>>> along that says Accept: text/html the server can freely assume
>>> browser-
>>>> like capabilities of the user agent (any HTML will do; even an <ul>
>>>> with items referring to scanned PNGs of the orders). IOW, the owner of
>>>> the server is free to change the implementation for text/html as long
>>>> as a) the resource semantics remain stable (list of new orders) and
>>>> valid HTML is returned.
>>>>
>>>> How would the user agent implementation deal with HTML? Special
>>>> syntactic assumptions are not allowed (because of Accept: text/html)
>>> or
>>>> would mean a hidden coupling. How would a user agent distinguish
>>>> between an HTML it does not understand but that contains orders (e.g.
>>>> the list of scanned order images) and an empty list of orders that is
>>>> augmented with some HTML it does not (and need not) understand?
>>>>
>>>> IMO that is impossible and hence Accept: text/html does not cut it.
>>>
>>> I think all that demonstrates is that HTML is too generic to be useful
>>> for your particular task, not that all media types require "special
>>> syntactic assumptions" (whether implicit or explicit). The fact that you
>>> can make a "list" in HTML using any of a hundred types of tags doesn't
>>> mean Atom, for example, also suffers from the same inappropriateness to
>>> your task.
>>>
>>>
>>
>> I knew you were going to say that :-)
>>
>> Let's see:
>>
>> The implementor of the server side chooses to expose the order list as HTML and Atom. In the Atom case, she would write something like this (in JAX-RS):
>>
>> @Path("/new-orders")
>> class NewOrders {
>>
>>     @GET
>>     @Produces("text/html")
>>     public Response newOrdersAsHTML() {
>>         // ...
>>     }
>>
>>     @GET
>>     @Produces("application/atom+xml")
>>     public Response newOrdersAsAtomFeed() {
>>         // ...
>>     }
>> }
>>
>>
>> When it comes to implementing (or changing) the Atom-producing method, the server developer need not (must not) be concerned with any client expectations. All that matters is to produce any valid Atom feed[1].
>>
>> Given that, it would be a perfectly fine implementation to produce an Atom feed such as this:
>>
>> <feed xmlns="http://www.w3.org/2005/Atom">
>>   <entry>
>>     <summary type="xhtml">
>>       <xhtml:div xmlns:xhtml="http://www.w3.org/1999/xhtml">
>>         <xhtml:h1>Order 551-A-1272</xhtml:h1>
>>         <xhtml:ul>
>>           <xhtml:li>Device Foo, Item Price: ... </xhtml:li>
>>           <xhtml:li>Screw Bar, Item Price: ... </xhtml:li>
>>         </xhtml:ul>
>>         <xhtml:b>Total: 600 EUR</xhtml:b>
>>       </xhtml:div>
>>     </summary>
>>     <content type="image/png" src="/scan-archive/orders/551-A-1272.png"/>
>>   </entry>
>>   <entry>
>>     <summary type="xhtml">
>>       <xhtml:div xmlns:xhtml="http://www.w3.org/1999/xhtml">
>>         <xhtml:h1>Order 551-A-1273</xhtml:h1>
>>         <xhtml:ul>
>>           <xhtml:li>Device Foo, Item Price: ... </xhtml:li>
>>           <xhtml:li>Screw Bar, Item Price: ... </xhtml:li>
>>         </xhtml:ul>
>>         <xhtml:b>Total: 600 EUR</xhtml:b>
>>       </xhtml:div>
>>     </summary>
>>     <content type="image/png" src="/scan-archive/orders/551-A-1273.png"/>
>>   </entry>
>> </feed>
>>
>>
>> If you develop a user agent that says Accept: application/atom+xml;type=feed you must be prepared to receive the above feed.
>>
>> While a usual feed reader (e.g. Apple Mail) would be able to perform its implemented goal based on that feed, our compile-list-of-newly-ordered-items user agent would definitely not be able to do what it is implemented to do.
>>
>> Two questions arise:
>>
>> 1. How does the user agent detect that it cannot perform its task (despite having received a perfectly valid answer)?
>> 2. What to do about it?
>>
>>
>> 1.:
>> Given the feed above, how do we need to implement the client to report to the user (e.g. someone who at some point looks in a log file, or someone who uses the business intelligence application that uses the compiled reports about newly ordered items) that a correct answer was received, that it did indeed contain orders, but that the list could not be processed as intended?
>>
>> First of all, the client trusts the higher level assumption that the resource indeed provides the list of new orders. This is the same kind of trust that any browser has when it follows an <img src=""/> hypermedia control. The server told the user agent something about the referenced resource and the client can reasonably expect that to be true (otherwise we would deal with a broken server and that is not the issue here).
>>
>> Since the client expects the feed to represent the list of new orders, it is IMHO reasonable to assume that any entry in that feed points to a new order. No entries would mean 'no new orders'. This is IMHO not semantic tunneling through the Atom feed because the assumption is backed by the semantics of the resource as advertised by the server.
>>
>> The feed apparently contains two entries, hence the user agent can be programmed to understand that there are two new orders to process. When it comes to processing the orders, the user agent will have to realize that neither the summary nor the referenced content is available in a format sufficient to extract the ordered items automatically. Hence the user agent has to report an error, eventually leading to some human action to fix the situation:
>>
>> 2.:
>> We reach question #2 once the fact that a problem exists for the user agent has reached a human. What is he supposed to do? There are three options:
>>
>> a) call the server developer and negotiate a certain format for the Atom feed
>> b) adjust the user agent implementation to handle the format received (e.g. parse out the HTML from the summary or OCR the scanned orders)
>> c) do nothing except mark the compiled report as 'wrong' or 'unusable'. IOW, accept the fact that the user goal cannot be satisfied
>>
>> a) Leads to coupling (if it is at all possible/desirable to call the server implementor)
>> b) Does not improve the situation because the format can just change again tomorrow
>> c) is the honest option but provides no business value
>>
>> In my opinion, the only thing to really improve the situation is to standardize a format that allows the server developer to actually determine the user agent expectations (capabilities) from the Accept header. If we had application/orderlist (or at least application/atom+xml;profile=orderlist) the server developer would need to either add a new response-producing method or send a 406.
>>
>> Does that sufficiently illustrate the point?
>>
>> Jan
>>
>>
>> [1] and of course be true to the server's own statement that the resource represents
>> the new orders. It would be bad to serve a list of shipped orders, for example.
>>
>>
>>
>>
>>
>>
>>> Robert Brewer
>>> fumanchu@...
>>>
>>
>> -----------------------------------
>> Jan Algermissen, Consultant
>> NORD Software Consulting
>>
>> Mail: algermissen@...
>> Blog: http://www.nordsc.com/blog/
>> Work: http://www.nordsc.com/
>> -----------------------------------
-----------------------------------
Jan Algermissen, Consultant
NORD Software Consulting
Mail: algermissen@...
Blog: http://www.nordsc.com/blog/
Work: http://www.nordsc.com/
-----------------------------------
<snip>
> We know the user agent cannot handle the HTML-containing/scan-referencing feed because we did not program it that way. And besides that: both HTML and the scan do not make good candidates for machine processing without a hidden contract regarding the structure.
</snip>
not a problem. this would be easy in a face-to-face, but email has
its limitations that must be overcome.
OK, "because we did not program it that way" is the key here. IOW, the
ability to "pluck" the proper content out of a representation (that
looks like the one you offered as your example) has not been
programmed into the client. I understand that.
<snip>
HTML and the scan do not make good candidates for machine processing
without a hidden contract regarding the structure.
</snip>
and
<snip>
If the implemented goal is to compile a list of ordered items from
each order the user agent needs to be able to parse the order
representation.
</snip>
and then: "..without a [hidden] contract..."
All media-type processing is by contract: the contract offered when
the media-type is documented. I think I hear you saying that the
contract details for a client that uses Atom would need (in your case)
_additional_ contract information such as ("here is how you can
recognize an order list inside an Atom feed", etc.). I can see that
this is so.
<snip>
But the problem is really that of formats embedded in formats because
we cannot implement the client without making assumptions about the
possible sub-formats. If these assumptions cannot be stated in the
Accept header, the situation I am dealing with exists.
</snip>
PROBLEM RESTATEMENT:
OK, now I think we're getting to the heart of the matter. It would
seem that the issue here is whether it is possible or reasonable to
create ways for clients to "know whether they understand this
representation" even in cases where the Accept header is "insufficient
as a descriptor" (due to the fact that a well-known generic media-type
is employed for the representation).
PROPOSED SOLUTION:
I will offer the following that I've done in the past that might address this:
1 - for cases where the representation is based on HTML, I use the
@profile model[1]. This allows me to program clients to look for the
proper information within the @profile attribute and reject it if
necessary (invalid representation) or, if the @profile is valid, but
the body does not conform, pitch another error (invalid body), etc.
2 - for cases where the representation is based on XML (Atom, etc.), I
use standard namespace checking. That means, for my designs, I use
Atom's extension model rather than embedding custom XML in the content
element. I have, in the past, used a namespace within the content
element, but no longer do that.
These two "hacks" allow me to design representations that use
well-known formats and still provide a simple test for clients to use
in order to validate the representation before attempting to process
it.
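The two checks might look like this in Java; the profile URI and the extension namespace are made up for the example, and a real client would use a proper HTML parser rather than a substring test.

```java
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;

public class RepresentationCheck {
    // Hack 1: HTML4-style @profile on <head>. A substring test stands in
    // for real HTML parsing in this sketch.
    public static boolean htmlDeclaresProfile(String html, String profileUri) {
        return html.contains("profile=\"" + profileUri + "\"");
    }

    // Hack 2: the Atom feed carries at least one element from the expected
    // extension namespace, i.e. it uses Atom's extension model.
    public static boolean atomUsesExtension(String feedXml, String extNs) {
        try {
            DocumentBuilderFactory f = DocumentBuilderFactory.newInstance();
            f.setNamespaceAware(true);
            Document doc = f.newDocumentBuilder().parse(
                    new ByteArrayInputStream(feedXml.getBytes(StandardCharsets.UTF_8)));
            return doc.getElementsByTagNameNS(extNs, "*").getLength() > 0;
        } catch (Exception e) {
            return false; // unparsable XML certainly fails the check
        }
    }
}
```

Either check gives the client a cheap yes/no before it commits to processing the body.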
Does this make sense?
mca
http://amundsen.com/blog/
http://mamund.com/foaf.rdf#me
On Sat, Aug 7, 2010 at 11:43, Jan Algermissen <algermissen1971@...> wrote:
>
> On Aug 7, 2010, at 5:17 PM, mike amundsen wrote:
>
>> <snip>
>> While a usual feed reader (e.g. Apple Mail) would be able to perform
>> its implemented goal based on that feed our
>> compile-list-of-newly-ordered-items user agent would definitely not be
>> able to do what it is implemented to do.
>> </snip>
>>
>> QUESTION:
>> how do you "know" this to be true? IOW, what is it about the
>> representation example you provided that leads you to believe your
>> "compile-list-of-newly-ordered-items user agent" cannot "do what it is
>> implemented to do"?
>
> Gee - every sentence I leave out leads to confusion. Sorry. What I meant to say was:
>
> We know the user agent cannot handle the HTML-containing/scan-referencing feed because we did not program it that way. And besides that: both HTML and the scan do not make good candidates for machine processing without a hidden contract regarding the structure.
>
>>
>> ASSUMPTION:
>> I think I hear you talking about the need for clients to know ahead of
>> time whether the representation returned is something they can
>> process.
>
> Yes. That is: meaningfully process according to their implemented goals. If the goal is to turn the controls contained in the representation into something the user can activate (e.g. as browsers or feed readers do) then fine. If the implemented goal is to compile a list of ordered items from each order, the user agent needs to be able to parse the order representation. If it understands that there are orders at all (feed not empty) but it does not understand the individual order syntax, it needs to report an error somehow. (Which might just be ok - depending on the goal implementation.)
>
> But the problem is really that of formats embedded in formats because we cannot implement the client without making assumptions about the possible sub-formats. If these assumptions cannot be stated in the Accept header, the situation I am dealing with exists.
>
>> If that's the case, that means there must be some information
>> baked into the client that is used to "check" the representation
>> returned. The Accept header is one of these methods ("I am a client
>> that will only be able to understand the following representation
>> formats").
>>
>> Is that what this is about?
>
> Yes. It is the question of how specific the Accept header needs to be without causing hidden coupling. (Or whether we should just live with the uncertainty on the client side - which I think we should not.)
>
> Jan
>
>
>>
>> mca
>> http://amundsen.com/blog/
>> http://mamund.com/foaf.rdf#me
>>
>>
>>
>>
>> On Sat, Aug 7, 2010 at 06:47, Jan Algermissen <algermissen1971@mac.com> wrote:
>>>
>>> On Aug 7, 2010, at 12:25 AM, Robert Brewer wrote:
>>>
>>>>> Jan Algermissen wrote:
>>>
>>>>> The question is whether Accept: text/html is indeed sufficient. Is it
>>>>> true that the user agent can pursue its implemented goal of compiling
>>>> a
>>>>> list of all newly ordered items from any HTML document?
>>>>>
>>>>> Suppose the server provides both application/order and text/html as
>>>>> representations of the list of new orders. When a user agent comes
>>>>> along that says Accept: text/html the server can freely assume
>>>> browser-
>>>>> like capabilities of the user agent (any HTML will do; even an <ul>
>>>>> with items referring to scanned PNGs of the orders). IOW, the owner of
>>>>> the server is free to change the implementation for text/html as long
>>>>> as a) the resource semantics remain stable (list of new orders) and
>>>>> valid HTML is returned.
>>>>>
>>>>> How would the user agent implementation deal with HTML? Special
>>>>> syntactic assumptions are not allowed (because of Accept: text/html)
>>>> or
>>>>> would mean a hidden coupling. How would a user agent distinguish
>>>>> between an HTML it does not understand but that contains orders (e.g.
>>>>> the list of scanned order images) and an empty list of orders that is
>>>>> augmented with some HTML it does not (and need not) understand?
>>>>>
>>>>> IMO that is impossible and hence Accept: text/html does not cut it.
>>>>
>>>> I think all that demonstrates is that HTML is too generic to be useful
>>>> for your particular task, not that all media types require "special
>>>> syntactic assumptions" (whether implicit or explicit). The fact that you
>>>> can make a "list" in HTML using any of a hundred types of tags doesn't
>>>> mean Atom, for example, also suffers from the same inappropriateness to
>>>> your task.
>>>>
>>>>
>>>
>>> I knew you were going to say that :-)
>>>
>>> Let's see:
>>>
>>> The implementor of the server side chooses to expose the order list as HTML and Atom. In the Atom case, she would write something like this (in JAX-RS):
>>>
>>> @Path("/new-orders")
>>> class NewOrders {
>>>
>>>     @GET
>>>     @Produces("text/html")
>>>     public Response newOrdersAsHTML() {
>>>         // ...
>>>     }
>>>
>>>     @GET
>>>     @Produces("application/atom+xml")
>>>     public Response newOrdersAsAtomFeed() {
>>>         // ...
>>>     }
>>> }
>>>
>>>
>>> When it comes to implementing (or changing) the Atom-producing method, the server developer need not (must not) be concerned with any client expectations. All that matters is to produce any valid Atom feed[1].
>>>
>>> Given that, it would be a perfectly fine implementation to produce an Atom feed such as this:
>>>
>>> <feed xmlns="http://www.w3.org/2005/Atom">
>>>   <entry>
>>>     <summary type="xhtml">
>>>       <xhtml:div xmlns:xhtml="http://www.w3.org/1999/xhtml">
>>>         <xhtml:h1>Order 551-A-1272</xhtml:h1>
>>>         <xhtml:ul>
>>>           <xhtml:li>Device Foo, Item Price: ... </xhtml:li>
>>>           <xhtml:li>Screw Bar, Item Price: ... </xhtml:li>
>>>         </xhtml:ul>
>>>         <xhtml:b>Total: 600 EUR</xhtml:b>
>>>       </xhtml:div>
>>>     </summary>
>>>     <content type="image/png" src="/scan-archive/orders/551-A-1272.png"/>
>>>   </entry>
>>>   <entry>
>>>     <summary type="xhtml">
>>>       <xhtml:div xmlns:xhtml="http://www.w3.org/1999/xhtml">
>>>         <xhtml:h1>Order 551-A-1273</xhtml:h1>
>>>         <xhtml:ul>
>>>           <xhtml:li>Device Foo, Item Price: ... </xhtml:li>
>>>           <xhtml:li>Screw Bar, Item Price: ... </xhtml:li>
>>>         </xhtml:ul>
>>>         <xhtml:b>Total: 600 EUR</xhtml:b>
>>>       </xhtml:div>
>>>     </summary>
>>>     <content type="image/png" src="/scan-archive/orders/551-A-1273.png"/>
>>>   </entry>
>>> </feed>
>>>
>>>
>>> If you develop a user agent that says Accept: application/atom+xml;type=feed you must be prepared to receive the above feed.
>>>
>>> While a usual feed reader (e.g. Apple Mail) would be able to perform its implemented goal based on that feed, our compile-list-of-newly-ordered-items user agent would definitely not be able to do what it is implemented to do.
>>>
>>> Two questions arise:
>>>
>>> 1. How does the user agent detect that it cannot perform its task (despite having received a perfectly valid answer)?
>>> 2. What to do about it?
>>>
>>>
>>> 1.:
>>> Given the feed above, how do we need to implement the client to report to the user (e.g. someone who at some point looks in a log file, or someone who uses the business intelligence application that uses the compiled reports about newly ordered items) that a correct answer was received, that it did indeed contain orders, but that the list could not be processed as intended?
>>>
>>> First of all, the client trusts the higher level assumption that the resource indeed provides the list of new orders. This is the same kind of trust that any browser has when it follows an <img src=""/> hypermedia control. The server told the user agent something about the referenced resource and the client can reasonably expect that to be true (otherwise we would deal with a broken server and that is not the issue here).
>>>
>>> Since the client expects the feed to represent the list of new orders, it is IMHO reasonable to assume that any entry in that feed points to a new order. No entries would mean 'no new orders'. This is IMHO not semantic tunneling through the Atom feed because the assumption is backed by the semantics of the resource as advertised by the server.
>>>
>>> The feed apparently contains two entries, hence the user agent can be programmed to understand that there are two new orders to process. When it comes to processing the orders, the user agent will have to realize that neither the summary nor the referenced content is available in a format sufficient to extract the ordered items automatically. Hence the user agent has to report an error, eventually leading to some human action to fix the situation:
>>>
>>> 2.:
>>> We reach question #2 once the fact that a problem exists for the user agent has reached a human. What is he supposed to do? There are three options:
>>>
>>> a) call the server developer and negotiate a certain format for the Atom feed
>>> b) adjust the user agent implementation to handle the format received (e.g. parse out the HTML from the summary or OCR the scanned orders)
>>> c) do nothing except mark the compiled report as 'wrong' or 'unusable'. IOW, accept the fact that the user goal cannot be satisfied
>>>
>>> a) Leads to coupling (if it is at all possible/desirable to call the server implementor)
>>> b) Does not improve the situation because the format can just change again tomorrow
>>> c) is the honest option but provides no business value
>>>
>>> In my opinion, the only thing to really improve the situation is to standardize a format that allows the server developer to actually determine the user agent expectations (capabilities) from the Accept header. If we had application/orderlist (or at least application/atom+xml;profile=orderlist) the server developer would need to either add a new response-producing method or send a 406.
>>>
>>> Does that sufficiently illustrate the point?
>>>
>>> Jan
>>>
>>>
>>> [1] and of course be true to the server's own statement that the resource represents
>>> the new orders. It would be bad to serve a list of shipped orders, for example.
>>>
>>>
>>>
>>>
>>>
>>>
>>>> Robert Brewer
>>>> fumanchu@...
>>>>
>>>
>>> -----------------------------------
>>> Jan Algermissen, Consultant
>>> NORD Software Consulting
>>>
>>> Mail: algermissen@...
>>> Blog: http://www.nordsc.com/blog/
>>> Work: http://www.nordsc.com/
>>> -----------------------------------
>
> -----------------------------------
> Jan Algermissen, Consultant
> NORD Software Consulting
>
> Mail: algermissen@...
> Blog: http://www.nordsc.com/blog/
> Work: http://www.nordsc.com/
> -----------------------------------
>
>
>
>
>
<snip>
When the server developer implements for application/atom+xml;
type=feed it simply has no idea what special assumptions some client
will make. As long as the service returns valid Atom it will be a
correct implementation. Any side-agreements between client and server
violate what REST tries to achieve.
</snip>
I understand that the case you describe here is _possible_ but I am
not convinced it is _reasonable_. IOW, I do not accept that a
server/developer MUST act in the way you describe. I think a server
developer can be more responsible than what you characterize here and
can provide additional media-type instruction to any client that
wishes to participate. The server developer can provide details on an
Atom Extension employed or additional information on how clients can
recognize sub-types within a message (as I describe in my previous
message).
Yes, I think my suggestion is a compromise for cases where the
well-known type lacks the proper semantics, but I assert this
compromise is reasonable and valid. The next reasonable alternative
(in cases where this compromise is not acceptable) is to develop a
custom media-type and instruct the client developers to "learn" the
details of that custom media type and code that knowledge into the
client ahead of time. I've done both and find merit in both.
SPECULATION:
I think, long-term, there is another possible solution; one that I
have been working on in tiny private examples lately. That solution is
to create a way to make "understanding a new media type" easier for
state-machine clients. IOW, a way that clients can "learn" the
semantic rules of a new type by installing a media-type definition (in
the same manner that users install "plug-ins" and "add-ons" in their
common Web browsers today). I have no serious examples to show for
this right now, but am encouraged that this is doable and has good
long-term value.
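One way to picture the plug-in idea: the client keeps a registry keyed by media type, and "installing a media-type definition" amounts to registering another handler. All names here are invented; this is only a toy model of the speculation above.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

public class MediaTypeRegistry {
    // media type -> handler that turns a representation into a result
    private final Map<String, Function<String, String>> handlers = new HashMap<>();

    // "Installing" a media-type definition, plug-in style.
    public void install(String mediaType, Function<String, String> handler) {
        handlers.put(mediaType, handler);
    }

    // Process a representation, or report honestly that the type is unknown.
    public String process(String mediaType, String body) {
        Function<String, String> h = handlers.get(mediaType);
        return h != null ? h.apply(body) : "unsupported: " + mediaType;
    }
}
```

The interesting part would of course be the definition format the handlers are generated from; the registry itself is the easy half.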
mca
http://amundsen.com/blog/
http://mamund.com/foaf.rdf#me
On Sat, Aug 7, 2010 at 11:57, Jan Algermissen <algermissen1971@...> wrote:
>
> On Aug 7, 2010, at 5:17 PM, mike amundsen wrote:
>
>> <snip>
>> While a usual feed reader (e.g. Apple Mail) would be able to perform
>> its implemented goal based on that feed our
>> compile-list-of-newly-ordered-items user agent would definitely not be
>> able to do what it is implemented to do.
>> </snip>
>>
>> QUESTION:
>> how do you "know" this to be true? IOW, what is it about the
>> representation example you provided that leads you to believe your
>> "compile-list-of-newly-ordered-items user agent" cannot "do what it is
>> implemented to do"?
>>
>> ASSUMPTION:
>> I think I hear you talking about the need for clients to know ahead of
>> time whether the representation returned is something they can
>> process. If that's the case, that means there must be some information
>> baked into the client that is used to "check" the representation
>> returned. The Accept header is one of these methods ("I am a client
>> that will only be able to understand the following representation
>> formats").
>
> I think it is important to be explicit about what "understand" means.
>
> I'd rather say that Accept means: "I am a user agent that will only be able to sensibly perform its implemented goal if the representation has one of these media types".
>
>
> Note that it all depends on the implemented goal. If that goal is to "compile a list of newly ordered items from those orders that I happen to be able to parse and report the number of unparsable orders" then that would work just fine with Accept: application/atom+xml;type=feed.
>
> However, we must then understand that the eventual application state exposed to the user (the compiled list/report, maybe stuffed into some database) can only reflect what the user agent was able to make of the feed. IOW, the report might look like this:
>
> New Orders as of date foo: 201
> Processable Orders: 11
> Summary of items in those 11: [some list of items here]
> Unprocessable orders: 190 [Reference to temporary filesystem where they can be reviewed]
>
> (This might, BTW, be just what we want.)
>
>
> You can turn all this around and say:
>
> When the server developer implements for application/atom+xml; type=feed it simply has no idea what special assumptions some client will make. As long as the service returns valid Atom it will be a correct implementation. Any side-agreements between client and server violate what REST tries to achieve.
>
>
> Jan
>
>
>
>
>
>>
>> Is that what this is about?
>>
>> mca
>> http://amundsen.com/blog/
>> http://mamund.com/foaf.rdf#me
>>
>>
>>
>>
>> On Sat, Aug 7, 2010 at 06:47, Jan Algermissen <algermissen1971@...> wrote:
>>>
>>> On Aug 7, 2010, at 12:25 AM, Robert Brewer wrote:
>>>
>>>>> Jan Algermissen wrote:
>>>
>>>>> The question is whether Accept: text/html is indeed sufficient. Is it
>>>>> true that the user agent can pursue its implemented goal of compiling
>>>> a
>>>>> list of all newly ordered items from any HTML document?
>>>>>
>>>>> Suppose the server provides both application/order and text/html as
>>>>> representations of the list of new orders. When a user agent comes
>>>>> along that says Accept: text/html the server can freely assume
>>>> browser-
>>>>> like capabilities of the user agent (any HTML will do; even an <ul>
>>>>> with items referring to scanned PNGs of the orders). IOW, the owner of
>>>>> the server is free to change the implementation for text/html as long
>>>>> as a) the resource semantics remain stable (list of new orders) and
>>>>> valid HTML is returned.
>>>>>
>>>>> How would the user agent implementation deal with HTML? Special
>>>>> syntactic assumptions are not allowed (because of Accept: text/html)
>>>> or
>>>>> would mean a hidden coupling. How would a user agent distinguish
>>>>> between an HTML it does not understand but that contains orders (e.g.
>>>>> the list of scanned order images) and an empty list of orders that is
>>>>> augmented with some HTML it does not (and need not) understand?
>>>>>
>>>>> IMO that is impossible and hence Accept: text/html does not cut it.
>>>>
>>>> I think all that demonstrates is that HTML is too generic to be useful
>>>> for your particular task, not that all media types require "special
>>>> syntactic assumptions" (whether implicit or explicit). The fact that you
>>>> can make a "list" in HTML using any of a hundred types of tags doesn't
>>>> mean Atom, for example, also suffers from the same inappropriateness to
>>>> your task.
>>>>
>>>>
>>>
>>> I knew you were going to say that :-)
>>>
>>> Let's see:
>>>
>>> The implementor of the server side chooses to expose the order list as HTML and Atom. In the Atom case, she would write something like this (in JAX-RS):
>>>
>>> @Path("/new-orders")
>>> class NewOrders {
>>>
>>>     @GET
>>>     @Produces("text/html")
>>>     public Response newOrdersAsHTML() {
>>>         // ...
>>>     }
>>>
>>>     @GET
>>>     @Produces("application/atom+xml")
>>>     public Response newOrdersAsAtomFeed() {
>>>         // ...
>>>     }
>>> }
>>>
>>>
>>> When it comes to implementing (or changing) the Atom-producing method, the server developer need not (must not) be concerned with any client expectations. All that matters is to produce any valid Atom feed[1].
>>>
>>> Given that, it would be a perfectly fine implementation to produce an Atom feed such as this:
>>>
>>> <feed xmlns="http://www.w3.org/2005/Atom">
>>>   <entry>
>>>     <summary type="xhtml">
>>>       <xhtml:div xmlns:xhtml="http://www.w3.org/1999/xhtml">
>>>         <xhtml:h1>Order 551-A-1272</xhtml:h1>
>>>         <xhtml:ul>
>>>           <xhtml:li>Device Foo, Item Price: ... </xhtml:li>
>>>           <xhtml:li>Screw Bar, Item Price: ... </xhtml:li>
>>>         </xhtml:ul>
>>>         <xhtml:b>Total: 600 EUR</xhtml:b>
>>>       </xhtml:div>
>>>     </summary>
>>>     <content type="image/png" src="/scan-archive/orders/551-A-1272.png"/>
>>>   </entry>
>>>   <entry>
>>>     <summary type="xhtml">
>>>       <xhtml:div xmlns:xhtml="http://www.w3.org/1999/xhtml">
>>>         <xhtml:h1>Order 551-A-1273</xhtml:h1>
>>>         <xhtml:ul>
>>>           <xhtml:li>Device Foo, Item Price: ... </xhtml:li>
>>>           <xhtml:li>Screw Bar, Item Price: ... </xhtml:li>
>>>         </xhtml:ul>
>>>         <xhtml:b>Total: 600 EUR</xhtml:b>
>>>       </xhtml:div>
>>>     </summary>
>>>     <content type="image/png" src="/scan-archive/orders/551-A-1273.png"/>
>>>   </entry>
>>> </feed>
>>>
>>>
>>> If you develop a user agent that says Accept: application/atom+xml;type=feed you must be prepared to receive the above feed.
>>>
>>> While a usual feed reader (e.g. Apple Mail) would be able to perform its implemented goal based on that feed, our compile-list-of-newly-ordered-items user agent would definitely not be able to do what it is implemented to do.
>>>
>>> Two questions arise:
>>>
>>> 1. How does the user agent detect that it cannot perform its task (despite having received a perfectly valid answer)?
>>> 2. What should it do about that?
>>>
>>>
>>> 1.:
>>> Given the feed above, how do we need to implement the client to report to the user (e.g. someone that at some point looks in a log file or someone that uses the business intelligence application that uses the compiled reports about newly ordered items) that a correct answer was received, that it did indeed contain orders but that the list could not be processed as intended?
>>>
>>> First of all, the client trusts the higher level assumption that the resource indeed provides the list of new orders. This is the same kind of trust that any browser has when it follows an <img src=""/> hypermedia control. The server told the user agent something about the referenced resource and the client can reasonably expect that to be true (otherwise we would deal with a broken server and that is not the issue here).
>>>
>>> Since the client expects the feed to represent the list of new orders, it is IMHO reasonable to assume that any entry in that feed points to a new order. No entries would mean 'no new orders'. This is IMHO not semantic tunneling through the Atom feed because the assumption is backed by the semantics of the resource as advertised by the server.
>>>
>>> The feed apparently contains two entries, hence the user agent can be programmed to understand that there are two new orders to process. When it comes to processing the orders, the user agent will have to realize that neither the summary nor the referenced content is available in a format that is sufficient to extract the ordered items automatically. Hence the user agent has to report an error, eventually leading to some human action to fix the situation.
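The detection step described above can be sketched in code. A minimal client-side sketch, not from the thread: it counts Atom entries (each one is taken to be a new order, per the advertised resource semantics) and checks whether any entry carries content in a format the client can parse; `application/order` is a hypothetical machine-readable type assumed here.

```java
import java.io.StringReader;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;
import org.xml.sax.InputSource;

class OrderFeedInspector {
    static final String ATOM_NS = "http://www.w3.org/2005/Atom";

    // Returns {total entries, entries whose content the client can parse}.
    static int[] inspect(String feedXml) {
        try {
            DocumentBuilderFactory f = DocumentBuilderFactory.newInstance();
            f.setNamespaceAware(true);
            Document doc = f.newDocumentBuilder()
                    .parse(new InputSource(new StringReader(feedXml)));
            NodeList entries = doc.getElementsByTagNameNS(ATOM_NS, "entry");
            int processable = 0;
            for (int i = 0; i < entries.getLength(); i++) {
                NodeList contents = ((Element) entries.item(i))
                        .getElementsByTagNameNS(ATOM_NS, "content");
                for (int j = 0; j < contents.getLength(); j++) {
                    // the only type this particular client knows how to parse
                    if ("application/order".equals(((Element) contents.item(j)).getAttribute("type"))) {
                        processable++;
                        break;
                    }
                }
            }
            return new int[] { entries.getLength(), processable };
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        String feed = "<feed xmlns='http://www.w3.org/2005/Atom'>"
                + "<entry><content type='image/png' src='/scan-archive/orders/551-A-1272.png'/></entry>"
                + "<entry><content type='image/png' src='/scan-archive/orders/551-A-1273.png'/></entry>"
                + "</feed>";
        int[] r = inspect(feed);
        // two new orders seen, none in a parsable format -> the agent must report an error
        System.out.println(r[0] + " orders, " + r[1] + " processable");
    }
}
```

The point of the sketch: the client can always tell *how many* orders there are, but only a type (or profile) check tells it whether it can actually extract the ordered items.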
>>>
>>> 2.:
>>> We reach question #2 once the fact that a problem exists for the user agent has reached a human. What is he supposed to do? There are three options:
>>>
>>> a) call the server developer and negotiate a certain format for the Atom feed
>>> b) adjust the user agent implementation to handle the format received (e.g. parse out the HTML from the summary or OCR the scanned orders)
>>> c) do nothing except mark the compiled report as 'wrong' or 'unusable'. IOW, accept the fact that the user goal cannot be satisfied
>>>
>>> a) Leads to coupling (if it is at all possible/desirable to call the server implementor)
>>> b) Does not improve the situation because the format can just change again tomorrow
>>> c) is the honest option but provides no business value
>>>
>>> In my opinion, the only thing to really improve the situation is to standardize a format that allows the server developer to actually determine the user agent expectations (capabilities) from the Accept header. If we had application/orderlist (or at least application/atom+xml;profile=orderlist) the server developer would need to either add a new response-producing method or send a 406.
>>>
>>> Does that sufficiently illustrate the point?
>>>
>>> Jan
>>>
>>>
>>> [1] and of course be true to the server's own statement that the resource represents
>>> the new orders. It would be bad to serve a list of shipped orders, for example.
>>>
>>>
>>>
>>>
>>>
>>>
>>>> Robert Brewer
>>>> fumanchu@...
>>>>
>>>
>>> -----------------------------------
>>> Jan Algermissen, Consultant
>>> NORD Software Consulting
>>>
>>> Mail: algermissen@...
>>> Blog: http://www.nordsc.com/blog/
>>> Work: http://www.nordsc.com/
>>> -----------------------------------
>>>
>>>
>>>
>>>
>>>
>>>
>>> ------------------------------------
>>>
>>> Yahoo! Groups Links
>>>
>>>
>>>
>>>
>
>
>
>
>
>
>
On Sat, 2010-08-07 at 11:39 +0100, Mike Kelly wrote:
> On Sat, Aug 7, 2010 at 12:52 AM, Bill de hÓra <bill@...> wrote:
> > On Mon, 2010-08-02 at 09:22 +0100, Mike Kelly wrote:
> >> On Sun, Aug 1, 2010 at 6:51 PM, Bill de hÓra <bill@...> wrote:
> >> > So there's a tradeoff. Some developers would like to go direct to the
> >> > status to avoid the hop. One way to do this is to have the URLs prepared
> >> > in advance. The argument is that a way to balance these concerns is to
> >> > allow the server to publish a document that the client can cache and from
> >> > which the client can pull the status URL directly and so short circuit
> >> > the traversal without being very strongly coupled to the server's URI
> >> > space. This kind of tradeoff seems reasonable to me, hence I don't
> >> > understand the level of objection in some quarters to approaches like WADL.
> >>
> >> Why use WADL for that? Seems unnecessary when you can achieve the same
> >> thing with just a Link header.
> >
> > How?
> >
> > Bill
>
> By serving a cacheable representation that includes the appropriate
> 'short circuit' link (and relation) in its Link header.

Oh, so for every related resource I serve a Link header with relations? A couple of things come to mind:

- I can't compress headers. This actually matters in mobile systems.
- I need to do a ton of testing to see whether Link will go through gateways and proxies on mobile systems.
- I now have a management problem as to where to put relations (headers or document? both?). This kind of thing drives developers insane.
- I now have even more indirection due to Extension Relation Types: go off and read the document that 'explains' the link type.
- I can't cache the Links and the representation independently (cue lots of HEAD requests).

No offence, but Link in this scenario seems like a way to avoid RDF at any cost (or to avoid admitting that documents with actual semantics are important).

Which brings me back to interlingua and just serving a description document (WADL or otherwise). I'm not seeing a clear win, theoretically or practically.

Bill
Steering back on topic...

> There is a resource that only exists in leap years, ?iso=YYYY-02-29. It
> is dereferenceable. Its existence is proof of a leap year. Protocol
> is irrelevant. What matters is the hypertext constraint.

Being new to thinking about the REST architecture, I was thinking about the date resource. Yes, the existence of YYYY-02-29 (my anniversary, BTW) would suffice for determining a leap year. However, if I were creating a date resource, would there be anything wrong with performing a GET on the date and returning something like:

<date iso="1992-02-29">
  <dow>Saturday</dow>
  <leapyear>True</leapyear>
  <olympic-year>Summer Games</olympic-year>
  <stardate type="true-trek">45593.5</stardate>
  <Hebrew>25th of Adar I, 5752</Hebrew>
  <Chinese type="lunar">Day 26, First Month, Ren Shen Year</Chinese>
  <next href="/date?iso=19920301" />
  <prev href="/date?iso=19920228" />
  <this-day-in-history href="http://www.history.com/this-day-in-history/2/29"/>
</date>

Doing a GET on 1992-06-31 would also return a non-existent resource even though it was a leap year. Would the above resource representation be preferable?

Mark W.
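The leap-year argument above can be made concrete: dereferencing ?iso=YYYY-02-29 succeeds exactly when that date exists in the Gregorian calendar. A minimal sketch, not from the thread, assuming a hypothetical server that backs the /date resource with java.time:

```java
import java.time.DateTimeException;
import java.time.LocalDate;

class LeapYearResource {
    // True when the date denotes a real calendar day, i.e. when a GET on
    // /date?iso=<yyyy-MM-dd> could return 200 rather than 404.
    static boolean exists(int year, int month, int day) {
        try {
            LocalDate.of(year, month, day);
            return true;
        } catch (DateTimeException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(exists(1992, 2, 29)); // true: 1992 is a leap year
        System.out.println(exists(1993, 2, 29)); // false: 1993 is not
        System.out.println(exists(1992, 6, 31)); // false: June 31 never exists, leap year or not
    }
}
```

The last case mirrors the 1992-06-31 point in the mail: a 404 only proves the day does not exist, which implies a non-leap year only for the specific resource ?iso=YYYY-02-29.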
Mark Wonsil wrote:
>
> Would there be anything wrong with performing a GET on the date and
> returning something like:

Yes. What media type would that be?

REST requires standardized media types and link relations, which is what I've used. My payload will eventually be extended to include more information, and account for non-Gregorian calendars. But for now, it's all that it needs to be, and everyone in the world can tell what the "previous" or "next" day is, understand that "6th" is an abbreviation of "sixth", and so on and so forth, without having to read any explanations I've written.

-Eric
On Aug 7, 2010, at 6:24 PM, mike amundsen wrote:
> <snip>
> When the server developer implements for application/atom+xml;
> type=feed it simply has no idea what special assumptions some client
> will make. As long as the service returns valid Atom it will be a
> correct implementation. Any side-agreements between client and server
> violate what REST tries to achieve.
> </snip>
>
> I understand that the case you describe here is _possible_ but I am
> not convinced it is _reasonable_. IOW, I do not accept that a
> server/developer MUST act in the way you describe. I think a server
> developer can be more responsible than what you characterize here and
> can provide additional media-type instruction to any client that
> wishes to participate. The server developer can provide details on an
> Atom Extension employed or additional information on how clients can
> recognize sub-types within a message (as I describe in my previous
> message).
Sure, but how would he know? Maybe he wants to serve it as normal, plain Atom, too? Why would the server developer bother with what use the client makes of the representation he provides? That kind of agreement is IMHO exactly what REST tries to eliminate (or make explicit as a media type) because it causes maintenance nightmares.

Question: is there any promise by Amazon about how the HTML of the site looks? No. And for good reason. They do not want clients to start making assumptions beyond text/html.
>
> Yes, I think my suggestion is compromise for cases where the
> well-known type lacks the proper semantics, but I assert this
> compromise is reasonable and valid. The next reasonable alternative
> (in cases where this compromise is not acceptable) is to develop a
> custom media-type and instruct the client developers to "learn" the
> details of that custom media type and code that knowledge into the
> client head of time. I've done both and find merit in both.
I see your point. The 'server promises some out-of-band stuff' approach leads to coupling of clients to *that* server, though. Something REST aims to avoid.
However, I guess that media types will likely be derived from experience that started based on out-of-band promises in the first place. So the approach is definitely reasonable.
>
> SPECULATION:
> I think, long-term, there is another possible solution; one that I
> have been working on in tiny private examples lately. That solution is
> to create a way to make "understanding a new media type" easier for
> state-machine clients.
Uh, oh. The magic client?
> IOW, a way that clients can "learn" the
> semantic rules of a new type by installing a media-type definition (in
> the same manner that users install "plug-ins" and "add-ons" in their
> common Web browsers today). I have no serious examples to show for
> this right now, but am encouraged that this is do-able and has good
> long-term values.
You'll get an A++ from me when that thingy is out :-)
[But seriously: can you sketch a 'solution'?]
Jan
>
> mca
> http://amundsen.com/blog/
> http://mamund.com/foaf.rdf#me
>
>
>
>
> On Sat, Aug 7, 2010 at 11:57, Jan Algermissen <algermissen1971@...> wrote:
>>
>> On Aug 7, 2010, at 5:17 PM, mike amundsen wrote:
>>
>>> <snip>
>>> While a usual feed reader (e.g. Apple Mail) would be able to perform
>>> its implemented goal based on that feed, our
>>> compile-list-of-newly-ordered-items user agent would definitely not be
>>> able to do what it is implemented to do.
>>> </snip>
>>>
>>> QUESTION:
>>> how do you "know" this to be true? IOW, what is it about the
>>> representation example you provided that leads you to believe your
>>> "compile-list-of-newly-ordered-items user agent" cannot "do what it is
>>> implemented to do"?
>>>
>>> ASSUMPTION:
>>> I think I hear you talking about the need for clients to know ahead of
>>> time whether the representation returned is something they can
>>> process. If that's the case, that means there must be some information
>>> baked into the client that is used to "check" the representation
>>> returned. The Accept header is one of these methods ("I am a client
>>> that will only be able to understand the following representation
>>> formats").
>>
>> I think it is important to be explicit about what "understand" means.
>>
>> I'd rather say that Accept means: "I am a user agent that will only be able to sensibly perform its implemented goal if the representation has one of these media types"
>>
>>
>> Note that it all depends on the implemented goal. If that goal is to "compile a list of newly ordered items from those orders that I happen to be able to parse and report the number of unparsable orders" then that would work just fine with Accept: application/atom+xml;type=feed.
>>
>> However, we must then understand that the eventual application state exposed to the user (the compiled list/report, maybe stuffed into some database) can only reflect what the user agent was able to make of the feed. IOW, the report might look like this:
>>
>> New Orders as of date foo: 201
>> Processable Orders: 11
>> Summary of items in those 11: [some list of items here]
>> Unprocessable orders 190 [Reference to temporary filesystem where they can be reviewed]
>>
>> (This might, BTW, be just what we want.)
>>
>>
>> You can turn all this around and say:
>>
>> When the server developer implements for application/atom+xml; type=feed it simply has no idea what special assumptions some client will make. As long as the service returns valid Atom it will be a correct implementation. Any side-agreements between client and server violate what REST tries to achieve.
>>
>>
>> Jan
>>
>>
>>
>>
>>
>>>
>>> Is that what this is about?
>>>
>>> mca
>>> http://amundsen.com/blog/
>>> http://mamund.com/foaf.rdf#me
>>>
>>>
>>>
>>>
>>> On Sat, Aug 7, 2010 at 06:47, Jan Algermissen <algermissen1971@...> wrote:
>>>>
>>>> On Aug 7, 2010, at 12:25 AM, Robert Brewer wrote:
>>>>
>>>>>> Jan Algermissen wrote:
>>>>
>>>>>> The question is whether Accept: text/html is indeed sufficient. Is it
>>>>>> true that the user agent can pursue its implemented goal of compiling
>>>>> a
>>>>>> list of all newly ordered items from any HTML document?
>>>>>>
>>>>>> Suppose the server provides both, application/order and text/html as
>>>>>> representations of the list of new orders. When a user agent comes
>>>>>> along that says Accept: text/html the server can freely assume
>>>>> browser-
>>>>>> like capabilities of the user agent (any HTML will do; even an <ul>
>>>>>> with items referring to scanned PNGs of the orders). IOW, the owner of
>>>>>> the server is free to change the implementation for text/html as long
>>>>>> as a) the resource semantics remain stable (list of new orders) and
>>>>>> b) valid HTML is returned.
>>>>>>
>>>>>> How would the user agent implementation deal with HTML? Special
>>>>>> syntactic assumptions are not allowed (because of Accept: text/html)
>>>>> or
>>>>>> would mean a hidden coupling. How would a user agent distinguish
>>>>>> between an HTML it does not understand but that contains orders (e.g.
>>>>>> the list of scanned order images) and an empty list of orders that is
>>>>>> augmented with some HTML it does not (and need not) understand?
>>>>>>
>>>>>> IMO that is impossible and hence Accept: text/html does not cut it.
>>>>>
>>>>> I think all that demonstrates is that HTML is too generic to be useful
>>>>> for your particular task, not that all media types require "special
>>>>> syntactic assumptions" (whether implicit or explicit). The fact that you
>>>>> can make a "list" in HTML using any of a hundred types of tags doesn't
>>>>> mean Atom, for example, also suffers from the same inappropriateness to
>>>>> your task.
>>>>>
>>>>>
>>>>
>>>> I knew you were going to say that :-)
>>>>
>>>> Let's see:
>>>>
>>>> The implementor of the server side chooses to expose the order list as HTML and Atom. In the Atom case, she would write sth like this (in JAX-RS):
>>>>
>>>> @Path("/new-orders")
>>>> class NewOrders {
>>>>
>>>> @GET
>>>> @Produces("text/html")
>>>> public Response newOrdersAsHTML() {
>>>> // ...
>>>> }
>>>>
>>>> @GET
>>>> @Produces("application/atom+xml")
>>>> public Response newOrdersAsAtomFeed() {
>>>>
>>>> }
>>>> }
>>>>
>>>>
>>>> When it comes to implementing (or changing) the Atom-producing method, the server developer need not (must not) be concerned with any client expectations. All that matters is to produce any valid Atom feed[1].
>>>>
>>>> Given that, it would be a perfectly fine implementation to produce an Atom feed such as this:
>>>>
>>>> <feed>
>>>> <entry>
>>>> <summary type="xhtml">
>>>> <xhtml:div xmlns:xhtml="http://www.w3.org/1999/xhtml">
>>>> <xhtml:h1>Order 551-A-1272</xhtml:h1>
>>>> <xhtml:ul>
>>>> <xhtml:li>Device Foo, Item Price: ... </xhtml:li>
>>>> <xhtml:li>Screw Bar, Item Price: ... </xhtml:li>
>>>> </xhtml:ul>
>>>> <xhtml:b>Total: 600 EUR</xhtml:b>
>>>> </xhtml:div>
>>>> </summary>
>>>> <content type="image/png" src="/scan-archive/orders/551-A-1272.png"/>
>>>> </entry>
>>>> <entry>
>>>> <summary type="xhtml">
>>>> <xhtml:div xmlns:xhtml="http://www.w3.org/1999/xhtml">
>>>> <xhtml:h1>Order 551-A-1273</xhtml:h1>
>>>> <xhtml:ul>
>>>> <xhtml:li>Device Foo, Item Price: ... </xhtml:li>
>>>> <xhtml:li>Screw Bar, Item Price: ... </xhtml:li>
>>>> </xhtml:ul>
>>>> <xhtml:b>Total: 600 EUR</xhtml:b>
>>>> </xhtml:div>
>>>> </summary>
>>>> <content type="image/png" src="/scan-archive/orders/551-A-1273.png"/>
>>>> </entry>
>>>> </feed>
>>>>
>>>>
>>>> If you develop a user agent that says Accept: application/atom+xml;type=feed you must be prepared to receive the above feed.
>>>>
>>>> While a usual feed reader (e.g. Apple Mail) would be able to perform its implemented goal based on that feed, our compile-list-of-newly-ordered-items user agent would definitely not be able to do what it is implemented to do.
>>>>
>>>> Two questions arise:
>>>>
>>>> 1. How does the user agent detect that it cannot perform its task (despite having received a perfectly valid answer)?
>>>> 2. What should it do about that?
>>>>
>>>>
>>>> 1.:
>>>> Given the feed above, how do we need to implement the client to report to the user (e.g. someone that at some point looks in a log file or someone that uses the business intelligence application that uses the compiled reports about newly ordered items) that a correct answer was received, that it did indeed contain orders but that the list could not be processed as intended?
>>>>
>>>> First of all, the client trusts the higher level assumption that the resource indeed provides the list of new orders. This is the same kind of trust that any browser has when it follows an <img src=""/> hypermedia control. The server told the user agent something about the referenced resource and the client can reasonably expect that to be true (otherwise we would deal with a broken server and that is not the issue here).
>>>>
>>>> Since the client expects the feed to represent the list of new orders, it is IMHO reasonable to assume that any entry in that feed points to a new order. No entries would mean 'no new orders'. This is IMHO not semantic tunneling through the Atom feed because the assumption is backed by the semantics of the resource as advertised by the server.
>>>>
>>>> The feed apparently contains two entries, hence the user agent can be programmed to understand that there are two new orders to process. When it comes to processing the orders, the user agent will have to realize that neither the summary nor the referenced content is available in a format that is sufficient to extract the ordered items automatically. Hence the user agent has to report an error, eventually leading to some human action to fix the situation.
>>>>
>>>> 2.:
>>>> We reach question #2 once the fact that a problem exists for the user agent has reached a human. What is he supposed to do? There are three options:
>>>>
>>>> a) call the server developer and negotiate a certain format for the Atom feed
>>>> b) adjust the user agent implementation to handle the format received (e.g. parse out the HTML from the summary or OCR the scanned orders)
>>>> c) do nothing except mark the compiled report as 'wrong' or 'unusable'. IOW, accept the fact that the user goal cannot be satisfied
>>>>
>>>> a) Leads to coupling (if it is at all possible/desirable to call the server implementor)
>>>> b) Does not improve the situation because the format can just change again tomorrow
>>>> c) is the honest option but provides no business value
>>>>
>>>> In my opinion, the only thing to really improve the situation is to standardize a format that allows the server developer to actually determine the user agent expectations (capabilities) from the Accept header. If we had application/orderlist (or at least application/atom+xml;profile=orderlist) the server developer would need to either add a new response-producing method or send a 406.
>>>>
>>>> Does that sufficiently illustrate the point?
>>>>
>>>> Jan
>>>>
>>>>
>>>> [1] and of course be true to the server's own statement that the resource represents
>>>> the new orders. It would be bad to serve a list of shipped orders, for example.
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>> Robert Brewer
>>>>> fumanchu@...
>>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>
>>
>>
>>
>>
>>
>>
-----------------------------------
Jan Algermissen, Consultant
NORD Software Consulting
Mail: algermissen@...
Blog: http://www.nordsc.com/blog/
Work: http://www.nordsc.com/
-----------------------------------
On Aug 7, 2010, at 6:13 PM, mike amundsen wrote:
> <snip>
>> We know the user agent cannot handle the HTML-containing/scan-referencing feed because we did not program it that way. And besides that: both HTML and the scan do not make good candidates for machine processing without a hidden contract regarding the structure.
> </snip>
>
> not a problem. this would be easy in a face-to-face, but email has
> its limitations that must be overcome.
>
> OK, "because we did not program it that way" is the key here. IOW, the
> ability to "pluck" the proper content out of a representation (that
> looks like the one you offered as your example) has not been
> programmed into the client. I understand that.
>
> <snip>
> HTML and the scan do not make good candidates for machine processing
> without a hidden contract regarding the structure.
> </snip>
> and
> <snip>
> If the implemented goal is to compile a list of ordered items from
> each order the user agent needs to be able to parse the order
> representation.
> </snip>
>
> and then: "..without a [hidden] contract..."
> All media-type processing is by contract: the contract offered when
> the media-type is documented. I think I hear you saying that the
> contract details for a client that uses Atom would need (in your case)
> _additional_ contract information such as ("here is how you can
> recognize an order list inside an Atom feed", etc.). I can see that
> this is so.
>
> <snip>
> But the problem is really that of formats embedded in formats because
> we cannot implement the client without making assumptions about the
> possible sub-formats. If these assumptions cannot be stated in the
> Accept header, the situation I am dealing with exists.
> </snip>
>
> PROBLEM RESTATEMENT:
> OK, now I think we're getting to the heart of the matter. It would
> seem that the issue here is whether it is possible or reasonable to
> create ways for clients to "know whether they understand this
> representation" even in cases where the Accept header is "insufficient
> as a descriptor" (due to the fact that a well-known generic media-type
> is employed for the representation).
Yes, I think we are (sort of) in alignment now.
>
> PROPOSED SOLUTION:
> I will offer the following that I've done in the past that might address this:
> 1 - for cases where the representation is based on HTML, I use the
> @profile model[1]. This allows me to program clients to look for the
> proper information within the @profile attribute and reject it if
> necessary (invalid representation) or, if the @profile is valid, but
> the body does not conform, pitch another error (invalid body), etc.
Got a reference for [1]? :-)
I (think I) like profiles to express 'bundles of extensions or syntax conventions', though I'd rather use the profile in the Accept header to enable conneg. The profile parameter is already standardized (I think; too late for a pointer). Another approach is the Content-Features header (also: too late for pointers).
Basically, a profile parameter works like media type subclassing: Accept: application/atom+xml;profile=orderlist would mean: I can handle an Atom feed, but only if it conforms to a certain profile. The server can still respond with 200 OK, Content-Type: application/atom+xml if the profile is met, or 406 if it isn't.
However, this is really not much different from minting a media type.
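A minimal sketch of that negotiation, not from the thread: it assumes a hypothetical 'profile' media type parameter and a server that implements only an 'orderlist' profile, with plain string matching standing in for a real Accept-header parser.

```java
// Sketch of profile-based content negotiation. Assumptions: the "profile"
// parameter and the "orderlist" profile name are hypothetical.
class ProfileConneg {
    // Returns the HTTP status the server would send for the given Accept value.
    static int negotiate(String accept) {
        if (!accept.startsWith("application/atom+xml")) {
            return 406; // this sketch only serves Atom
        }
        int idx = accept.indexOf("profile=");
        if (idx < 0) {
            return 200; // plain Atom requested: any valid feed will do
        }
        String profile = accept.substring(idx + "profile=".length());
        // 200 only if the requested profile is one the server actually implements
        return profile.equals("orderlist") ? 200 : 406;
    }

    public static void main(String[] args) {
        System.out.println(negotiate("application/atom+xml"));                   // 200
        System.out.println(negotiate("application/atom+xml;profile=orderlist")); // 200
        System.out.println(negotiate("application/atom+xml;profile=invoices"));  // 406
    }
}
```

The design point: a client that cannot live with arbitrary valid Atom states its stronger expectation in the Accept header, and the server either meets it or refuses honestly with 406 instead of sending a feed the client cannot use.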
>
> 2 - for cases where the presentation is based on XML (Atom, etc.), I
> use standard namespace checking. That means, for my designs, I use
> Atom's extension model rather than embedding custom XML in the content
> element. I have, in the past, used a namespace within the content
> element, but no longer do that.
Ah - so you do not let the server 'negotiate' but let the client fail gracefully if the condition is not met?
>
> These two "hacks" allow me to design representations that use
> well-known formats and still provide a simple test for clients to use
> in order to validate the representation before attempting to process
> it.
>
> Does this make sense?
Yes, but it does not solve the problem of a hidden expectation being in place that to some extent (implicitly) constrains the service owner. What is so bad about just minting a new type? (Especially since all design activity in REST is spent in this area anyhow?)
I think we are just too afraid of minting media types. IMHO there should be one (or a couple) for each domain such as SCM (procurement), ERP, HR, ITIL, BI, etc. You know, the big acronyms of the enterprise IT space.
Jan
>
> mca
> http://amundsen.com/blog/
> http://mamund.com/foaf.rdf#me
>
>
>
>
> On Sat, Aug 7, 2010 at 11:43, Jan Algermissen <algermissen1971@...> wrote:
>>
>> On Aug 7, 2010, at 5:17 PM, mike amundsen wrote:
>>
>>> <snip>
>>> While a usual feed reader (e.g. Apple Mail) would be able to perform
>>> its implemented goal based on that feed, our
>>> compile-list-of-newly-ordered-items user agent would definitely not be
>>> able to do what it is implemented to do.
>>> </snip>
>>>
>>> QUESTION:
>>> how do you "know" this to be true? IOW, what is it about the
>>> representation example you provided that leads you to believe your
>>> "compile-list-of-newly-ordered-items user agent" cannot "do what it is
>>> implemented to do"?
>>
>> Gee - every sentence I leave out leads to confusion. Sorry. What I meant to say was:
>>
>> We know the user agent cannot handle the HTML-containing/scan-referencing feed because we did not program it that way. And besides that: both HTML and the scan do not make good candidates for machine processing without a hidden contract regarding the structure.
>>
>>>
>>> ASSUMPTION:
>>> I think I hear you talking about the need for clients to know ahead of
>>> time whether the representation returned is something they can
>>> process.
>>
>> Yes. That is: meaningfully process according to their implemented goals. If the goal is to turn the controls contained in the representation into something the user can activate (e.g. as browsers or feed readers do), then fine. If the implemented goal is to compile a list of ordered items from each order, the user agent needs to be able to parse the order representation. If it understands that there are orders at all (feed not empty) but does not understand the individual order syntax, it needs to report an error somehow. (Which might just be OK - depending on the goal implementation.)
>>
>> But the problem is really that of formats embedded in formats, because we cannot implement the client without making assumptions about the possible sub-formats. If these assumptions cannot be stated in the Accept header, the situation I am dealing with exists.
>>
>>> If that's the case, that means there must be some information
>>> baked into the client that is used to "check" the representation
>>> returned. The Accept header is one of these methods ("I am a client
>>> that will only be able to understand the following representation
>>> formats").
>>>
>>> Is that what this is about?
>>
>> Yes. It is the question of how specific the Accept header needs to be without causing hidden coupling. (Or whether we should just live with the uncertainty on the client side - which I think we should not.)
>>
>> Jan
>>
>>
>>>
>>> mca
>>> http://amundsen.com/blog/
>>> http://mamund.com/foaf.rdf#me
>>>
>>>
>>>
>>>
>>> On Sat, Aug 7, 2010 at 06:47, Jan Algermissen <algermissen1971@...> wrote:
>>>>
>>>> On Aug 7, 2010, at 12:25 AM, Robert Brewer wrote:
>>>>
>>>>>> Jan Algermissen wrote:
>>>>
>>>>>> The question is whether Accept: text/html is indeed sufficient. Is it
>>>>>> true that the user agent can pursue its implemented goal of compiling
>>>>> a
>>>>>> list of all newly ordered items from any HTML document?
>>>>>>
>>>>>> Suppose the server provides both, application/order and text/html as
>>>>>> representations of the list of new orders. When a user agent comes
>>>>>> along that says Accept: text/html the server can freely assume
>>>>> browser-
>>>>>> like capabilities of the user agent (any HTML will do; even an <ul>
>>>>>> with items referring to scanned PNGs of the orders). IOW, the owner of
>>>>>> the server is free to change the implementation for text/html as long
>>>>>> as a) the resource semantics remain stable (list of new orders) and
>>>>>> b) valid HTML is returned.
>>>>>>
>>>>>> How would the user agent implementation deal with HTML? Special
>>>>>> syntactic assumptions are not allowed (because of Accept: text/html)
>>>>> or
>>>>>> would mean a hidden coupling. How would a user agent distinguish
>>>>>> between an HTML it does not understand but that contains orders (e.g.
>>>>>> the list of scanned order images) and an empty list of orders that is
>>>>>> augmented with some HTML it does not (and need not) understand?
>>>>>>
>>>>>> IMO that is impossible and hence Accept: text/html does not cut it.
>>>>>
>>>>> I think all that demonstrates is that HTML is too generic to be useful
>>>>> for your particular task, not that all media types require "special
>>>>> syntactic assumptions" (whether implicit or explicit). The fact that you
>>>>> can make a "list" in HTML using any of a hundred types of tags doesn't
>>>>> mean Atom, for example, also suffers from the same inappropriateness to
>>>>> your task.
>>>>>
>>>>>
>>>>
>>>> I knew you were going to say that :-)
>>>>
>>>> Let's see:
>>>>
>>>> The implementor of the server side chooses to expose the order list as HTML and Atom. In the Atom case, she would write sth like this (in JAX-RS):
>>>>
>>>> @Path("/new-orders")
>>>> class NewOrders {
>>>>
>>>>     @GET
>>>>     @Produces("text/html")
>>>>     public Response newOrdersAsHTML() {
>>>>         // ...
>>>>     }
>>>>
>>>>     @GET
>>>>     @Produces("application/atom+xml")
>>>>     public Response newOrdersAsAtomFeed() {
>>>>         // ...
>>>>     }
>>>> }
>>>>
>>>>
>>>> When it comes to implementing (or changing) the Atom-producing method, the server developer need not (must not) be concerned with any client expectations. All that matters is to produce any valid Atom feed[1].
>>>>
>>>> Given that, it would be a perfectly fine implementation to produce an Atom feed such as this:
>>>>
>>>> <feed xmlns="http://www.w3.org/2005/Atom">
>>>>   <entry>
>>>>     <summary type="xhtml">
>>>>       <xhtml:div xmlns:xhtml="http://www.w3.org/1999/xhtml">
>>>>         <xhtml:h1>Order 551-A-1272</xhtml:h1>
>>>>         <xhtml:ul>
>>>>           <xhtml:li>Device Foo, Item Price: ... </xhtml:li>
>>>>           <xhtml:li>Screw Bar, Item Price: ... </xhtml:li>
>>>>         </xhtml:ul>
>>>>         <xhtml:b>Total: 600 EUR</xhtml:b>
>>>>       </xhtml:div>
>>>>     </summary>
>>>>     <content type="image/png" src="/scan-archive/orders/551-A-1272.png"/>
>>>>   </entry>
>>>>   <entry>
>>>>     <summary type="xhtml">
>>>>       <xhtml:div xmlns:xhtml="http://www.w3.org/1999/xhtml">
>>>>         <xhtml:h1>Order 551-A-1273</xhtml:h1>
>>>>         <xhtml:ul>
>>>>           <xhtml:li>Device Foo, Item Price: ... </xhtml:li>
>>>>           <xhtml:li>Screw Bar, Item Price: ... </xhtml:li>
>>>>         </xhtml:ul>
>>>>         <xhtml:b>Total: 600 EUR</xhtml:b>
>>>>       </xhtml:div>
>>>>     </summary>
>>>>     <content type="image/png" src="/scan-archive/orders/551-A-1273.png"/>
>>>>   </entry>
>>>> </feed>
>>>>
>>>>
>>>> If you develop a user agent that says Accept: application/atom+xml;type=feed you must be prepared to receive the above feed.
>>>>
>>>> While a usual feed reader (e.g. Apple Mail) would be able to perform its implemented goal based on that feed, our compile-list-of-newly-ordered-items user agent would definitely not be able to do what it is implemented to do.
>>>>
>>>> Two questions arise:
>>>>
>>>> 1. How does the user agent detect that it cannot perform its task (despite having received a perfectly valid answer)?
>>>> 2. What to do about that?
>>>>
>>>>
>>>> 1.:
>>>> Given the feed above, how do we need to implement the client to report to the user (e.g. someone that at some point looks in a log file or someone that uses the business intelligence application that uses the compiled reports about newly ordered items) that a correct answer was received, that it did indeed contain orders but that the list could not be processed as intended?
>>>>
>>>> First of all, the client trusts the higher level assumption that the resource indeed provides the list of new orders. This is the same kind of trust that any browser has when it follows an <img src=""/> hypermedia control. The server told the user agent something about the referenced resource and the client can reasonably expect that to be true (otherwise we would deal with a broken server and that is not the issue here).
>>>>
>>>> Since the client expects the feed to represent the list of new orders, it is IMHO reasonable to assume that any entry in that feed points to a new order. No entries would mean 'no new orders'. This is IMHO not semantic tunneling through the Atom feed because the assumption is backed by the semantics of the resource as advertised by the server.
>>>>
>>>> The feed apparently contains two entries, hence the user agent can be programmed to understand that there are two new orders to process. When it comes to processing the orders, the user agent will have to realize that neither the summary nor the referenced content is available in a format that is sufficient to extract the ordered items automatically. Hence the user agent has to report an error, eventually leading to some human action to fix the situation.
>>>>
>>>> 2.:
>>>> We reach question #2 once the fact that a problem exists for the user agent has reached a human. What is he supposed to do? There are three options:
>>>>
>>>> a) call the server developer and negotiate a certain format for the Atom feed
>>>> b) adjust the user agent implementation to handle the format received (e.g. parse out the HTML from the summary or OCR the scanned orders)
>>>> c) do nothing except mark the compiled report as 'wrong' or 'unusable'. IOW, accept the fact that the user goal cannot be satisfied
>>>>
>>>> a) Leads to coupling (if it is at all possible/desirable to call the server implementor)
>>>> b) Does not improve the situation because the format can just change again tomorrow
>>>> c) is the honest option but provides no business value
>>>>
>>>> In my opinion, the only thing to really improve the situation is to standardize a format that allows the server developer to actually determine the user agent expectations (capabilities) from the Accept header. If we had application/orderlist (or at least application/atom+xml;profile=orderlist) the server developer would need to either add a new response-producing method or send a 406.
>>>>
>>>> Does that sufficiently illustrate the point?
>>>>
>>>> Jan
>>>>
>>>>
>>>> [1] and of course be true to the server's own statement that the resource represents
>>>> the new orders. It would be bad to serve a list of shipped orders, for example.
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>> Robert Brewer
>>>>> fumanchu@...
>>>>>
>>>>
>>>> -----------------------------------
>>>> Jan Algermissen, Consultant
>>>> NORD Software Consulting
>>>>
>>>> Mail: algermissen@...
>>>> Blog: http://www.nordsc.com/blog/
>>>> Work: http://www.nordsc.com/
>>>> -----------------------------------
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> ------------------------------------
>>>>
>>>> Yahoo! Groups Links
>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>>
>>>
>>>
>>
>> -----------------------------------
>> Jan Algermissen, Consultant
>> NORD Software Consulting
>>
>> Mail: algermissen@...
>> Blog: http://www.nordsc.com/blog/
>> Work: http://www.nordsc.com/
>> -----------------------------------
>>
>>
>>
>>
>>
-----------------------------------
Jan Algermissen, Consultant
NORD Software Consulting
Mail: algermissen@...
Blog: http://www.nordsc.com/blog/
Work: http://www.nordsc.com/
-----------------------------------
Jan Algermissen wrote:
>
> How would the user agent implementation deal with HTML? Special
> syntactic assumptions are not allowed (because of Accept: text/html)
> or would mean a hidden coupling.
>
Why would you ever Accept: text/html? An HTML front-end is easy enough to generate from any XML back-end.

I know that I can PUT or POST your custom media type to a given URI because I read it in your hypertext. Whether the user agent can actually formulate a request using the custom media type is another problem. What matters is that your hypertext provides a self-documenting API.

-Eric
@PROFILE
In my haste, I left out references for HTML profiles[1][2]. HTML5
initially did not include profiles, but there is an effort to bring it
back in an expanded way[3].
<snip>
> Basically, a profile parameter works like media type subclassing: Accept: application/atom+xml;profile=orderlist would mean: I can handle an atom feed, but only if it conforms to a certain profile. The server can still respond with 200 OK, Content-Type: application/atom+xml if the profile is met, or 406 if it isn't.
>
> However, this is really not much different from minting a media type.
</snip>
Yes on both counts. I employ @profile when I am constrained (by client
dev, server dev, or both). When possible, I mint new media-types.
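The profile-based negotiation sketched in the quoted passage can be made concrete. The following is a rough, hypothetical sketch of a server deciding between 200 and 406 from a profile parameter in the Accept header; the class name, the naive header parsing, and the bare status-code return are all illustrative, not any real framework's API:

```java
// Hedged sketch: naive Accept-header matching with an optional profile
// parameter, as in "application/atom+xml;profile=orderlist". A real
// server would use its framework's conneg machinery and honor q-values.
public class ProfileNegotiation {

    /** Returns 200 when some Accept alternative names our media type and
     *  either requests no profile or requests the profile we actually
     *  serve; returns 406 otherwise. */
    public static int negotiate(String accept, String servedType, String servedProfile) {
        for (String alternative : accept.split(",")) {
            String[] params = alternative.trim().split(";");
            if (!params[0].trim().equals(servedType)) continue;
            String requestedProfile = null;
            for (int i = 1; i < params.length; i++) {
                String[] kv = params[i].trim().split("=", 2);
                if (kv.length == 2 && kv[0].trim().equals("profile"))
                    requestedProfile = kv[1].trim();
            }
            // No profile asked for, or exactly the one we serve: acceptable.
            if (requestedProfile == null || requestedProfile.equals(servedProfile))
                return 200;
        }
        return 406; // nothing the client listed is something we can satisfy
    }
}
```

So `negotiate("application/atom+xml;profile=orderlist", "application/atom+xml", "orderlist")` yields 200, while asking for a profile the server does not provide yields 406.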
<snip>
> Ah - so you do not let the server 'negotiate' but let the client fail gracefully if the condition is not met?
</snip>
In the implementations I have done, clients and/or servers can
negotiate (using server-driven[4], agent-driven[5], or transparent[6]
[never done that last one]) for the generic type (e.g. app/xml,
app/atom+xml, app/xhtml+xml, text/html, etc.). Once that is
accomplished, the representation is returned w/ the additional
semantic information (@profile or namespace declarations, as
appropriate) and at that point it is left to the client to inspect
these details against that client's own expectations (expectations
that have already been programmed into the client ahead of time based
on documentation supplied when building the client).
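A minimal sketch of that client-side inspection step: after negotiating the generic type, the client checks the returned feed for the extension namespace it was programmed (ahead of time) to understand. The namespace URI and element name below are made-up placeholders, not part of any documented service:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class FeedInspector {
    // Hypothetical extension namespace; in practice this would come from
    // the out-of-band documentation supplied when building the client.
    static final String ORDER_NS = "http://example.com/ns/orders";

    /** True if the representation carries at least one element in the
     *  extension namespace this client knows how to process. */
    public static boolean understands(String atomXml) {
        try {
            DocumentBuilderFactory f = DocumentBuilderFactory.newInstance();
            f.setNamespaceAware(true);
            Document d = f.newDocumentBuilder()
                    .parse(new ByteArrayInputStream(atomXml.getBytes(StandardCharsets.UTF_8)));
            return d.getElementsByTagNameNS(ORDER_NS, "order").getLength() > 0;
        } catch (Exception e) {
            return false; // unparsable representation: treat as not understood
        }
    }
}
```

A client would run this check before attempting to compile its report, and raise the "valid answer, but unprocessable" condition discussed earlier in the thread when it returns false.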
<snip>
> Yes, but it does not solve the problem of a hidden expectation being in place that to some extent (implicitly) constrains the service owner.
</snip>
That is correct. This "hack" (using generic types, with additional
semantic instruction pointers in the representation, following a
pre-defined pattern) is only helpful when clients are written to
expect them and act accordingly.
<snip>
> I think we are just too afraid of minting media types.
</snip>
I agree that _some_ are averse to minting types; I am not one of
them<g>. As I've already mentioned, I employ this technique when I am
constrained from minting a new type for the target implementation.
<snip>
IMHO there should be one (or a couple) for each domain such as SCM
(procurement), ERP, HR, ITIL, BI, etc. You know, the big acronyms of
the enterprise IT space.
</snip>
I think that's a fine idea. I think you are working along these lines
already, no?
[1] http://www.w3.org/TR/html401/struct/global.html#profiles
[2] http://gmpg.org/xmdp/
[3] http://dev.w3.org/html5/profiles/source/
[4] http://www.w3.org/Protocols/rfc2616/rfc2616-sec12.html#sec12.1
[5] http://www.w3.org/Protocols/rfc2616/rfc2616-sec12.html#sec12.2
[6] http://www.w3.org/Protocols/rfc2616/rfc2616-sec12.html#sec12.3
mca
http://amundsen.com/blog/
http://mamund.com/foaf.rdf#me
On Sat, Aug 7, 2010 at 17:47, Jan Algermissen <algermissen1971@...> wrote:
>
> On Aug 7, 2010, at 6:13 PM, mike amundsen wrote:
>
>> <snip>
>>> We know the user agent cannot handle the HTML-containing/scan-referencing feed because we did not program it that way. And besides that: both, HTML and the scan do not make good candidates for machine processing without a hidden contract regarding the structure.
>> </snip>
>>
>> Not a problem. This would be easy in a face-to-face, but email has
>> its limitations that must be overcome.
>>
>> OK, "because we did not program it that way" is the key here. IOW, the
>> ability to "pluck" the proper content out of a representation (that
>> looks like the one you offered as your example) has not been
>> programmed into the client. I understand that.
>>
>> <snip>
>> HTML and the scan do not make good candidates for machine processing
>> without a hidden contract regarding the structure.
>> </snip>
>> and
>> <snip>
>> If the implemented goal is to compile a list of ordered items from
>> each order the user agent needs to be able to parse the order
>> representation.
>> </snip>
>>
>> and then: "..without a [hidden] contract..."
>> All media-type processing is by contract: the contract offered when
>> the media-type is documented. I think I hear you saying that the
>> contract details for a client that uses Atom would need (in your case)
>> _additional_ contract information such as ("here is how you can
>> recognize an order list inside an Atom feed", etc.). I can see that
>> this is so.
>>
>> <snip>
>> But the problem is really that of formats embedded in formats because
>> we cannot implement the client without making assumptions about the
>> possible sub-formats. If these assumptions cannot be stated in the
>> Accept header, the situation I am dealing with exists.
>> </snip>
>>
>> PROBLEM RESTATEMENT:
>> OK, now I think we're getting to the heart of the matter. It would
>> seem that the issue here is whether it is possible or reasonable to
>> create ways for clients to "know whether they understand this
>> representation" even in cases where the Accept header is "insufficient
>> as a descriptor" (due to the fact that a well-known generic media-type
>> is employed for the representation).
>
> Yes, I think we are (sort of) in alignment now.
>
>
>>
>> PROPOSED SOLUTION:
>> I will offer the following that I've done in the past that might address this:
>> 1 - for cases where the representation is based on HTML, I use the
>> @profile model[1]. This allows me to program clients to look for the
>> proper information within the @profile attribute and reject it if
>> necessary (invalid representation) or, if the @profile is valid, but
>> the body does not conform, pitch another error (invalid body), etc.
>
> Got a reference for [1]? :-)
>
> I (think I) like profiles to express 'bundles of extensions or syntax conventions'. Though I'd rather use the profile in the Accept header to enable conneg. The profile parameter is already standardized (I think; too late for a pointer). Another approach is the Content-Features header (also: too late for pointers).
>
> Basically, a profile parameter works like media type subclassing: Accept: application/atom+xml;profile=orderlist would mean: I can handle an atom feed, but only if it conforms to a certain profile. The server can still respond with 200 OK, Content-Type: application/atom+xml if the profile is met, or 406 if it isn't.
>
> However, this is really not much different from minting a media type.
>
>
>>
>> 2 - for cases where the presentation is based on XML (Atom, etc.), I
>> use standard namespace checking. That means, for my designs, I use
>> Atom's extension model rather than embedding custom XML in the content
>> element. I have, in the past, used a namespace within the content
>> element, but no longer do that.
>
> Ah - so you do not let the server 'negotiate' but let the client fail gracefully if the condition is not met?
>
>
>>
>> These two "hacks" allow me to design representations that use
>> well-known formats and still provide a simple test for clients to use
>> in order to validate the representation before attempting to process
>> it.
>>
>> Does this make sense?
>
> Yes, but it does not solve the problem of a hidden expectation being in place that to some extent (implicitly) constrains the service owner. What is so bad about just minting a new type? (Especially since all design activity in REST is spent in this area anyhow?)
>
> I think we are just too afraid of minting media types. IMHO there should be one (or a couple) for each domain such as SCM (procurement), ERP, HR, ITIL, BI, etc. You know, the big acronyms of the enterprise IT space.
>
> Jan
>
>
>>
>> mca
>> http://amundsen.com/blog/
>> http://mamund.com/foaf.rdf#me
>>
>>
>>
>>
>> On Sat, Aug 7, 2010 at 11:43, Jan Algermissen <algermissen1971@...> wrote:
>>>
>>> On Aug 7, 2010, at 5:17 PM, mike amundsen wrote:
>>>
>>>> <snip>
>>>> While a usual feed reader (e.g. Apple Mail) would be able to perform
>>>> its implemented goal based on that feed, our
>>>> compile-list-of-newly-ordered-items user agent would definitely not be
>>>> able to do what it is implemented to do.
>>>> </snip>
>>>>
>>>> QUESTION:
>>>> how do you "know" this to be true? IOW, what is it about the
>>>> representation example you provided that leads you to believe your
>>>> "compile-list-of-newly-ordered-items user agent" cannot "do what it is
>>>> implemented to do"?
>>>
>>> Gee - every sentence I leave out leads to confusion. Sorry. What I meant to say was:
>>>
>>> We know the user agent cannot handle the HTML-containing/scan-referencing feed because we did not program it that way. And besides that: both HTML and the scan do not make good candidates for machine processing without a hidden contract regarding the structure.
>>>
>>>>
>>>> ASSUMPTION:
>>>> I think I hear you talking about the need for clients to know ahead of
>>>> time whether the representation returned is something they can
>>>> process.
>>>
>>> Yes. That is: meaningfully process according to their implemented goals. If the goal is to turn the controls contained in the representation into something the user can activate (e.g. as browsers or feed readers do) then fine. If the implemented goal is to compile a list of ordered items from each order, the user agent needs to be able to parse the order representation. If it understands that there are orders at all (feed not empty) but it does not understand the individual order syntax, it needs to report an error somehow. (Which might just be ok - depending on the goal implementation.)
>>>
>>> But the problem is really that of formats embedded in formats because we cannot implement the client without making assumptions about the possible sub-formats. If these assumptions cannot be stated in the Accept header, the situation I am dealing with exists.
>>>
>>>> If that's the case, that means there must be some information
>>>> baked into the client that is used to "check" the representation
>>>> returned. The Accept header is one of these methods ("I am a client
>>>> that will only be able to understand the following representation
>>>> formats").
>>>>
>>>> Is that what this is about?
>>>
>>> Yes. It is the question of how specific the Accept header needs to be without causing hidden coupling. (Or whether we should just live with the uncertainty on the client side - which I think we should not.)
>>>
>>> Jan
>>>
>>>
>>>>
>>>> mca
>>>> http://amundsen.com/blog/
>>>> http://mamund.com/foaf.rdf#me
>>>>
>>>>
>>>>
>>>>
>>>> On Sat, Aug 7, 2010 at 06:47, Jan Algermissen <algermissen1971@...> wrote:
>>>>>
>>>>> On Aug 7, 2010, at 12:25 AM, Robert Brewer wrote:
>>>>>
>>>>>>> Jan Algermissen wrote:
>>>>>
>>>>>>> The question is whether Accept: text/html is indeed sufficient. Is it
>>>>>>> true that the user agent can pursue its implemented goal of compiling
>>>>>> a
>>>>>>> list of all newly ordered items from any HTML document?
>>>>>>>
>>>>>>> Suppose the server provides both, application/order and text/html as
>>>>>>> representations of the list of new orders. When a user agent comes
>>>>>>> along that says Accept: text/html the server can freely assume
>>>>>> browser-
>>>>>>> like capabilities of the user agent (any HTML will do; even an <ul>
>>>>>>> with items referring to scanned PNGs of the orders). IOW, the owner of
>>>>>>> the server is free to change the implementation for text/html as long
>>>>>>> as a) the resource semantics remain stable (list of new orders) and
>>>>>>> b) valid HTML is returned.
>>>>>>>
>>>>>>> How would the user agent implementation deal with HTML? Special
>>>>>>> syntactic assumptions are not allowed (because of Accept: text/html)
>>>>>> or
>>>>>>> would mean a hidden coupling. How would a user agent distinguish
>>>>>>> between an HTML it does not understand but that contains orders (e.g.
>>>>>>> the list of scanned order images) and an empty list of orders that is
>>>>>>> augmented with some HTML it does not (and need not) understand?
>>>>>>>
>>>>>>> IMO that is impossible and hence Accept: text/html does not cut it.
>>>>>>
>>>>>> I think all that demonstrates is that HTML is too generic to be useful
>>>>>> for your particular task, not that all media types require "special
>>>>>> syntactic assumptions" (whether implicit or explicit). The fact that you
>>>>>> can make a "list" in HTML using any of a hundred types of tags doesn't
>>>>>> mean Atom, for example, also suffers from the same inappropriateness to
>>>>>> your task.
>>>>>>
>>>>>>
>>>>>
>>>>> I knew you were going to say that :-)
>>>>>
>>>>> Let's see:
>>>>>
>>>>> The implementor of the server side chooses to expose the order list as HTML and Atom. In the Atom case, she would write something like this (in JAX-RS):
>>>>>
>>>>> @Path("/new-orders")
>>>>> class NewOrders {
>>>>>
>>>>>     @GET
>>>>>     @Produces("text/html")
>>>>>     public Response newOrdersAsHTML() {
>>>>>         // ...
>>>>>     }
>>>>>
>>>>>     @GET
>>>>>     @Produces("application/atom+xml")
>>>>>     public Response newOrdersAsAtomFeed() {
>>>>>         // ...
>>>>>     }
>>>>> }
>>>>>
>>>>>
>>>>> When it comes to implementing (or changing) the Atom-producing method, the server developer need not (must not) be concerned with any client expectations. All that matters is to produce any valid Atom feed[1].
>>>>>
>>>>> Given that, it would be a perfectly fine implementation to produce an Atom feed such as this:
>>>>>
>>>>> <feed xmlns="http://www.w3.org/2005/Atom">
>>>>>   <entry>
>>>>>     <summary type="xhtml">
>>>>>       <xhtml:div xmlns:xhtml="http://www.w3.org/1999/xhtml">
>>>>>         <xhtml:h1>Order 551-A-1272</xhtml:h1>
>>>>>         <xhtml:ul>
>>>>>           <xhtml:li>Device Foo, Item Price: ... </xhtml:li>
>>>>>           <xhtml:li>Screw Bar, Item Price: ... </xhtml:li>
>>>>>         </xhtml:ul>
>>>>>         <xhtml:b>Total: 600 EUR</xhtml:b>
>>>>>       </xhtml:div>
>>>>>     </summary>
>>>>>     <content type="image/png" src="/scan-archive/orders/551-A-1272.png"/>
>>>>>   </entry>
>>>>>   <entry>
>>>>>     <summary type="xhtml">
>>>>>       <xhtml:div xmlns:xhtml="http://www.w3.org/1999/xhtml">
>>>>>         <xhtml:h1>Order 551-A-1273</xhtml:h1>
>>>>>         <xhtml:ul>
>>>>>           <xhtml:li>Device Foo, Item Price: ... </xhtml:li>
>>>>>           <xhtml:li>Screw Bar, Item Price: ... </xhtml:li>
>>>>>         </xhtml:ul>
>>>>>         <xhtml:b>Total: 600 EUR</xhtml:b>
>>>>>       </xhtml:div>
>>>>>     </summary>
>>>>>     <content type="image/png" src="/scan-archive/orders/551-A-1273.png"/>
>>>>>   </entry>
>>>>> </feed>
>>>>>
>>>>>
>>>>> If you develop a user agent that says Accept: application/atom+xml;type=feed you must be prepared to receive the above feed.
>>>>>
>>>>> While a usual feed reader (e.g. Apple Mail) would be able to perform its implemented goal based on that feed, our compile-list-of-newly-ordered-items user agent would definitely not be able to do what it is implemented to do.
>>>>>
>>>>> Two questions arise:
>>>>>
>>>>> 1. How does the user agent detect that it cannot perform its task (despite having received a perfectly valid answer)?
>>>>> 2. What to do about that?
>>>>>
>>>>>
>>>>> 1.:
>>>>> Given the feed above, how do we need to implement the client to report to the user (e.g. someone that at some point looks in a log file or someone that uses the business intelligence application that uses the compiled reports about newly ordered items) that a correct answer was received, that it did indeed contain orders but that the list could not be processed as intended?
>>>>>
>>>>> First of all, the client trusts the higher level assumption that the resource indeed provides the list of new orders. This is the same kind of trust that any browser has when it follows an <img src=""/> hypermedia control. The server told the user agent something about the referenced resource and the client can reasonably expect that to be true (otherwise we would deal with a broken server and that is not the issue here).
>>>>>
>>>>> Since the client expects the feed to represent the list of new orders, it is IMHO reasonable to assume that any entry in that feed points to a new order. No entries would mean 'no new orders'. This is IMHO not semantic tunneling through the Atom feed because the assumption is backed by the semantics of the resource as advertised by the server.
>>>>>
>>>>> The feed apparently contains two entries, hence the user agent can be programmed to understand that there are two new orders to process. When it comes to processing the orders, the user agent will have to realize that neither the summary nor the referenced content is available in a format that is sufficient to extract the ordered items automatically. Hence the user agent has to report an error, eventually leading to some human action to fix the situation.
>>>>>
>>>>> 2.:
>>>>> We reach question #2 once the fact that a problem exists for the user agent has reached a human. What is he supposed to do? There are three options:
>>>>>
>>>>> a) call the server developer and negotiate a certain format for the Atom feed
>>>>> b) adjust the user agent implementation to handle the format received (e.g. parse out the HTML from the summary or OCR the scanned orders)
>>>>> c) do nothing except mark the compiled report as 'wrong' or 'unusable'. IOW, accept the fact that the user goal cannot be satisfied
>>>>>
>>>>> a) Leads to coupling (if it is at all possible/desirable to call the server implementor)
>>>>> b) Does not improve the situation because the format can just change again tomorrow
>>>>> c) is the honest option but provides no business value
>>>>>
>>>>> In my opinion, the only thing to really improve the situation is to standardize a format that allows the server developer to actually determine the user agent expectations (capabilities) from the Accept header. If we had application/orderlist (or at least application/atom+xml;profile=orderlist) the server developer would need to either add a new response-producing method or send a 406.
>>>>>
>>>>> Does that sufficiently illustrate the point?
>>>>>
>>>>> Jan
>>>>>
>>>>>
>>>>> [1] and of course be true to the server's own statement that the resource represents
>>>>> the new orders. It would be bad to serve a list of shipped orders, for example.
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>> Robert Brewer
>>>>>> fumanchu@...
>>>>>>
>>>>>
>>>>> -----------------------------------
>>>>> Jan Algermissen, Consultant
>>>>> NORD Software Consulting
>>>>>
>>>>> Mail: algermissen@...
>>>>> Blog: http://www.nordsc.com/blog/
>>>>> Work: http://www.nordsc.com/
>>>>> -----------------------------------
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>
>>> -----------------------------------
>>> Jan Algermissen, Consultant
>>> NORD Software Consulting
>>>
>>> Mail: algermissen@...
>>> Blog: http://www.nordsc.com/blog/
>>> Work: http://www.nordsc.com/
>>> -----------------------------------
>>>
>>>
>>>
>>>
>>>
>
> -----------------------------------
> Jan Algermissen, Consultant
> NORD Software Consulting
>
> Mail: algermissen@...
> Blog: http://www.nordsc.com/blog/
> Work: http://www.nordsc.com/
> -----------------------------------
>
>
>
>
>
<snip>
Maybe he wants to serve it as normal, plain Atom, too?
</snip>
If you have an implementation that wants to serve both "plain" atom
and "profiled" atom, my suggestion will not work. To date, I've not
encountered this difficulty.
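For what it's worth, one hedged sketch of how serving both could look: pick the profiled variant only when the client explicitly asks for it, and fall back to plain Atom otherwise. The string matching here is deliberately naive and the names are illustrative:

```java
// Illustrative only: choosing between a "plain" and a "profiled" Atom
// representation from the Accept header. A real implementation would
// parse the header properly and honor q-values.
public class VariantPicker {
    public static String pick(String accept) {
        if (accept.contains("application/atom+xml")) {
            return accept.contains("profile=orderlist") ? "profiled" : "plain";
        }
        return "none"; // candidate for a 406 response
    }
}
```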
<snip>
Why would the server developer care what use the client makes of the
representation he provides?
</snip>
I have no idea why that would happen and this, too, has not come up
in my implementations. Clients are out of my control. If they want to
ignore the media-type instructions provided by the server developers,
that's no business of mine, etc. Again, this has not been the case in
implementations I've worked with.
<snip>
This kind of agreement is IMHO exactly what REST tries to eliminate (or
make explicit as a media type) because it causes maintenance
nightmares.
</snip>
As I've mentioned previously, the only difference in my compromise
implementations is that additional semantics are documented on top of
the ones defined by the generic type. Yes, it's a bummer that clients
must know an additional set of semantic rules. Yes, it's unfortunate
that I am sometimes working with clients and/or servers that constrain
me from minting a new type. These things happen and I do the best I
can.
<snip>
> Uh, oh. The magic client?
...
> You'll get an A++ from me when that thingy is out :-)
>
> [But seriously: can you sketch a 'solution'?]
</snip>
My only working examples right now are implemented as client-side XSLT
transformations using common Web browsers[1]. There are several
problems w/ the example I point to here (shortcomings, missing
implementation details, limited to text-based, human-driven
renderings, etc.) so I'll not go into the unsavory details here.
An alternate version I've been working on uses FF XUL-based plug-ins
that treat the in-browser document based on a "DSL" model understood
by the plug-in. It is in even worse condition than my transform
example.
Finally, I think Guilherme Silveira has made very good progress along
these lines w/ his RESTfulie client[2] and I encourage anyone
interested in this line of work to check out his blog and the
RESTfulie source code.
[1] http://www.amundsen.com/hypermedia/examples/doc.xml
[2] http://blog.caelumobjects.com/2010/05/27/minimize-coupling-with-rest-processes/
mca
http://amundsen.com/blog/
http://mamund.com/foaf.rdf#me
On Sat, Aug 7, 2010 at 17:55, Jan Algermissen <algermissen1971@...> wrote:
>
> On Aug 7, 2010, at 6:24 PM, mike amundsen wrote:
>
>> <snip>
>> When the server developer implements for application/atom+xml;
>> type=feed it simply has no idea what special assumptions some client
>> will make. As long as the service returns valid Atom it will be a
>> correct implementation. Any side-agreements between client and server
>> violate what REST tries to achieve.
>> </snip>
>>
>> I understand that the case you describe here is _possible_ but I am
>> not convinced it is _reasonable_. IOW, I do not accept that a
>> server/developer MUST act in the way you describe. I think a server
>> developer can be more responsible than what you characterize here and
>> can provide additional media-type instruction to any client that
>> wishes to participate. The server developer can provide details on an
>> Atom Extension employed or additional information on how clients can
>> recognize sub-types within a message (as I describe in my previous
>> message).
>
> Sure, but how would he know? Maybe he wants to serve it as normal, plain Atom, too? Why would the server developer care what use the client makes of the representation he provides? This kind of agreement is IMHO exactly what REST tries to eliminate (or make explicit as a media type) because it causes maintenance nightmares. Question: is there any promise by Amazon about what the HTML of the site looks like? No. And for good reason. They do not want clients to start making assumptions beyond text/html.
>
>
>>
>> Yes, I think my suggestion is a compromise for cases where the
>> well-known type lacks the proper semantics, but I assert this
>> compromise is reasonable and valid. The next reasonable alternative
>> (in cases where this compromise is not acceptable) is to develop a
>> custom media-type and instruct the client developers to "learn" the
>> details of that custom media type and code that knowledge into the
>> client ahead of time. I've done both and find merit in both.
>
> I see your point. The 'server promises some out of band stuff' approach leads to coupling of clients to *that* server though. Something REST aims to avoid.
>
> However, I guess that media types will likely be derived from experience that started based on out-of-band promises in the first place. So the approach is definitely reasonable.
>
>>
>> SPECULATION:
>> I think, long-term, there is another possible solution; one that I
>> have been working on in tiny private examples lately. That solution is
>> to create a way to make "understanding a new media type" easier for
>> state-machine clients.
>
> Uh, oh. The magic client?
>
>> IOW, a way that clients can "learn" the
>> semantic rules of a new type by installing a media-type definition (in
>> the same manner that users install "plug-ins" and "add-ons" in their
>> common Web browsers today). I have no serious examples to show for
>> this right now, but am encouraged that this is do-able and has good
>> long-term value.
>
> You'll get an A++ from me when that thingy is out :-)
>
> [But seriously: can you sketch a 'solution'?]
>
>
> Jan
>
>>
>> mca
>> http://amundsen.com/blog/
>> http://mamund.com/foaf.rdf#me
>>
>>
>>
>>
>> On Sat, Aug 7, 2010 at 11:57, Jan Algermissen <algermissen1971@...> wrote:
>>>
>>> On Aug 7, 2010, at 5:17 PM, mike amundsen wrote:
>>>
>>>> <snip>
>>>> While a usual feed reader (e.g. Apple Mail) would be able to perform
>>>> its implemented goal based on that feed, our
>>>> compile-list-of-newly-ordered-items user agent would definitely not be
>>>> able to do what it is implemented to do.
>>>> </snip>
>>>>
>>>> QUESTION:
>>>> how do you "know" this to be true? IOW, what is it about the
>>>> representation example you provided that leads you to believe your
>>>> "compile-list-of-newly-ordered-items user agent" cannot "do what it is
>>>> implemented to do"?
>>>>
>>>> ASSUMPTION:
>>>> I think I hear you talking about the need for clients to know ahead of
>>>> time whether the representation returned is something they can
>>>> process. If that's the case, that means there must be some information
>>>> baked into the client that is used to "check" the representation
>>>> returned. The Accept header is one of these methods ("I am a client
>>>> that will only be able to understand the following representation
>>>> formats").
>>>
>>> I think it is important to be explicit about what "understand" means.
>>>
>>> I'd rather say that Accept means: "I am a user agent that will only be able to sensibly perform its implemented goal if the representation has one of these media types".
>>>
>>>
>>> Note that it all depends on the implemented goal. If that goal is to "compile a list of newly ordered items from those orders that I happen to be able to parse and report the number of unparsable orders" then that would work just fine with Accept: application/atom+xml;type=feed.
>>>
>>> However, we must then understand that the eventual application state exposed to the user (the compiled list/report, maybe stuffed into some database) can only reflect what the user agent was able to make of the feed. IOW, the report might look like this:
>>>
>>> New Orders as of date foo: 201
>>> Processable Orders: 11
>>> Summary of items in those 11: [some list of items here]
>>> Unprocessable orders: 190 [reference to temporary filesystem where they can be reviewed]
>>>
>>> (This might, BTW, be just what we want.)
>>>
>>>
>>> You can turn all this around and say:
>>>
>>> When the server developer implements for application/atom+xml;type=feed, she simply has no idea what special assumptions some client will make. As long as the service returns valid Atom it will be a correct implementation. Any side-agreements between client and server violate what REST tries to achieve.
>>>
>>>
>>> Jan
>>>
>>>
>>>
>>>
>>>
>>>>
>>>> Is that what this is about?
>>>>
>>>> mca
>>>> http://amundsen.com/blog/
>>>> http://mamund.com/foaf.rdf#me
>>>>
>>>>
>>>>
>>>>
>>>> On Sat, Aug 7, 2010 at 06:47, Jan Algermissen <algermissen1971@...> wrote:
>>>>>
>>>>> On Aug 7, 2010, at 12:25 AM, Robert Brewer wrote:
>>>>>
>>>>>>> Jan Algermissen wrote:
>>>>>
>>>>>>> The question is whether Accept: text/html is indeed sufficient. Is it
>>>>>>> true that the user agent can pursue its implemented goal of compiling
>>>>>> a
>>>>>>> list of all newly ordered items from any HTML document?
>>>>>>>
>>>>>>> Suppose the server provides both, application/order and text/html as
>>>>>>> representations of the list of new orders. When a user agent comes
>>>>>>> along that says Accept: text/html the server can freely assume
>>>>>> browser-
>>>>>>> like capabilities of the user agent (any HTML will do; even an <ul>
>>>>>>> with items referring to scanned PNGs of the orders). IOW, the owner of
>>>>>>> the server is free to change the implementation for text/html as long
>>>>>>> as a) the resource semantics remain stable (list of new orders) and
>>>>>>> b) valid HTML is returned.
>>>>>>>
>>>>>>> How would the user agent implementation deal with HTML? Special
>>>>>>> syntactic assumptions are not allowed (because of Accept: text/html)
>>>>>> or
>>>>>>> would mean a hidden coupling. How would a user agent distinguish
>>>>>>> between an HTML it does not understand but that contains orders (e.g.
>>>>>>> the list of scanned order images) and an empty list of orders that is
>>>>>>> augmented with some HTML it does not (and need not) understand?
>>>>>>>
>>>>>>> IMO that is impossible and hence Accept: text/html does not cut it.
>>>>>>
>>>>>> I think all that demonstrates is that HTML is too generic to be useful
>>>>>> for your particular task, not that all media types require "special
>>>>>> syntactic assumptions" (whether implicit or explicit). The fact that you
>>>>>> can make a "list" in HTML using any of a hundred types of tags doesn't
>>>>>> mean Atom, for example, also suffers from the same inappropriateness to
>>>>>> your task.
>>>>>>
>>>>>>
>>>>>
>>>>> I knew you were going to say that :-)
>>>>>
>>>>> Let's see:
>>>>>
>>>>> The implementor of the server side chooses to expose the order list as HTML and Atom. In the Atom case, she would write something like this (in JAX-RS):
>>>>>
>>>>> @Path("/new-orders")
>>>>> class NewOrders {
>>>>>
>>>>>     @GET
>>>>>     @Produces("text/html")
>>>>>     public Response newOrdersAsHTML() {
>>>>>         // ...
>>>>>     }
>>>>>
>>>>>     @GET
>>>>>     @Produces("application/atom+xml")
>>>>>     public Response newOrdersAsAtomFeed() {
>>>>>         // ...
>>>>>     }
>>>>> }
>>>>>
>>>>>
>>>>> When it comes to implementing (or changing) the Atom-producing method, the server developer need not (must not) be concerned with any client expectations. All that matters is to produce any valid Atom feed[1].
>>>>>
>>>>> Given that, it would be a perfectly fine implementation to produce an Atom feed such as this:
>>>>>
>>>>> <feed>
>>>>>   <entry>
>>>>>     <summary type="xhtml">
>>>>>       <xhtml:div xmlns:xhtml="http://www.w3.org/1999/xhtml">
>>>>>         <xhtml:h1>Order 551-A-1272</xhtml:h1>
>>>>>         <xhtml:ul>
>>>>>           <xhtml:li>Device Foo, Item Price: ... </xhtml:li>
>>>>>           <xhtml:li>Screw Bar, Item Price: ... </xhtml:li>
>>>>>         </xhtml:ul>
>>>>>         <xhtml:b>Total: 600 EUR</xhtml:b>
>>>>>       </xhtml:div>
>>>>>     </summary>
>>>>>     <content type="image/png" src="/scan-archive/orders/551-A-1272.png"/>
>>>>>   </entry>
>>>>>   <entry>
>>>>>     <summary type="xhtml">
>>>>>       <xhtml:div xmlns:xhtml="http://www.w3.org/1999/xhtml">
>>>>>         <xhtml:h1>Order 551-A-1273</xhtml:h1>
>>>>>         <xhtml:ul>
>>>>>           <xhtml:li>Device Foo, Item Price: ... </xhtml:li>
>>>>>           <xhtml:li>Screw Bar, Item Price: ... </xhtml:li>
>>>>>         </xhtml:ul>
>>>>>         <xhtml:b>Total: 600 EUR</xhtml:b>
>>>>>       </xhtml:div>
>>>>>     </summary>
>>>>>     <content type="image/png" src="/scan-archive/orders/551-A-1273.png"/>
>>>>>   </entry>
>>>>> </feed>
>>>>>
>>>>>
>>>>> If you develop a user agent that says Accept: application/atom+xml;type=feed you must be prepared to receive the above feed.
>>>>>
>>>>> While a usual feed reader (e.g. Apple Mail) would be able to perform its implemented goal based on that feed, our compile-list-of-newly-ordered-items user agent would definitely not be able to do what it is implemented to do.
>>>>>
>>>>> Two questions arise:
>>>>>
>>>>> 1. How does the user agent detect that it cannot perform its task (despite having received a perfectly valid answer)?
>>>>> 2. What can be done about that?
>>>>>
>>>>>
>>>>> 1.:
>>>>> Given the feed above, how do we need to implement the client so that it reports to the user (e.g. someone who at some point looks at a log file, or someone who uses the business-intelligence application built on the compiled reports about newly ordered items) that a correct answer was received, that it did indeed contain orders, but that the list could not be processed as intended?
>>>>>
>>>>> First of all, the client trusts the higher level assumption that the resource indeed provides the list of new orders. This is the same kind of trust that any browser has when it follows an <img src=""/> hypermedia control. The server told the user agent something about the referenced resource and the client can reasonably expect that to be true (otherwise we would deal with a broken server and that is not the issue here).
>>>>>
>>>>> Since the client expects the feed to represent the list of new orders, it is IMHO reasonable to assume that any entry in that feed points to a new order. No entries would mean 'no new orders'. This is IMHO not semantic tunneling through the Atom feed because the assumption is backed by the semantics of the resource as advertised by the server.
>>>>>
>>>>> The feed apparently contains two entries, hence the user agent can be programmed to understand that there are two new orders to process. When it comes to processing the orders, the user agent will have to realize that neither the summary nor the referenced content is available in a format that is sufficient to extract the ordered items automatically. Hence the user agent has to report an error, eventually leading to some human action to fix the situation.
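Jan's detection logic can be sketched as follows: count the entries (each one is a new order, per the advertised resource semantics), then check whether each order is available in some format the client can actually extract items from. This is only an illustration, not code from the thread; the class name and the `application/order+xml` type are hypothetical.

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.List;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class OrderFeedCheck {

    // Media types this hypothetical client can extract order items from.
    static final List<String> PROCESSABLE = List.of("application/order+xml");

    // Returns { totalEntries, processableEntries } for an Atom feed string.
    // total > processable is the error condition that must reach a human.
    static int[] tally(String atomXml) throws Exception {
        DocumentBuilderFactory f = DocumentBuilderFactory.newInstance();
        f.setNamespaceAware(true);
        Document doc = f.newDocumentBuilder()
                .parse(new ByteArrayInputStream(atomXml.getBytes(StandardCharsets.UTF_8)));
        NodeList entries = doc.getElementsByTagNameNS("*", "entry");
        int processable = 0;
        for (int i = 0; i < entries.getLength(); i++) {
            Element entry = (Element) entries.item(i);
            NodeList contents = entry.getElementsByTagNameNS("*", "content");
            for (int j = 0; j < contents.getLength(); j++) {
                String type = ((Element) contents.item(j)).getAttribute("type");
                if (PROCESSABLE.contains(type)) {
                    processable++;
                    break;
                }
            }
        }
        return new int[] { entries.getLength(), processable };
    }
}
```

Run against the feed above (whose content is only available as image/png), this would report two orders and zero processable ones, which is exactly the situation Jan says has to be escalated.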
>>>>>
>>>>> 2.:
>>>>> We reach question #2 once the fact that a problem exists for the user agent has reached a human. What is he supposed to do? There are three options:
>>>>>
>>>>> a) call the server developer and negotiate a certain format for the Atom feed
>>>>> b) adjust the user agent implementation to handle the format received (e.g. parse out the HTML from the summary or OCR the scanned orders)
>>>>> c) do nothing except mark the compiled report as 'wrong' or 'unusable'. IOW, accept the fact that the user goal cannot be satisfied.
>>>>>
>>>>> a) Leads to coupling (if it is at all possible/desirable to call the server implementor)
>>>>> b) Does not improve the situation because the format can just change again tomorrow
>>>>> c) is the honest option but provides no business value
>>>>>
>>>>> In my opinion, the only thing to really improve the situation is to standardize a format that allows the server developer to actually determine the user agent expectations (capabilities) from the Accept header. If we had application/orderlist (or at least application/atom+xml;profile=orderlist) the server developer would need to either add a new response-producing method or send a 406.
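The 406 behaviour Jan proposes can be sketched outside of JAX-RS as a plain matching function. The `application/atom+xml;profile=orderlist` type is Jan's hypothetical example, and real Accept handling (q-values, wildcards, whitespace inside parameters) is considerably more involved; this only shows the decision point.

```java
import java.util.List;

public class OrderListConneg {

    // Representations this hypothetical server can produce for /new-orders.
    static final List<String> SUPPORTED = List.of(
            "application/atom+xml;profile=orderlist",
            "text/html");

    // Returns the media type to serve, or null to signal 406 Not Acceptable.
    static String negotiate(String acceptHeader) {
        for (String offered : acceptHeader.split(",")) {
            String wanted = offered.trim();
            if (SUPPORTED.contains(wanted)) {
                return wanted;
            }
        }
        return null; // caller responds 406
    }
}
```

With this in place, a client asking only for plain `application/atom+xml` gets a 406 rather than a feed it cannot process, which is the visibility Jan is arguing for.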
>>>>>
>>>>> Does that sufficiently illustrate the point?
>>>>>
>>>>> Jan
>>>>>
>>>>>
>>>>> [1] and of course be true to the server's own statement that the resource represents
>>>>> the new orders. It would be bad to serve a list of shipped orders, for example.
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>> Robert Brewer
>>>>>> fumanchu@...
>>>>>>
>>>>>
>>>>> -----------------------------------
>>>>> Jan Algermissen, Consultant
>>>>> NORD Software Consulting
>>>>>
>>>>> Mail: algermissen@...
>>>>> Blog: http://www.nordsc.com/blog/
>>>>> Work: http://www.nordsc.com/
>>>>> -----------------------------------
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> ------------------------------------
>>>>>
>>>>> Yahoo! Groups Links
>>>>>
>>>>>
>>>>>
>>>>>
>>>
>>> -----------------------------------
>>> Jan Algermissen, Consultant
>>> NORD Software Consulting
>>>
>>> Mail: algermissen@...
>>> Blog: http://www.nordsc.com/blog/
>>> Work: http://www.nordsc.com/
>>> -----------------------------------
>>>
>>>
>>>
>>>
>>>
>>>
>
> -----------------------------------
> Jan Algermissen, Consultant
> NORD Software Consulting
>
> Mail: algermissen@...
> Blog: http://www.nordsc.com/blog/
> Work: http://www.nordsc.com/
> -----------------------------------
>
>
>
>
>
Jan Algermissen wrote:
>
> Question: is there any promise by Amazon about how the HTML of the
> site looks like? No. And for good reason. They do not want clients to
> start making assumptions beyond text/html.
>

Wait a minute, yes, they do. HTML is general enough that there are a
variety of ways, RESTful or not, to code a shopping cart in text/html.
We now have the GoodRelations ontology, which allows this diverse
markup to all have the same _meaning_ from one site to the next.

I posted a link last week to my source for the assertion that BestBuy's
online sales increased by 30% after they used RDFa to implement GR.
Any number of domain-specific vocabularies like GR may be embedded in
any number of ubiquitous media types, to address any problem. There is
no prohibition against this in REST; in fact, it's exactly what REST
advocates.

-Eric
On Sat, Aug 7, 2010 at 7:07 PM, Bill de hÓra <bill@...> wrote:
> On Sat, 2010-08-07 at 11:39 +0100, Mike Kelly wrote:
>> On Sat, Aug 7, 2010 at 12:52 AM, Bill de hÓra <bill@...> wrote:
>> > On Mon, 2010-08-02 at 09:22 +0100, Mike Kelly wrote:
>> >> On Sun, Aug 1, 2010 at 6:51 PM, Bill de hÓra <bill@...> wrote:
>> >> > So there's a tradeoff. Some developers would like to go direct to
>> >> > the status to avoid the hop. One way to do this is to have the
>> >> > URLs prepared in advance. The argument is that a way to balance
>> >> > these concerns is to allow the server to publish a document that
>> >> > clients can cache and from which the client can pull the status
>> >> > URL directly, and so short-circuit the traversal without being
>> >> > very strongly coupled to the server's URI space. This kind of
>> >> > tradeoff seems reasonable to me, hence I don't understand the
>> >> > level of objection in some quarters to approaches like WADL.
>> >>
>> >> Why use WADL for that? Seems unnecessary when you can achieve the
>> >> same thing with just a Link header.
>> >
>> > How?
>> >
>> > Bill
>>
>> By serving a cacheable representation that includes the appropriate
>> 'short circuit' link (and relation) in its Link header.
>
> Oh, so for every related resource I serve a Link header with
> relations? A couple of things come to mind:
>
> - I can't compress headers. This actually matters in mobile systems.
>
> - I need to do a ton of testing to see whether Link will go through
> gateways and proxies on mobile systems.
>
> - I now have a management problem as to where to put relations
> (headers or document? both?). This kind of thing drives developers
> insane.
>
> - I now have even more indirection due to Extension Relation Types, to
> go off and read the document that 'explains' the link type.
>
> - I can't cache the Links and representation independently (cue lots
> of HEAD requests).
> No offence, but Link in this scenario seems like a way to avoid RDF at
> any cost (or avoiding admitting documents with actual semantics are
> important). Which brings me back to interlingua and just serving a
> description document (WADL or otherwise).
>
> I'm not seeing a clear win, theoretically or practically.

I mentioned the Link header just as an alternative that demonstrates how
much of WADL is unnecessary for that purpose. Opting to use a
document/media type is probably a better option, just not WADL.

Personally, I find RDF and its serialisations way too fiddly for
something application-oriented like this. I'd much prefer to use a
simple, lightweight document format that just borrows Atom's link
element, and possibly includes semantics for embedding other documents.
A generic m2m hypertext media type shouldn't need much more than that.
I started exploring this with 'hal' [1].

Cheers,
Mike

[1] http://restafari.blogspot.com/2010/06/please-accept-applicationhalxml.html
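For concreteness, the Link-header alternative discussed above boils down to the client pulling `rel` -> URI pairs out of a header like `Link: </orders/42/status>; rel="status"`. A rough sketch follows; the helper is hypothetical and ignores every parameter except a quoted `rel`, so it is far from a complete parser of the Link syntax.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class LinkHeader {

    // Matches one link-value, e.g.: </orders/42/status>; rel="status"
    private static final Pattern LINK =
            Pattern.compile("<([^>]+)>\\s*;\\s*rel=\"([^\"]+)\"");

    // Returns relation -> target URI for a Link header value.
    static Map<String, String> parse(String headerValue) {
        Map<String, String> rels = new LinkedHashMap<>();
        Matcher m = LINK.matcher(headerValue);
        while (m.find()) {
            rels.put(m.group(2), m.group(1));
        }
        return rels;
    }
}
```

A client could then look up `rels.get("status")` on a cached representation's header to short-circuit the traversal, which is the trade-off Bill's objections are aimed at.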
On Aug 8, 2010, at 1:04 AM, Eric J. Bowman wrote:

> Jan Algermissen wrote:
>>
>> Question: is there any promise by Amazon about how the HTML of the
>> site looks like? No. And for good reason. They do not want clients to
>> start making assumptions beyond text/html.
>>
>
> Wait a minute, yes, they do.

Do you have a pointer to where Amazon describes that syntactical
'promise'? I could not find it. It would be interesting to see what
exactly they write.

Jan

> HTML is general enough that there are a
> variety of ways, RESTful or not, to code a shopping cart in text/html.
> We now have the GoodRelations ontology, which allows this diverse
> markup to all have the same _meaning_ from one site to the next.
>
> I posted a link last week to my source for the assertion that BestBuy's
> online sales increased by 30% after they used RDFa to implement GR.
> Any number of domain-specific vocabularies like GR may be embedded in
> any number of ubiquitous media types, to address any problem. There is
> no prohibition against this in REST; in fact, it's exactly what REST
> advocates.
>
> -Eric

-----------------------------------
Jan Algermissen, Consultant
NORD Software Consulting

Mail: algermissen@...
Blog: http://www.nordsc.com/blog/
Work: http://www.nordsc.com/
-----------------------------------
> > Jan Algermissen wrote:
> >>
> >> Question: is there any promise by Amazon about how the HTML of the
> >> site looks like? No. And for good reason. They do not want clients
> >> to start making assumptions beyond text/html.
> >>
> >
> > Wait a minute, yes, they do.
>
> Do you have a pointer to where Amazon describes that syntactical
> 'promise'? I could not find it. It would be interesting to see what
> exactly they write.
>

Where did I say they'd made such a promise? The fact is, all the big
players are heavily invested in their existing text/html systems, which
are utterly incomprehensible. This incomprehensibility has a
workaround: embedding metadata. Things are moving towards standardizing
that metadata, for the explicit reason of making it more harvestable:

http://ebusiness-unibw.org/pipermail/goodrelations/2010-January/000175.html

Where is it documented, besides embedded in their crap markup? I don't
know. But I do know the big players are trying to expose this
information, like what the price is and how to buy the item, such that
it becomes searchable. They are not doing this by ditching text/html;
they are doing this by adopting standardized domain-specific metadata
vocabularies within their existing text/html.

<span class='price'>$11.99</span>

That's some real-world Amazon markup. If they didn't want to expose
this as the price of an item, they wouldn't tag it like this next to a
link to follow to purchase the item. If this is how they've decided to
expose price, then they can change their markup as often as they like
-- provided the API exposed by the metadata stays the same.

If that span were just meant for style, then why not class='xyzzy' to
keep user agents from making the assumption that it's an item price?
The reason it's class='price' is to encourage that assumption. No, that
isn't RDFa + GR like O'Reilly or BestBuy, but it is metadata embedded
in existing text/html for no reason but machine readability.

Amazon can get away with making others transform their metadata to GR.
But there are few who can; the rest are adopting standards like RDFa +
GR -- which have nothing to do with how the site looks, although I've
found that adopting standardized metadata makes CSS easier to maintain.

-Eric
On Aug 6, 2010, at 3:28 PM, Bill de hÓra wrote:

> That is, they are semantic nonsense that don't hold up to scrutiny.

Is there an alternative that does hold up to scrutiny? I get the same
sense from the alternatives I know of.

Subbu
--- In rest-discuss@yahoogroups.com, Bill de hÓra <bill@...> wrote:
>
> On Fri, 2010-08-06 at 23:11 +0200, Jan Algermissen wrote:
> >
> > On Aug 6, 2010, at 6:22 PM, Robert Brewer wrote:
> > > I consider media types as syntax, not semantics.
> >
> > Media types are a lot more than syntax: media types provide intended
> > processing semantics.
> >
> > Is this <html> ... </html> an HTML document or an XSLT stylesheet?
> > Only the media type provided by the sender can tell you that.
>
> I agree with the sentiment, Jan, but I don't believe this is what
> actually happens. More and more, media types seem broken as designed
> to me. That is, they are semantic nonsense that don't hold up to
> scrutiny.
>
> Bill

A fork is broken as designed if you are trying to use it to eat soup.

I find that media types aren't such a problem (at least in the context
of Atom) if you take the "envelope" approach rather than the
"extension" approach. The Content-Type and Accept headers, as well as
collection/accept and content/@type, give you what you need and work
fairly well. Media types just don't provide a good solution for
"extension" in any context.

That said, like the others, I'd like to hear more details on the issues
you see with media types.

Andrew
> Whether the user agent can actually formulate a request using the
> custom media type is another problem. What matters is that your
> hypertext provides a self-documenting API.

In fact, a user agent doesn't even need to understand your custom media
type. My demo uses HTML to drive Atom interactions. When I post the
XForms interface, you'll see it comes down to filling out "Atom forms"
to submit or edit entries. The fact that browsers understand Atom is
beside the point -- user agents don't need an understanding of Atom to
be able to generate or pass around Atom documents, as it isn't used.
What matters is that XForms instructs the user agent how to create XML
and submit it to some URI as whatever *+xml media type it needs to be.
XSLT transforms Atom into XForms for display and editing.

The fact that the system is Atom-based is not relevant to user agents
following the hypertext API layered over it. What user agents need to
understand are the non-Atom media types used to create the hypertext
API. There's no way to point user agents at that Atom-based system and
say, "the API is Atom Protocol." Making that assumption doesn't make
for a REST system. In REST, hypertext appropriate to driving
interactivity is used to describe Atom Protocol interfaces. This holds
true for any media type that's targeted for manipulation, using any
protocol.

Provided there's a hypertext API based on ubiquitous media types, the
ubiquity of the media type(s) targeted for manipulation becomes less
(but certainly not un-) important.

-Eric
I don't understand precisely what you're looking for. At least since
"Design Patterns: Elements of Reusable Object-Oriented Software" from
the Gang of Four(!) until "Patterns of Enterprise Application
Architecture" (Martin Fowler) and "Enterprise Integration Patterns"
(Hohpe, Woolf), there have been lots of architecture approaches based
on patterns, of which "The Timeless Way of Building" was a precursor.
What are you looking for specifically?

BTW, did you see this? http://charliealfred.wordpress.com/200/

2010/8/7 Benoît Fleury <benoit.fleury@...>
>
> Hi, thank you for your answers and pointers. I was more interested in
> the design process in general. I am wondering if this design process
> has been used and documented in other software architectures. That's
> why I titled my mail "Off topic" :)
>
> Thanks again,
> Benoit.
>
> 2010/8/6 Jan Algermissen <algermissen1971@...>
>>
>> On Aug 6, 2010, at 11:45 PM, Benoît Fleury wrote:
>>
>> > Sorry if I wasn't clear. I'm talking about the design process
>> > described here:
>> > http://www.ics.uci.edu/~fielding/pubs/dissertation/software_arch.htm#sec_1_6
>> > and used in chapter 5. Starting with the null style and adding
>> > constraints one after the other to let the desired properties
>> > emerge.
>>
>> Rohit Khare built on top of Roy's work in his ARRESTED thesis
>> (http://www.ics.uci.edu/~rohit/ARRESTED-ICSE.pdf [Huge download!])
>> and you will find something in the book referenced in [1]. Plough
>> through the references of Roy's thesis, especially around
>> Garlan/Shaw's work.
>>
>> IIRC Mark has also worked on defining the Semantic Web as REST plus
>> one other constraint (explicit data semantics). He mentioned that in
>> his blog in the early days. Maybe ask him.
>>
>> HTH,
>>
>> Jan
>>
>> [1] http://www.nordsc.com/blog/?p=11
>>
>> > 2010/8/6 Jan Algermissen <algermissen1971@...>
>> > Benoît,
>> >
>> > On Aug 6, 2010, at 6:38 PM, Benoît Fleury wrote:
>> >
>> > > Hi,
>> > >
>> > > in his dissertation, Roy explicitly cites "The Timeless Way of
>> > > Building" and applies this approach of design in the fifth
>> > > chapter. I was wondering if any of you encountered other
>> > > examples of this approach in software architecture?
>> >
>> > I think it might help if you provide some more context on what you
>> > are looking for, or a quote of the dissertation that illustrates
>> > your point.
>> >
>> > Jan
>> >
>> > > Thanks a lot,
>> > > Benoit.
>>
>> -----------------------------------
>> Jan Algermissen, Consultant
>> NORD Software Consulting
>>
>> Mail: algermissen@...
>> Blog: http://www.nordsc.com/blog/
>> Work: http://www.nordsc.com/
>> -----------------------------------
BTW, some two years ago I was working with Spring Integration, which is
an "application" of the book I mentioned, "Enterprise Integration
Patterns", but my guess is that you know these already :)

http://www.springsource.org/spring-integration
http://www.eaipatterns.com/

2010/8/9 António Mota <amsmota@...>:
> I don't understand precisely what you're looking for. At least since
> "Design Patterns: Elements of Reusable Object-Oriented Software" from
> the Gang of Four(!) until "Patterns of Enterprise Application
> Architecture" (Martin Fowler) and "Enterprise Integration Patterns"
> (Hohpe, Woolf), there have been lots of architecture approaches based
> on patterns, of which "The Timeless Way of Building" was a precursor.
> What are you looking for specifically?
>
> BTW, did you see this? http://charliealfred.wordpress.com/200/
On Sat, Aug 7, 2010 at 5:04 PM, Eric J. Bowman <eric@...> wrote:
> Wait a minute, yes, they do. HTML is general enough that there are a
> variety of ways, RESTful or not, to code a shopping cart in text/html.
> We now have the GoodRelations ontology, which allows this diverse
> markup to all have the same _meaning_ from one site to the next.

A client that is expecting GoodRelations annotations will not be able
to function properly if it receives valid HTML without the metadata.
The client cannot make that need visible to the server, or to
intermediaries. It has to ask for `text/html` and hope that the server
always sends GoodRelations metadata and that no intermediaries strip
what might seem to be superfluous bloat from the returned
representation. (The HTML spec does not require that GoodRelations
metadata be included, so stripping the metadata is, strictly speaking,
not wrong.)

In practice, I know that RDFa works almost all the time. But it does
have some weaknesses. For example, if some other ontology comes along
to replace/compete with GoodRelations, the server has to serve
annotations for both of those ontologies all the time, or choose to
break a subset of the clients it might otherwise support. It cannot
return one or the other depending on the need of the client. Hiding
this dependency also makes implementing intermediaries that can convert
between these two ontologies far more difficult.

Media types describe the semantics and syntax of representations. I
think that HTML with embedded GoodRelations metadata should constitute
a new media type. It has a similar syntax to HTML but with different
semantics. Doing so would make the message much more self-descriptive.

Peter
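For readers unfamiliar with it, GoodRelations-in-RDFa markup looks roughly like this. The class and property names (`gr:Offering`, `gr:name`, `gr:hasPriceSpecification`, `gr:UnitPriceSpecification`, `gr:hasCurrencyValue`, `gr:hasCurrency`) are from the GR vocabulary; the surrounding HTML structure and the literal values are invented for this sketch:

```html
<div xmlns:gr="http://purl.org/goodrelations/v1#" typeof="gr:Offering">
  <span property="gr:name">Device Foo</span>
  <div rel="gr:hasPriceSpecification">
    <span typeof="gr:UnitPriceSpecification">
      <span property="gr:hasCurrencyValue" content="11.99">$11.99</span>
      <span property="gr:hasCurrency" content="USD"></span>
    </span>
  </div>
</div>
```

The point of the argument above is that nothing in `text/html` tells a client or intermediary that such a structure will be present, which is why a distinct media type is being suggested.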
<snip>
Media types describe the semantics and syntax of representations. I
think that HTML with embedded GoodRelations metadata should constitute
a new media type. It has a similar syntax to HTML but with different
semantics. Doing so would make the message much more self-descriptive.
</snip>

+1

mca
http://amundsen.com/blog/
http://mamund.com/foaf.rdf#me

On Mon, Aug 9, 2010 at 11:35, Peter Williams <pezra@...> wrote:
> Media types describe the semantics and syntax of representations. I
> think that HTML with embedded GoodRelations metadata should constitute
> a new media type. It has a similar syntax to HTML but with different
> semantics. Doing so would make the message much more self-descriptive.
>
> Peter
On Sun, 2010-08-08 at 00:58 -0600, Eric J. Bowman wrote:
>
> <span class='price'>$11.99</span>
>
> That's some real-world Amazon markup. If they didn't want to expose
> this as the price of an item, they wouldn't tag it like this next to a
> link to follow to purchase the item. If this is how they've decided to
> expose price, then they can change their markup as often as they like
> -- provided the API exposed by the metadata stays the same.
Here's some real-world Amazon CSS:

.price { font-family: verdana,arial,helvetica,sans-serif; color: #990000; }
> If that span were just meant for style, then why not class='xyzzy' to
> keep user agents from making the assumption that it's an item price?
>
> The reason it's class='price' is to encourage that assumption. No,
> that isn't RDFa + GR like O'Reilly or BestBuy, but it is metadata
> embedded in existing text/html for no reason but machine readability.
That's a gensym fallacy.
Bill
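What "machine readability" of that `class='price'` markup looks like in practice can be shown with a few lines of standard-library Python. The markup is Eric's Amazon example; the parser itself is a hypothetical sketch, not anything Amazon documents or supports:

```python
# Minimal scraper for markup like <span class='price'>$11.99</span>,
# using only the standard library. Illustrative only: nothing obliges
# the server to keep emitting this class name.
from html.parser import HTMLParser

class PriceParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_price = False
        self.prices = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs
        if tag == "span" and ("class", "price") in attrs:
            self.in_price = True

    def handle_data(self, data):
        if self.in_price:
            self.prices.append(data.strip())

    def handle_endtag(self, tag):
        if tag == "span":
            self.in_price = False

parser = PriceParser()
parser.feed("<span class='price'>$11.99</span>")
print(parser.prices)  # ['$11.99']
```

Whether relying on an undocumented class name like this is sane engineering is, of course, exactly what the rest of the thread argues about.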
----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
Peter Williams wrote: > > A client that is expecting goodrelations annotations will not be able > to function properly if it receives valid html without the meta-data. > Doesn't that hold true for any system? If the system breaks, it won't function properly until it isn't broken anymore. > > The client cannot make that need visible the server, or to > intermediates. > Why would it need to? Application-specific media types are not part of the REST style, and for good reason. > > It has to ask for `text/html` and hope that the server > always sends goodrelations meta-data and that no intermediates strip > what might seem to be superfluous bloat from the returned > representation. (The html spec does not require that goodrelations > meta-data be included so stripping the meta-data is, strictly > speaking, not wrong.) > Or application/xhtml+xml, or application/atom+xml, or whatever other media type happens to be hosting the domain-specific vocabulary. The host media type is what's negotiated, not the domain-specific vocabulary contained within. Do you have any real-world examples to point to, of this actually happening in practice? If metadata were stripped by intermediaries that didn't understand it, nobody's CSS would work, because all the superfluous <span> and @class bloat would be removed. This argument sounds like FUD to me. GR isn't part of HTML, but the attributes it relies on, are. I've never heard of such stripping. Comments, yes, once upon a time, but that stopped because so many web sites rely on comments to work properly, like <!-- main content --> <!-- /main content -->. HTML says unknown elements and attributes may be _ignored_. It does not say that they may be _removed_. If there's some edge case where this happens, I'm simply not going to concern myself with it. Once again, I'm befuddled at the pushback here. 
Why is it that Roy gets no pushback when he says this, yet if I repeat it, it's subject to debate as if it's something I just came up with, all by myself? I'll let you folks convince Roy, then I'll let Roy convince me. Until then, your argument lies with Roy, not me. I'm simply basing my answers on the established facts of REST, not philosophizing about how it might be better if we ignore the requirement for standardized media types and embedded domain-specific vocabularies in favor of application-specific media types. Such an architecture wouldn't be REST and would have no working model to point to as proof of the style, as REST has with the Web. So I'll be sticking with REST and the Web as they exist today, and embedding my domain-specific metadata within ubiquitous media types, because I know this is what works, and I know for a fact that to do so is to follow the REST style. > > In practice, i know that rdfa works almost all the time. But it does > have some weaknesses. For example, if some other ontology comes along > to replace/compete with goodrelations, the server has to serve > annotation for both of those ontologies all the time. Or choose to > break a subset of the clients it might otherwise support. It cannot > return one or the other depending on the need of the client. Hiding > this dependency also makes implementing intermediates that can convert > between these two ontologies far more difficult. > So you're saying that if I change my API, then clients coded to a prior version of the API will break? Agreed. Where doesn't that hold true, though, and why should REST be any different? REST has nothing to do with negotiating between versions of an API. As a system developer, I don't care about the needs of the client. The client needs to care about how the system is built. If you want your client to understand my domain-specific vocabulary, then your client must understand my domain-specific vocabulary. 
If a client doesn't, then it can still render the ubiquitous media type based on well-known rules. This is loose coupling. Relying on clients to have knowledge of application-specific media types, or fail, is tight coupling. Why would an intermediary need to convert from one domain-specific vocabulary to another? All intermediaries need to know is the media type. If I want to offer my domain-specific vocabulary in some other form, then I provide a GRDDL link to glean it from its existing form. If I see some service on the Web that I want to integrate with my system, but it uses a different domain-specific vocabulary, then I can always use a SPARQL service to convert it. The reason this works, is due to the ubiquity of the host media types. Not *despite* the ubiquity of the host media types. > > Media types describe the semantics and syntax of representations. I > think that html with embedded goodrelations meta-data has a should > constitute a new media type. It has a similar syntax to html but with > different semantics. Doing so would make the message much more self > descriptive. > Media types describe how to process a payload. Self-descriptive messaging has nothing to do with understanding payload content. It only has to do with understanding the nature of the messaging. At the protocol level, there is no need to expose that RDFa or GR is being used. The only need is to inform clients that do grok RDFa, that they may scan the payload for RDFa, by informing them that a media type is being used that supports RDFa, which does not require the string 'rdfa' to appear in the media type -- it's a given in HTML media types. Referring once again to REST: "The trade-off, though, is that a uniform interface degrades efficiency, since information is transferred in a standardized form rather than one which is specific to an application's needs." You're either willing to make this tradeoff in pursuit of REST, or you aren't actually following REST. 
Non-standardized forms which _are_ specific to an application's needs are a REST violation, and no amount of arguing this point with me, as opposed to Roy, will change that fact, because it isn't about me, and there's nothing open to alternate interpretation in that thesis quote. REST is not about assigning media types to every possible application-specific need. I don't know how much more clear Roy's thesis could have been, that this is not the REST style. Take a look at the direction the Web is evolving. Do we see any proliferation of application-specific media types? No, the big sites are moving towards embedding domain-specific vocabulary inside ubiquitous media types (apparently, whether some folks here like it or not). There are hundreds of examples besides BestBuy or Amazon or O'Reilly. REST explains exactly why this is the case, and argues specifically against addressing this problem by defining media types for every possible domain-specific vocabulary. Such a solution is _not_ the REST style, and I can't say that emphatically enough, or often enough, or accurately-to-the-thesis enough. This is a case where the evolution of the Web isn't so much following REST, as predicted by it. It's well worth all the time it takes to understand this. Suggesting, or giving a +1 to a suggestion, that what we really need is application-specific media types, is coming right out and saying that REST is B.S., a position I cannot condone, because there's simply no proof of it. Once more: "The trade-off, though, is that a uniform interface degrades efficiency, since information is transferred in a standardized form rather than one which is specific to an application's needs." Enough said? This is a fact. I'm merely the messenger. Please, don't shoot the messenger. RDFa + GR is a prime example of exactly the sort of thing that has no business being assigned its own media type in a REST architecture. -Eric
Bill de hra wrote:
>
> Here's some real-world Amazon CSS
>
> .price { font-family: verdana,arial,helvetica,sans-serif; color:
> #990000; }
>
How does that dispute what I said? I said,
>
> > If that span were *just meant for style*...
>
I don't know how that would imply it isn't meant for styling at all.
>
> > The reason it's class='price' is to encourage that assumption. No,
> > that isn't RDFa + GR like O'Reilly or BestBuy, but it is metadata
> > embedded in existing text/html for no reason but machine
> > readability.
>
> That's a gensym fallacy.
>
In general, sure. In this specific case, when companies tell us that
they're doing this to increase their search-engine exposure, it is not
a fallacy to say, "that's why @class='price' isn't @class='xyzzy'."
-Eric
I'm with Eric on this. The debate over whether text/html CAN represent "everything" or "anything" versus having it represent what the client expects is moot. In the end, servers don't work well without clients and clients don't work well without servers. IF a server wants to PROMOTE interoperability, it is motivated to continue sending "sane", consistent results in order to better serve its clients. The fact that Amazon is using the class "price" for some text that happens to have numbers, a decimal point, and a dollar sign, and the fact that this class has specific formatting called out in the CSS, can be either complete happenstance or by design. As uninformed observers, we don't know which it is, since the generated HTML is an artifact of a larger process. Amazon (apparently) has chosen not to document or share the semantics that it may be embedding into its HTML pages. However, if Amazon wishes to be a "good citizen" and allow people to leverage whatever IMPLIED semantics can be mined from manual review of their representations, then it is in Amazon's interest to maintain and continue using those semantics and format details for as long as practical, indirectly encouraging 3rd party clients to leverage their systems and rely on their servers. Maybe Amazon simply doesn't care. Perhaps they have an alternate system that they DO want to support (don't they have some more official store API of some kind?). If they do, they have no motivation to maintain their formats from day to day or request to request. They have given no contract, so they stand by no contract and offer no promises. If they make a change and your code breaks, not their problem. Best Buy, in contrast, perhaps decided that mixing their API within their human-readable content using some evolving standards is a better use of their resources than maintaining a separate API. But the bottom line is simply that the PAYLOAD is not enough to describe the API. Even in REST. 
Even in REST, external, out-of-band documentation is required to get a client to properly communicate and interact with a server. The payloads may offer links to such documentation, but that documentation is designed for Carbon Based Lifeforms to consume and interpret. Having someone send a list of orders as HTML wrapped in Atom is perfectly acceptable. Obviously, if someone just said "Yeah, it's in Atom", a consumer may be put off when that's what they get. But clearly the conversation simply didn't go far enough. The key is that the payload must be consistent in some way for a machine to be able to process it consistently. That discussion is handled offline between the parties, not between the machines. Regards, Will Hartung (willh@...)
On Mon, Aug 9, 2010 at 6:20 PM, Eric J. Bowman <eric@...> wrote: > Peter Williams wrote: >> >> A client that is expecting goodrelations annotations will not be able >> to function properly if it receives valid html without the meta-data. >> > > Doesn't that hold true for any system? If the system breaks, it won't > function properly until it isn't broken anymore. > >> >> The client cannot make that need visible the server, or to >> intermediates. >> > > Why would it need to? Application-specific media types are not part of > the REST style, and for good reason. My point is that html+gd is as much an application-specific media type as a custom xml format. Not giving it a name does not change that fact. Once the server adds gd annotations, and clients start depending on them, the representations become application-specific. I am not opposed to domain-specific representations, so this does not really bother me. My concern is that a client that needs html+gd, but asks for html, is a lot less likely to get what it needs than a client that explicitly requests what it needs. > REST has nothing to do > with negotiating between versions of an API. Sure it does. The accept header allows the negotiation of API versions. Consider `accept: text/html` vs `accept: application/atom+xml`. One says the client wants to interact with the html version of the api, the other says the client wants to interact with the atom version of the api. > Referring once again to REST: > > "The trade-off, though, is that a uniform interface degrades > efficiency, since information is transferred in a standardized form > rather than one which is specific to an application's needs." I don't see how you get that from this quote. Later in the same section we get this paragraph. In order to obtain a uniform interface, multiple architectural constraints are needed to guide the behavior of components. 
REST is defined by four interface constraints: identification of resources; manipulation of resources through representations; self-descriptive messages; and, hypermedia as the engine of application state. Nowhere does that suggest there is some limit to the allowable number of representation flavors. Your reading of the uniform interface seems different from that of much of the community. RestWiki is pretty quiet on the idea of limiting media types being part of the uniform interface in both the interface genericity[1] and rest in plain english[2] pages. Both seem to imply that domain-specific media types would be ok. Stefan Tilkov clearly states that multiple media types are acceptable in 'A Brief Introduction to REST'[3]. The result might be some company-specific XML format that represents customer information. ... Summary: Provide multiple representations of resources for different needs. I could keep going, but that is enough. My point is not that any of these are authoritative sources. (I don't believe in such nonsense.) But rather that it seems that much of the REST community does not hold your belief that html is a fundamental, inalienable part of the uniform interface of the web. Perhaps I am misreading the community, or perhaps I am totally wrong and html is key. I am willing to be convinced. However, the best outcomes I have experienced in m2m systems using rest have come from using explicitly named domain-specific media types. There is certainly a trade-off between using existing media types and creating new ones. If an existing media type has the needed semantics it should definitely be used. However, if no media type exists with the required semantics, creating a new one that does seems superior to trying to infer such semantics from an existing one based on out-of-band information. 
Peter <http://barelyenough.org> [1]: http://rest.blueoxen.net/cgi-bin/wiki.pl?InterfaceGenericity [2]: http://rest.blueoxen.net/cgi-bin/wiki.pl?RestInPlainEnglish#nid68J [3]: http://www.infoq.com/articles/rest-introduction
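The negotiation Peter describes can be sketched in a few lines. The media type names are real; the dispatch table, the handler lambdas, and the deliberately naive matching (first acceptable type wins, q-values ignored) are all hypothetical simplifications:

```python
# A very naive server-side content negotiation sketch: pick a
# representation "flavor" for a resource based on the Accept header.

REPRESENTATIONS = {
    "text/html": lambda res: f"<html><body>{res}</body></html>",
    "application/atom+xml": lambda res: f"<entry><title>{res}</title></entry>",
}

def negotiate(accept_header: str, resource: str):
    """Return (status, body); first listed acceptable type wins."""
    for part in accept_header.split(","):
        media_type = part.split(";")[0].strip()  # drop any q-value
        if media_type in REPRESENTATIONS:
            return 200, REPRESENTATIONS[media_type](resource)
    return 406, "Not Acceptable"

status, body = negotiate("application/atom+xml, text/html;q=0.9", "order-42")
print(status, body)  # 200 <entry><title>order-42</title></entry>
```

Whether the two entries in that table are "versions of the API" (Peter's reading) or "variants of the same API" (Eric's, later in the thread) is precisely what is in dispute; the mechanism itself is the same either way.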
Apologies for last night's empty message - wrong recipient :-) Jan On Aug 10, 2010, at 1:48 AM, Jan Algermissen wrote: > > ----------------------------------- > Jan Algermissen, Consultant > NORD Software Consulting > > Mail: algermissen@... > Blog: http://www.nordsc.com/blog/ > Work: http://www.nordsc.com/ > ----------------------------------- > > > > > > > ------------------------------------ > > Yahoo! Groups Links > > > ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
Jan Algermissen wrote: > > Apologies for last night's empty message - wrong recipient :-) > Thanks for clarifying, I just figured it was Yahoo's fault. Anyone else notice how we've gone from posting instantaneously last week, to having messages bounce, to now a 2-3 hour delay before posts show up, leading to double-posting? The only apologizing necessary is from yahoogroups... "Well, I don't think there is any question about it. It can only be attributable to human error. This sort of thing has cropped up before, and it has always been due to human error." -HAL 9000 ...although I'm sure they'll blame it on hardware failure! ;-) -Eric
I will probably regret jumping into this kind of discussion again, but it bothers me that some things are stated as if they were absolute truths, written in stone, when they are at the very least arguable. But I'm not jumping in because it bothers me; I'm jumping in because those kinds of things can lead people - as they have already led me on this list - to believe what some people say, only to find out later that there is no evidence for it beyond "because I say so" or "because that's the way it is". In real life, people like me have responsibilities for design and architectural choices, and we have to make sure that what we propose has good reasons behind it. I have already lost several hours trying to support decisions that in the end were plain wrong. So let me start by saying that I do believe that if you are building systems for the web - from static pages to more or less complex applications like shopping carts - you should really stick to well-known media types like html. I would even dare to say you *must* stick to well-known media types. However, REST is much more than *the web*. It is a style for "Network-based Software Architectures", of which the web is one example. There are lots of other network-based software architectures: inside the enterprise, or outside the enterprise connecting a limited number of companies, either over proprietary networks or over the Internet infrastructure - and let us remember that the web is not the only service that runs over the Internet. For such cases, the assertion that it is mandatory to use html, or to stick only to well-known media types, is simply not true. In some cases they will be the best choice; in others, custom media types will be. With this I'm not saying that media types beyond the well-known ones should be defined *at an application level*. 
No: if you need to design custom media types, they should not be specific to the *application level* but to the *domain level* - or higher, if possible - of that application. Now let me say that this is my POV only, my opinion; I don't claim it will hold true in all cases, but it is an opinion based on the opinions of others, and one that makes sense to me not only in the realm of REST but in the more general realm of software architecture design, or - to refer to another recent post - in the more general realm of the "Timeless Way of Building", meaning the use of patterns common to all types, or styles, of software architecture. In another post I quoted Roy about this - and I don't like to quote Roy, as I've seen the exact same sentence quoted on this list to prove both one point and its opposite - but nevertheless I'll do it again (the *** are mine): > ***Obviously, I can't say that all data types have to be *the* standard > before they are used in a REST-based architecture.*** > > (...) > > ***The degree to which the format chosen is a commonly accepted standard is > less important than making sure that the sender and recipient agree to the > same thing*** > > (...) > > Sure, it is easier to deploy the use of a commonly understood data format. > However, ***it is also more efficient to use a format that is more > specifically intended for a given application.*** > > Where those two trade-offs intersect is often dependent on the > application. ***REST does not demand that everyone agree on a single format for the > exchange of data -- only that the participants in the communication > agree.*** > > Beyond that, designers need to apply their own common sense and > ***choose/create*** the best formats for the job. > I mean, I don't know much about English, but doesn't this mean something like "use standard media types as the preferred choice; use custom media types when standard media types are not the best choice"? 
How, after reading this, can someone say "Application-specific media types are not part of the REST style"? Unless what is wrong here is the word "application" - I agree that they should not be application-specific but at least a degree broader, domain-specific at least. Otherwise, it's not true that "specific media types are not part of the REST style". "The trade-off, though, is that a uniform interface degrades > efficiency, since information is transferred in a standardized form > rather than one which is specific to an application's needs." > Am I wrong in saying that "standardized form", in the context of the uniform interface, means the form agreed on by all the participants in the system, and not only the forms published by IANA? I'm not sure here; maybe I *am* wrong. Let's say a company has a relatively small number of clients and business partners, say 20 other companies. If that company agrees to exchange info - info that basically transfers data from one database to another, so each company can treat it the way they see fit, not for human consumption in a browser - in a custom-designed XML format, isn't that "standard" enough to fit the concept of a "uniform interface"? And if tomorrow, instead of 20 companies, it's 200 or 2000 companies, won't that still be true nevertheless? The company doesn't want to exchange info with everyone in the world - only with specified, known companies with whom they have some business agreement. They can even agree that the info should *not* be human-readable, for security purposes... Would that be "not part of the REST style"? Please, can someone tell me if my English is so bad that I can't understand the above quotes? On 10 August 2010 04:58, Peter Williams <pezra@...> wrote: > > > On Mon, Aug 9, 2010 at 6:20 PM, Eric J. 
Bowman <eric@...<eric%40bisonsystems.net>> > wrote: > > Peter Williams wrote: > >> > >> A client that is expecting goodrelations annotations will not be able > >> to function properly if it receives valid html without the meta-data. > >> > > > > Doesn't that hold true for any system? If the system breaks, it won't > > function properly until it isn't broken anymore. > > > >> > >> The client cannot make that need visible the server, or to > >> intermediates. > >> > > > > Why would it need to? Application-specific media types are not part of > > the REST style, and for good reason. > > My point is that is that html+gd is a much an application specific > media type as a custom xml format. Not giving it a name does not > change that fact. Once the server adds gd annotations, and clients > started depending on them, the representations become application > specific. > > I am not oppose to domain specific representations so this does not > really bother me. My concern is that a client that needs html+gd, but > asks for html is a lot less likely to get what it needs than a client > that explicitly requests what it needs. > > > > REST has nothing to do > > with negotiating between versions of an API. > > Sure it does. The accept header allows the negotiation of API > versions. Consider `accept: text/html` vs `accept: > application/atom+xml`. One says the client wants to interact with the > html version of the api, the other says the client wants to interact > with the atom version of the api. > > > > Referring once again to REST: > > > > "The trade-off, though, is that a uniform interface degrades > > efficiency, since information is transferred in a standardized form > > rather than one which is specific to an application's needs." > > I don't get what you seem to get that from this quote. Later in the > same section we get this paragraph. > > In order to obtain a uniform interface, multiple architectural > constraints are needed to guide the behavior of components. 
REST is > defined by four interface constraints: identification of resources; > manipulation of resources through representations; self-descriptive > messages; and, hypermedia as the engine of application state. > > No where does that suggest there is some limit to the allowable number > of representation flavors. > > Your reading of the uniform interface seems different than much of the > community. > > RestWiki is pretty quiet on the idea of limiting media types being > part of the uniform interface in both the interface genericity's[1] > and rest in plain english[2] pages. Both seem to imply that domain > specific media types would be ok. > > Stefan Tilkov clearly states that multiple media types are acceptable > in 'A Brief Introduction to REST'[3]. > > The result might be some company-specific XML format that > represents customer information. ... Summary: Provide multiple > representations of resources for different needs. > > I could keep going but that is enough. My point is not that any of > these are authoritative sources. (I don't believe in such nonsense.) > But rather that it seems that much of REST community does not hold > your belief that html is a fundamental, in inalienable, part of the > uniform interface of the web. > > Perhaps i am mis-reading the community or perhaps i am totally wrong > and html is key. I am willing to be convinced. However, the best > outcomes i have experienced in m2m systems using rest have come from > using explicitly named domain specific media types. > > There is certainly a trade off between using existing media types and > creating new ones. If an existing media type has the needed semantics > it should definitely be used. However, if no media type exists with > the required semantics creating a new one that does seems superior to > trying to infer such semantics from an existing one based on out-of-band > information. 
> > Peter > <http://barelyenough.org> > > [1]: http://rest.blueoxen.net/cgi-bin/wiki.pl?InterfaceGenericity > [2]: http://rest.blueoxen.net/cgi-bin/wiki.pl?RestInPlainEnglish#nid68J > [3]: http://www.infoq.com/articles/rest-introduction > >
Peter Williams wrote: > > > Why would it need to? Application-specific media types are not > > part of the REST style, and for good reason. > > My point is that is that html+gd is a much an application specific > media type as a custom xml format. Not giving it a name does not > change that fact. Once the server adds gd annotations, and clients > started depending on them, the representations become application > specific. > Of course representations are application-specific. But that does not require the media type of the representation to be application-specific or say anything about what domain-specific vocabularies are contained within. Assigning a new media type for every possible usage of an existing media type goes against REST, where ubiquitous media types are re-used except as a last resort in the face of compelling need. > > I am not oppose to domain specific representations so this does not > really bother me. My concern is that a client that needs html+gd, but > asks for html is a lot less likely to get what it needs than a client > that explicitly requests what it needs. > If a client is coded to interpret HTML + GD served as an HTML media type, then I don't understand how it would be "less likely" to get what it's after by requesting that media type from a site whose markup clearly implements RDFa. If I tell you my service responds with RDFa embedded in text/html, why would you assume that it wouldn't, and what do "contracts" guaranteeing that you will have to do with REST? If a client doesn't understand RDFa + GR, why wouldn't I send it that representation anyway if it asked for HTML? Why would I want to set up conneg, to send a variant that's been stripped of any domain-specific vocabulary? That variant sounds pretty useless by comparison, and I don't see any cost-benefit to performing all the work to make that behavior happen. 
If user agents weren't required to ignore unknown elements and attributes, then yeah, it would make sense to proliferate media types for each and every possible use of known markup. But, that isn't the architectural style we're dealing with. > > > REST has nothing to do > > with negotiating between versions of an API. > > Sure it does. The accept header allows the negotiation of API > versions. > It had better not. Late binding of representation to resource should have no effect on the API those representations describe. If they describe different APIs, then how can they be variants? My HTML and Atom variants describe the same API by using the same link targets and relations, for example. > > Consider `accept: text/html` vs `accept: > application/atom+xml`. One says the client wants to interact with the > html version of the api, the other says the client wants to interact > with the atom version of the api. > Exactly. Variants of the *same* API, not different versions of the API. There's no reason the domain-specific vocabulary would need to change from one media type to the next, in the above scenario. There is no such thing as using conneg to negotiate for a particular version of HTML. This idea is floated regularly, but is consistently shot down as it goes against REST and Web architecture. It would be silly to have conneg based on HTML 4 vs. HTML 5, the same principles and reasoning apply to explain why we don't have text/html and text/html+rdfa -- representations are application-specific, not media types. > > > "The trade-off, though, is that a uniform interface degrades > > efficiency, since information is transferred in a standardized form > > rather than one which is specific to an application's needs." > > I don't get what you seem to get that from this quote. > I get from it that text/html is a standardized form, regardless of which particular elements and attributes are included within a given representation. 
When media types are created not to introduce new languages, but just to indicate domain-specific vocabulary tailored to the application's exact needs, it goes directly against this quote, and everything else Roy is trying to get across in his thesis. > > Later in the same section we get this paragraph. > > In order to obtain a uniform interface, multiple architectural > constraints are needed to guide the behavior of components. REST is > defined by four interface constraints: identification of resources; > manipulation of resources through representations; self-descriptive > messages; and, hypermedia as the engine of application state. > The self-descriptive messaging constraint goes on to say this: "REST components communicate by transferring a representation of a resource in a format matching one of an evolving set of standard data types, selected dynamically based on the capabilities or desires of the recipient and the nature of the resource." I'll explain my position again. Yes, you can create a custom media type. However, if its sole purpose is to be used in your application, and nobody else adopts it, in what way is it standardized? I'd be all over adopting a new standard media type indicating html+rdfa, if only I saw any compelling need for it. Creating new media types willy-nilly, specifically to avoid using ubiquitous media types, for the purpose of being tailored to the needs of the application, is clearly and unambiguously a violation of REST, which advocates the principle of generality. I really should use a larger quote: "By applying the software engineering principle of generality to the component interface, the overall system architecture is simplified and the visibility of interactions is improved. Implementations are decoupled from the services they provide, which encourages independent evolvability. 
The trade-off, though, is that a uniform interface degrades efficiency, since information is transferred in a standardized form rather than one which is specific to an application's needs." The thesis is on and on about generality, aka re-use. Creating new media types willy-nilly is some architectural style that doesn't emphasize the principle of generality, not REST. > > Nowhere does that suggest there is some limit to the allowable number > of representation flavors. > No, it doesn't, but REST does emphasize the principle of generality. Another way of looking at this is that Roy was right. Out of countless thousands of media type identifiers folks have created over the years, how many succeed in becoming ubiquitous? Why haven't folks long since ditched text/html? Maybe there's something to this generality stuff after all, then? The purpose of REST is to tap into the common knowledge encapsulated in ubiquitous media types, whenever your documentation needs to be out-of-band. If none of that out-of-band documentation is common knowledge because everybody is winging it on defining new media types, the Web would crumble, because that is not following the proven, well-defined-by-Roy model of what made the Web a success in the first place. Roy was right. The benefits of REST are indeed achievable when you re-use ubiquitous media types (as the thesis clearly says). Proof: Oh, I don't know, Amazon, Best Buy, O'Reilly -- RESTful or not (and Amazon most certainly is not), architectures based on sticking with ubiquitous media types have proven themselves capable of dealing with almost any task imaginable over the Web. The Web is proof enough of Roy's thesis. We don't need separate media types for widget sales vs. airline reservations vs. event tickets vs. online banking vs. school enrollment vs. stock trading vs. conference bookings vs. the list goes on... is your problem really such a unique snowflake that it can't be solved except by an application-specific media type? 
If it is, then I don't have a problem with that, who am I to judge the adequacy of your solution to your problem space? My problem is if you fail to identify that as about the most-obvious REST mismatch there is, and still call it REST -- you'd be arguing that your bug is a feature. > > Your reading of the uniform interface seems different than much of the > community. > Yup. Colored entirely by building websites since 1993 without any nefarious SOA influence. Another reason it took me so long to learn, is that for many years, I believed what others were telling me about REST -- which made me think that none of my work from pre-2004 was RESTful. When in reality, most of it was, like the pizza-delivery example I gave last month. It's amazing the number of REST mismatches I used to see in that solution, like using query URIs, or having /cgi-bin/ in my path, etc. It took me over a year after reading Roy's "REST APIs must be hypertext driven" post, to understand all of this was nonsense, and that by following the path of least resistance I'd actually been doing REST years before I'd ever heard of HTTP Request Object. If you're publishing a distributed API over the Internet using HTTP, then REST defines the path of least resistance and greatest scaling benefit for you. Unless you're doing telephony or something, why not re-use HTML media types to drive your hypertext API? It's been proven to work, and has been wildly successful across myriad problem domains. > > RestWiki is pretty quiet on the idea of limiting media types being > part of the uniform interface in both the interface genericity's[1] > and rest in plain english[2] pages. Both seem to imply that domain > specific media types would be ok. > Roy's thesis is the only normative reference for REST. Roy's further writings on domain-specific vocabulary vs. common-knowledge media types is where I've gotten my information. There's an interesting search to run -- again, I'm not making this stuff up. 
Why would Roy give an example of using image/gif to model a sparse-bit array, and say nothing about how it would really be better to define a new media type for this application-specific purpose? Because that would lead to defining new media types based on what an image represents -- what we don't need, and what implicitly goes against the REST style, is image/sba+gif, image/people+gif, image/dogs+gif, image/porn+gif, and so on and so forth. I see no difference between text/rdfa+html and image/porn+gif. Domain-specific vocabulary doesn't belong in common-knowledge media types, nor does it require new media types. > > Stefan Tilkov clearly states that multiple media types are acceptable > in 'A Brief Introduction to REST'[3]. > > The result might be some company-specific XML format that > represents customer information. ... Summary: Provide multiple > representations of resources for different needs. > I can use a variety of ubiquitous media types to create domain-specific vocabulary for customer information. There's probably an RDF ontology or two for doing just that, and I could always create a schema and link to that as further in-band documentation of domain-specific vocabulary. I see where that fits with Stefan's quote; what I don't see is where Stefan is advocating that domain-specific vocabulary requires a custom media type. > > But rather that it seems that much of the REST community does not hold > your belief that html is a fundamental, an inalienable, part of the > uniform interface of the web. > I would hope not, especially since I've never made that assertion. I am pushing back against the notion that HTML media types are somehow obsolete or incapable of doing the things folks are creating custom, non-ubiquitous media types for. If the community was getting this right, there'd be no need to push back against it. 
I consistently phrase my posts to say that a REST API is required to have an interface consisting of hypertext controls, in a media type that's designed to drive a hypertext API. There are several which fit this bill, primarily HTML, which is not to assert that HTML is somehow a requirement of REST. I don't care what your back-end format is, or how clients interact with it, all I do care about is that you provide hypertext controls to define your API. You can point a hard-coded Atom Protocol client at my system and interact with it -- but this is not REST. The way you figure out how to hard-code a client against my system is by reading the developer documentation, i.e. my HTML code, as it represents a self- documenting API to whatever internals I choose to publicize. > > Perhaps i am mis-reading the community or perhaps i am totally wrong > and html is key. I am willing to be convinced. However, the best > outcomes i have experienced in m2m systems using rest have come from > using explicitly named domain specific media types. > I don't see why hypertext control APIs can't be both human and machine readable. Especially given the success of RDFa and GR. HTML has always been a key component, like URI and HTTP, in the Web instantiation of REST. None of these are requirements of the REST style. If your intention is to provide a distributed hypertext API to some sort of back-end system over the public Internet using HTTP, then yeah, you'd better have a damn good reason for not just using HTML. Is your system really such a unique snowflake, that the accepted standards used to build distributed APIs for such purposes as widget sales, airline reservations, event ticketing, online banking, school enrollment, stock trading, conference bookings and anything else under the sun, just can't be adapted to your needs? 
That's kinda the whole point of REST, fit your system to a uniform interface instead of an application-specific interface, so that what it does and how it works may be deduced with tools as simple as curl. > > There is certainly a trade-off between using existing media types and > creating new ones. If an existing media type has the needed semantics > it should definitely be used. However, if no media type exists with > the required semantics, creating a new one that does seems superior to > trying to infer such semantics from an existing one based on > out-of-band information. > Where we apparently differ is that for every 1,000 custom media types I look at, I see maybe one that actually has semantics that aren't well-covered by existing solutions. That one has about a 50/50 chance of becoming ubiquitous. If the media type driving your hypertext API isn't ubiquitous, and stands exactly zero chance of ever becoming so, then you simply aren't using the REST style. -Eric
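The Accept-header exchange debated in the message above can be sketched as server-driven negotiation among variants of the same API. This is a minimal, hypothetical illustration (the function names are invented, and the parser ignores wildcards and media-type parameters other than q):

```python
# Minimal sketch of server-driven content negotiation: both variants
# describe the *same* API; only the serialization differs.

def parse_accept(header):
    """Parse an Accept header into (media_type, q) pairs, highest q first.
    Simplification: no '*/*' wildcards, no parameters other than q."""
    variants = []
    for part in header.split(","):
        fields = [f.strip() for f in part.split(";")]
        media_type = fields[0]
        q = 1.0
        for field in fields[1:]:
            if field.startswith("q="):
                q = float(field[2:])
        variants.append((media_type, q))
    return sorted(variants, key=lambda v: v[1], reverse=True)

def choose_variant(accept_header, available):
    """Return the best available representation for the given Accept header,
    or None if nothing acceptable is offered."""
    for media_type, q in parse_accept(accept_header):
        if q > 0 and media_type in available:
            return media_type
    return None

available = ["text/html", "application/atom+xml"]
# text/html wins: its implicit q=1.0 beats atom's explicit q=0.9.
print(choose_variant("application/atom+xml;q=0.9, text/html", available))
```

A real server would also handle wildcard ranges and send `Vary: Accept`, but the point stands: the negotiated dimension is the representation format, not the API.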
On Aug 10, 2010, at 11:44 AM, Eric J. Bowman wrote: > Assigning a new media type for every possible usage of an > existing media type goes against REST, where ubiquitous media types are > re-used except as a last resort in the face of compelling need. REST emphasizes design for re-usability! Thinking about your payloads *beyond* the current service is what facilitates re-usability. This is why REST *encourages* us to eventually strive for payload formats that go beyond the currently perceived needs of the envisioned clients to the service we are building. HTML, for example, enabled crawlers to index Web sites - it was never intended for that purpose! The key here is in Roy's words: "REST components communicate by transferring a representation of a resource in a format matching one of an evolving set of standard data types". (<http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm#sec_5_2_1>) Explanations to be found in <http://tech.groups.yahoo.com/group/rest-discuss/message/6613> (which is interestingly the posting quoted by Antonio a few minutes ago :-) Jan ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
Jan Algermissen wrote: > > > My point is that html+gd is as much an application-specific > > media type as a custom xml format. Not giving it a name does not > > change that fact. Once the server adds gd annotations, and clients > > start depending on them, the representations become application > > specific. > > Even worse: they become service specific and the client > implementation couples itself to an (unguaranteed) particularity of > that service. This is no different from having a service specific > API in the first place. > What do you mean by service-specific? Domain-specific vocabulary embedded in metadata is not some sort of design flaw. Google and others understand GoodRelations. Any service implementing GR has provided a machine-readable API that Google interacts with. These services are in no way required to resemble one another. So what do you mean by coupling? Agreeing to a domain-specific vocabulary implemented using standard media types is exactly what is meant by decoupling. > > Actually, IMHO it is even worse than RPC in the long run because in > the RPC-case there is usually an IDL defining the promised service > interface. In the case of extended HTML the client is simply hoping for > the format not to change. > But this is not an argument against what I say is REST. REST has no notion of these "contracts" you speak of, beyond an agreement to what a media type means. IMHO, such IDLs are exactly what's meant by coupling. I don't understand why the question is always asked, "What happens if the interface changes?" Well, things may or may not break, but even if they do, that's not within REST's scope, so pointing out that representations change over time is also not an argument against what I say is REST. > > I think it is a maintenance nightmare because the server owner won't > have any idea what the client is actually hoping for. > Why should the server owner care? 
If BestBuy changes from GR to something else, then Google can no longer identify items and prices, until Google implements the new ontology, at which point the client and server again agree on implementations, and again the system works. There is no requirement for the server owner to care; this notion of "contracts" on such things has nothing to do with REST. > > What I simply do not understand: If the service provider would take > the design phase just a bit further and look a bit beyond the single > one service it is about to implement, the situation would be far > better. The service provider could make the extensions valuable > beyond the single service, give the specification a name and hence > mint a new media type (or at least a documented profile that can be > used in conneg). > > What is the reason for this ubiquitous obsession with not standardizing > extensions of hypermedia formats? (Standardizing meaning: making it > applicable beyond the single service and documenting it outside the > realm of the single service). > Or asked another way, what is the obsession against embedding domain-specific vocabulary within ubiquitous media types? Domain-specific vocabularies don't need to be exposed at the protocol layer. So why fragment the understanding of ubiquitous media types, by denying the possibility of defining any number of domain-specific vocabularies within a well-known hypertext container format? (And by that, no, I don't just mean HTML, stop making me bend over backwards to say that every time I write hypertext please, folks...) > > > I am not opposed to domain-specific representations so this does not > > really bother me. My concern is that a client that needs html+gd, > > but asks for html is a lot less likely to get what it needs than a > > client that explicitly requests what it needs. > > Exactly! 
And from the POV of change impact analysis on the server > side it is horrible because the server developer needs to know all > this additional stuff when working on the resource implementation for > text/html. > What REST constraint is violated, if a service changes to a media type that causes user agents that used to work with it, not to any more? What I see as having to learn a bunch of additional stuff, is having to learn a new media type as opposed to just learning a new domain-specific vocabulary within markup elements and attributes I'm already familiar with. Especially if avoiding ubiquitous types has led to the re- invention of common hypertext controls. > > >> REST has nothing to do > >> with negotiating between versions of an API. > > > > Sure it does. The accept header allows the negotiation of API > > versions. Consider `accept: text/html` vs `accept: > > application/atom+xml`. One says the client wants to interact with > > the html version of the api, the other says the client wants to > > interact with the atom version of the api. > > Yes, exactly. Or consider: > > Accept: application/atom+xml vs. Accept: application/atom-v2+xml > That would tell me that you're requesting version 1 of Atom or version 2 of Atom, not version 1 vs. version 2 of an API. There is no need in REST to version APIs, or parameterize version information in media types. The world tried this approach, but it lost out, proof of this is that HTML 5 is still text/html, not 'text/html; version=5'. If there is an Atom 2, it would still be application/atom+xml, in keeping with the REST style as instantiated on the Web. > > > > >> Referring once again to REST: > >> > >> "The trade-off, though, is that a uniform interface degrades > >> efficiency, since information is transferred in a standardized form > >> rather than one which is specific to an application's needs." > > > > I don't get what you seem to get that from this quote. Later in the > > same section we get this paragraph. 
> > > > In order to obtain a uniform interface, multiple architectural > > constraints are needed to guide the behavior of components. REST > > is defined by four interface constraints: identification of > > resources; manipulation of resources through representations; > > self-descriptive messages; and, hypermedia as the engine of > > application state. > > > > No where does that suggest there is some limit to the allowable > > number of representation flavors. > > Right. Roy refers to the fact that general-purpose payloads are > naturally less efficient (in terms payload size) than payloads > designed specifically for a single service. > And by implication, more uniform. The goal of a REST system is to become more uniform, at the tradeoff of efficiency, for the purposes of scaling and serendipitous re-use, in keeping with the principle of generality. > > > Your reading of the uniform interface seems different than much of > > the community. > > > > RestWiki is pretty quiet on the idea of limiting media types being > > part of the uniform interface in both the interface genericity's[1] > > and rest in plain english[2] pages. Both seem to imply that domain > > specific media types would be ok. > > Yes, of course they are ok. They are the *essence* of building > RESTful systems beyond the existing human HTML and Feeds Web. > I disagree vehemently. Myriad diverse systems have been built using ubiquitous media types. These media types are capable of embedding machine-readable, domain-specific vocabularies. Ubiquitous media types does not mean HTML. How many times must I mention telephony systems with hypertext REST APIs that have nothing to do with HTML or browsers? Unless you have such a compelling use case for not using HTML, and no other ubiquitous type exists for your problem, then g'head. But, 999 times out of 1,000 the nature of the system is not such a unique snowflake that HTML + RDFa need to be dismissed out-of-hand. 
There is simply no reason that m2m can't be done this way, as proven by the m2m interaction via HTML + RDFa that's happening more and more each day now that GR is proliferating. HTML is capable of *accessibly* describing the hypertext controls of almost any conceivable REST API. It's well understood, and easily maintainable (if well-written, but that goes for anything) because it is both human and machine readable. Such a hypertext API can wrap any number of back-end formats and systems, RESTful or not, and make a REST system out of it. 999 out of 1,000 custom media types think they're providing a hypertext API, but aren't really meeting the hypertext constraint at all. -Eric
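The HTML-plus-GoodRelations machine reading Eric describes can be sketched with nothing more than a stdlib HTML parser. This is a hypothetical illustration: the `property` attributes mimic RDFa syntax, the `gr:` names are loosely modeled on GoodRelations rather than copied from the real ontology, and a production consumer would use a proper RDFa processor instead of this toy extractor.

```python
# Sketch: extracting domain-specific vocabulary embedded in a
# ubiquitous media type (text/html), RDFa-style.
from html.parser import HTMLParser

class PropertyExtractor(HTMLParser):
    """Collect {property: text} from elements carrying a 'property' attribute."""
    def __init__(self):
        super().__init__()
        self._current = None
        self.found = {}

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if "property" in attrs:
            # Remember which vocabulary term the next text node belongs to.
            self._current = attrs["property"]

    def handle_data(self, data):
        if self._current:
            self.found[self._current] = data.strip()
            self._current = None

# Ordinary HTML that any browser renders, annotated for machines.
page = """
<html><body>
  <span property="gr:name">Blu-ray player</span>
  <span property="gr:hasCurrencyValue">79.99</span>
</body></html>
"""

extractor = PropertyExtractor()
extractor.feed(page)
print(extractor.found)
```

The same representation serves a human in a browser and an m2m client such as a crawler; the annotations ride inside text/html rather than demanding a new media type.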
> > Explanations to be found in > <http://tech.groups.yahoo.com/group/rest-discuss/message/6613> > Yes, that's been repeatedly posted as if I'm contradicting it somehow. I am not: " This is one of those gray areas of increasing RESTfulness that will doubtless drive some people nuts. The problem is that I can't say 'REST requires media types to be registered' because both Internet media types and the registry controlled by IANA are a specific architecture's instance of the style -- they could just as well be replaced by some other mechanism for metadata description. " Hinting at Waka there at the last... When we're talking about making a system available on the public Internet using the HTTP protocol, which is most of the time, we *can* say "REST requires media types to be registered by IANA." " The broader question is what does it take to create an *evolving* set of standard data types? Obviously, I can't say that all data types have to be *the* standard before they are used in a REST-based architecture. " I keep repeating this all the time. New ubiquitous types are possible, but 999 out of 1,000 custom media types have absolutely zero chance of this, in which case, wouldn't the system be better off had it been designed using a ubiquitous media type, even if a tiny bit less efficient? " At the same time, I do require enough standardization to allow the data format sent to be understood as such by the recipient. Hence, both sender and recipient agree to a common registration authority (the standard) for associating media types with data format descriptions. " When we're talking about making a system available on the public Internet using the HTTP protocol, which is most of the time, we *do* require ubiquitous media types, because recipient = world-at-large. If recipient = partner-company-only, then what do we care about the serendipitous re-use or anarchic scalability brought about by using ubiquitous media types? 
" The degree to which the format chosen is a commonly accepted standard is less important than making sure that the sender and recipient agree to the same thing, and that's all I meant by an evolving set of standard data types. " What is the purpose of your REST API? If it's to expose a distributed interface to the world-at-large, instead of only to those governed by some contractual agreement, then you need to choose a media type your recipients have at least heard of before, if not implemented. " Sure, it is easier to deploy the use of a commonly understood data format. However, it is also more efficient to use a format that is more specifically intended for a given application. " More efficient, but less RESTful seems to be Roy's point in this post. I've noted before that Roy has said elsewhere that there are no degrees of REST, so I don't understand what he means by "increasing RESTfulness." Gray area, sure. " Where those two trade-offs intersect is often dependent on the application. REST does not demand that everyone agree on a single format for the exchange of data -- only that the participants in the communication agree. Beyond that, designers need to apply their own common sense and choose/create the best formats for the job. " When we're talking about making a system available on the public Internet using the HTTP protocol, which is most of the time, then "participants in the communication" includes intermediaries, and no intermediary can agree to something it's never heard of. -Eric
On Aug 10, 2010, at 12:20 PM, Eric J. Bowman wrote: > Jan Algermissen wrote: >> >>> My point is that html+gd is as much an application-specific >>> media type as a custom xml format. Not giving it a name does not >>> change that fact. Once the server adds gd annotations, and clients >>> start depending on them, the representations become application >>> specific. >> >> Even worse: they become service specific and the client >> implementation couples itself to an (unguaranteed) particularity of >> that service. This is no different from having a service specific >> API in the first place. >> > > What do you mean by service-specific? Specific to a service. Not orthogonal to the service. HTML is orthogonal to Amazon. Amazon's HTML style attributes are specific to Amazon. > Domain-specific vocabulary > embedded in metadata is not some sort of design flaw. Google and > others understand GoodRelations. How would the server know that a user agent depends on it to fulfil its implemented goal? If the user agent does not *depend* on GoodRelations then Accept: text/html is just fine. But if it *needs* the embedded stuff to work properly, Accept: text/html is not. ... and there is a hidden contract that will eventually break. > Any service implementing GR has > provided a machine-readable API that Google interacts with. These > services are in no way required to resemble one another. So what do > you mean by coupling? Agreeing to a domain-specific vocabulary > implemented using standard media types, is exactly what is meant by > decoupling. Right. And HTML + GoodRelations is not such a standardized media type, eh? > >> >> Actually, IMHO it is even worse than RPC in the long run because in >> the RPC-case there is usually an IDL defining the promised service >> interface. In the case of extended HTML the client is simply hoping for >> the format not to change. >> > > But this is not an argument against what I say is REST. 
REST has no > notion of these "contracts" you speak of, beyond an agreement to what a > media type means. IMHO, such IDLs are exactly what's meant by coupling. > I don't understand why the question is always asked, "What happens if > the interface changes?" Well, things may or may not break, but even if > they do, that's not within REST's scope, so pointing out that > representations change over time is also not an argument against what I > say is REST. > >> >> I think it is a maintenance nightmare because the server owner won't >> have any idea what the client is actually hoping for. >> > > Why should the server owner care? If BestBuy changes from GR to > something else, then Google can no longer identify items and prices, > until Google implements the new ontology, So then, why do we need HTML in the first place? If this works so smoothly, why not just have Amazon send application/xml? Amazon would send a certain kind of XML, ya know: <html><title>...</title> ... </html> and Google and browser implementors would just implement that. Then, if Amazon changes from that syntax to some other one, Google and our browsers can no longer work with the XML. Your train of thought implies that that is also just fine because once Google and the browsers follow the new stuff, everyone is happy again. So - why do we need text/html as a media type? (Hint: because browsers (and Google) depend on it implementation-wise. They do not work with any application/xml. That is why they say: I 'Accept: text/html, application/xhtml+xml' and as long as you send me something that conforms to one of those types I *can* carry out the implemented goal for the response to *this* request.) To stress the point again: if the user agent's implementation of a certain user goal *depends* on a certain format of representation, that must be expressed in the Accept header. 
If a user agent is implemented to wake up every hour and check the prices of items on various shopping sites, it has far more specific needs than a generic agent that displays a [Next] button if it encounters a next link or displays an [edit] button if it encounters an AtomPub edit link. The former fails to perform the goal; the latter does not. Jan > at which point the client and > server again agree on implementations, and again the system works. > There is no requirement for the server owner to care, this notion of > "contracts" on such things has nothing to do with REST. > >> >> What I simply do not understand: If the service provider would take >> the design phase just a bit further and look a bit beyond the single >> one service it is about to implement the situation would be far >> better. The service provider could make the extensions valuable >> beyond the single service, give the specification a name and hence >> mint a new media type (or at least a documented profile that can be >> used in conneg). >> >> What is the reason for this ubiquitous obsession with not standardizing >> extensions of hypermedia formats? (Standardizing meaning: making it >> applicable beyond the single service and documenting it outside the >> realm of the single service). >> > > Or asked another way, what is the obsession against embedding domain- > specific vocabulary within ubiquitous media types? Domain-specific > vocabularies don't need to be exposed at the protocol layer. So why > fragment the understanding of ubiquitous media types, by denying the > possibility of defining any number of domain-specific vocabularies > within a well-known hypertext container format? (And by that, no, I > don't just mean HTML, stop making me bend over backwards to say that > every time I write hypertext please, folks...) > >> >>> I am not opposed to domain-specific representations so this does not >>> really bother me. 
My concern is that a client that needs html+gd, >>> but asks for html is a lot less likely to get what it needs than a >>> client that explicitly requests what it needs. >> >> Exactly! And from the POV of change impact analysis on the server >> side it is horrible because the server developer needs to know all >> this additional stuff when working on the resource implementation for >> text/html. >> > > What REST constraint is violated, if a service changes to a media type > that causes user agents that used to work with it, not to any more? > What I see as having to learn a bunch of additional stuff, is having to > learn a new media type as opposed to just learning a new domain-specific > vocabulary within markup elements and attributes I'm already familiar > with. Especially if avoiding ubiquitous types has led to the re- > invention of common hypertext controls. > > >> >>>> REST has nothing to do >>>> with negotiating between versions of an API. >>> >>> Sure it does. The accept header allows the negotiation of API >>> versions. Consider `accept: text/html` vs `accept: >>> application/atom+xml`. One says the client wants to interact with >>> the html version of the api, the other says the client wants to >>> interact with the atom version of the api. >> >> Yes, exactly. Or consider: >> >> Accept: application/atom+xml vs. Accept: application/atom-v2+xml >> > > That would tell me that you're requesting version 1 of Atom or version > 2 of Atom, not version 1 vs. version 2 of an API. There is no need in > REST to version APIs, or parameterize version information in media > types. The world tried this approach, but it lost out, proof of this > is that HTML 5 is still text/html, not 'text/html; version=5'. If > there is an Atom 2, it would still be application/atom+xml, in keeping > with the REST style as instantiated on the Web. 
> >> >>> >>>> Referring once again to REST: >>>> >>>> "The trade-off, though, is that a uniform interface degrades >>>> efficiency, since information is transferred in a standardized form >>>> rather than one which is specific to an application's needs." >>> >>> I don't get what you seem to get that from this quote. Later in the >>> same section we get this paragraph. >>> >>> In order to obtain a uniform interface, multiple architectural >>> constraints are needed to guide the behavior of components. REST >>> is defined by four interface constraints: identification of >>> resources; manipulation of resources through representations; >>> self-descriptive messages; and, hypermedia as the engine of >>> application state. >>> >>> No where does that suggest there is some limit to the allowable >>> number of representation flavors. >> >> Right. Roy refers to the fact that general-purpose payloads are >> naturally less efficient (in terms payload size) than payloads >> designed specifically for a single service. >> > > And by implication, more uniform. The goal of a REST system is to > become more uniform, at the tradeoff of efficiency, for the purposes of > scaling and serendipitous re-use, in keeping with the principle of > generality. > >> >>> Your reading of the uniform interface seems different than much of >>> the community. >>> >>> RestWiki is pretty quiet on the idea of limiting media types being >>> part of the uniform interface in both the interface genericity's[1] >>> and rest in plain english[2] pages. Both seem to imply that domain >>> specific media types would be ok. >> >> Yes, of course they are ok. They are the *essence* of building >> RESTful systems beyond the existing human HTML and Feeds Web. >> > > I disagree vehemently. Myriad diverse systems have been built using > ubiquitous media types. These media types are capable of embedding > machine-readable, domain-specific vocabularies. Ubiquitous media types > does not mean HTML. 
How many times must I mention telephony systems > with hypertext REST APIs that have nothing to do with HTML or browsers? > > Unless you have such a compelling use case for not using HTML, and no > other ubiquitous type exists for your problem, then g'head. But, 999 > times out of 1,000 the nature of the system is not such a unique > snowflake that HTML + RDFa need to be dismissed out-of-hand. There is > simply no reason that m2m can't be done this way, as proven by the m2m > interaction via HTML + RDFa that's happening more and more each day now > that GR is proliferating. > > HTML is capable of *accessibly* describing the hypertext controls of > almost any conceivable REST API. It's well understood, and easily > maintainable (if well-written, but that goes for anything) because it > is both human and machine readable. Such a hypertext API can wrap any > number of back-end formats and systems, RESTful or not, and make a REST > system out of it. 999 out of 1,000 custom media types think they're > providing a hypertext API, but aren't really meeting the hypertext > constraint at all. > > -Eric ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
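Jan's hourly price-checking agent can make its dependency explicit in code even when the protocol level only says text/html. A minimal sketch, with invented function and marker names, of a client that verifies the annotations its goal depends on instead of silently hoping:

```python
# Sketch: a client whose goal *depends* on embedded GR-style vocabulary
# checks its precondition explicitly, rather than trusting that any
# text/html response will do. Markers and function names are hypothetical.

REQUIRED_MARKERS = ['property="gr:', 'typeof="gr:']

def usable_for_price_check(content_type, body):
    """Accept only text/html responses carrying the annotations this
    agent needs; a generic text/html page is not enough."""
    if not content_type.startswith("text/html"):
        return False
    return any(marker in body for marker in REQUIRED_MARKERS)

html_with_gr = '<span property="gr:hasCurrencyValue">79.99</span>'
print(usable_for_price_check("text/html; charset=utf-8", html_with_gr))  # True
print(usable_for_price_check("text/html", "<p>hello</p>"))               # False
```

This is exactly the gap the thread is arguing over: the Accept header can say `text/html`, but the agent's real requirement (the embedded vocabulary) is checked out-of-band by code like this, or expressed via a more specific negotiated type or profile.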
What happened to the simple guidance that the "best practice" is to try to reuse existing media types and if that is not possible, create a custom one? Why doesn't a simple statement like that work?
On Aug 10, 2010, at 1:57 PM, Eb wrote:

> What happened to the simple guidance that the "best practice" is to
> try to reuse existing media types and if that is not possible, create
> a custom one?

Nothing. The question is: what is the meaning of "if that is not
possible"?

Question: Why did people bother defining application/atom+xml?
text/html with some extensions would have worked just as well.

Jan

> Why doesn't a simple statement like that work?

-----------------------------------
Jan Algermissen, Consultant
NORD Software Consulting
Mail: algermissen@...
Blog: http://www.nordsc.com/blog/
Work: http://www.nordsc.com/
-----------------------------------
On Tue, Aug 10, 2010 at 9:11 AM, Jan Algermissen <algermissen1971@...> wrote:

> On Aug 10, 2010, at 1:57 PM, Eb wrote:
>
>> What happened to the simple guidance that the "best practice" is to
>> try to reuse existing media types and if that is not possible, create
>> a custom one?
>
> Nothing. The question is: what is the meaning of "if that is not
> possible"?
>
> Question: Why did people bother defining application/atom+xml?
> text/html with some extensions would have worked just as well.
>
> Jan
>
>> Why doesn't a simple statement like that work?

Hey Jan -

I'm no historian but I would imagine that what existed previously did
not fit their use case "naturally" (whatever that means :)) and hence
the need for a specialized media type. I'm pretty sure an Atom feed
could have been represented in plain html and/or xml and the "client"
taught to interpret it, but at some point your clients become numerous
and maybe it makes sense to invent a new media type with its own
specification that anyone consuming your service can refer to.

Just postulating now.....

Eb
I think there is a difference between a 'Restful' solution and a 'Restful'
HTTP solution.
If one has to use HTTP, then the following is my design:
1. There is nothing mapped to GET some url like /foo/{id}.
2. GET /foo/HEAD is not cacheable, and returns an {id}.
3. POST /foo/ to push().
4. DELETE /foo/{id} to pop(). If {id} is not the HEAD then the request
fails.
Not sure if it is good to implement 3 and 4 by PATCH.
Cheers,
Dong
On Thu, Aug 5, 2010 at 12:59 PM, Juergen Brendel <
juergen.brendel@...> wrote:
>
>
>
> Hello!
>
> Let's say I have a queue resource: /foo
>
> I can POST new entries into the queue. I can even refer to individual
> entries within the queue: /foo/<id>
>
> But how do I pop the next entry? How do I construct a single request
> that gets me the next/first entry but also removes the entry at the same
> time?
>
> Maybe I can implement a special resource /foo/next, which always refers
> to the next entry in the queue. But clearly, I can't use GET to pop the
> entry, since that would not be idempotent.
>
> The queue has multiple consumers, so the 'pop' operation should be
> atomic. This seems to rule out the possibility of doing a GET to
> retrieve the latest element, followed by a DELETE to remove it. Someone
> else could have gotten the 'latest' element in the meantime, thus
> causing the same element to be consumed twice.
>
> Maybe I can cause a 'move', where a single request causes the next
> element to be renamed to a unique ID, which is then returned to the
> client, who then is the only one who has a handle on that object. The
> client can then work with the resource. But the question now is:
>
> a) What happens when the client fails before it can delete the resource?
> b) What is the best way to 'move' an item in that way?
>
> Juergen
>
> --
> Juergen Brendel
> http://restx.mulesoft.org
>
>
>
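Dong's design above can be sketched in a few lines. This is a hypothetical in-memory model (the class and method names are mine, not from any post): POST /foo/ maps to push(), GET /foo/HEAD to head(), and DELETE /foo/{id} to pop(id), which fails unless {id} is still the head.

```python
# Hypothetical in-memory sketch of the queue API described above:
# POST /foo/ pushes, GET /foo/HEAD peeks at the head id (uncached),
# DELETE /foo/{id} pops only if {id} is still the head.
import itertools
import threading
from collections import deque

class RestQueue:
    def __init__(self):
        self._items = deque()           # (id, payload) pairs, head at the left
        self._ids = itertools.count(1)
        self._lock = threading.Lock()

    def push(self, payload):            # POST /foo/
        with self._lock:
            item_id = str(next(self._ids))
            self._items.append((item_id, payload))
            return item_id

    def head(self):                     # GET /foo/HEAD -> current head id, or None
        with self._lock:
            return self._items[0][0] if self._items else None

    def pop(self, item_id):             # DELETE /foo/{id}
        with self._lock:
            if not self._items or self._items[0][0] != item_id:
                return None             # would map to a 409/404: {id} is not the head
            return self._items.popleft()[1]
```

The lock makes the compare-and-pop atomic, which is what keeps two consumers from consuming the same element.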
More details ...
On Tue, Aug 10, 2010 at 9:34 AM, Dong Liu <edongliu@gmail.com> wrote:
> I think there is a difference between a 'Restful' solution and a 'Restful'
> HTTP solution.
>
> If one has to use HTTP, then the following is my design:
>
> 1. There is nothing mapped to GET some url like /foo/{id}.
>
> 2. GET /foo/HEAD is not cacheable, and returns an {id}.
>
and other representations. Many clients can hold this {id}.
> 3. POST /foo/ to push().
> 4. DELETE /foo/{id} to pop(). If {id} is not the HEAD then the request
> fails.
>
Clients might race in this case, but that is the reality for a queue that
serves many clients.
>
> Not sure if it is good to implement 3 and 4 by PATCH.
>
> Cheers,
>
> Dong
>
>
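The race Dong mentions resolves with a simple retry loop on the consumer side. A sketch, with get_head() and delete_item() standing in for GET /foo/HEAD and DELETE /foo/{id} (all names here are illustrative, not from any post):

```python
# Hypothetical consumer loop for the design above: peek at the head id,
# then attempt the conditional pop; on a lost race, retry with the new head.
import collections

queue = collections.deque(["job-1", "job-2"])   # stands in for the server state

def get_head():                                 # GET /foo/HEAD
    return queue[0] if queue else None

def delete_item(item_id):                       # DELETE /foo/{id}
    # Atomic compare-and-pop: succeeds only if item_id is still the head.
    if queue and queue[0] == item_id:
        queue.popleft()
        return True
    return False

def consume_one():
    while True:
        head = get_head()
        if head is None:
            return None          # queue drained
        if delete_item(head):
            return head          # we won the race; process this item
        # else: another consumer popped it first; loop and retry
```

Each consumer either wins the conditional DELETE or observes the failure and retries, so no element is consumed twice.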
Just saw this from an MQ vendor. It looks pretty interesting. It documents their REST interface to their MQ. http://docs.jboss.org/resteasy/hornetq-rest/1.0-beta-1/userguide/html_single/index.html
On Aug 10, 2010, at 4:53 PM, Eb wrote:

> On Tue, Aug 10, 2010 at 9:11 AM, Jan Algermissen <algermissen1971@...> wrote:
>
>> On Aug 10, 2010, at 1:57 PM, Eb wrote:
>>
>>> What happened to the simple guidance that the "best practice" is to
>>> try to reuse existing media types and if that is not possible, create
>>> a custom one?
>>
>> Nothing. The question is: what is the meaning of "if that is not
>> possible"?
>>
>> Question: Why did people bother defining application/atom+xml?
>> text/html with some extensions would have worked just as well.
>>
>> Jan
>>
>>> Why doesn't a simple statement like that work?
>
> Hey Jan -
>
> I'm no historian but I would imagine that what existed previously did
> not fit their use case "naturally" (whatever that means :)) and hence
> the need for a specialized media type. I'm pretty sure an Atom feed
> could have been represented in plain html and/or xml and the "client"
> taught to interpret it, but at some point your clients become numerous

That is an interesting way to put it :-)

I take you to say that from some number of clients N (N being
'numerous') decoupling becomes important (and hence a dedicated media
type). Is that what you are saying?

Jan

> and maybe it makes sense to invent a new media type with its own
> specification that anyone consuming your service can refer to.
>
> Just postulating now.....
>
> Eb

-----------------------------------
Jan Algermissen, Consultant
NORD Software Consulting
Mail: algermissen@...
Blog: http://www.nordsc.com/blog/
Work: http://www.nordsc.com/
-----------------------------------
On Tue, Aug 10, 2010 at 6:43 PM, Jan Algermissen <algermissen1971@...> wrote:

> That is an interesting way to put it :-)
>
> I take you to say that from some number of clients N (N being
> 'numerous') decoupling becomes important (and hence a dedicated media
> type). Is that what you are saying?
>
> Jan

Oh, I definitely think the larger (and more distinct) your client base
and its use case(s) become, the more important decoupling becomes (or
at least the degree of decoupling, as decoupling is always important).
This may (or may not) translate into minting a new media type, but I
think it probably does (no data to really prove this, however). With
consumers expecting something in a certain format/structure, it just
may make sense to standardize that format to make adoption easier, as
a client built to handle a certain media type can deal with it
regardless of the source, because there is an assumption that it
conforms to the structure.

Isn't Atom at the end of the day a "specialization" of XML? But what
would its adoption be like if it wasn't a standard media type? Imagine
a world without application/atom+xml. What would it look like from a
syndication perspective? How do I know what's syndication "xml" versus
catalog "xml"? You could probably say the same for HTML and SGML and
browsers on multiple platforms.

My $0.02.
On Mon, 2010-08-09 at 18:26 -0600, Eric J. Bowman wrote:
> Bill de hÓra wrote:
> >
> > Here's some real world amazon CSS
> >
> > .price { font-family: verdana,arial,helvetica,sans-serif; color:
> > #990000; }
> >
>
> How does that dispute what I said? I said,
>
> >
> > > If that span were *just meant for style*...
It's from www.amazon.de.
Bill
Bill de hÓra wrote:
>
> >
> > >
> > > Here's some real world amazon CSS
> > >
> > > .price { font-family: verdana,arial,helvetica,sans-serif; color:
> > > #990000; }
> > >
> >
> > How does that dispute what I said? I said,
> >
> > >
> > > > If that span were *just meant for style*...
>
> It's from www.amazon.de.
>
I don't get what you're driving at. Why wouldn't .price also identify
a price in Euros? Or, if .price identifies price in dollars, why can't
that be converted to Euros?
The point remains, such metadata can be used to identify item and price,
regardless of how a site is marked up.
-Eric
Jan Algermissen wrote:
>
> Question: Why did people bother defining application/atom+xml?
> text/html with some extensions would have worked just as well.

No, it wouldn't have. You can't assign metadata to the root element in
SGML or SGML-derived XML media types, beyond namespaces. SGML and XML
media types clue us in about the root element. What you can't do with
HTML is define whether a page is a collection, or a member of a
collection, particularly by varying the root element in some way. So
there is no way to extend HTML to match the semantics of Atom.

The way HTML pages are differentiated as to member/collection on my
demo system is by media type identifier -- <link rel='alternate'
type='application/atom+xml; type=feed' href='foo'/>. Without Atom, I'd
have no such mechanism. With Atom, I can use standard link relations
and media type identifiers to indicate collection vs. member.

There are other semantics in Atom which aren't in HTML, but may be
duplicated in HTML using metadata. The reason these weren't just added
to HTML is that we lacked the tools (like RDFa) to do so at the time.
As it is, Atom plugged a big gaping hole in the semantic capabilities
of media type identifiers (collections), and as a result was well on
its way to becoming ubiquitous before it was even finalized.

-Eric
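The type=feed parameter Eric mentions is the one AtomPub defines for application/atom+xml (type=feed vs. type=entry). A sketch of reading that hint from an (X)HTML page's alternate link (the markup sample and the function are illustrative, not from the demo system):

```python
# Sketch of the member-vs-collection hint described above: the Atom media
# type's optional "type" parameter distinguishes feed documents from entry
# documents, and an XHTML page can point at its Atom alternate.
import xml.etree.ElementTree as ET

page = """<html xmlns="http://www.w3.org/1999/xhtml"><head>
<link rel="alternate" type="application/atom+xml; type=feed" href="foo"/>
</head><body/></html>"""

XHTML = "{http://www.w3.org/1999/xhtml}"

def atom_kind(doc):
    """Return 'feed', 'entry', or None, based on the alternate link's type."""
    for link in ET.fromstring(doc).iter(XHTML + "link"):
        if link.get("rel") == "alternate":
            mtype = link.get("type", "")
            if mtype.startswith("application/atom+xml"):
                # Scan the media type parameters for "type=feed" / "type=entry".
                for param in mtype.split(";")[1:]:
                    name, _, value = param.strip().partition("=")
                    if name == "type":
                        return value
    return None
```

A client that cares whether it is looking at a collection or a member can branch on the returned value without ever fetching the Atom document itself.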
I'm trying to figure out a RESTful way to do compositions. I managed to tie my
head in a knot. Please untangle me.
Suppose that I am trying to supply my clients with a RESTful way to send
Christmas cards. I have access to a RESTful service that catalogs my friends.
It's at http://yourfriends.example.com . It gives me a way to GET a person
document defined by a person.xsd XML schema by linking from a page that lists my
friends. This unfortunately doesn't have addresses, and I need my client to pick
one of the addresses on file to send the card to. Fortunately, I know of another
RESTful web service that does supply addresses for a person. There's even a link
to it in the person XML. The links point to address.xsd documents generally
hosted by http://addresses-galore.example.com
Here are some approaches (the m namespace is mine; the two above are p and a,
respectively):
1) I create a representation like that shown below in my own new schema
person-with-address.xsd
<m:person-with-address>
<p:person ...>...</p:person>
<m:addresses>
<a:address ...>...</a:address>
...
</m:addresses>
</m:person-with-address>
2) I notice that person.xsd is extensible and so I create my own compliant
person.xsd documents like so
<p:person ...>
...
<m:addresses>
<a:address ...>...</a:address>
...
</m:addresses>
</p:person>
3) Suppose I take approach #2 and yourfriends.example.com notices and decides to
host them. They wish to offer both representations (their original and my
extension). Are these representations of the same or different resources? Is it
reasonable for them to try to let users choose the thin or fat version of a
person using content negotiation? If so, how can you differentiate these by a
media type?
Is there any issue here in solution 1 or 2 because I'm constructing an
application state as (a client of yourfriends and addresses-galore) that wasn't
hypertext driven?
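Approach #1 amounts to wrapping the two fetched documents in a new envelope. A sketch with ElementTree (all three namespace URIs and the sample attributes are made up for illustration):

```python
# Sketch of approach #1: wrap the fetched person and address documents
# in a new m:person-with-address envelope.
import xml.etree.ElementTree as ET

M = "http://example.com/mine"                       # my (m) namespace
P = "http://yourfriends.example.com/person"         # person.xsd (p) namespace
A = "http://addresses-galore.example.com/address"   # address.xsd (a) namespace

# Stand-ins for documents fetched from the two services.
person = ET.fromstring(f'<p:person xmlns:p="{P}" name="Alice"/>')
address = ET.fromstring(f'<a:address xmlns:a="{A}" city="Oslo"/>')

# Build <m:person-with-address> containing the person and an
# <m:addresses> wrapper around the address documents.
root = ET.Element(f"{{{M}}}person-with-address")
root.append(person)
addresses = ET.SubElement(root, f"{{{M}}}addresses")
addresses.append(address)

print(ET.tostring(root, encoding="unicode"))
```

Approach #2 would instead append the `<m:addresses>` element directly under the `<p:person>` root, relying on person.xsd being extensible.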
Media type identifiers inform clients what codec or engine to use for
deciphering the payload. Nothing more. Clients are limited by how
recent their codec/engine is for the given type, or how fully they
implement that type. Media type identifiers are _not_ meant to say
anything about the nature of the payload. Doing so would introduce
coupling, violating the layered-system constraint, and result in more
media types than anyone could possibly keep up with, defeating the
whole purpose of self-descriptive messaging.

Implementation details, like domain-specific vocabularies, are hidden
behind REST's uniform interface. REST simply does not care about the
nature of the information flowing between connectors, only that its
format (headers making assertions, possibly about payloads) is obvious
to anyone who cares to look (self-descriptive messaging) at the
headers.

Exposing the nature of the payload in media type identifiers is an
obvious coupling of client to server -- such implementation details
need not be, and cannot be, understood by intermediaries. Fracturing
the Web by exposing the nature of the payload in media type identifiers
places a burden on intermediaries that would be impossible to bear.
Which results in coupling of client to server based on media type
identifier, which violates the layered system constraint, even if such
media types were to become ubiquitous enough not to violate the
self-descriptive messaging constraint.

Caches and proxies simply do not care about item and price information,
only the format such metadata is contained in. The Web would collapse
in a non-interoperable heap if it were any other way. REST explains
this, and predicts the evolution of the Web to continue along the very
path it is taking, which is interoperable ubiquitous media type
identifiers, not media type identifiers which expose the nature of the
payload.
Implementation details, like domain-specific vocabularies, remain
hidden behind the uniform interface -- unless you violate the uniform
interface by exposing such details in media type identifiers.

The best example I can give of proper media type design, and proper
assignment of media type identifiers, is image types. You younger
folks here really are spoiled nowadays -- what browser doesn't grok
progressive rendering, transparency or animation? This was not always
so; these things came about incrementally, _without_ creating new
media type identifiers every step of the way. I'm sure that a search
of the relevant archives will show that this is so in large part due
to consistent prodding from Roy.

When I first went online, the only media types I had to choose from to
inline an image were GIF 87a and JPEG. Eventually the Web moved to
full support of GIF 89a, and more advanced JPEG capabilities like
progressive rendering. Patent controversy led to PNG. We have now
reached a place where it just isn't necessary to define new media type
identifiers for images (esp. w/ SVG), especially for the purpose of
communicating the *nature* of the payload, which is a REST
antipattern. (Of course, there's still room for new image media types
to evolve; for example, some new breakthrough in image compression
would require a new codec, which would require a new media type
identifier.)

These different media types (GIF 87a and 89a are different media types
which share an identifier) are backwards-compatible. Thus, serving an
animated GIF 89a to a GIF 87a browser used to result in the display of
only the first frame of the animation. Transparency would show up as
whatever masking color was used. The image would need to transfer
completely before display, instead of rendering incrementally.

Media types evolve. Ideally, their identifiers do not. Only new media
types require new media type identifiers; extensions to existing media
types do not.
All the media type identifier does is inform the user agent to use its
latest knowledge of GIF, not the nature of the payload as dogs vs.
porn, or as having transparency, animation or whatnot. To the user
agent, the media type identifier just says "use your latest GIF codec
to decipher this payload as best you can." There is nothing more the
server needs to assert about the payload at the protocol layer. All
else is implementation detail, hidden behind the uniform interface,
not sent over the wire as part of the protocol.

It would be silly to require content negotiation in order to provide a
single frame to GIF 87a browsers, vs. animation for GIF 89a browsers.
Such coupling does not allow client and server to independently
evolve. The differences between the image media types which share
identifiers have nothing to do with the format of the image, i.e. the
codec used for rendering is the same. The differences come down to
metadata support. Adding support for defining a given color bit as
transparent, or having progressive rendering, or being a collection of
sequential images, is not really any different than adding RDFa to
existing HTML or XML media types.

Properly designed media types don't need versioning, or the conneg
that would encourage. Had the Web evolved that way, chaos would have
ensued. Or not -- in which case we'd have falsification of what I'm
saying about REST. As it is, we have corroboration of what I'm saying
about REST -- evolving media types doesn't begin to mean creating new
media type identifiers every time an existing media type is extended,
or used in a new way (like using a GIF as a sparse-bit array) -- this
just won't scale.

>>>> My point is that html+gd is as much an application-specific
>>>> media type as a custom xml format. Not giving it a name does not
>>>> change that fact. Once the server adds gd annotations, and
>>>> clients started depending on them, the representations become
>>>> application specific.
>>> Even worse: they become service specific and the client
>>> implementation couples itself to an (un-guaranteed) particularity of
>>> that service. This is not different than having a service-specific
>>> API in the first place.
>>
>> What do you mean by service-specific?
>
> Specific to a service. Not orthogonal to the service. HTML is
> orthogonal to Amazon. Amazon's HTML style attributes are specific to
> Amazon.

I'm not holding Amazon up as a shining example of anything other than
the use of domain-specific vocabulary within a ubiquitous media type,
to prove that such a design pattern works even without a defined
ontology or RDFa (it's basically a microformat). Let's get off of
Amazon please, and back to the original example of BestBuy, which is
using RDFa to implement the GoodRelations ontology instead of winging
it. Although, the difference between the two approaches lies outside
the scope of REST, because implementation details like domain-specific
vocabularies using RDFa vs. microformats are hidden behind the uniform
interface. Implementing GR in RDFa is not service-specific. It is
domain-specific and in no way couples clients to servers.

Changing the GIFs on my websites back in the day, from GIF 87a to 89a
with progressive rendering (using <img @lores/> to load a
highly-compressed JPEG first -- anyone else remember those dialup days
of yore?) degraded gracefully, due to the loose coupling of client to
server. Had I been required to implement a new media type identifier,
I wouldn't have moved to 89a until all browsers supported it, because
it would have required coupling client to server, which would have
meant implementing conneg and continuing to serve the 87a images to
older clients. Why wouldn't I apply this experience to my knowledge of
REST, which was both derived from, and motivated the continuation of,
the media type vs. identifier design pattern of images?
Coupling clients to servers by exposing the nature of the payload in a
media type identifier is exactly the opposite of the proven-successful
design pattern REST is based on, which is why I say it can't be REST
to go that way.

>> Domain-specific vocabulary embedded in metadata is not some sort of
>> design flaw. Google and others understand GoodRelations.
>
> How would the server know that a user agent depends on it to fulfill
> its implemented goal?

Why would the server care? The goal in REST is graceful degradation,
as opposed to tight coupling. The user agent doesn't depend on
domain-specific vocabulary. That understanding is between the server
owner and the user, not the server and the user agent, which only care
about agreeing on a container format for domain-specific vocabulary.
RDFa and GoodRelations are tools for communicating the meaning of the
server owner such that it may be understood by machine users.

Contracts exist between the stakeholders in a system, not its REST
connectors. Enforcing contracts between connectors instead of
stakeholders is a violation of the layered system constraint. The
user, human or machine, is a stakeholder with no knowledge of the
protocol layer -- only the application state provided by the user
agent. The protocol layer likewise couldn't care less about user goals
or the nature of the payload, only its format. This understanding of
intent between server owner and human or machine user lies neatly
outside the scope of REST. Communicating this understanding from
server to user agent via an application-specific media type identifier
breaks right through the layered system constraint. Having the user
agent communicate this understanding to the user via domain-specific
vocabulary does not violate any constraints.

> If the user agent does not *depend* on GoodRelations then Accept:
> text/html is just fine. But if it *needs* the embedded stuff to work
> properly, Accept: text/html is not ... and there is a hidden contract
> that will eventually break.

Of course that contract is hidden, and of course, being a contract,
it's subject to change. It's hidden because it's an implementation
detail that has no business being part of REST's uniform interface.
RDFa can embed GR inside a variety of host languages in a variety of
ways; the only contractual agreement in REST between the user agent
and the server is to agree on the format, *not* the nature, of the
payload.

The goal in REST is graceful degradation, which means allowing the
client and server to evolve independently due to loose coupling based
on shared understanding of evolving media types. If some client won't
work with RDFa + GR via HTML, then it needs its understanding of HTML
upgraded to participate in communications on such a system. Until
then, it understands as much of the text/html as it understands, and
ignores the rest -- the REST style is not based on "mustUnderstand."
The client may be upgraded by recoding the client, or by the server
invoking some code-on-demand via some client-side mechanism which
detects the user agent's capabilities within the media type (like XSLT
or Xforms capabilities in browsers may be detected using js). Or the
server can account for other clients entirely, for example by
GRDDL-transforming RDFa into RDF, or rather linking to such a
transformation. Or perhaps the client understood RDFa + GR via HTML
long before the server implemented it. Instead of constantly asking
the server "Are we there yet?" and failing, such a client simply
updates automatically when the server starts sending payloads of that
nature.

Except for compression, I like to avoid conneg wherever possible. An
architecture which requires conneg based on version or payload nature
for every resource that's ever been changed over time is fundamentally
opposed to REST.
If that's the price of implementing application-specific contracts at the protocol layer, then I say it's far too steep to pay, especially when the same goals may be achieved using standardized domain-specific vocabularies and RDFa via HTML. > > > Any service implementing GR has > > provided a machine-readable API that Google interacts with. These > > services are in no way required to resemble one another. So what do > > you mean by coupling? Agreeing to a domain-specific vocabulary > > implemented using standard media types, is exactly what is meant by > > decoupling. > > Right. And HTML + GoodRelations is not such a standardized media > type, eh? > No, HTML is a media type. RDFa extends the HTML media type (regardless of which media type identifier is used). GoodRelations is a domain- specific vocabulary communicated from server owner to m2m user via RDFa metadata that's part of the HTML media type by extension. If a client (component or connector) is concerned with the difference between GIF 87a and 89a, then it may introspect the payload to determine which it is, just as a client may introspect a payload to determine the presence of RDFa attributes, or perhaps read a composite DTD. When RDFa or Xforms are used as guest languages within HTML host languages, they do not change the underlying nature of the host language, its semantics, its root node or nodes, or anything else about it. It's merely an extension, no different than adding animation to image/gif. The nature of the payload, i.e. what the animated GIF shows or what ontology RDFa exposes or what forms language is used or what URIs are in @profile, has no bearing on its media type. Extending existing media types with metadata allows graceful degradation. Assigning new media type identifiers every time this happens couples clients to servers, such that they must evolve at the same time instead of independently. > > > Why should the server owner care? 
> If BestBuy changes from GR to something else, then Google can no
> longer identify items and prices, until Google implements the new
> ontology,

> So then, why do we need HTML in the first place? If this works so
> smoothly, why not just have Amazon send application/xml? Amazon would
> send a certain kind of XML, ya know: <html><title>...</title> ...
> </html> and Google and browser implementors would just implement
> that.

Because that would violate the self-descriptive messaging constraint.
HTML isn't just some random XML with a schema, it's a specific format
which, as you describe, has a root element of <html>, a <head> which
contains the <title> of the document (i.e. well-known semantics) and
other metadata, plus a <body> for content, with well-known rendering
rules as a container for scripts, styling, transformations, forms
languages and whatnot; which, after processing, provides the user with
an application steady-state consisting of hypertext controls for
whatever API the server owner is attempting to communicate to human or
machine users using natural or machine (RDFa) language.

Sending text/html or application/xhtml+xml describes everything in
that last paragraph; which means that these media types come with
whatever security implications arise from allowing transformation,
scripting or forms languages to execute, and that these are clearly
defined in well-known markup patterns. Nothing about form semantics or
script bindings should be implied, let alone assumed as explicit, when
application/xml is used, because application/xml doesn't define any
such things, although it does allow for styling and transformation via
XML Processing Instructions and linking via Xlink or rdf:about. The
same goes for application/custom+xml. Pick a media type capable of
conveying your hypertext API to me, then tag it with a ubiquitous
media type identifier that explicitly states things like root element,
title and security profile.
Hypertext APIs are document-driven, so define your APIs using one or
more known formats for conveying hypertext control documents, instead
of re-inventing things like <title> or forms or script bindings or
image inlining or accessibility and whatnot. Just because browsers
sniff content and implement javascript for media type identifiers js
doesn't have bindings for (application/xml, or text/plain in IE's
case) doesn't mean they're right to do so, and certainly doesn't mean
that doing so is proper REST architecture. In fact, doing so is a
direct violation of what little security features exist in REST or Web
architectures. Unless you see no danger from having text/plain treated
like text/html by allowing transformations, forms and scripts to
execute *despite* the fact that none of those behaviors are defined
for text/plain. The risks of HTML media types are well known.
Reinventing new hypertext control document languages for every new
service results in an SOA-like disaster where the security
considerations vary from service to service, and are unknowns, of the
"known unknown" variety.

Give SVG a try as your hypertext control document media type. As XML,
it allows CSS styling, plus it defines linking semantics and
javascript bindings, and can serve as an Xforms and RDFa host
language, thereby implementing GoodRelations. These are all "known
knowns" and as such, the security implications of the payload are
bright and clear when tagged with the image/svg+xml media type
identifier.

> Then, if Amazon changes from that syntax to some other one, Google
> and our browsers can no longer work with the XML.

Amazon changing from some ad-hoc microformat to RDFa + GR won't break
Google at all. Instead, Google would be able to highlight item + price
information for Amazon just like they do for BestBuy. If Amazon had
beaten BestBuy to the punch here, then BestBuy most likely wouldn't
have realized a 30% increase in sales.
Domain-specific vocabulary has no effect on browsers, which are there
to render the media type, not interpret the m2m metadata any more than
they interpret the human-readable metadata. The fact that Google is
able to glean item/price information from HTML code annotated with
RDFa + GR really has nothing to do with REST. It's just implementation
details, the sort which are hidden behind the uniform interface. Which
is why I keep talking about it here -- making this agreement a
protocol-layer concern is NOT REST.

> Your train of thought implies that that is also just fine because
> once Google and the browsers follow the new stuff, everyone is happy
> again.

No, REST implies that clients and servers evolve independently; this
isn't something I came up with. There is no need for Amazon or BestBuy
to wait until they have some contractual agreement with Google before
sending GR, just as there is no need for Google to wait until anyone
is actually sending GR before they decide to understand it. This is
loose coupling. The tight coupling which results from requiring a
contract, such that clients and servers must have some a priori
arrangement before either may evolve, certainly has nothing to do with
REST architecture.

> So - why do we need text/html as a media type?

I never said we did. I have no idea why, when I say hypertext, people
assume I only mean HTML. I likewise have no idea why, when I say HTML
media types, people assume I only mean text/html. I'd much prefer it
if one major browser vendor didn't have their head shoved so far up...
erm, ummm... anyway, there's no valid reason why application/xhtml+xml
shouldn't be the de facto media type for the Web in this day and age.

> (Hint: because browsers (and Google) depend on it
> implementation-wise. They do not work with any application/xml.

I have no idea what you're talking about. Google will read anything it
possibly can, it won't refuse to read anything that isn't text/html.
Check out my demo: give application/xml a try under IE. IE will also process my text/xml and text/plain variants. Every browser with XSLT works with application/xml, while all XSLT browsers except IE grok application/xhtml+xml. So my plan is to use conneg to send exactly the same polyglot document as application/xhtml+xml to XSLT browsers except IE, as application/xml to IE, and to transform the document on the server to send text/html to the rest, including Google. I'm not sending application/xhtml+xml to Google, because Google won't execute my XSLT transformation (last I checked). I suppose I could send the server-side transformation to Google as application/xhtml+xml, but I'm using text/html because this is my default variant, the one I want to work with anything, including obsolete browsers; it's just easier to have the conneg default to text/html and detect XSLT browsers. If I just didn't care about older user agents, I'd not be using text/html. It's certainly not a Google or browser requirement (except for IE).

> That is why they say: I 'Accept: text/html, application/xhtml+xml'
> and as long as you send me something that conforms to one of those
> types I *can* carry out the implemented goal for the response to
> *this* request.)

No, Google and browsers all include '*/*' in their Accept headers, to indicate their willingness to receive any format. I don't actually care what a user agent prefers; if it specifically says it Accepts application/xhtml+xml, that's what I send it, because that media type uses less bandwidth and has better user-perceived performance than text/html. The client isn't instructing the server what media type to send; making things work in such fashion is not the result of following REST.
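A minimal sketch of the selection logic described above, under stated assumptions: it only inspects the Accept header and ignores q-values, whereas the actual setup would also need user-agent detection to single out IE (omitted here). The media type names are real; the function and its policy are illustrative, not the author's code:

```python
# Server-side conneg sketch for one polyglot document, per the plan
# above: XHTML to XSLT-capable clients that advertise it, raw XML to
# XML-only clients, and a server-transformed HTML default for the rest.

def negotiate(accept_header):
    """Pick a response media type from a raw Accept header value."""
    accepted = [part.split(";")[0].strip()
                for part in accept_header.split(",")]
    if "application/xhtml+xml" in accepted:
        # XSLT browsers (except IE, which needs user-agent sniffing).
        return "application/xhtml+xml"
    if "application/xml" in accepted and "text/html" not in accepted:
        # Clients that grok XML but don't advertise (X)HTML support.
        return "application/xml"
    # Default: server-side transform to HTML for everything else,
    # including Google and obsolete user agents.
    return "text/html"
```

Note how a plain `*/*` falls through to the text/html default, matching the point that the client's Accept header expresses willingness, not instructions.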
Receiving a response of a certain media type says absolutely nothing about whether a user agent can accomplish the goals of the user; that's a function of domain-specific vocabulary processed at a different layer from the protocol interaction between connectors.

> To stress the point again: if the user agent's implementation of a
> certain user goal *depends* on a certain format of representation
> that must be expressed in the Accept header.

Sure. An Atom user agent is going to have one hell of a time trying to interpret my demo system if I only send it HTML media types. But that's just one variant; by using RDFa I can incorporate the same domain-specific vocabulary into my HTML and Atom media types, such that all variants describe the same API. If a user agent depends on that domain-specific vocabulary, then it will have to introspect the payload to determine compatibility. There is absolutely no REST constraint stating that this must be exposed in the media type identifier; in fact, to do so violates REST, as I've explained.

> If a user agent is implemented to wake up every hour and check the
> prices of items on various shopping sites it has far more specific
> needs than a generic agent that displays a [Next] button if it
> encounters a next link or displays an [edit] button if it encounters
> an AtomPub edit link.

Of course its needs are more specific. That's what domain-specific vocabulary is for. If I'm a human user, domain-specific vocabulary is what tells me which link to click to order which item, and what the price is, in natural language. If I'm a machine user, I'm looking at the metadata to determine the same thing. Either way, the hypertext is driving application state. The user agent itself isn't what's implemented to do any specific task.
If a machine user has a goal of price-checking items on some schedule, then the machine user is coded to have the user agent request the proper-format representation (regardless of the nature of the payload) and render it into an application steady-state; the machine user chooses from the available state transitions; the user agent makes it so. The options for which state transition to take must be contained in the payload (since we don't have standard link relations for this, the Link: header is not relevant). All RDFa does is allow that state transition to be annotated in machine-readable fashion. Otherwise, a machine user would not be able to decipher the state transition choices, let alone choose between them.

The smart agent you describe can be easily implemented using HTML, js and libcurl. I see no advantage to describing such a service API using any other hypertext language. There is absolutely no REST constraint admonishing against the use of HTML to make this work.

REST is nowhere near as complicated as you're trying to make it by insisting that media type identifiers must constitute some inviolable, binding agreement between stakeholders, or indicate the nature of the payload, when they only say what the format of the payload is. Requiring media type identifiers to mean anything more than that violates REST, plain and simple.

-Eric
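The hypertext-driven loop described above can be sketched as follows. This is a toy illustration, not the author's code: the link relation names and hrefs are invented stand-ins for the machine-readable (e.g. RDFa-annotated) transitions a real representation would carry; the point is that the machine user selects its next request from the payload, never from out-of-band URI knowledge.

```python
# The machine user's goal selects a transition from those the user
# agent extracted from the current representation (hypothetical rels).

def choose_transition(transitions, goal_rel):
    """Return the href of the first advertised transition matching the
    goal, or None if the current application state doesn't offer it."""
    for t in transitions:
        if t["rel"] == goal_rel:
            return t["href"]
    return None

# Transitions as extracted from an annotated representation:
steady_state = [
    {"rel": "self",        "href": "/catalog"},
    {"rel": "check-price", "href": "/catalog/item/42/price"},
    {"rel": "order",       "href": "/catalog/item/42/order"},
]

# A price-checking machine user chooses; the agent would then GET this.
next_uri = choose_transition(steady_state, "check-price")
```

If the goal isn't offered by the current steady-state, the agent gets `None` back and must transition elsewhere first, which is exactly what "hypertext as the engine of application state" means here.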
I think there is a fundamental question that should be clarified unequivocally by the experts on this list. I do have an opinion, but I'm not an expert, so it's just that: an opinion.

Is REST's realm (the problem-space where it should be applied, or where it makes sense to apply it) exclusively the Web? Or should it, or can it, be applied to the more general space of network-based software architectures, thus including intranets (network-based apps that run exclusively inside a company) and extranets (the use of private networks and/or the public infrastructure of the internet to connect a limited number of companies, where limited does not equal small)?

Because if it is indeed only applicable to the Web (and please note that Web != Internet), to making websites that are going to be used by humans using a browser, most of the discussions here don't really make sense. Otherwise, some people's assumptions when they deal with the issues presented on this list are, to say the least, limited. Or plain wrong, not to say the least.

Now I do understand that one or the other approach may have to do with each one's background; people whose work is limited to the web may have a different point of view than others who have worked across several areas and platforms over the years.

For instance, what is the sense in saying "Media type identifiers inform clients what codec or engine to use for deciphering the payload" if my clients are *not* browsers? And also, please someone correct me if I'm wrong in saying "ubiquitous" != "standard"...

While it is true that intermediaries don't look inside the msg to perform their function, why must that also be true for servers that are not web-servers? If I have an application/mystuff+xml, all the intermediaries understand what they need to understand: they read this as application/xml. Why should the server be limited to this, knowing that I, as an architect/designer, although I *do not* have control over intermediaries, *do* have control over the server?
The coupling using application/xml or application/mystuff+xml is, from this point of view, exactly the same. And I do see an advantage in using "application/mystuff+xml" for content-negotiation *on the server side*, because that way I can even put that content-negotiation in my *server-side connector* (which I also control), thus relieving the server of workload, improving balancing, implementing scalability and effectively implementing layered design. But of course all this is just implementation.

Also, and all please excuse my rant, but these things must be said... why do some people on this list insist on treating people sometimes like morons and sometimes like little kids in the classroom in front of the master? Shouldn't we all consider the others as peers, even if the knowledge of some is superior to that of others? Aren't we all professionals? What's with the "you kids"?

I'm trying not to say harsh words, but do I have to publicize that I have developed software for the past 30 years, 27 of them as a professional, 20 of them as an independent consultant / contractor? That I started doing web sites in the early '90s and kept working on the web (although not exclusively) until as recently as 2007, when I designed and implemented a web-site login method using telephony, where you had to call a number to be authorized to enter the site and you stayed logged in until you hung up the call? Should I also say that I did a web site for a chicken delivery service? I didn't put that on my CV since I made it for a friend and it was not a paid job. Or actually it was: I got paid in chickens... I even designed many animated GIFs myself... How's that for publicity? I don't know, maybe it's just me that dislikes being treated with this kind of disdain?
Nevertheless, I really think that this list should clarify the question I raised above, because frankly, if REST is only about the web, I am plain wrong in my approach and it is better for me to understand that now and move on to other technologies. And I think others will benefit from that clarification too.

On 12 August 2010 07:19, Eric J. Bowman <eric@...> wrote:
>
> Media type identifiers inform clients what codec or engine to use for
> deciphering the payload. Nothing more. Clients are limited by how
> recent their codec/engine is for the given type, or how fully they
> implement that type. Media type identifiers are _not_ meant to say
> anything about the nature of the payload. Doing so would introduce
> coupling, violating the layered-system constraint, and result in more
> media types than anyone could possibly keep up with, defeating the
> whole purpose of self-descriptive messaging.
>
António: Not sure if this is exactly where you are heading, but here's my POV:

REST style is protocol-agnostic (not limited to HTTP).

REST style is not limited to Web or Internet usage (e.g. it has application for communication between autonomous devices in a closed custom network).

REST style using HTTP over the Web is not limited to using the common browser for the "client" (e.g. desktop applications, console apps, bots, etc.).

Finally, the REST style is not the only interesting style for building distributed network applications.

mca
http://amundsen.com/blog/
http://mamund.com/foaf#me

Join me at #RESTFest 2010 Sep 17 & 18
http://restfest.org
http://restfest.org/workshop

---------- Forwarded message ----------
From: António Mota <amsmota@...>
Date: 2010/8/12
Subject: Re: [rest-discuss] Atom feed vs. list of orders
To: "Eric J. Bowman" <eric@...>
Cc: Jan Algermissen <algermissen1971@...>, Peter Williams <pezra@...>, Rest List <rest-discuss@yahoogroups.com>

[...]

------------------------------------

Yahoo! Groups Links
Some thoughts: resources are allowed to have different representations, so why should they not be allowed to provide different/more details in some representations? In my opinion, it would be okay to handle both representations as reps of one resource.

(+) Different services can exchange resource addresses to person resources independent of their need for addresses.
(-) A further content type is necessary.

Because of the extensibility of person.xsd, it would be possible to use the same content type but different resources.

(+) Reuse of the content type.
(-) Different services with different requirements towards addresses could not exchange resource addresses. If they do exchange resource addresses, the errors may be recognized late (solid implementations assumed, it should only be a problem if the address is missing; old clients should ignore unknown elements in extensible XML schemata; of course, new clients needing the address should not crash, but they may give up although an address would be available; a different content type would at least allow clients to decide whether addresses are indeed unavailable). This problem may be solved by introducing links between the two resources.
(-) Cache invalidation after updating address resources may not work correctly, because intermediaries will not know the dependence between the two different resources. I think this problem was the topic of Mike Kelly and Michael Hausenblas at WS-REST 2010 (Using HTTP Link: Header for Gateway Cache Invalidation)...

Because the representations will seldom change and temporarily wrong representations will not lead to disasters, I would provide two different resource addresses with links between them, allowing clients to exchange links to persons and enabling further extensions (by providing more links with new link relations; as long as the old one is supported, no client should break).

Daniel

On 12.08.2010 06:07, Bryan Taylor wrote:
>
> I'm trying to figure out a RESTful way to do compositions.
> I managed to tie my head in a knot. Please untangle me.
>
> Suppose that I am trying to supply my clients with a RESTful way to send
> Christmas cards. I have access to a RESTful service that catalogs my
> friends. It's at http://yourfriends.example.com . It gives me a way to
> GET a person document defined by a person.xsd XML schema by linking from
> a page that lists my friends. This unfortunately doesn't have addresses,
> and I need my client to pick one of the addresses on file to send the
> card to. Fortunately, I know of another RESTful web service that does
> supply addresses for a person. There's even a link to it in the person
> XML. The links point to address.xsd documents generally hosted by
> http://addresses-galore.example.com
>
> Here's some approaches (the m namespace is mine, the two above are p and
> a respectively):
>
> 1) I create a representation like that shown below in my own new schema
> person-with-address.xsd
> <m:person-with-address>
>   <p:person ...>...</p:person>
>   <m:addresses>
>     <a:address ...>...</a:address>
>     ...
>   </m:addresses>
> </m:person-with-address>
>
> 2) I notice that person.xsd is extensible and so I create my own
> compliant person.xsd documents like so
> <p:person ...>
>   ...
>   <m:addresses>
>     <a:address ...>...</a:address>
>     ...
>   </m:addresses>
> </p:person>
>
> 3) Suppose I take approach #2 and yourfriends.example.com notices and
> decides to host them. They wish to offer both representations (their
> original and my extension). Are these representations of the same or
> different resources? Is it reasonable for them to try to let users
> choose the thin or fat version of a person using content negotiation?
> If so, how can you differentiate these by a media type?
>
> Is there any issue here in solution 1 or 2 because I'm constructing an
> application state as (a client of yourfriends and addresses-galore)
> that wasn't hypertext driven?
--
Daniel "Oscar" Schulte
Woestestrasse 2
58675 Hemer
Telefon: +49 2372 726121
Mobil: +49 176 20646122
E-Mail: mail@...
Internet: http://www.DanielOscarSchulte.de
ICQ: 158955358
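Daniel's preferred option (two resources joined by typed links) can be sketched with HTTP Link headers. This is a hypothetical illustration, not code from the thread: the "related-address" relation name and the URI are invented, and a production system should use a real Link-header parser rather than this naive string handling:

```python
# Sketch: a person resource advertises its address resource via a
# typed link in an HTTP Link header instead of inlining the address.

def format_link_header(links):
    """Serialize (href, rel) pairs into one HTTP Link header value."""
    return ", ".join(f'<{href}>; rel="{rel}"' for href, rel in links)

def parse_link_header(value):
    """Naive parse of a Link header value into {rel: href}.
    Assumes no commas/semicolons inside URIs and a single rel param."""
    out = {}
    for part in value.split(","):
        href, params = part.split(";", 1)
        rel = params.split("=", 1)[1].strip().strip('"')
        out[rel] = href.strip().strip("<>")
    return out
```

A cache-invalidation gateway (as in the Kelly/Hausenblas paper Daniel mentions) could read the same header to learn which cached address entries depend on an updated person resource.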
António,

On Aug 12, 2010, at 12:05 PM, António Mota wrote:

> Because if it is indeed only applicable to the Web - and please note
> that Web != Internet - to making websites that are going to be used by
> humans using a browser, most of the discussions here don't really make
> sense.

There is no difference between human-to-machine and machine-to-machine. There are users and software systems. Users intend to accomplish some goal by means of these software systems. The components of software systems work together and create applications through which the user goal is accomplished. In the case of networked applications, the components communicate over some network. In the case of REST, these components are called user agents, intermediaries and origin servers.

The user interacts with the user agent[1], and the user agent and the other components work together and create the application that runs towards the user goal. Conceptually, it does not matter how often the user is needed to make a decision (carry out the next user action). It might be often (as it is with browsers) or never (as it is with indexing spiders).

...

It might be that you have requirements for your architecture that do not match the properties of the Web (as induced by its style: REST). For example, you might not be willing to trade some efficiency for increased simplicity and evolvability as REST does, and then the Web would not be the appropriate architecture to use. However, the suitability of the Web is not a question of how much the user is involved in the execution of the application that realizes the user's goal.

Does that help?

Jan

[1] Even if this interaction is limited to scheduling the user agent execution (e.g. an indexing spider) and reviewing the final application state (the index, probably located in some relational database).

-----------------------------------
Jan Algermissen, Consultant
NORD Software Consulting

Mail: algermissen@...
Blog: http://www.nordsc.com/blog/
Work: http://www.nordsc.com/
-----------------------------------
Your example doesn't indicate why you need to put the address directly into the representation of a person. After accessing the resources yourfriends and addresses-galore, your client appears to have everything it needs to generate an xmas card, so why not just do it?

Perhaps what you left out is that your client needs to POST to a 3rd resource, say mailcard.example.com, to actually send the card. But if that is the case, then the mailcard resource will tell you what "composite" representation to POST.

Also, WRT option (2), I think it's a bit odd to add an "inline" address representation to person.xsd, when the original designer of the rep. chose to only provide a link to an address. Wouldn't your adding the address inline potentially be contrary to the intent of the schema design, e.g. if the designer of person.xsd wanted to loosely couple addresses and people?

-- Nick

Nick Gall
Phone: +1.781.608.5871
Twitter: ironick
AOL IM: Nicholas Gall
Yahoo IM: nick_gall_1117
MSN IM: (same as email)
Google Talk: (same as email)
Email: nick.gall AT-SIGN gmail DOT com
Weblog: http://ironick.typepad.com/ironick/

On Thu, Aug 12, 2010 at 12:07 AM, Bryan Taylor <bryan_w_taylor@...> wrote:

> I'm trying to figure out a RESTful way to do compositions. I managed to
> tie my head in a knot. Please untangle me.
>
> Suppose that I am trying to supply my clients with a RESTful way to send
> Christmas cards. I have access to a RESTful service that catalogs my
> friends. It's at http://yourfriends.example.com . It gives me a way to
> GET a person document defined by a person.xsd XML schema by linking from
> a page that lists my friends. This unfortunately doesn't have addresses,
> and I need my client to pick one of the addresses on file to send the
> card to. Fortunately, I know of another RESTful web service that does
> supply addresses for a person. There's even a link to it in the person
> XML.
> The links point to address.xsd documents generally hosted by
> http://addresses-galore.example.com
>
> Here's some approaches (the m namespace is mine, the two above are p and
> a respectively):
>
> 1) I create a representation like that shown below in my own new schema
> person-with-address.xsd
> <m:person-with-address>
>   <p:person ...>...</p:person>
>   <m:addresses>
>     <a:address ...>...</a:address>
>     ...
>   </m:addresses>
> </m:person-with-address>
>
> 2) I notice that person.xsd is extensible and so I create my own
> compliant person.xsd documents like so
> <p:person ...>
>   ...
>   <m:addresses>
>     <a:address ...>...</a:address>
>     ...
>   </m:addresses>
> </p:person>
>
> 3) Suppose I take approach #2 and yourfriends.example.com notices and
> decides to host them. They wish to offer both representations (their
> original and my extension). Are these representations of the same or
> different resources? Is it reasonable for them to try to let users
> choose the thin or fat version of a person using content negotiation?
> If so, how can you differentiate these by a media type?
>
> Is there any issue here in solution 1 or 2 because I'm constructing an
> application state as (a client of yourfriends and addresses-galore)
> that wasn't hypertext driven?
2010/8/12 Jan Algermissen <algermissen1971@...>

> António,
>
> On Aug 12, 2010, at 12:05 PM, António Mota wrote:
>
> > Because if it is indeed only applicable to the Web - and please note
> > that Web != Internet - to making websites that are going to be used by
> > humans using a browser, most of the discussions here don't really make
> > sense.
>
> There is no difference between human-to-machine or machine-to-machine.

Well, there is a difference; that's why we have User Interfaces... But for what I was asking, when I said users I was referring to browsers, that is, a piece of software running on a machine with the intent to produce User Interfaces so humans can interact with applications. So basically, HTML is an interface layer that allows humans to interact with machines. An interface layer that, according to some people, will tend to become more and more vague, in what is called ubiquitous computing; whether those kinds of interfaces will be based on current standards like HTML and standardized media types is yet to be seen, I guess. And this (browsers as interfaces) holds independently of the possibility of using the exact same standards for non-browser machine-to-machine interaction, obviously. I mean, last time I changed houses, I could have used a Ferrari to transport boxes from the old house to the new, but a van was more efficient. As if I had a Ferrari...

Machine-to-machine doesn't need User Interfaces, doesn't need a presentation layer. It may need another kind of interface layer, but not a User Interface like the ones current browsers produce.
On 12 August 2010 13:19, mike amundsen <mamund@...> wrote:

> REST style is not limited to Web or Internet usage (e.g. it has
> application for communication between autonomous devices in a closed
> custom network)
>
> REST style using HTTP over the Web is not limited to using the common
> browser for the "client" (e.g. desktop applications, console apps,
> bots, etc.)

Yes, these two are the kind of clarification that I was talking about. The other two points, I think everybody agrees on...
2010/8/12 António Mota <amsmota@...>

> Well, there is a difference; that's why we have User Interfaces... But
> for what I was asking, when I said users I was referring to browsers,
> that is, a piece of software running on a machine with the intent to
> produce User Interfaces so humans can interact with applications. [...]
>
> Machine-to-machine doesn't need User Interfaces, doesn't need a
> presentation layer. It may need another kind of interface layer, but
> not a User Interface like the ones current browsers produce.

Hey Antonio -

I don't think there is really that much of a difference, or maybe I am missing the point. The concept of User Interfaces (imho) is not that relevant to this particular discussion, beyond the fact that there is an amount of intelligence that resides outside of the user agent. The user agent/client is either a browser or a machine. The "end user" has to provide inputs to (interact with) the user agent for something to happen in either case. An end user's interaction with a browser is obviously "different" and possibly more dynamic, but my machine could be reasonably dynamic depending on how it interacts with its user agent.

Eb
I think these two are widely agreed upon, on the basis of my perception of Fielding's and others' writings and talks. If this were not the case, then there would not have been the revisiting of REST for developing services or "web services".

Cheers,
Dong

2010/8/12 António Mota <amsmota@...>

> On 12 August 2010 13:19, mike amundsen <mamund@...> wrote:
>
>> REST style is not limited to Web or Internet usage (e.g. it has
>> application for communication between autonomous devices in a closed
>> custom network)
>>
>> REST style using HTTP over the Web is not limited to using the common
>> browser for the "client" (e.g. desktop applications, console apps,
>> bots, etc.)
>
> Yes, these two are the kind of clarification that I was talking about.
> The other two points, I think everybody agrees on...
I concur with Mike's assessments.

I have implemented several systems using HTTP over the web where the users were automatons. These systems work well and benefit greatly from the characteristics of REST, particularly the evolvability and scalability it provides.

Peter
<http://barelyenough.org>

2010/8/12 António Mota <amsmota@...>

> On 12 August 2010 13:19, mike amundsen <mamund@...> wrote:
>
>> REST style is not limited to Web or Internet usage (e.g. it has
>> application for communication between autonomous devices in a closed
>> custom network)
>>
>> REST style using HTTP over the Web is not limited to using the common
>> browser for the "client" (e.g. desktop applications, console apps,
>> bots, etc.)
>
> Yes, these two are the kind of clarification that I was talking about.
> The other two points, I think everybody agrees on...
2010/8/12 Eb <amaeze@...>

> I don't think there is really that much of a difference

I agree that there is not that much of a difference, especially if we're talking on a purely technical level: it's all bits and bytes in the end. But there is nevertheless one difference. Browsers consume HTML documents that are composed of Presentation + Data. But if the client is some kind of process that, for instance, gets the data from the server and writes it to a database, there is no need for any presentation layer. Like in pure EDI.

That is relevant to the discussion in the sense that if REST's realm is only the Web, then I'll say that POVs like the ones of Eric Bowman and others in relation to the use of media types are indeed correct. If not, if REST is applicable to, say, implementing EDI over the Internet (and I stress again that Internet != Web), where that presentation is not only superfluous but counter-productive, then I would say that said POVs are incorrect.
I would like to place the composition in the context of mashups. One would then have two choices: a server-side mashup and a client-side mashup. For the former approach, the server hosting the mashup needs to retrieve the two resources' representations from the two domains and generate a new resource. For the latter, the client gets a code-on-demand representation from a server, and the code-on-demand manages to get the resources' representations from the two domains and generate a new representation.

Cheers,
Dong

On Wed, Aug 11, 2010 at 10:07 PM, Bryan Taylor <bryan_w_taylor@...> wrote:

> I'm trying to figure out a RESTful way to do compositions. I managed to
> tie my head in a knot. Please untangle me.
>
> Suppose that I am trying to supply my clients with a RESTful way to send
> Christmas cards. I have access to a RESTful service that catalogs my
> friends. It's at http://yourfriends.example.com . It gives me a way to
> GET a person document defined by a person.xsd XML schema by linking from
> a page that lists my friends. This unfortunately doesn't have addresses,
> and I need my client to pick one of the addresses on file to send the
> card to. Fortunately, I know of another RESTful web service that does
> supply addresses for a person. There's even a link to it in the person
> XML. The links point to address.xsd documents generally hosted by
> http://addresses-galore.example.com
>
> Here's some approaches (the m namespace is mine, the two above are p and
> a respectively):
>
> 1) I create a representation like that shown below in my own new schema
> person-with-address.xsd
> <m:person-with-address>
>   <p:person ...>...</p:person>
>   <m:addresses>
>     <a:address ...>...</a:address>
>     ...
>   </m:addresses>
> </m:person-with-address>
>
> 2) I notice that person.xsd is extensible and so I create my own
> compliant person.xsd documents like so
> <p:person ...>
>   ...
>   <m:addresses>
>     <a:address ...>...</a:address>
>     ...
> </m:addresses> > </p:person> > > 3) Suppose I take approach #2 and yourfriends.example.com notices and > decides to > host them. They wish to offer both representations (their original and my > extension). Are these representations of the same or different resources? > Is it > reasonable for them to try to let users choose the thin or fat version of a > > person using content negotiation? If so, how can you differentiate these by > a > media type? > > Is there any issue here in solution 1 or 2 because I'm constructing an > application state as (a client of yourfriends and addresses-galore) that > wasn't > hypertext driven? > > >
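Bryan's approach #1 is easy to sketch as a server-side composition. The Python fragment below is only an illustration: the namespace URIs and sample documents are invented, and a real composer would GET the person and address representations over HTTP rather than build them inline.

```python
# Sketch of Bryan's option 1: wrap a fetched person document and its
# address documents in a new m:person-with-address envelope.
# All namespace URIs and sample content below are illustrative assumptions.
import xml.etree.ElementTree as ET

P = "http://yourfriends.example.com/ns"        # assumed person namespace
A = "http://addresses-galore.example.com/ns"   # assumed address namespace
M = "http://mashup.example.com/ns"             # the composer's own namespace

def compose(person_xml, address_xmls):
    """Build an m:person-with-address document from two representations."""
    root = ET.Element(f"{{{M}}}person-with-address")
    root.append(ET.fromstring(person_xml))
    addresses = ET.SubElement(root, f"{{{M}}}addresses")
    for doc in address_xmls:
        addresses.append(ET.fromstring(doc))
    return ET.tostring(root, encoding="unicode")

# Stand-ins for representations retrieved from the two domains.
person = f'<p:person xmlns:p="{P}"><p:name>Bryan</p:name></p:person>'
addr = f'<a:address xmlns:a="{A}"><a:city>Austin</a:city></a:address>'
print(compose(person, [addr]))
```

The client-side variant would run the same composition in the user agent after dereferencing the address link found in the person XML.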
Well, I just posted in another thread a post that may fit better in this thread: 2010/8/12 Eb <amaeze@...> > I don't think there is really that much of a difference (between > human/browser-to-machine and machine-to-machine) > I agree that there is not that much of a difference, especially if we're talking on a purely technical level - it's all bits and bytes in the end. But there is nevertheless one difference. Browsers consume HTML documents that are composed of Presentation + Data. But if the client is some kind of process that, for instance, gets the data from the server and writes it to a database, there is no need for any presentation layer. Like in pure EDI. That is relevant to the discussion in the sense that if REST's realm is only the Web, then I'll say that POVs like those of Eric Bowman and others regarding the use of media types are indeed correct. If not, if REST is applicable to, say, implementing EDI over the Internet (and I stress again that Internet != Web), where that presentation is not only superfluous but counter-productive, then I would say that said POVs are incorrect. 2010/8/12 Peter Williams <pezra@...> > I concur with Mike's assessments. > > I have implemented several systems using http over the web where the users > were automatons. These systems work well and benefit greatly from the > characteristics of rest. Particularly the evolvability and scalability it > provides. > > Peter > <http://barelyenough.org> > > > 2010/8/12 António Mota <amsmota@...> > >> >> >> On 12 August 2010 13:19, mike amundsen <mamund@...> wrote: >> >>> REST style is not limited to Web or Internet usage (e. g. has >>> application for communication between autonomous devices in a closed >>> custom network) >>> >> >> >>> REST style using HTTP over the Web is not limited to using the common >>> Browser for the "client" (e. g. desktop applications. console apps, >>> bots, etc.) >>> >>> >> Yes, these two are the kind of clarification that I was talking about.
The >> other two points I think that everybody agrees... >> >> >> > > >
<snip> For the latter, the client gets code-on-demand representation from a server, and code-on-demand manages to get the resources' representations from the two domains and generates a new representation. </snip> I often implement desktop and console apps that do "client-side" resolution of the URIs. This ability is "built-in" for the client and requires no code-on-demand implementation. When I'm in a hurry (e.g. doing a simple one-off app) I employ x:include[1] in the client. If it's something more involved, I simply define a media type that contains an element that mimics that "Link Embed" [2] pattern of x:include, HTML's IMG, IFRAME, etc. I've even used this same approach w/ common Browsers by designing a custom hypermedia format for the XML media type and using XSLT + code-on-demand[3] (trivial example, view source). That way, the "messy parts" are out-of-sight for most developers. [1] http://www.w3.org/TR/xinclude/ [2] http://amundsen.com/hypermedia/hfactor/#le [3] http://amundsen.com/hypermedia/examples/doc.xml mca http://amundsen.com/blog/ http://mamund.com/foaf.rdf#me Join me at #RESTFest 2010 Sep 17 & 18 http://restfest.org http://restfest.org/workshop On Thu, Aug 12, 2010 at 12:05, Dong Liu <edongliu@...> wrote: > > > I would like to implement the composition to mashup in this context. > > So one would have two choice, a server-side mashup and a client-side > mashup. > > For the former approach, the server hosting the mashup needs to retrieve > two resources' representations from the two domains and generates a new > resource. > > For the latter, the client gets code-on-demand representation from a > server, and code-on-demand manages to get the > resources' representations from the two domains and generates a new > representation. > > Cheers, > > Dong > > On Wed, Aug 11, 2010 at 10:07 PM, Bryan Taylor <bryan_w_taylor@...>wrote: > >> >> >> I'm trying to figure out a RESTful way to do compositions. I managed to >> tie my >> head in a knot. 
Please untangle me. >> >> Suppose that I am trying to supply my clients with a RESTful way to send >> Christmas cards. I have access to a RESTful service that catalogs my >> friends. >> It's at http://yourfriends.example.com . It gives me a way to GET a >> person >> document defined by a person.xsd XML schema by linking from a page that >> lists my >> friends. This unfortunately doesn't have addresses, and I need my client >> to pick >> one of the addresses on file to send the card to. Fortunately, I know of >> another >> RESTful web service that does supply addresses for a person. There's even >> a link >> to it in the person XML. The links point to address.xsd documents >> generally >> hosted by http://addresses-galore.example.com >> >> Here's some approaches (m namespace is mine, the two above are p and a >> respectively): >> >> 1) I create representation like that shown below in my own new schema >> person-with-address.xsd >> <m:person-with-address> >> <p:person ...>...</p:person> >> <m:addresses> >> <a:address ...>...</a:address> >> ... >> </m:addresses> >> </m:person-with-address> >> >> 2) I notice that person.xsd is extensible and so I create my own compliant >> >> person.xsd documents like so >> <p:person ...> >> ... >> <m:addresses> >> <a:address ...>...</a:address> >> ... >> </m:addresses> >> </p:person> >> >> 3) Suppose I take approach #2 and yourfriends.example.com notices and >> decides to >> host them. They wish to offer both representations (their original and my >> extension). Are these representations of the same or different resources? >> Is it >> reasonable for them to try to let users choose the thin or fat version of >> a >> person using content negotiation? If so, how can you differentiate these >> by a >> media type? >> >> Is there any issue here in solution 1 or 2 because I'm constructing an >> application state as (a client of yourfriends and addresses-galore) that >> wasn't >> hypertext driven? >> >> > > >
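The client-side resolution Mike describes is available out of the box in some XML toolkits; here is a rough Python sketch using the stdlib's XInclude support. The document, URI, and canned representation are invented for the example, and the loader stands in for an HTTP GET against the linked resource.

```python
# A minimal sketch of client-side "Link Embed" resolution via XInclude.
# The loader below fakes the transport; a real client would dereference
# the href over HTTP. Document content and URI are made-up assumptions.
import xml.etree.ElementTree as ET
from xml.etree import ElementInclude

DOC = """<person xmlns:xi="http://www.w3.org/2001/XInclude">
  <name>Bryan</name>
  <xi:include href="http://addresses-galore.example.com/bryan"/>
</person>"""

# Fake transport: maps URIs to canned representations.
REPRESENTATIONS = {
    "http://addresses-galore.example.com/bryan":
        "<address><city>Austin</city></address>",
}

def loader(href, parse, encoding=None):
    """Resolve an xi:include href; stands in for an HTTP GET."""
    if parse == "xml":
        return ET.fromstring(REPRESENTATIONS[href])
    return REPRESENTATIONS[href]

root = ET.fromstring(DOC)
ElementInclude.include(root, loader=loader)  # splices the address in place
print(ET.tostring(root, encoding="unicode"))
```

After `include` runs, the `xi:include` element has been replaced by the linked representation, which is exactly the IMG/IFRAME-style embed pattern done in client code.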
On Aug 12, 2010, at 6:02 PM, António Mota wrote: > > > 2010/8/12 Eb <amaeze@...> > > I don't think there is really that much of a difference > > I agree that there is not that much of a difference, specially if we're talking on a pure technical level - it's all bits and bytes in the end. But there is nevertheless one difference. Browsers consumes HTML documents that are composed of Presentation + Data. But if the client is some kind of process that, for instance, gets the data from the server and writes it in a database, there is no need for any presentation layer. Like in pure EDI. You always have a user. In some applications the user will have to interact with the application while it is executing. In all cases the user will be interested in the result of the application and hence that result needs to be presented to him. The rest is just components communicating. The only difference is in which cases the user needs to be involved because this affects the design of the representations and representation types. The question regarding the Web's applicability to a problem space can only be answered in relation to its architectural properties. > > That is relevant to the discussion in the sense that if REST realm is only the Web, REST is the architectural style of the Web and we surely could create another system that has the REST architectural style. We could build a new Web that has the same style but works somewhat differently from the Web we know. Think about WAKA, for example. > (and I stress again that Internet != Web) Roughly:
Internet == TCP/IP + DNS
Web == HTTP + URI
EMail == SMTP
Jan > , where that presentation is not only superfluous but counter-productive, then I would say that said POVs are incorrect. > > > > ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
<snip> That is relevant to the discussion in the sense that if REST realm is only the Web, than I'll say that POVs like the ones in relation to the use of media types of Eric Bowman and others are indeed correct. If not, if REST is applicable to, say, implement EDI over the Internet (and I stress again that Internet != Web), where that presentation is not only superfluous but counter-productive, then I would say that said POVs are incorrect. </snip> FWIW, I've used XHTML for bots and console apps in the past w/ much success. many client libraries already have HTML parsers, understand how to code against the FORM and A elements, etc. these bots don't do any UI rendering, but still are quite efficient at parsing and processing XHTML bodies. i often have servers that support both a specific media type (application/bot-work.xhtml) and a generic media type (application/xhtml+xml, text/html, etc.). this allows clients to pick the media-type they prefer. often the representation for these selected types *is identical*; this means client devs can use browsers to view/hack/debug their work while writing the bot. mca http://amundsen.com/blog/ http://mamund.com/foaf.rdf#me Join me at #RESTFest 2010 Sep 17 & 18 http://restfest.org http://restfest.org/workshop 2010/8/12 António Mota <amsmota@...> > Well, I just posted in another thread a post that may fits better in this > thread: > > 2010/8/12 Eb <amaeze@...> > > >> I don't think there is really that much of a difference (between >> human/browser-to-machine and machine-to-machine) >> > > I agree that there is not that much of a difference, specially if we're > talking on a pure technical level - it's all bits and bytes in the end. But > there is nevertheless one difference. Browsers consumes HTML documents that > are composed of Presentation + Data. But if the client is some kind of > process that, for instance, gets the data from the server and writes it in a > database, there is no need for any presentation layer.
Like in pure EDI. > > That is relevant to the discussion in the sense that if REST realm is only > the Web, than I'll say that POVs like the ones in relation to the use of > media types of Eric Bowman and others are indeed correct. If not, if REST is > applicable to, say, implement EDI over the Internet (and I stress again that > Internet != Web), where that presentation is not only superfluous but > counter-productive, then I would say that said POVs are incorrect. > > > 2010/8/12 Peter Williams <pezra@...> > > I concur with Mike's assessments. >> >> I have implemented several systems using http over the web where the users >> were automatons. These systems work well and benefit greatly from the >> characteristics of rest. Particularly the evolvability and scalability it >> provides. >> >> Peter >> <http://barelyenough.org> >> >> >> 2010/8/12 Antnio Mota <amsmota@...> >> >>> >>> >>> On 12 August 2010 13:19, mike amundsen <mamund@...> wrote: >>> >>>> REST style is not limited to Web or Internet usage (e. g. has >>>> application for communication between autonomous devices in a closed >>>> custom network) >>>> >>> >>> >>>> REST style using HTTP over the Web is not limited to using the common >>>> Browser for the "client" (e. g. desktop applications. console apps, >>>> bots, etc.) >>>> >>>> >>> Yes, these two are the kind of clarification that I was talking about. >>> The other two points I think that everybody agrees... >>> >>> >>> >> >> >> >
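The dual media-type setup Mike describes rests on ordinary server-driven content negotiation. Here is a deliberately simplified sketch of the server's side of that choice (no wildcards and no precedence tie-breaking; the bot-work media type name is just his example):

```python
# Pick the best supported media type from an HTTP Accept header.
# Simplified on purpose: ignores */* wildcards and specificity rules.
SUPPORTED = ["application/bot-work.xhtml", "application/xhtml+xml", "text/html"]

def negotiate(accept_header):
    """Return the supported media type with the highest q-value, or None."""
    prefs = []
    for part in accept_header.split(","):
        fields = part.strip().split(";")
        mtype = fields[0].strip()
        q = 1.0  # per HTTP, a missing q parameter means q=1
        for field in fields[1:]:
            name, _, value = field.strip().partition("=")
            if name == "q":
                q = float(value)
        if mtype in SUPPORTED and q > 0:
            prefs.append((q, mtype))
    return max(prefs)[1] if prefs else None

# A bot asks for its dedicated type; a plain browser gets the generic one.
print(negotiate("application/bot-work.xhtml, text/html;q=0.5"))
print(negotiate("text/html, application/xhtml+xml;q=0.9"))
```

When the representations for both types are byte-identical, as Mike notes, the negotiation costs the server nothing beyond the `Content-Type` label.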
On Wed, Aug 11, 2010 at 10:07 PM, Bryan Taylor <bryan_w_taylor@...> wrote: > Is it > reasonable for them to try to let users choose the thin or fat version of a > person using content negotiation? I think the thin/fat representation approach is an anti-pattern. It tends to reduce the serendipity that normally occurs with rest. The thin representations in a thin/fat pair tend to be narrowly focused on some particular use case. Usually multiple uses will emerge that could almost, but not quite, use the thin representation because it is a little too thin. These uses cannot all be accommodated or the thin representation would become too "fat". Having multiple representation types also raises the cognitive effort required to implement a client of the system. Client implementers must understand both representations and decide which to use. It also elevates the risk that a client implementer will decide your system is unsuitable because they do not understand that a more complete representation is available, since that information is only available out-of-band (i.e., in the documentation, which they might not bother to read). In my experience it is best to avoid the thin/fat representation approach and just have a single representation that has all the relevant information. If that representation is very large, it is often a good indication that one or more additional resources would be useful for some of the information. The original representation can then link to the resource(s) that contain the additional information. So in your example, I would probably not embed the address in the user and stick with a link to the address resource. Peter <http://barelyenough.org>
2010/8/12 mike amundsen <mamund@...> > i often have servers the support both a specific media type > (application/bot-work.xhtml) and a generic media type > (application/xhtml+xml, text/html, etc.). this allows clients to pick the > media-type they prefer. often the representation for these selected types > *is identical* this means client devs can use browsers to view/hack/debug > their work while writing the bot. > This approach is brilliant. Server implementers get to leverage the excellent, and difficult, work that has gone into designing html, et al. Clients get to express what they actually need. Having that coupling made explicit allows servers to manage their evolution in ways that will provide improvements to everyone while breaking fewer clients. Peter <http://barelyenough.org>
On Aug 12, 2010, at 5:59 PM, Peter Williams wrote: > > > I concur with Mike's assessments. > > I have implemented several systems using http over the web where the users were automatons. [Not meant as an argument, just as a note] It helps a lot to put the notion of 'machine user' aside and accept that all users of software systems are humans (in the sense of primary actors of use cases). Once you take that view, the whole client side 'process' (or piece of code or whatever you name it) becomes the user agent. If you take an index spider, for example: Some human has the goal of the site being indexed. The user agent is the piece of code that acts on behalf of that human, meaning: the whole spider is the user agent component. This view of user agents makes it much easier to approach the question of the client side programming model: user agents simply act upon representations received according to their implementation and configuration. There is no 'machine user' that somehow interferes with the application state machine received from the server(s). Jan > These systems work well and benefit greatly from the characteristics of rest. Particularly the evolvability and scalability it provides. > > Peter > <http://barelyenough.org> > > > 2010/8/12 Antnio Mota <amsmota@...> > > > On 12 August 2010 13:19, mike amundsen <mamund@...> wrote: > REST style is not limited to Web or Internet usage (e. g. has > application for communication between autonomous devices in a closed > custom network) > > > REST style using HTTP over the Web is not limited to using the common > Browser for the "client" (e. g. desktop applications. console apps, > bots, etc.) > > > > Yes, these two are the kind of clarification that I was talking about. The other two points I think that everybody agrees... > > > > > > ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... 
Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
<snip> This view of user agents makes it much easier to approach the question of the client side programming model: user agents simply act upon representations received according to their implementation and configuration. There is no 'machine user' that somehow interferes with the application state machine received from the server(s). </snip> I think I agree here. To me, this talk about "machines" or M2M, etc. is really about different "levels of abstraction" within the client application. As humans can deal with lots of abstractions, using representations that contain visual cues along with the hypermedia controls (e.g. HTML) and clients that can render them works fine. the human can work out the details, ascribe "meaning" to the elements, make choices, fill in values, etc. If you want to reduce the human involvement (i.e. "abstract away the human interactions"), more information needs to be included in either the representation (detailed @rel tags, limiting choices, suggested values, etc.), the client (a deeper understanding of the representation format, the "meaning" of the @rel values, a way to make choices between options given, etc.) or both. IOW, it's a continuum. mca http://amundsen.com/blog/ http://mamund.com/foaf.rdf#me On Thu, Aug 12, 2010 at 12:41, Jan Algermissen <algermissen1971@...> wrote: > > On Aug 12, 2010, at 5:59 PM, Peter Williams wrote: > >> >> >> I concur with Mike's assessments. >> >> I have implemented several systems using http over the web where the users were automatons. > > [Not meant as an argument, just as a note] > > It helps a lot to put the notion of 'machine user' aside and accept that all users of software systems are humans (in the sense of primary actors of use cases). > > Once you take that view, the whole client side 'process' (or piece of code or whatever you name it) becomes the user agent. If you take an index spider, for example: Some human has the goal of the site being indexed.
The user agent is the piece of code that acts on behalf of that human, meaning: the whole spider is the user agent component. > > This view of user agents makes it much easier to approach the question of the client side programming model: user agents simply act upon representations received according to their implementation and configuration. There is no 'machine user' that somehow interferes with the application state machine received from the server(s). > > Jan > > > >> These systems work well and benefit greatly from the characteristics of rest. Particularly the evolvability and scalability it provides. >> >> Peter >> <http://barelyenough.org> >> >> >> 2010/8/12 Antnio Mota <amsmota@...> >> >> >> On 12 August 2010 13:19, mike amundsen <mamund@...> wrote: >> REST style is not limited to Web or Internet usage (e. g. has >> application for communication between autonomous devices in a closed >> custom network) >> >> >> REST style using HTTP over the Web is not limited to using the common >> Browser for the "client" (e. g. desktop applications. console apps, >> bots, etc.) >> >> >> >> Yes, these two are the kind of clarification that I was talking about. The other two points I think that everybody agrees... >> >> >> >> >> >> > > ----------------------------------- > Jan Algermissen, Consultant > NORD Software Consulting > > Mail: algermissen@... > Blog: http://www.nordsc.com/blog/ > Work: http://www.nordsc.com/ > ----------------------------------- > > > > >
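Mike's continuum can be made concrete at the "no human" end: the agent has to be configured up front with the @rel values it understands, and it simply follows them. A minimal sketch, with the link structure and rel names invented for the example:

```python
# A tiny hypermedia agent: the less a human is in the loop, the more the
# client must know about @rel values in advance. This agent is configured
# with the rels it understands and ignores everything else.
# The representation below is a made-up example, not a real API.
REPRESENTATION = {
    "links": [
        {"rel": "self", "href": "/orders/42"},
        {"rel": "payment", "href": "/orders/42/payment"},
        {"rel": "help", "href": "/docs/orders"},
    ]
}

def next_transition(representation, understood_rels):
    """Pick the first link whose rel the agent has been taught to follow."""
    for rel in understood_rels:
        for link in representation["links"]:
            if link["rel"] == rel:
                return link["href"]
    return None  # no transition this agent knows how to make

# This agent only knows how to pay; a richer agent could be given more rels.
print(next_transition(REPRESENTATION, ["payment"]))
```

A human in front of a browser resolves the same choice by reading the visual cues; here that judgment has been moved into the `understood_rels` configuration.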
On Aug 12, 2010, at 6:39 PM, Peter Williams wrote: > > > 2010/8/12 mike amundsen <mamund@...> > i often have servers the support both a specific media type (application/bot-work.xhtml) and a generic media type (application/xhtml+xml, text/html, etc.). this allows clients to pick the media-type they prefer. often the representation for these selected types *is identical* this means client devs can use browsers to view/hack/debug their work while writing the bot. > > This approach is brilliant. Yep. This alone is IMHO enough justification to switch from any WS-* and friends to the Web. It is developer's heaven and also satisfies the notorious manager question of 'Anything there yet you can show us?' Jan > Server implementers get to leverage the excellent, and difficult, work that has gone in to designing html, et al. Clients get to express what they actually need. Having that coupling made explicit allows servers to manage their evolution in ways that will provide improvements to everyone while breaking fewer clients. > > Peter > <http://barelyenough.org> > > > ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
On Thu, Aug 12, 2010 at 10:41 AM, Jan Algermissen <algermissen1971@...> wrote: > It helps a lot to put the notion of 'machine user' aside and accept > that all users of software systems are humans (in the sense of > primary actors of use cases). > > Once you take that view, the whole client side 'process' (or piece > of code or whatever you name it) becomes the user agent. If you take > an index spider, for example: Some human has the goal of the site > being indexed. The user agent is the piece of code that acts on > behalf of that human, meaning: the whole spider is the user agent > component. > > This view of user agents makes it much easier to approach the > question of the client side programming model: user agents simply > act upon representations received according to their implementation > and configuration. There is no 'machine user' that somehow > interferes with the application state machine received from the > server(s). I suppose if you squint and tilt your head enough there is always a human user interested in the outcome (so far). However, the humans can be quite far removed from many of the processes involved in distributed systems. I once worked on a system where many processes automatically reported measurements periodically. These reported measurements caused cascades of automated processing, involving many http requests against other systems. The end result was, amongst many other things, a human being alerted if the processes being monitored needed attention. So yes, all of these systems served to alert the human when needed, but there were at least 5 distinct code bases involved. Each of those was driving a hypertext application (per measurement) that was somewhat independent of all the others. These systems were all quite separate from one another, both architecturally and physically. In your mental model they would all comprise a single user agent. There would be no way to really separate out the pieces because that was the only (potential) human interaction in the mix. To me, that seems too big to lump together as a single user agent. I tend to think of a user agent as the bit of software that allows the entity with a goal to communicate with the server. The functionality provided by a user agent (for a web based system) would be an http protocol implementation (hopefully including all the niceties like transfer encoding, caching, etc), representation serialization/deserialization, and an interface that the entity with a goal can use to pursue that goal. Obviously, in this model the goal-seeking entity could be human or a bit of software. Peter <http://barelyenough.org>
2010/8/12 Jan Algermissen <algermissen1971@...> > You always have a user. In some applications the user will have to interact with the application while it is executing. In all cases the user will be interested in the result of the application and hence that result needs to be presented to him. The rest is just components communicating. That is not always the case. One application I made some years ago, one that is actually running on the Web, was about measuring the Quality of Electricity. The application was in fact two different applications, each of which could work without the other. One was a process that received data from boxes planted along the power lines, parsed it and stored it in a DB. The other was a normal web site that displayed the data from the DB in several ways and ran lots of different analyses. The first one had no user at all. Generally in EDI you have two applications interfacing with each other. Each of the applications may or may not have users. But the EDI itself does not have any user. > > > (and I stress again that Internet != Web) > > Roughly: > > Internet == TCP/IP + DNS > > Web == HTTP + URI > EMail == SMTP > What about the applications that directly use BGP, DHCP, FTP, IMAP, IRC, LDAP, MGCP, NNTP, NTP, POP, RIP, RPC, RTP, SIP, SNMP, SSH, Telnet, TLS/SSL, XMPP, VoIP, IPTV, AS1, AS2, AS3...
On 12 August 2010 18:22, Peter Williams <pezra@...> wrote: > > However, the humans can be quite far removed from many of the processes > involved in distributed systems. I once worked on a system where many > processes automatically reported measurements periodically. > Exactly my thought. Curiously, I just posted an experience with a similar app in another thread...
2010/8/12 António Mota <amsmota@...> > 2010/8/12 Jan Algermissen <algermissen1971@...> > > > You always have a user. In some applications the user will have to > interact with the application while it is executing. In all cases the user > will be interested in the result of the application and hence that result > needs to be presented to him. The rest is just components communicating. > > That is not always the case One application I've made some years ago > that is actually running on the Web was about measuring the Quality of > Electricity. The application was in fact two different applications, > that could work without the other. One was a process that received > data from boxes planted along the power lines, parsed it and stored it > in a DB. The other was a normal web site that displayed the data from > the DB is several ways and made lot's of different analysis. The first > one had no user at all. > > Generally in EDI you have two applications interfacing with each > other. Each of the applications may or may not have users. But the EDI > itself does not have any user. > > > > > > (and I stress again that Internet != Web) > > > > Roughly: > > > > Internet == TCP/IP + DNS > > > > Web == HTTP + URI > > EMail == SMTP > > > > What about the applications that directly use BGP DHCP FTP IMAP > IRC LDAP MGCP NNTP NTP POP RIP RPC RTP SIP SNMP > SSH Telnet TLS/SSL XMPP, VoIP, IPTV, AS1, AS2, AS3... > Always a user at the end of the day, even if the user set up services to "auto start". :) Always a user. Always a stick figure initiating the use case. Not to undermine the fact that the "user" could be another application. But I digress....
On Thu, Aug 12, 2010 at 1:22 PM, Peter Williams <pezra@...>wrote: > > > So yes, all of these systems served to alert the human when needed > but, there were at least 5 distinct code bases involved. Each of > those was driving a hypertext application (per measurement) that was > somewhat independent of all the others. These systems where all quite > separate from one another, both architecturally and physically. > > I see this no differently than when I have 5 browsers open looking for "Rumored Arsenal Signings". In practice, it is 5 user agents for sure. However, they belong to the same user agent class of "browser" helping me the "end user" find out who Arsenal is about to sign. User agent(s) ultimately exist to aid an end user in achieving some goal by interacting with a server(s) somewhere regardless of whether the need is met immediately or 10 years later. My opinion obviously.:)
Well, sure, but I was referring to "user" as in "user interface", the user that uses the user interface, so to speak :) If it's an application, it's more a "consumer" than a "user"... On 12 Aug 2010 18:43, "Eb" <amaeze@...> wrote: 2010/8/12 António Mota <amsmota@...> > 2010/8/12 Jan Algermissen <algermissen1971@...> > > > > You always have a user. In some applications the user will have to > interact with the application... > Always a user at the end of the day, even if the user setup services to "auto start". :) Always a user. Always a stick figure initiating the user case. Not to undermine the fact that the "user" could be another application. But I digress....
I think you should be happy enough with Fabregas staying... On 12 Aug 2010 19:04, "Eb" <amaeze@...> wrote: On Thu, Aug 12, 2010 at 1:22 PM, Peter Williams <pezra@...> wrote: > > > > So yes, al... I see this no differently than when I have 5 browsers open looking for "Rumored Arsenal Signings". In practice, it is 5 user agents for sure. However, they belong to the same user agent class of "browser" helping me the "end user" find out who Arsenal is about to sign. User agent(s) ultimately exist to aid an end user in achieving some goal by interacting with a server(s) somewhere regardless of whether the need is met immediately or 10 years later. My opinion obviously.:)
2010/8/12 António Mota <amsmota@...> > I think you should be happy enough with Fabregas staying... > Sorry, can't do.
mike amundsen wrote: > > When I'm in a hurry (e.g. doing a simple one-off app) I employ > x:include[1] in the client. If it's something more involved, I simply > define a media type that contains an element that mimics that "Link > Embed" [2] pattern of x:include, HTML's IMG, IFRAME, etc. > I liked XHTML 2's approach of allowing @src on any element, so you could inline content as easily as inlining an image. -Eric
Peter Williams wrote: > > > Is it reasonable for them to try to let users choose the thin or fat > > version of a person using content negotiation? > > I think the thin/fat representation approach is an anti-pattern. It > tends to reduce the serendipity that normally occurs with rest. > > The thin representations in a thin/fat pair tend to be narrowly focus > on some particular use case. Usually multiple uses will emerge that > could almost, but not quite, use the thin representation because it is > a little too thin. These use cannot all be accommodated ore the thin > representation would become too "fat". > I agree with your assessment. But there is a RESTful way to do this which results in the same application state regardless of fat vs. thin. I'm working on implementing conneg on my demo, at this time the following resource is defaulting to server-side transformation sent as text/html: http://charger.bisonsystems.net/conneg/ Clients capable of performing XSLT transformations will soon be detected, and served stub-file representations like you see here: http://charger.bisonsystems.net/xmltest/index.xht The latter is definitely a thin representation compared to the former, assuming a user agent is capable of rendering it into an application state. The application state resulting from either representation is the same, only if this weren't the case would I consider it a problem. > > Having a multiple representation types also raises the cognitive > effort required implement a client of the system. Client implementers > must understand both representations and decide which to use. It also > elevates the risk that a client implementer will decide your system is > unsuitable because they do not understand that a more complete > representation is available, since that information is only available > out-of-band (ie, in the documentation which they might not bother to > read). 
> What I've done is use <link rel='alternate'/> to indicate what other representations exist, and their respective media types, in-band. Client implementers are given an obvious choice of supporting XSLT or not. > > In my experience it is best to avoid the thin/fat representation > approach and just have a single representation that has all the > relevant information. If that representation is very large it is > often a good indication that one or more additional resources would > useful for some of the information. The original representation can > then link to the resource(s) that contain the additional information. > I agree 100%, if the variants don't yield the same application state, then they're better off being separate resources. I also agree 100% with any approach resembling my own, which minimizes the amount of data transferred where possible, falling back to a 'fat' representation where not possible, provided the variants yield the same application state. -Eric
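Eric's in-band advertisement of alternates is straightforward for a client to consume. A minimal stdlib sketch, with the sample page invented (only the /conneg URL pattern echoes his demo):

```python
# Discover alternate representations in-band by reading
# <link rel="alternate"> elements, instead of relying on documentation.
# The sample document below is made up for the example.
from html.parser import HTMLParser

class AlternateFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.alternates = []  # (media type, href) pairs

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "link" and a.get("rel") == "alternate":
            self.alternates.append((a.get("type"), a.get("href")))

PAGE = """<html><head>
<link rel="alternate" type="application/xml" href="/conneg/index.xht"/>
<link rel="stylesheet" href="/style.css"/>
</head><body/></html>"""

finder = AlternateFinder()
finder.feed(PAGE)
print(finder.alternates)
```

A client that understands XSLT can follow the advertised alternate; one that doesn't simply stays with the representation it was served, which is the obvious choice Eric describes.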
António Mota wrote:
>
> That is relevant to the discussion in the sense that if REST's realm is
> only the Web, then I'll say that POVs like the ones in relation to
> the use of media types of Eric Bowman and others are indeed correct.
>

Oh good grief. Will you not be satisfied unless I mention the following before and after every paragraph I write, instead of merely every third post or so?

A RESTful telephony system bears no resemblance to a Web site and doesn't have anything to do with humans using browsers. What's important is that you provide hypertext control documents to drive applications from one state to the next.

How on Earth you can keep misrepresenting my position as one of "REST only means humans using Web browsers", aside from as a deliberate troll, escapes me.

> If not, if REST is applicable to, say, implementing EDI over the
> Internet (and I stress again that Internet != Web), where that
> presentation is not only superfluous but counter-productive, then I
> would say that said POVs are incorrect.
>

If REST isn't appropriate for a solution, then don't use it. If you're putting EDI over the Internet using REST, then presumably you're using HTTP, in which case the rules about using standard media types apply.

If such an EDI system doesn't have a hypertext control interface, then it 100% definitely, beyond any shadow of any doubt, cannot by any stretch of the imagination possibly even be dreamed of to resemble REST in any way, shape or form. This is not "my opinion," this is a REST constraint. See Roy's weblog entry about how REST APIs must be hypertext driven, and stop insisting that this is my "opinion".

No hypertext controlling user agent interaction? NOT REST. Nothing whatever to do with humans using browsers, unless you obtusely insist that I must mean exactly that, despite how many times I bend over backwards to state otherwise. No wonder I flame you.

-Eric
António Mota wrote:
>
> Machine-to-machine doesn't need User Interfaces, doesn't need a
> presentation layer. It may need another kind of interface layer, but
> not a User Interface like the ones current browsers produce.
>

Of course a machine user doesn't need a presentation layer. But it still needs a hypertext API. Which is why Googlebot has absolutely no problem following my hypertext controls despite having no human involved, without bothering to download the presentation layer, which is contained entirely within external CSS files.

What a hypertext control looks like in a browser is entirely irrelevant to whether or not a user agent can follow the instructions it's being given by a human or a machine.

-Eric
António Mota wrote:
>
> One was a process that received data from boxes planted along the
> power lines, parsed it and stored it in a DB. The other was a normal
> web site that displayed the data from the DB in several ways and made
> lots of different analyses. The first one had no user at all.
>

Then you were following the design pattern of REST, which is to provide a hypertext control API as a layer encapsulating some back-end system that doesn't have a uniform interface. That's all anyone is saying when we say your REST API must be hypertext driven. It isn't a requirement that said back-end system must be RESTful.

-Eric
> > Then you were following the design pattern of REST, which is to > provide a hypertext control API as a layer encapsulating some > back-end system that doesn't have a uniform interface. That's all > anyone is saying when we say your REST API must be hypertext driven. > It isn't a requirement that said back-end system must be RESTful. > Could Googlebot index that frontend you had? Then your hypertext API was being used by a non-human client bearing no resemblance whatsoever to a browser, without your having to implement conneg to provide some alternate API for Googlebot. That's REST. -Eric
mike amundsen wrote: > > REST style is protocol-agnostic (not limited to HTTP) > While other protocols may be part of a REST system, the fact is that in the here-and-now there is only one RESTful protocol, and that's HTTP. REST is protocol-agnostic, meaning it isn't limited to HTTP (allowing for HTTP 2 or Waka), but that doesn't mean you'll be building REST apps with other existing protocols, since those protocols have no notion of content negotiation, caching, or any of the other things that make REST what it is. Until such time as some other RESTful protocol arrives, what else besides HTTP can we possibly mean when discussing REST? FTP is not an example of client-cache-stateless-server and has no late binding of representation to resource. None of which matters when using FTP PUT, which I do use on occasion, but it makes all the difference in the world on GET -- where no protocol other than HTTP makes any shred of sense to use in REST, in this day and age. -Eric
Jan Algermissen wrote:
>
> On Aug 12, 2010, at 6:39 PM, Peter Williams wrote:
> >
> > 2010/8/12 mike amundsen <mamund@...>
> > > i often have servers that support both a specific media type
> > > (application/bot-work.xhtml) and a generic media type
> > > (application/xhtml+xml, text/html, etc.). this allows clients to
> > > pick the media-type they prefer. often the representation for these
> > > selected types *is identical*. this means client devs can use
> > > browsers to view/hack/debug their work while writing the bot.
> >
> > This approach is brilliant.
>
> Yep. This alone is IMHO enough justification to switch from any WS-*
> and friends to the Web. It is developer's heaven and also satisfies
> the notorious manager question of 'Anything there yet you can show
> us?'
>

But this is not REST at all. Is that custom media type registered? Has anyone else ever heard of it, or is it application-specific instead of generic? Why can't bots make OPTIONS requests and receive some ubiquitous media type? This would achieve all the claimed benefits without violating the uniform interface by using IANA-unregistered media types over HTTP on the public Internet -- a clear and unequivocal violation of REST.

-Eric
<snip>
Until such time as some other RESTful protocol arrives, what else besides HTTP can we possibly mean when discussing REST?
</snip>

it is my POV that one way to promote additional protocols that can be used in conjunction with this particular architectural style is to talk (and think) about the style without assuming any single protocol.

i don't do this to _ignore_ realities on the ground or in order to try to _refute_ them.

i do this in order to learn more about the style itself. doing this helps me see another level of abstraction that exists between the style and a protocol.

i have recently begun to think about hypermedia the same way - as a stand-alone concept. one that is important to one or more protocols; one or more architectural styles, but also one that can be discussed on its own merits.

that's all i mean by what i say here.

mca
http://amundsen.com/blog/
http://mamund.com/foaf.rdf#me

On Thu, Aug 12, 2010 at 18:44, Eric J. Bowman <eric@bisonsystems.net> wrote:
> mike amundsen wrote:
>>
>> REST style is protocol-agnostic (not limited to HTTP)
>>
>
> While other protocols may be part of a REST system, the fact is that in
> the here-and-now there is only one RESTful protocol, and that's HTTP.
> REST is protocol-agnostic, meaning it isn't limited to HTTP (allowing
> for HTTP 2 or Waka), but that doesn't mean you'll be building REST apps
> with other existing protocols, since those protocols have no notion of
> content negotiation, caching, or any of the other things that make REST
> what it is.
>
> Until such time as some other RESTful protocol arrives, what else
> besides HTTP can we possibly mean when discussing REST? FTP is not an
> example of client-cache-stateless-server and has no late binding of
> representation to resource. None of which matters when using FTP PUT,
> which I do use on occasion, but it makes all the difference in the
> world on GET -- where no protocol other than HTTP makes any shred of
> sense to use in REST, in this day and age.
> > -Eric >
Eb wrote: > > User agent(s) ultimately exist to aid an end user in achieving some > goal by interacting with a server(s) somewhere regardless of whether > the need is met immediately or 10 years later. > Exactly. Human or machine, there's a distinct user and a distinct user agent. The difference, i.e. where to draw the line, is that a user agent is not a stakeholder -- libcurl doesn't care to what use it's being put any more than a browser does. -Eric
I wasn't disagreeing with you, just trying to clarify that protocol-agnostic doesn't mean "any protocol goes." Until an alternative to HTTP comes along, REST = HTTP + whatever other protocols you need, speaking pragmatically about implementing real-world systems.

-Eric

mike amundsen wrote:
>
> <snip>
> Until such time as some other RESTful protocol arrives, what else
> besides HTTP can we possibly mean when discussing REST?
> </snip>
>
> it is my POV that one way to promote additional protocols that can be
> used in conjunction with this particular architectural style is to
> talk (and think) about the style without assuming any single protocol.
>
> i don't do this to _ignore_ realities on the ground or in order to try
> to _refute_ them.
>
> i do this in order to learn more about the style itself. doing this
> helps me see another level of abstraction that exists between the
> style and a protocol.
>
> i have recently begun to think about hypermedia the same way - as a
> stand-alone concept. one that is important to one or more protocols;
> one or more architectural styles, but also one that can be discussed
> on its own merits.
>
> that's all i mean by what i say here.
>
> mca
> http://amundsen.com/blog/
> http://mamund.com/foaf.rdf#me
>
> On Thu, Aug 12, 2010 at 18:44, Eric J. Bowman <eric@...>
> wrote:
> > mike amundsen wrote:
> >>
> >> REST style is protocol-agnostic (not limited to HTTP)
> >>
> >
> > While other protocols may be part of a REST system, the fact is
> > that in the here-and-now there is only one RESTful protocol, and
> > that's HTTP. REST is protocol-agnostic, meaning it isn't limited to
> > HTTP (allowing for HTTP 2 or Waka), but that doesn't mean you'll be
> > building REST apps with other existing protocols, since those
> > protocols have no notion of content negotiation, caching, or any of
> > the other things that make REST what it is.
> > > > Until such time as some other RESTful protocol arrives, what else > > besides HTTP can we possibly mean when discussing REST? FTP is not > > an example of client-cache-stateless-server and has no late binding > > of representation to resource. None of which matters when using > > FTP PUT, which I do use on occasion, but it makes all the > > difference in the world on GET -- where no protocol other than HTTP > > makes any shred of sense to use in REST, in this day and age. > > > > -Eric > >
> > Exactly. Human or machine, there's a distinct user and a distinct > user agent. The difference, i.e. where to draw the line, is that a > user agent is not a stakeholder -- libcurl doesn't care to what use > it's being put any more than a browser does. > Taking this thought further, Googlebot isn't a stakeholder, it's a user agent. The user is Google's indexing service -- as a stakeholder, its handling of BestBuy content changed when BestBuy implemented RDFa + GR. Googlebot's handling of BestBuy content did not, because user agents aren't stakeholders. -Eric
Does anyone know of a media type for representing user profiles with name/address/email/phone/skype etc. details?

Thanks.

/Jørn
On Thu, Aug 12, 2010 at 9:51 PM, Jørn Wildt <jw@...> wrote:
>
> Does anyone know of a media type for representing user profiles with
> name/address/email/phone/skype etc. details?
>
> Thanks.
>
> /Jørn
>

I don't know of an explicit media type, but you should be able to use HTML with the hCard microformat <http://microformats.org/wiki/hcard> or the FOAF <http://www.foaf-project.org/> and/or vCard <http://www.w3.org/2001/vcard-rdf/3.0#> RDF/RDFa <http://www.w3.org/TR/rdfa-syntax/> formats.

Ryan Riley
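To make Ryan's suggestion concrete, here is a minimal hCard embedded in HTML, plus a tiny extractor using only the standard library. The markup values are invented for illustration, and the extractor is a sketch that only handles flat, single-class properties, not a conformant microformat parser.

```python
# Sketch: a minimal hCard (HTML + microformat classes) and a naive
# extractor for a few hCard property names. Names/values are made up.
from html.parser import HTMLParser

HCARD = """
<div class="vcard">
  <span class="fn">Jørn Wildt</span>
  <a class="email" href="mailto:jw@example.org">jw@example.org</a>
  <span class="tel">+45 12 34 56 78</span>
</div>
"""

class HCardExtractor(HTMLParser):
    """Collect text content of elements carrying hCard property classes."""
    PROPS = {"fn", "email", "tel", "adr"}

    def __init__(self):
        super().__init__()
        self.current = None  # property name we are inside, if any
        self.card = {}

    def handle_starttag(self, tag, attrs):
        classes = dict(attrs).get("class", "").split()
        hit = self.PROPS.intersection(classes)
        if hit:
            self.current = hit.pop()

    def handle_data(self, data):
        if self.current and data.strip():
            self.card[self.current] = data.strip()
            self.current = None

parser = HCardExtractor()
parser.feed(HCARD)
print(parser.card["fn"])  # Jørn Wildt
```

Served as text/html, such a document stays readable in a browser while remaining machine-extractable, which is the appeal of the microformat route over minting a new media type.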
Oh, yes, vCard and its family. Why did I not think of that! Thanks.

/Jørn

----- Original Message -----
From: "Ryan Riley" <ryan.riley@...>
To: "Jørn Wildt" <jw@...>
Cc: "Rest Discussion List" <rest-discuss@yahoogroups.com>
Sent: Friday, August 13, 2010 8:35 AM
Subject: Re: [rest-discuss] Media type for user profiles?

On Thu, Aug 12, 2010 at 9:51 PM, Jørn Wildt <jw@...> wrote:
>
> Does anyone know of a media type for representing user profiles with
> name/address/email/phone/skype etc. details?
>
> Thanks.
>
> /Jørn
>

I don't know of an explicit media type, but you should be able to use HTML with the hCard microformat <http://microformats.org/wiki/hcard> or the FOAF <http://www.foaf-project.org/> and/or vCard <http://www.w3.org/2001/vcard-rdf/3.0#> RDF/RDFa <http://www.w3.org/TR/rdfa-syntax/> formats.

Ryan Riley
I really was hoping that you kept your promise of not answering my posts. But here you are again, trying to drag me into yet another flame so discussions go to a dead end, and that way you don't have to confront opinions different from your own. Why don't you just let other people express their opinions instead of, like a parrot, always repeating your mumbo-jumbo?

You know, the fact that you repeat your opinions doesn't make them more true. They may be correct in your limited way of thinking and in your limited experience, but I know that by now, so no point in keeping on parroting.

With all due respect that I have for you, and actually for everybody on this list, I now find your points of view and your posts basically a waste of time to read. The one or two things that could eventually be helpful dilute themselves in all the other mumbo-jumbo and parroting you do, in your endless posts where you focus on little details, mistaking the trees for the forest, or when you keep answering your own posts... I mean, I can't expect to learn much from someone who thinks of himself that he is the one, or the right person, or whatever crap you think of yourself. I do know a guy that calls himself "the special one" but that guy at least delivers what he promises...

So I beg, stop trying to flame me when I ask something, please ignore me and let other people say their say. All I was asking was a yes or no to a simple question, and yet you cannot answer that way, without flaming and parroting... Don't be afraid of other people saying their opinions, I'm sure you'll still be "the one"...

2010/8/12 Eric J. Bowman <eric@...>:
> António Mota wrote:
>>
>> That is relevant to the discussion in the sense that if REST's realm is
>> only the Web, then I'll say that POVs like the ones in relation to
>> the use of media types of Eric Bowman and others are indeed correct.
>>
>
> Oh good grief.
Will you not be satisfied unless I mention the
> following before and after every paragraph I write, instead of merely
> every third post or so?
>
> A RESTful telephony system bears no resemblance to a Web site and
> doesn't have anything to do with humans using browsers. What's
> important is that you provide hypertext control documents to drive
> applications from one state to the next.
>
> How on Earth you can keep misrepresenting my position as one of "REST
> only means humans using Web browsers", aside from as a deliberate troll,
> escapes me.
>
>> If not, if REST is applicable to, say, implementing EDI over the
>> Internet (and I stress again that Internet != Web), where that
>> presentation is not only superfluous but counter-productive, then I
>> would say that said POVs are incorrect.
>>
>
> If REST isn't appropriate for a solution, then don't use it. If you're
> putting EDI over the Internet using REST, then presumably you're using
> HTTP, in which case the rules about using standard media types apply.
>
> If such an EDI system doesn't have a hypertext control interface, then
> it 100% definitely, beyond any shadow of any doubt, cannot by any
> stretch of the imagination possibly even be dreamed of to resemble REST
> in any way, shape or form. This is not "my opinion," this is a REST
> constraint. See Roy's weblog entry about how REST APIs must be
> hypertext driven, and stop insisting that this is my "opinion".
>
> No hypertext controlling user agent interaction? NOT REST. Nothing
> whatever to do with humans using browsers, unless you obtusely insist
> that I must mean exactly that, despite how many times I bend over
> backwards to state otherwise. No wonder I flame you.
>
> -Eric
Well, I think that after another bombing raid from Eric, this thread is probably dead. Too bad, because I really wanted to know the opinion of people on this list, but it seems that other people don't like clarification and yes/no answers, maybe so they can keep a supposed aura of, well, I don't even know what...

But let me try one last time, by using the simplest form possible, to see if even "the right person" can give a yes or no answer.

Is REST style limited to Web usage?

On 13 August 2010 01:04, Eric J. Bowman <eric@...> wrote:
>>
>> Exactly. Human or machine, there's a distinct user and a distinct
>> user agent. The difference, i.e. where to draw the line, is that a
>> user agent is not a stakeholder -- libcurl doesn't care to what use
>> it's being put any more than a browser does.
>>
>
> Taking this thought further, Googlebot isn't a stakeholder, it's a user
> agent. The user is Google's indexing service -- as a stakeholder, its
> handling of BestBuy content changed when BestBuy implemented RDFa + GR.
> Googlebot's handling of BestBuy content did not, because user agents
> aren't stakeholders.
>
> -Eric
António Mota wrote:
>
> Is REST style limited to Web usage?
>

Good grief. How can you possibly assume that to be the case, when REST clearly gives you your answer, by simply reading the thesis?

"The REST interface is designed to be efficient for large-grain hypermedia data transfer, optimizing for the common case of the Web, but resulting in an interface that is not optimal for other forms of architectural interaction."

Do you honestly believe that means REST is *restricted to* the common case of the Web? Your question is a troll.

-Eric
António Mota wrote:
>
> I really was hoping that you kept your promise of not answering my
> posts.
>

Uuuuuughhh. Blather, blather, blather! If you attribute a position to me, that I never took, using my name explicitly, and expect me _not_ to respond, then you're as high as the "professional left" in America.

Stop trolling.

-Eric
You guys - take it off-list, please!!

Jan

On Aug 13, 2010, at 10:52 AM, Eric J. Bowman wrote:

> António Mota wrote:
>>
>> I really was hoping that you kept your promise of not answering my
>> posts.
>>
>
> Uuuuuughhh. Blather, blather, blather! If you attribute a position to
> me, that I never took, using my name explicitly, and expect me _not_ to
> respond, then you're as high as the "professional left" in America.
>
> Stop trolling.
>
> -Eric

-----------------------------------
Jan Algermissen, Consultant
NORD Software Consulting
Mail: algermissen@...
Blog: http://www.nordsc.com/blog/
Work: http://www.nordsc.com/
-----------------------------------
The position I said was yours was the one "use only standard media types" vs. "use custom-made media types". But then again, you'll just say anything; the problem with parroting is that people keep saying things "just because it is"...

2010/8/13 Eric J. Bowman <eric@...>:
> António Mota wrote:
>>
>> I really was hoping that you kept your promise of not answering my
>> posts.
>>
>
> Uuuuuughhh. Blather, blather, blather! If you attribute a position to
> me, that I never took, using my name explicitly, and expect me _not_ to
> respond, then you're as high as the "professional left" in America.
>
> Stop trolling.
>
> -Eric
2010/8/13 Eric J. Bowman <eric@...>:
> António Mota wrote:
>>
>> Is REST style limited to Web usage?
>>
>
> Good grief. How can you possibly assume that to be the case, when REST
> clearly gives you your answer, by simply reading the thesis?
>

Damn, not a yes/no... At least yes/no will not be parroting... But actually you didn't read the question, did you? Or you just didn't understand it, as simple as it was? To use your style - how in the hell, in that question, do I assume that to be the case? Actually my opinion is, well, how can I put this in a way you can understand? Will "No" be understandable?

> "The REST interface is designed to be efficient for large-grain
> hypermedia data transfer, optimizing for the common case of the Web,
> but resulting in an interface that is not optimal for other forms of
> architectural interaction."
>

Parroting...

> Do you honestly believe that means REST is *restricted to* the common
> case of the Web? Your question is a troll.
>

And yet, you now contradict yourself (again...) because just a few posts ago you said

"fact is that in the here-and-now there is only one RESTful protocol, and that's HTTP"

"REST is protocol-agnostic but that doesn't mean you'll be building REST apps with other existing protocols,"

"Until such time as some other RESTful protocol arrives, what else besides HTTP can we possibly mean when discussing REST?"

So, when you say HTTP here, you don't mean Web? So if I ask

Is REST style limited to HTTP usage?

will your answer be "yes"? - supposing you can answer things without parroting and mumbo-jumbo?

> Your question is a troll.

Why are you so interested in flaming any thread you don't like? My question is trolling? Why don't you just let other people decide that by answering or not answering? What are you afraid of, in letting other people discuss?
António Mota wrote:
>
> The position I said was yours was the one "use only standard media
> types" vs. "use custom-made media types".
>

Which, yet again, is not a position I've ever taken. Strip out all nuance in a quest for yes-or-no answers where there are none, and you may come to that conclusion, but it isn't what I've ever said.

-Eric
António Mota wrote:
>
> >>
> >> Is REST style limited to Web usage?
> >>
> >
> > Good grief. How can you possibly assume that to be the case, when
> > REST clearly gives you your answer, by simply reading the thesis?
> >
>
> Damn, not a yes/no... At least yes/no will not be parroting...
>

I quoted REST to give you the definitive "no" answer. Rejecting all efforts by myself and others to explain this to you, then bitching that quoting REST is just "parroting", is why you're a troll.

> > "The REST interface is designed to be efficient for large-grain
> > hypermedia data transfer, optimizing for the common case of the Web,
> > but resulting in an interface that is not optimal for other forms of
> > architectural interaction."
> >
>
> Parroting...
>

Trolling...

> > Do you honestly believe that means REST is *restricted to* the
> > common case of the Web? Your question is a troll.
> >
>
> And yet, you now contradict yourself (again...) because just a few
> posts ago you said
>
> "fact is that in the here-and-now there is only one RESTful protocol,
> and that's HTTP"
>
> "REST is protocol-agnostic but that doesn't mean you'll be building
> REST apps with other existing protocols,"
>
> "Until such time as some other RESTful protocol arrives, what else
> besides HTTP can we possibly mean when discussing REST?"
>
> So, when you say HTTP here, you don't mean Web? So if I ask
>
> Is REST style limited to HTTP usage?
>
> will your answer be "yes"? - supposing you can answer things without
> parroting and mumbo-jumbo?
>

Completely misrepresenting my position. Is your HTTP-using intranet the Web? Have I ever said you can't implement REST on an intranet? Must I mention in every post I make how I integrate FTP or SMTP into my REST systems?

No, REST isn't "limited" to HTTP, you can use any protocol you need. But currently, the only RESTful protocol for implementing a hypertext control API is HTTP.
Doing such over, say, FTP eliminates so many of REST's possibilities as to make it pointless to be implementing REST. -Eric
What protocols have a notion of representation vs. resource, thereby being capable of instantiating a REST system incorporating any number of other protocols, like FTP or SMTP? -Eric
2010/8/13 Eric J. Bowman <eric@...>:
> Rejecting all
> efforts by myself and others to explain this to you, t

No no, I'm just trying to reject you in particular, not others... I now believe it is a waste of time to pay you attention, because I think you don't want to clarify, you just want to be seen as some kind of REST Master who thinks he is so good that people should just accept what he says without questioning or reasoning - but no, I don't believe you're Roy number 2. But that is just my opinion...

But now I see you've succeeded in trashing another thread that for some reason doesn't interest you, to go and start another of your "I'm the master and you kids are just too ignorant so just follow what I say" style of POP Quiz... Well, be it, have it your way...
Ambulance on the way now, taking my Inbox to the hospital....

Jan

On Aug 13, 2010, at 12:09 PM, António Mota wrote:

> 2010/8/13 Eric J. Bowman <eric@...>:
>
>> Rejecting all
>> efforts by myself and others to explain this to you, t
>
> No no, I'm just trying to reject you in particular, not others... I
> now believe it is a waste of time to pay you attention, because I think
> you don't want to clarify, you just want to be seen as some kind of
> REST Master who thinks he is so good that people should just accept
> what he says without questioning or reasoning - but no, I don't believe
> you're Roy number 2. But that is just my opinion...
>
> But now I see you've succeeded in trashing another thread that for
> some reason doesn't interest you, to go and start another of your "I'm
> the master and you kids are just too ignorant so just follow what I
> say" style of POP Quiz... Well, be it, have it your way...

-----------------------------------
Jan Algermissen, Consultant
NORD Software Consulting
Mail: algermissen@...
Blog: http://www.nordsc.com/blog/
Work: http://www.nordsc.com/
-----------------------------------
António Mota wrote:
>
> But now I see you've succeeded in trashing another thread that for
> some reason doesn't interest you, to go and start another of your "I'm
> the master and you kids are just too ignorant so just follow what I
> say" style of POP Quiz... Well, be it, have it your way...
>

Not true. It's a trick question, designed specifically to elicit debate, and as such, correct answers are graded from A+ to C-. I'll look like an idiot if, before someone else posts the A+ answer, Roy stomps all over my question -- which is the A+ answer, but nobody will believe that it's what I expected Roy to post, if he so deigns.

Or, Roy (or someone else with Roy's level of cred, which doesn't include me, by my own reckoning or anyone else's) could stomp all over my question for some reason I didn't anticipate, in which case I'll have learned something about REST and so will everyone else.

What you perceive as arrogance is only my own willingness to be spectacularly wrong in public.

-Eric
Here is a different angle on how one might look at this:

The two headers Content-Type and Accept serve two entirely different purposes:

Content-Type: tells the recipient which processing model to apply to the message body. Roughly, the meaning is "I am sending you this. Please understand it to be of this type and process accordingly".

Accept: tells the server which types of representations the user agent can handle to achieve whatever it wants to achieve with the response to the current query. Roughly: "I am about to do something with what you send me back. I can do that something as long as you send me one of the accepted types."

[The following is contradicting my previous statement below, don't get confused]

Media types primarily serve the purpose of expressing processing models. Only when new processing models emerge should we mint new media types. IOW: as long as the media types you have available make correct candidates for the Content-Type header values, we do not need new media types.

application/atom+xml is in that sense necessary because it defines a very different processing model than text/html. While you could provide the same information as application/atom+xml does with text/html and microformats, you could not express a processing model beyond that of text/html. Think of it this way: when you say Content-Type: text/html the recipient will fire up the browser, never the feed reader.

The issues I have had with declaring capabilities or expectations in the Accept header are better solved by the principle of designing for re-use, IMO. My major concern is how the server developer would know about any additional expectation of the client beyond the declared Accept values. I am (was) worried that the server developer would not have a clue that would keep him from doing simply *anything* as long as the response comes in e.g. application/atom+xml.
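Jan's Accept/Content-Type distinction can be illustrated with a small negotiator: the client's Accept header declares what it can process (with q-values for preference), and the server picks its best matching offer as the Content-Type of the response. This is a simplified sketch for illustration only; real HTTP conneg (RFC 2616/7231) also handles wildcards like text/* and */* and media type parameters.

```python
# Simplified sketch of Accept-header negotiation: parse q-values,
# then pick the client's most preferred type among the server's offers.
# Omits wildcard matching and type parameters for brevity.

def parse_accept(header):
    """Return (media_type, q) pairs sorted by descending preference."""
    prefs = []
    for part in header.split(","):
        pieces = part.strip().split(";")
        media = pieces[0].strip()
        q = 1.0
        for param in pieces[1:]:
            name, _, value = param.strip().partition("=")
            if name == "q":
                q = float(value)
        prefs.append((media, q))
    return sorted(prefs, key=lambda p: p[1], reverse=True)

def negotiate(accept_header, offers):
    """Pick the server offer the client prefers most, or None (406)."""
    for media, q in parse_accept(accept_header):
        if q > 0 and media in offers:
            return media
    return None

print(negotiate("application/orderlist, application/atom+xml;q=0.5",
                ["application/atom+xml", "application/orderlist"]))
# application/orderlist
```

Note that the negotiation only answers "which format can you process"; it says nothing about the client's goal, which is exactly the gap Jan is worried about.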
Now, in my thinking, the developers 'in' a system (say: an enterprise, or *the* Web, or an enterprise and its suppliers) know[1] the set of media types used.

Regarding the example below, a server-side developer of the new-orders resource would know that orders can be represented as application/order. When it comes to implementing the handler for application/atom+xml, designing for maximised re-usability would mean to either include the application/order XML directly in the feed or at least provide alternate links from where the client can access the application/order version. Simply serving an HTML (or even scanned image) based feed and not providing application/order somehow would just be bad design.

On this basis, having the newly-ordered-items-summary-building client just send Accept: application/atom+xml would do the job for me. Then, if you want to fortify your system a bit more, you can use profile parameters to help conneg (but it now appears to me as typical enterprisey overkill).

Jan

[1] 'know' of course being an evolving state

On Aug 6, 2010, at 1:05 PM, Jan Algermissen wrote:

> Something I have been trying to wrap my head around:
>
> Suppose we are dealing with the procurement domain. Also suppose we plan on dealing with lists of orders (e.g. maybe there is a system that manages orders and exposes the new ones, processed ones or the ones being shipped). There will be clients that do something with these order lists, such as compiling a report.
>
> Also suppose we have defined a link semantic that allows a server to point a client to, for example, the list of new orders.
>
> It is not important how that link semantic looks, but it could be <newOrders href="/foo/bar" /> or <link rel="new-orders" href="/foo/bar"/> or an AtomPub collection with a special category: <collection href="/foo/bar"><category term="new-orders" scheme=".."/></collection>.
>
> I personally 'call' any of those 'link semantics' and for the purpose of my question it only matters that the user agent ends up knowing that
>
> /foo/bar is the URI of a resource that represents the list of new orders.
>
> An equivalent from the HTML world would be that <img src="/baz.gif"/> tells the client that
>
> /baz.gif is a resource that is 'an image'[1]
>
> The issue I am dealing with is this: What is the appropriate degree of specificity of the media type for lists of orders? Especially, I am wondering whether it is enough for the user agent to say
>
> Accept: application/atom+xml;type=feed
>
> or whether the Accept header should include the user agent capabilities regarding the individual order entries, e.g.
>
> Accept: application/orderlist
>
> Take a step back and let's think about what is happening here. At one level, the server informs the client about the nature of a resource, and at another (lower) level the client informs the server about its technical capabilities that allow it to process responses for a request to the given resource.
>
> I think it is important to distinguish these levels because the actual request the client makes does not express any assumptions about the nature of the resource, only about the technical capability.
>
> The assumption (e.g. that the requested resource is 'an image') happens before that.
>
> Browsers are implemented to follow <img src=""/> links and process the response by inlining the received images into the rendered page. Other HTML-aware clients might be implemented to produce a fine-printed book of all images found via <img src=""/> links.
> > The actual request will (usually) contain an Accept header of the form: > > Accept: image/gif,image/jpeg,image/png,image/* > > What this accept header is saying is *not* > > "I expect that the requested resource is 'an image'" > > but > > "I can process a response to this request if you give me any of the accepted formats" > IOW: "I can do whatever I want to do if the response comes in any of these formats" > > > Before this gets boring, let's shift to the example of the list of new orders. Suppose I am implementing a user agent that compiles a list of all items ordered in the list of new orders. > > Such a user agent would be implemented to find (or just be given or have bookmarked) the URI of the resource that represents the list of new orders (in the same sense as browsers get hold of the URI of 'an image'). > > How do I have to implement the user agent's construction of the GET request to /foo/bar? > > Suppose we are using a media type application/order for order representations and have also decided to build upon Atom for dealing with lists of stuff in our domain. We might construct the request as: > > GET /foo/bar > Accept: application/atom+xml;type=feed > > and the server might send something like (excuse flaws in the XML, pls) > > > 200 Ok > Content-Type: application/atom+xml[2] > > <feed> > <entry> > <content type="application/order"> > <order>....</order> > </content> > </entry> > <entry> > <content type="application/order"> > <order>....</order> > </content> > </entry> > </feed> > > Is that sufficient? Does the Accept header sufficiently express the user agent's processing capabilities? Can the server know that the user agent wants to receive the entries as application/order? Is it ok to just program the user agent to ignore the entries of which it does not understand the type? > > Would we end up with the correct list of ordered items if all entries come back as HTML and the user agent ignores them? 
> > > > > I think that there is a great danger of creating a nightmare of hidden coupling because in my opinion the user agent simply can *not* fulfil its processing goal given simply 'an atom feed'. An Atom feed reader *can* do that (because it has a different goal) but a newly-ordered-items-list compiling user agent cannot, so it must express that in the Accept header. > > I'd rather define a media type application/orderlist (defined as an Atom feed containing entries of application/order) and have the user agent be explicit: > > GET /foo/bar > Accept: application/orderlist > > > 200 Ok > Content-Type: application/orderlist > > <feed> > <entry> > <content type="application/order"> > <order>....</order> > </content> > </entry> > <entry> > <content type="application/order"> > <order>....</order> > </content> > </entry> > </feed> > > > What do others think? > > (See also [3]) > > Jan > > [1] 'An image' is as good as it gets in terms of definitions, BTW. > <http://www.w3.org/TR/html401/struct/objects.html#edef-IMG> > Note that the HTML spec also provides some sort of hint what media types are involved when dealing with images. > > [2] conneged on the type param already, so no need to repeat it in the Content-Type header > > [3] There is also the issue of returning a feed that consists of references to entries that the user agent can then GET as Accept: application/order individually. Certainly we would not want to define a list format that constrains the references to only application/order resources. The user agent would basically have to report an error if the referenced order is not available as application/order (that is, upon a 406 on a GET subrequest) > > An alternative would be to have the user agent Accept: application/atom+xml;type=feed but report an error if an entry in the feed is not provided as application/order (be it inline or via a sub-request). 
> > > ----------------------------------- > Jan Algermissen, Consultant > NORD Software Consulting > > Mail: algermissen@... > Blog: http://www.nordsc.com/blog/ > Work: http://www.nordsc.com/ > ----------------------------------- > > > > > > > ------------------------------------ > > Yahoo! Groups Links > > > ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
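[Editor's note] The user agent Jan describes above, one that compiles ordered items from a feed and cannot silently ignore entries it does not understand, could look roughly like this sketch. The feed shape follows Jan's example; the `extract_orders` helper and the sample payloads are hypothetical, for illustration only:

```python
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"

# Hypothetical feed in the shape Jan sketches: Atom entries whose
# atom:content carries inline application/order payloads, plus one
# entry the order-compiling agent cannot use.
FEED = """\
<feed xmlns="http://www.w3.org/2005/Atom">
  <entry><content type="application/order"><order xmlns="">widget x2</order></content></entry>
  <entry><content type="text/html">&lt;p&gt;an order&lt;/p&gt;</content></entry>
</feed>
"""

def extract_orders(feed_xml):
    """Return (orders, skipped): the inline application/order elements,
    and a count of entries this agent has no processing model for."""
    root = ET.fromstring(feed_xml)
    orders, skipped = [], 0
    for entry in root.findall(ATOM + "entry"):
        content = entry.find(ATOM + "content")
        if content is not None and content.get("type") == "application/order":
            # For an XML media type, the inline child is the document itself.
            orders.append(list(content)[0])
        else:
            # Silently dropping this entry would corrupt the compiled list --
            # which is exactly Jan's hidden-coupling worry.
            skipped += 1
    return orders, skipped
```

A real agent would have to decide whether `skipped > 0` is an error (Jan's application/orderlist design makes it one by contract) or merely a degraded result.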
Please, don't make me take Jan's inbox ambulance... Have it your way, I'm out of here... If I were a bad person, which I'm not, I would wonder why you instead don't fix the website that carries your name on it, as the only evidence of REST I found there is all the 404 answers I got... Damn, I did it, I got personal with this, I'm now officially a bad person. Sorry, my bad. I won't do it again... 2010/8/13 Eric J. Bowman <eric@...>: > António Mota wrote: >> >> But now I see you've succeeded in trashing another thread that for >> some reason doesn't interest you, to go and start another of your "I'm >> the master and you kids are just too ignorant so just follow what I >> say" style of POP Quiz... Well, be it. have it your way... >> > > Not true. It's a trick question, designed specifically to elicit > debate, and as such, correct answers are graded from A+ to C-. I'll > look like an idiot if, before someone else posts the A+ answer, Roy > stomps all over my question -- which is the A+ answer, but nobody will > believe that it's what I expected Roy to post, if he so deigns. > > Or, Roy (or someone else with Roy's level of cred, which doesn't include > me, by my own reckoning or anyone else's) could stomp all over my > question for some reason I didn't anticipate, in which case I'll have > learned something about REST and so will everyone else. What you > perceive as arrogance, is only my own willingness to be spectacularly > wrong in public. > > -Eric >
António Mota wrote: > > Damn, I did it, I got personal with this, I'm now officially a bad > person. Sorry, my bad. I won't do it again... > Actually, what makes you a flaming troll is the fact that you must always have the last word. I've done my best to try leaving you alone, but when you quote my posts, then misrepresent my positions and attach my name to them in public, I will set the record straight. You accuse *me* of trashing threads, when it's *you* who insists on making all threads a discussion of me, personally, rather than the topic at hand. Which is the definition of the flaming you accuse me of in response to your antics... You're the one who starts this up again, thread after thread, even though I've always let you have the last word. You also insist on trashing the demo I posted, just now over 404s I've already explained as resulting from the fact that the demo is a distillation of ongoing work and is itself a work-in-progress. But, some people's response to the luxury of having working examples posted, is to denigrate the example, just like you did last week. And the week before that. And the week before that, dating all the way back to when you started posting here... You're right, that does make you an asshole. -Eric
Hey, I said a bad person, not an asshole. There is a difference, and I'm sure you know what it is. No need to insult me... I don't need to have the last word because I don't want to convince anyone that I'm "the right person". I don't care about REST other than it helping me do my job. Because of some of your so-called "expert" opinions I have already in the past lost valuable time searching for things that don't exist except perhaps in your head. I think you are an obstacle for people who want to learn REST as a heuristic device and not as a religion. I wish you all the luck in your role as high priest, and I do hope you feel happy with that. Let's just finish this nonsense? I don't really care about what you say, not even when you just plain insult me. I promise not to try to have the last word, provided you're not going to insult me again... 2010/8/13 Eric J. Bowman <eric@bisonsystems.net> > António Mota wrote: > > > > Damn, I did it, I got personal with this, I'm now officially a bad > > person. Sorry, my bad. I won't do it again... > > > > Actually, what makes you a flaming troll is the fact that you must > always have the last word. I've done my best to try leaving you alone, > but when you quote my posts, then misrepresent my positions and attach > my name to them in public, I will set the record straight. > > You accuse *me* of trashing threads, when it's *you* who insists on > making all threads a discussion of me, personally, rather than the > topic at hand. Which is the definition of the flaming you accuse me of > in response to your antics... > > You're the one who starts this up again, thread after thread, even > though I've always let you have the last word. You also insist on > trashing the demo I posted, just now over 404s I've already explained > as resulting from the fact that the demo is a distillation of ongoing > work and is itself a work-in-progress. 
> > But, some people's response to the luxury of having working examples > posted, is to denigrate the example, just like you did last week. And > the week before that. And the week before that, dating all the way > back to when you started posting here... You're right, that does make > you an asshole. > > -Eric >
Jan Algermissen wrote: > > Simply serving an HTML (or even scanned image) based feed and not > providing application/order somehow would just be bad design. > Why? Aren't humans interested in orders? I've seen hundreds of e-commerce interfaces which prove that orders can be modeled as HTML. While I think you're on the right track by stating that your custom media type can be used within atom:content, to gain the benefits of Atom's standard semantics, link relations and processing model, you'll still need some sort of hypertext control API which instructs user agents how to interact with orders if you need to use methods beyond GET. Allowing DELETE on those Atom resources by user agents that "just know" your resource type can be deleted, isn't the same as providing a hypertext control which allows one or more resources to be selected from a list, and deleted one-at-a-time or as a batch by instructing the user agent what resource or resources need what method called to achieve that goal. Atom Protocol doesn't define DELETE on collection resources. How does your system handle DELETE on collection resources? More importantly, how do I know how your system handles DELETE on collection resources, or in what situations such batch deletion should occur, if that does indeed result in batch deletion of members? If that documentation is out-of-band, but not part of the media type definition, then it isn't a REST API. You can bring it in-band using HTML, where XForms and HTML 5 provide the hypertext controls needed to define this behavior. Your Atom resources containing application/order aren't providing any hooks for alternative user agents (screen readers and such) to make your data accessible, thus imposing a 'sighted' requirement on any human needing to look at an order for any reason. Actually, if we're just talking about reading not manipulating, a standard Web browser with Atom capability will adapt your content to its accessibility API, so this isn't a showstopper. 
But... Try to imagine how you might maintain your own system in a few years if, like too many people I know, your failing eyesight prevents you from accessing or manipulating data that has no inherent eyesight requirement? Or manipulating data that does, like images, by providing @alt and such? You can allow accessible human interaction with these orders, and provide me that in-band information I'm looking for about deleting collections and such, using HTML. Which is not to say you shouldn't use application/order, embedded in Atom or not, just that it's an incomplete solution from a REST perspective, unless it's GET-only. -Eric
On Fri, Aug 13, 2010 at 6:35 AM, Jan Algermissen <algermissen1971@...>wrote: > > > Media types primarily serve the purpose of expressing processing models. > Only when new processing models emerge should we mint new media types. IOW: > as long as the media types you have available make correct candidates for > the Content-Type header values we do not need new media types. > > Processing model as in what though? I'm a little confused as to what that really means. As a user I can use a browser (user agent) to peruse a list of orders on a web page. But I could also write a program to do the same thing. In both cases, my client ACCEPTs text/html, right? At what point do I make the leap to minting a new media type? Eb
On Aug 13, 2010, at 2:06 PM, Eb wrote: > > > On Fri, Aug 13, 2010 at 6:35 AM, Jan Algermissen <algermissen1971@...> wrote: > > Media types primarily serve the purpose of expressing processing models. Only when new processing models emerge should we mint new media types. IOW: as long as the media types you have available make correct candidates for the Content-Type header values we do not need new media types. > > > > Processing model as in what though? I'm a little confused as to what that really means. As a user I can use a browser (user agent) to peruse a list of orders on a web page. But I could also write a program to do the same thing. In both cases, my client ACCEPTs text/html, right? At what point do I make the leap to minting a new media type? Yeah - if I had been able to explain it better I would have done it :-) So, I am with you on this question. What do you think about something like: "text/html means: please understand the payload in terms of the HTML specification. When you process it, you can expect to find stuff like a title and (usually) human-targeted text portions, with inline images etc. You can also look for additional rendering stuff (via style sheets) and there might also be some referenced code on demand (JavaScript). There'll also be a bunch of inline links that you can, as you see fit, render to a human or automatically GET or bookmark or whatever." AtomPub's media types have a very different intended processing model. In particular, it addresses to a certain extent how to deal with time-ordered entries and checking at what point new entries have been added or old ones have been changed. I think you basically need new media types when you have new processing controls (see NewsML G2's pubStatus control (usable, withdrawn) for example) or when you have new domain entities such as orders etc. 
In the case of procurement, it would make sense to define application/procurement and have the necessary documents such as order, creditNote, billOfLading etc. in that media type. Jan > > Eb ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
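[Editor's note] Jan's "media types express processing models" point can be sketched as a pure dispatch table: the Content-Type value alone selects which processing model the recipient applies to otherwise opaque bytes. The handler names and the media types beyond the standard ones are made up for illustration:

```python
# Toy illustration: same payload, different processing model, selected
# purely by Content-Type -- "fire up the browser, never the feed reader".

def handle_html(body):
    # Browser-style model: render for a human.
    return "render: " + body

def handle_atom(body):
    # Feed-reader-style model: track entries over time.
    return "poll-for-new-entries: " + body

HANDLERS = {
    "text/html": handle_html,
    "application/atom+xml": handle_atom,
}

def process(content_type, body):
    # Strip parameters such as ';type=feed' before dispatching on the type.
    media_type = content_type.split(";")[0].strip()
    try:
        return HANDLERS[media_type](body)
    except KeyError:
        # No processing model known for this type: refuse, don't guess.
        raise ValueError("no processing model for " + media_type)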
On Fri, Aug 13, 2010 at 8:22 AM, Jan Algermissen <algermissen1971@...>wrote: > > > I think you basically need new media types when you have new processing > controls (see NewsML G2's pubStatus control (usable, withdrawn) for example) > or when you have new domain entities such as orders etc. In the case of > procurement, it would make sense to define application/procurement and have > the necessary documents such as order, creditNote, billOfLading etc. in that > media type. > > Jan > > Or maybe new media types are needed when the way the response will be consumed does not fit (well) into existing media types? If stated that way, it has little to do with the "documents" per se, but rather with how clients will consume them. Interesting.
Eb wrote: > > Processing model as in what though? I'm a little confused as to what > that really means. As a user I can use a browser (user agent) to > peruse a list of orders on a web page. But I could also write a > program to do the same thing. In both cases, my client ACCEPTs > text/html, right? At what point do I make the leap to minting a new > media type? > I think that regardless of whether a user agent is rendering text/html (browser) vs. interpreting text/html (googlebot), it's following the rules for processing HTML. The fact that googlebot isn't rendering text/html as a browser would, doesn't mean it needs some other media type. -Eric
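[Editor's note] Eric's browser-vs.-googlebot point in miniature: a non-rendering agent still follows HTML's processing rules. Here a crawler-style agent uses Python's stdlib HTML parser to honour `<a href>` without rendering anything; the sample markup is invented:

```python
from html.parser import HTMLParser

# Same media type (text/html), same processing rules, different goal:
# this agent extracts links instead of painting pixels.
class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

page = '<html><body><a href="/orders/1">one</a><a href="/orders/2">two</a></body></html>'
bot = LinkCollector()
bot.feed(page)
```

No new media type was needed to support the non-browser agent, which is the point.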
On Fri, Aug 13, 2010 at 9:34 AM, Eric J. Bowman <eric@...>wrote: > Eb wrote: > > > > Processing model as in what though? I'm a little confused as to what > > that really means. As a user I can use a browser (user agent) to > > peruse a list of orders on a web page. But I could also write a > > program to do the same thing. In both cases, my client ACCEPTs > > text/html, right? At what point do I make the leap to minting a new > > media type? > > > > I think that regardless of whether a user agent is rendering text/html > (browser) vs. interpreting text/html (googlebot), it's following the > rules for processing HTML. Because it isn't rendering text/html as if > it were a browser, doesn't mean googlebot needs some other media type. > > -Eric > Eric - I concur with your thoughts 100%; however, the question is, at what point (or what aspect of the user agent) makes a designer come to the conclusion that text/html doesn't suffice. I offered up a thought on this also on this thread. Eb
Eb wrote: > > I concur with your thoughts 100% however, the question is, at what > point (or what about user agent) makes a designer come to the > conclusion that text/html doesn't suffice. I offered up a thought on > this also on this thread. > It would probably be worthwhile to make a list of all hypertext control document media types, then analyze where the use cases for those media types (e.g. CCXML) diverge from HTML's capabilities. IOW, it may be possible to determine where to draw the line by analysis of where the line has been drawn, and extrapolate the similarities into some sort of rule of thumb. -Eric
Eb wrote: > > ...at what point (or what about user agent) makes a designer come to > the conclusion that text/html doesn't suffice... > If it's any help: If a representation needs to express hypertext controls within a vector-graphic image, HTML just won't do, but SVG will. I don't think it's a user agent concern, image/svg+xml is a ubiquitous media type, so any stakeholder with an interest knows what processing model any user agent needs to support, regardless of intent (render vs. introspect for links). I'd say that the needs of the resource drive the choice of media type, not the needs of the user agent. Modeling a telephony system in HTML so that it may be driven by a human using a browser is inappropriate to the needs of the resource. If user agent = telephony device, then choice of device should be driven by support for the media types the service uses. Just as choice of browser should be driven by support for image/svg+xml if the resource representations are best modeled as hypertext controls embedded within vector graphics. The obvious problem with my logic is that it depends upon free will. If the corporate intranet only allows IE, then the representations can't be image/svg+xml, so the needs of the user agent are not driving choice of media type, but imposing artificial restrictions on that choice. Which I see as a bad thing, hence my advocacy for going the other way around... So, yeah, make "supported media types" a function of resource modeling, without considering client concerns. Choosing HTML should be driven by its appropriateness to the resource, letting user agents take whatever form they need to take, using the common knowledge of a ubiquitous media type to inform them of available state transitions, etc. -Eric
On Fri, Aug 13, 2010 at 11:39 AM, Eric J. Bowman <eric@...>wrote: > I'd say that the needs of the resource drive the choice of media type, > not the needs of the user agent. Modeling a telephony system in HTML > so that it may be driven by a human using a browser is inappropriate to > the needs of the resource. > > So, yeah, make "supported media types" a function of resource modeling, > without considering client concerns. Choosing HTML should be driven by > its appropriateness to the resource, letting user agents take whatever > form they need to take, using the common knowledge of a ubiquitous > media type to inform them of available state transitions, etc. > > -Eric > Hmmm, I'll have to ponder this one as I think I come from the exact opposite viewpoint that the client drives what the media type should be. If your client is going to be a browser, then text/html. I don't see why the resource plays such a significant role in this decision making. Anyway, let me stew over this. As always, interesting!
Jan Algermissen wrote: > Here is a different angle from which one might look at this: > > The two headers Content-Type and Accept serve two entirely different > purposes: > > Content-Type: tells the recipient which processing model to apply to > the message body. Roughly, the meaning is "I am sending you this. > Please understand it to be of this type and process accordingly". > > Accept: tells the server which types of representations the user agent > can handle to achieve whatever it wants to achieve with the response to > the current query. Roughly: "I am about to do something with what you > send me back. I can do that something as long as you send me one of the > accepted types." > > [The following is contradicting my previous statement below, don't get > confused] > > Media types primarily serve the purpose of expressing processing > models. Only when new processing models emerge should we mint new media > types. IOW: as long as the media types you have available make correct > candidates for the Content-Type header values we do not need new media > types. > > application/atom+xml is in that sense necessary because it defines a > very different processing model than text/html. While you could provide > the same information as application/atom+xml does with text/html and > microformats you could not express a processing model beyond that of > text/html. > > Think of it this way: when you say Content-Type: text/html the > recipient will fire up the browser, never the feed reader. > > > > The issues I have had with declaring capabilities or expectations in > the Accept header are better solved by the principle of designing for > re-use, IMO. > > My major concern is how the server developer would know about any > additional expectation of the client beyond the declared Accept values. > I am (was) worried that the server developer would not have a clue that > would keep him from doing simply *anything* as long as the response > comes in e.g. application/atom+xml. 
> > Now, in my thinking, the developers 'in' a system (say: an enterprise, > or *the* Web, or an enterprise and its suppliers) know[1] the set of > media types used. Regarding the example below, a server-side developer > of the new-orders resource would know that orders can be represented as > application/order. When it comes to implementing the handler for > application/atom+xml designing for maximised re-usability would mean to > either include the application/order XML directly in the feed or at > least provide alternate links from where the client can access the > application/order version. > > Simply serving an HTML (or even scanned image) based feed and not > providing application/order somehow would just be bad design. > > On this basis, having the newly-ordered-items-summary-building client > just send Accept: application/atom+xml would do the job for me. Then, if > you want to fortify your system a bit more, you can use profile > parameters to help conneg (but it now appears to me as typical > enterprisey overkill). Replace "processing model" with "data model" and I think you've got it. This style bends over backwards to be declarative, which limits the "processing model" to two things: parsing and state transitions, both of which are provided fully in the data itself. Any higher layers of processing are unconstrained, and a human with a browser can process a response wildly differently than a spider, or even differently than other humans. This is the essence of re-use, and is baked into the constraints of the style. Bonus points for identifying recent popular media-types that have strayed from this declarative path. Robert Brewer fumanchu@...
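[Editor's note] The Accept/Content-Type split discussed above implies server-side negotiation roughly like the following sketch. It is deliberately minimal: no wildcards, no `Accept` extensions beyond q-values, and the function names are hypothetical:

```python
# Server-side conneg sketch: pick the available representation the
# client ranks highest in its Accept header; None models a 406 response.

def parse_accept(header):
    """Map each offered media type to its q-value (default 1.0)."""
    prefs = {}
    for part in header.split(","):
        fields = [f.strip() for f in part.split(";")]
        q = 1.0
        for f in fields[1:]:
            if f.startswith("q="):
                q = float(f[2:])
        # Non-q parameters (e.g. type=feed) are ignored in this sketch.
        prefs[fields[0]] = q
    return prefs

def negotiate(accept_header, available):
    """Return the best available type, or None (i.e. 406 Not Acceptable)."""
    prefs = parse_accept(accept_header)
    best = max(available, key=lambda t: prefs.get(t, 0.0))
    return best if prefs.get(best, 0.0) > 0 else None
```

Under this sketch, a client that sends `Accept: application/orderlist` gets Jan's specific type when the server has it, and a clean 406 (not a silent HTML feed) when it does not.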
Eb's question, and my answer, reminds me of a war story from the mid-'90s. While pursuing every dirty trick in the book to divest from me the steamboat.com domain name for zero compensation, the Steamboat Ski and Resort Corporation was contracting with me for Internet services and consulting... One of the coolest feats of DIY engineering I've ever come across was the realtime GUI for the snowmaking system. The snowmaking chief hacked it together himself using MS Access (!) of all things. He then dismissed one consultant after another, for basically not telling him what he wanted to hear when he asked us about networking his solution. Which, for the record, was "Sure, no problem, MS Access can do anything, including distributed client-server!" and also why the single-computer prototype was never networked successfully... Had I known then what I know now, and had SVG existed, I could have easily come up with a REST encapsulation layer, which could then be controlled from a handheld over a wireless LAN, or controlled from the chief's home computer via the Internet. Certain aspects of snowmaking systems are intensely visual -- the guys running the system don't care so much what a valve does, beyond knowing that it needs to be in position X for operation A and position Y for operation B. So reading some HTML displaying the degrees from vertical of some named handle, even with an image inlined, would be useless. In Access, this guy had designed a visual representation of his system, and could click on the handle to change it from X to Y so he could tell with his eyeballs that his software settings correlated to his desired hardware settings. Then he could create macros to change the entire setup from A to B, and visually confirm success. Gauges weren't images of gauges next to digital readouts either, they were visual, with the same green-yellow-red range markings as the system's analog gauges. 
One look at this MS Access creation, and you'd know it couldn't be represented in HTML and preserve any usability. SVG, OTOH, looking back now? Perfect. No, I don't remember what product he was using to create enhanced visual interfaces in MS Access. But it was really cool. -Eric
gopher comes to mind On Aug 13, 2010, at 2:53 AM, Eric J. Bowman wrote: > What protocols have a notion of representation vs. resource, thereby > being capable of instantiating a REST system incorporating any number > of other protocols, like FTP or SMTP? > > -Eric >
On Aug 13, 2010, at 6:06 PM, Ata ul Haq wrote: > I recently joined this list after reading some discussions on what REST is and what it is not. Looking to resolve my confusions and it's not getting anywhere. User error?! :) No :-) I think it will be helpful to go backwards in the archives and read your way through from 2002 to about 2008. > > From this thread, what I take is that pre-defined media-types are known knowledge for a REST API publisher and consumer. Yes. Think of it this way: REST moves all coupling from the interface into the message body types. In addition, REST encourages two things: 1. Punctual evolution; some peers can agree on extensions and use them without breaking the other system participants 2. Standardization of message body types (in order to minimize punctual integration) Ideally: what works on a punctual basis and seems of value to the general public (be it 'the Web' or some enterprise) should be moved into a standard eventually. Example: in order for you to build a user agent that can talk to amazon.com you only need to look at the HTML spec, not call Amazon. Vice versa: in order for Amazon to implement a service they need not call their customers, they only need to look at HTML. This concept is the same regardless of how much the user is involved (how many automated requests your user agent is programmed to make). > That is the out of band information that everyone would need to know to make things "work". Right. All coupling is done on a global basis (ideally). > Publisher needs to know this to make sure they fit the data in the right format/container/media-type/whatever and consumer needs to know that to be able to handle the data properly. > > A bot/human/etc. parsing only text/html probably can read image/svg+xml but won't know what to do with it. A publisher providing only application/atom+xml to bots looking for text/html would not work either. Right. And that is perfectly fine. 
> > I understand the use of Content-Type and Accept headers but my client won't work without knowledge of media types supported AND knowing which ones i MUST support from client side to be able to parse/utilize the data from publisher. All of this is even before i start writing a client. > > Am i correct or wayyyyy off? Yes. Though it is of course an iterative process in which clients learn, formats die, the world evolves.... The interesting thing is that the architectural style used does not at all affect this conceptual foundation. WS-* etc. does not make this fact of life go away, it only hides it behind IDL artifacts that suggest control where there is none (due to the networked environment). REST makes all this explicit and forces you to deal with it instead of being surprised later in the day. HTH, Jan > > > > On Fri, Aug 13, 2010 at 11:39 AM, Eric J. Bowman <eric@...> wrote: > > Eb wrote: > > > > ...at what point (or what about user agent) makes a designer come to > > the conclusion that text/html doesn't suffice... > > > > If it's any help: If a representation needs to express hypertext > controls within a vector-graphic image, HTML just won't do, but SVG > will. I don't think it's a user agent concern, image/svg+xml is > a ubiquitous media type, so any stakeholder with an interest knows what > processing model any user agent needs to support, regardless of intent > (render vs. introspect for links). > > I'd say that the needs of the resource drive the choice of media type, > not the needs of the user agent. Modeling a telephony system in HTML > so that it may be driven by a human using a browser is inappropriate to > the needs of the resource. > > If user agent = telephony device, then choice of device should be > driven by support for the media types the service uses. 
Just as choice > of browser should be driven by support for image/svg+xml if the > resource representations are best modeled as hypertext controls embedded > within vector graphics. > > The obvious problem with my logic is that it depends upon free will. > If the corporate intranet only allows IE, then the representations > can't be image/svg+xml, so the needs of the user agent are > not driving choice of media type, but imposing artificial restrictions > on that choice. Which I see as a bad thing, hence my advocacy for > going the other way around... > > So, yeah, make "supported media types" a function of resource modeling, > without considering client concerns. Choosing HTML should be driven by > its appropriateness to the resource, letting user agents take whatever > form they need to take, using the common knowledge of a ubiquitous > media type to inform them of available state transitions, etc. > > -Eric > > ----------------------------------- Jan Algermissen, Consultant NORD Software Consulting Mail: algermissen@... Blog: http://www.nordsc.com/blog/ Work: http://www.nordsc.com/ -----------------------------------
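[Editor's note] Jan's answer to Ata, that all coupling lives in an out-of-band (ideally standardised) set of media types, can be made concrete: a client is written against a fixed set of types, advertises exactly that set in Accept, and treats anything else as an error rather than guessing. The types and helper names here are illustrative only:

```python
# The client's entire media-type coupling, declared in one place.
SUPPORTED = ("application/orderlist", "application/atom+xml")

def accept_header():
    """The Accept header is derived from, not separate from, that coupling."""
    return ", ".join(SUPPORTED)

def dispatch(response_content_type):
    """Check a response's Content-Type against the supported set."""
    media_type = response_content_type.split(";")[0].strip()
    if media_type not in SUPPORTED:
        # An unknown type is an error, not something to sniff or guess at.
        raise ValueError("unsupported representation: " + media_type)
    return media_type
```

This is the "before I start writing a client" knowledge Ata describes: the set is fixed at design time, and evolves only when the client is taught a new type.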
I recently joined this list after reading some discussions on what REST is and what it is not. Looking to resolve my confusions and it's not getting anywhere. User error?! :) From this thread, what I take is that pre-defined media-types are known knowledge for a REST API publisher and consumer. That is the out of band information that everyone would need to know to make things "work". Publisher needs to know this to make sure they fit the data in the right format/container/media-type/whatever and consumer needs to know that to be able to handle the data properly. A bot/human/etc. parsing only text/html probably can read image/svg+xml but won't know what to do with it. A publisher providing only application/atom+xml to bots looking for text/html would not work either. I understand the use of Content-Type and Accept headers but my client won't work without knowledge of media types supported AND knowing which ones i MUST support from client side to be able to parse/utilize the data from publisher. All of this is even before i start writing a client. Am i correct or wayyyyy off? On Fri, Aug 13, 2010 at 11:39 AM, Eric J. Bowman <eric@...>wrote: > > > Eb wrote: > > > > ...at what point (or what about user agent) makes a designer come to > > the conclusion that text/html doesn't suffice... > > > > If it's any help: If a representation needs to express hypertext > controls within a vector-graphic image, HTML just won't do, but SVG > will. I don't think it's a user agent concern, image/svg+xml is > a ubiquitous media type, so any stakeholder with an interest knows what > processing model any user agent needs to support, regardless of intent > (render vs. introspect for links). > > I'd say that the needs of the resource drive the choice of media type, > not the needs of the user agent. Modeling a telephony system in HTML > so that it may be driven by a human using a browser is inappropriate to > the needs of the resource. 
> > If user agent = telephony device, then choice of device should be > driven by support for the media types the service uses. Just as choice > of browser should be driven by support for application/svg+xml if the > resource representations are best modeled as hypertext controls embedded > within vector graphics. > > The obvious problem with my logic is that it depends upon free will. > If the corporate intranet only allows IE, then the representations > can't be application/svg+xml, so the needs of the user agent are > not driving choice of media type, but imposing artificial restrictions > on that choice. Which I see as a bad thing, hence my advocacy for > going the other way around... > > So, yeah, make "supported media types" a function of resource modeling, > without considering client concerns. Choosing HTML should be driven by > its appropriateness to the resource, letting user agents take whatever > form they need to take, using the common knowledge of a ubiquitous > media type to inform them of available state transitions, etc. > > -Eric > >
Noah Campbell wrote: > > gopher comes to mind > Poor ol' gopher never saw that truck that run him over a-comin', did he? -Eric
On Wed, 2010-08-11 at 18:03 -0600, Eric J. Bowman wrote: > I don't get what you're driving at. Why wouldn't .price also identify > a price in Euros? Or, if .price identifies price in dollars, why can't > that be converted to Euros? So why not class=".preis"? So far you've said: "I do know, the big players are trying to expose this information, like what the price is and how to buy the item, such that it becomes searchable." "If that span were just meant for style, then why not class='xyzzy' to keep user agents from making the assumption that it's an item price?" "The point remains, such metadata can be used to identify item and price, regardless of how a site is marked up." I can't follow your reasoning. If it's meant to be searchable as you say, why isn't the symbol '.preis' in the .de site? If it's just a symbol for a machine, then 'xyzzy' is fine for that. Bill
Bill de hÓra wrote: > > I can't follow your reasoning. If it's meant to be searchable as you > say, why isn't the symbol '.preis' in the .de site? If it's just a > symbol for a machine, then 'xyzzy' is fine for that. > By that logic, for .de domains <title> should be <titel>. I don't really care what language any metadata is in, be it element, attribute, or attribute content. The element content will give the units in such a case, dollars or euros. The difference with 'xyzzy' is that it doesn't map to the concept of cost in any language -- and by that I mean metadata language as much as natural language. An ontology that exposes cost with class='xyzzy' is perfectly acceptable if, like GoodRelations, it's a defined ontology. For something ad-hoc like Amazon, natural language is the only thing that makes sense, but I don't see why it would be required to look for different metadata to determine dollars vs. euros. -Eric
On Sat, 2010-08-14 at 19:42 -0600, Eric J. Bowman wrote: > Bill de hÓra wrote: > > > > I can't follow your reasoning. If it's meant to be searchable as you > > say, why isn't the symbol '.preis' in the .de site? If it's just a > > symbol for a machine, then 'xyzzy' is fine for that. > > > > By that logic, for .de domains <title> should be <titel>. Incorrect conclusion, when following your previous arguments. > I don't > really care what language any metadata is in, be it element, attribute, > or attribute content. You started to care when you said the class attribute values were to help with search and not anything else. > The element content will give the units in such > a case, dollars or euros. The difference with 'xyzzy' is that it > doesn't map to the concept of cost in any language -- and by that I > mean metadata language as much as natural language. The point is it doesn't matter. The 'xyzzy' symbol can be bound to anything. An ontology isn't needed for that any more than google spellcheck needs an actual dictionary. > An ontology that exposes cost with class='xyzzy' is perfectly acceptable > if, like GoodRelations, it's a defined ontology. It's entirely acceptable without predefinition if it can be associated with something a user will search on. > For something ad-hoc > like Amazon, natural language is the only thing that makes sense, but I > don't see why it would be required to look for different metadata to > determine dollars vs. euros. That isn't coherent. On the one hand you're arguing that 'xyzzy' isn't meaningful, when of course it can be - there's a long line of AI/KR literature you're welcome to dispute with, but which I don't need to justify. On the other hand you're arguing that scraping out the currency symbol is meaningful. Again, I can't follow your reasoning. Bill
Bill de hÓra wrote: > > > By that logic, for .de domains <title> should be <titel>. > > Incorrect conclusion, when following your previous arguments. > What? My argument is that <span class='price'> can not only be used to style the price of an item, but also make it easy to extract the price of items from markup. What does that have to do with the language of the document content? Nothing! Exactly like the language used to define elements and attributes has absolutely nothing to do with the language of the content -- which you're arguing against, which makes about as much sense as changing <title> to <titel>. > > > I don't > > really care what language any metadata is in, be it element, > > attribute, or attribute content. > > You started to care when you said the class attribute values were to > help with search and not anything else. > What? My argument is that <span class='price'> can not only be used to style the price of an item, but also make it easy to extract the price of items from markup. > > > The element content will give the units in such > > a case, dollars or euros. The difference with 'xyzzy' is that it > > doesn't map to the concept of cost in any language -- and by that I > > mean metadata language as much as natural language. > > The point is it doesn't matter. The 'xyzzy' symbol can be bound to > anything. An ontology isn't needed for that any more than google > spellcheck needs an actual dictionary. > Of course 'xyzzy' can be bound to anything. What that anything *is*, is a whole lot less obvious than if 'price' is used. If Amazon were using 'xyzzy' I wouldn't be arguing that they _do_ want item prices to be easily extracted from their content. > > > An ontology that exposes cost with class='xyzzy' is perfectly > > acceptable if, like GoodRelations, it's a defined ontology. > > It's entirely acceptable without predefinition if it can be associated > with something a user will search on. > Could someone associate 'xyzzy' with item cost? Sure. 
I never meant to imply this couldn't be done. But we're talking about serendipitous re-use here. Are you really arguing that 'price' has no benefit over 'xyzzy' to the content producer seeking serendipitous re-use? Which is the real point I'm trying to make, RDFa + GR has no point but promoting machine-readable serendipitous re-use of content, and this is the way the Web is evolving, not towards application-specific media type identifiers. > > > For something ad-hoc > > like Amazon, natural language is the only thing that makes sense, > > but I don't see why it would be required to look for different > > metadata to determine dollars vs. euros. > > That isn't coherent. On the one hand you're arguing that 'xyzzy' isn't > meaningful, when of course it can be - there's a long line of AI/KR > literature you're welcome to dispute with, but which I don't need > to justify. > I didn't say it couldn't be. I did say that 'xyzzy' has no natural-language meaning, certainly not "this content is an item price" as is suggested by using 'price'. Is my use of the term "natural language" incoherent? Is it incoherent to point out that, whether we're talking RDFa or microformats, the metadata doesn't need to be so fine-grained as to distinguish between dollars and euros, when this may be accomplished in element content? By ad-hoc, I meant undocumented. If Amazon does indeed desire to expose item cost in metadata, without providing documentation, then it would be counter-productive to choose obscure strings like 'xyzzy' instead of obvious, natural-language terms like 'price'. This is hardly a controversial thing to say -- it's the same logic behind using element names like <title> in HTML instead of, say, <t>. Sure, you can document somewhere that <t> is a title element, but why add that layer of indirection if simplicity is a goal? > > On the other hand you're arguing that scraping out the > currency symbol is meaningful. Again, I can't follow your reasoning. 
> If I'm trying to extract item prices, and I know they're marked up with class='price', then obviously the currency symbol is intended as a currency symbol and not, for example, "$ = string" in that context. -Eric
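[Editor's note: the extraction scenario debated above can be sketched in a few lines. This is a hypothetical illustration -- the markup and the class='price' convention follow the thread's example, not any real Amazon page -- using only Python's stdlib parser.]

```python
# Sketch: extracting item prices marked up with class="price", per the
# discussion above. The markup below is hypothetical, not Amazon's.
from html.parser import HTMLParser

class PriceExtractor(HTMLParser):
    """Collects the text content of <span class="price"> elements."""
    def __init__(self):
        super().__init__()
        self._in_price = False
        self.prices = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs per the stdlib API
        if tag == "span" and ("class", "price") in attrs:
            self._in_price = True

    def handle_endtag(self, tag):
        if tag == "span":
            self._in_price = False

    def handle_data(self, data):
        if self._in_price:
            self.prices.append(data.strip())

markup = ('<p>Widget: <span class="price">$19.99</span></p>'
          '<p>Gadget: <span class="price">\u20ac24,99</span></p>')
extractor = PriceExtractor()
extractor.feed(markup)
```

Note how the currency symbol in the element content ($ vs. €) distinguishes dollars from euros, so the class value itself need not be that fine-grained -- which is the point being argued above.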
Hi, I'm trying to figure out how to represent the availability of a particular login/username in my system so that clients can check if a given login is available. How should I represent this - or, what exactly is the resource I'm dealing with here? Is it a UserName resource? How does one interact with it in order to figure out if a particular username is available? Thanks in advance, Sidu. http://c42.in
Hmm.. return a 200 response code if it already exists, and 404 if it does not? I know that sounds "backwards" since the 200 is really the error condition though. On Thu, Aug 19, 2010 at 9:43 AM, Dark Seid <lorddaemon@gmail.com> wrote: > > > Hi, > > I'm trying to figure out how to represent the availability of a particular > login/username in my system so that clients can check if a given login is > available. How should I represent this - or, what exactly is the resource > I'm dealing with here? Is it a UserName resource? How does one interact with > it in order to figure out if a particular username is available? > > Thanks in advance, > Sidu. > http://c42.in > >
It's only "backwards" (and an "error") if you're approaching the design from the point of view of this one end-user goal, instead of modeling resources for serendipitous re-use. One man's tweet is another man's frisson. Not only would I recommend a 200, but also that you return more data about /users/fumanchu in the response body if you can. Robert Brewer fumanchu@aminus.org From: rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com] On Behalf Of Jeff Robertson Sent: Thursday, August 19, 2010 12:21 PM To: Dark Seid Cc: rest-discuss@yahoogroups.com Subject: Re: [rest-discuss] Modelling availability Hmm.. return a 200 response code if it already exists, and 404 if it does not? I know that sounds "backwards" since the 200 is really the error condition though. On Thu, Aug 19, 2010 at 9:43 AM, Dark Seid <lorddaemon@gmail.com> wrote: Hi, I'm trying to figure out how to represent the availability of a particular login/username in my system so that clients can check if a given login is available. How should I represent this - or, what exactly is the resource I'm dealing with here? Is it a UserName resource? How does one interact with it in order to figure out if a particular username is available? Thanks in advance, Sidu. http://c42.in
On Thu, Aug 19, 2010 at 3:57 PM, Robert Brewer <fumanchu@...> wrote: > > > It's only "backwards" (and an "error") if you're approaching the design > from the point of view of this one end-user goal, instead of modeling > resources for serendipitous re-use. One man's tweet is another man's > frisson. Not only would I recommend a 200, but also that you return more > data about /users/fumanchu in the response body if you can. > > > > > > Robert Brewer > > fumanchu@... > > > > Huh. Not sure I get this. Depending on the rules involved, it could imply that the resource at the URI has been overwritten. I would PUT to /users/fumanchu, 201 indicates success and error code if the resource could not be created for some reason such as "it already exists".
I wrote: > It's only "backwards" (and an "error") if you're approaching > the design from the point of view of this one end-user goal, > instead of modeling resources for serendipitous re-use. > One man's tweet is another man's frisson. Not only would > I recommend a 200, but also that you return more data about > /users/fumanchu in the response body if you can. and Eb replied: > Huh. Not sure I get this. Depending on the rules involved, > it could imply that the resource at the URI has been > overwritten. I would PUT to /users/fumanchu, 201 indicates > success and error code if the resource could not be created > for some reason such as "it already exists". If you want to jump straight to PUT then 201 and 409 are perfectly valid responses. I was talking about GET in a look-before-you-leap sense, in which case it makes sense to return more than just 200 with no content. Robert Brewer fumanchu@...
On Fri, Aug 20, 2010 at 12:14 AM, Robert Brewer <fumanchu@...> wrote: > I wrote: > > It's only "backwards" (and an "error") if you're approaching > > the design from the point of view of this one end-user goal, > > instead of modeling resources for serendipitous re-use. > > One man's tweet is another man's frisson. Not only would > > I recommend a 200, but also that you return more data about > > /users/fumanchu in the response body if you can. > > and Eb replied: > > Huh. Not sure I get this. Depending on the rules involved, > > it could imply that the resource at the URI has been > > overwritten. I would PUT to /users/fumanchu, 201 indicates > > success and error code if the resource could not be created > > for some reason such as "it already exists". > > If you want to jump straight to PUT then 201 and 409 are perfectly valid > responses. I was talking about GET in a look-before-you-leap sense, in which > case it makes sense to return more than just 200 with no content. > > > Robert Brewer > fumanchu@... > > > > Gotcha.
"Robert Brewer" wrote: > > If you want to jump straight to PUT then 201 and 409 are perfectly > valid responses. I was talking about GET in a look-before-you-leap > sense, in which case it makes sense to return more than just 200 with > no content. > HEAD also works for this, if the desire is that no content be returned. -Eric
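[Editor's note: the look-before-you-leap convention discussed above -- GET or HEAD on /users/{name}; 200/304 means taken, 404 means available -- reduces to a small bit of status-code interpretation on the client. A minimal sketch; the function name is hypothetical.]

```python
def name_is_taken(status: int) -> bool:
    """Interpret a GET/HEAD status on /users/{name} per the convention above."""
    if status in (200, 304):
        return True       # a representation exists: the name is in use
    if status == 404:
        return False      # no such resource: the name is available
    # Anything else (other 4xx, 5xx) is inconclusive -- the ambiguity that
    # motivates the 'If-Match: *' variant discussed later in this thread.
    raise ValueError(f"inconclusive status: {status}")
```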
> > I would PUT to /users/fumanchu, 201 indicates success and error code if > the resource could not be created for some reason such as "it already > exists". Ah, but the resource that I care to persist is the User, not the UserLogin. I know that this is an implementation bound question, but would you suggest I persist the UserLogin as part of the workflow needed to create a User (and perhaps clean it out after)? Thanks, Sidu. http://c42.in On Fri, Aug 20, 2010 at 4:01 AM, Eb <amaeze@...> wrote: > > > On Thu, Aug 19, 2010 at 3:57 PM, Robert Brewer <fumanchu@...>wrote: > >> >> >> It's only "backwards" (and an "error") if you're approaching the design >> from the point of view of this one end-user goal, instead of modeling >> resources for serendipitous re-use. One man's tweet is another man's >> frisson. Not only would I recommend a 200, but also that you return more >> data about /users/fumanchu in the response body if you can. >> >> >> >> >> >> Robert Brewer >> >> fumanchu@... >> >> >> >> > Huh. Not sure I get this. Depending on the rules involved, it could imply > that the resource at the URI has been overwritten. I would PUT to > /users/fumanchu, 201 indicates success and error code if the resource could > not be created for some reason such as "it already exists". >
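[Editor's note: the jump-straight-to-PUT approach from earlier in the thread can be sketched with an in-memory dict standing in for the server's storage; the function and store names are hypothetical.]

```python
# Sketch of PUT /users/{name}: 201 Created on success, 409 Conflict if
# the login already exists. A plain dict stands in for the server side.
def put_user(store: dict, name: str, representation: dict) -> int:
    if name in store:
        return 409   # Conflict: login already taken
    store[name] = representation
    return 201       # Created

users = {}
first = put_user(users, "fumanchu", {"name": "Robert"})
second = put_user(users, "fumanchu", {"name": "Impostor"})
```

The design point: the create attempt *is* the availability check, so there is no check-then-create race between two clients wanting the same login.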
<snip> i often have servers that support both a specific media type (application/bot-work.xhtml) and a generic media type (application/xhtml+xml, text/html, etc.). this allows clients to pick the media-type they prefer. often the representation for these selected types *is identical* this means client devs can use browsers to view/hack/debug their work while writing the bot. </snip> I am encouraged to hear you do this as I had arrived at a similar conclusion. Thanks Mike. 2010/8/12 mike amundsen <mamund@...> > > > <snip> > That is relevant to the discussion in the sense that if REST realm is only > the Web, then I'll say that POVs like the ones in relation to the use of > media types of Eric Bowman and others are indeed correct. If not, if REST is > applicable to, say, implement EDI over the Internet (and I stress again that > Internet != Web), where that presentation is not only superfluous but > counter-productive, then I would say that said POVs are incorrect. > </snip> > > FWIW, I've used XHTML for bots and console apps in the past w/ much > success. many client libraries already have HTML parsers, understand how to > code against the FORM and A elements, etc. these bots don't do any UI > rendering, but still are quite efficient at parsing and processing XHTML > bodies. > > i often have servers that support both a specific media type > (application/bot-work.xhtml) and a generic media type > (application/xhtml+xml, text/html, etc.). this allows clients to pick the > media-type they prefer. often the representation for these selected types > *is identical* this means client devs can use browsers to view/hack/debug > their work while writing the bot. 
> > mca > http://amundsen.com/blog/ > http://mamund.com/foaf.rdf#me > > > Join me at #RESTFest 2010 Sep 17 & 18 > http://restfest.org > http://restfest.org/workshop > > > 2010/8/12 António Mota <amsmota@...> > >> Well, I just posted in another thread a post that may fit better in this >> thread: >> >> 2010/8/12 Eb <amaeze@gmail.com> >> >> >>> I don't think there is really that much of a difference (between >>> human/browser-to-machine and machine-to-machine) >>> >> >> I agree that there is not that much of a difference, especially if we're >> talking on a pure technical level - it's all bits and bytes in the end. But >> there is nevertheless one difference. Browsers consume HTML documents that >> are composed of Presentation + Data. But if the client is some kind of >> process that, for instance, gets the data from the server and writes it in a >> database, there is no need for any presentation layer. Like in pure EDI. >> >> That is relevant to the discussion in the sense that if REST realm is only >> the Web, then I'll say that POVs like the ones in relation to the use of >> media types of Eric Bowman and others are indeed correct. If not, if REST is >> applicable to, say, implement EDI over the Internet (and I stress again that >> Internet != Web), where that presentation is not only superfluous but >> counter-productive, then I would say that said POVs are incorrect. >> >> >> 2010/8/12 Peter Williams <pezra@barelyenough.org> >> >> I concur with Mike's assessments. >>> >>> I have implemented several systems using http over the web where the >>> users were automatons. These systems work well and benefit greatly from the >>> characteristics of rest. Particularly the evolvability and scalability it >>> provides. >>> >>> Peter >>> <http://barelyenough.org> >>> >>> >>> 2010/8/12 António Mota <amsmota@...> >>>> >>>> >>>> On 12 August 2010 13:19, mike amundsen <mamund@...> wrote: >>>> >>>>> REST style is not limited to Web or Internet usage (e. g. 
has >>>>> application for communication between autonomous devices in a closed >>>>> custom network) >>>>> >>>> >>>> >>>>> REST style using HTTP over the Web is not limited to using the >>>>> common >>>>> Browser for the "client" (e. g. desktop applications. console apps, >>>>> bots, etc.) >>>>> >>>>> >>>> Yes, these two are the kind of clarification that I was talking about. >>>> The other two points I think that everybody agrees... >>>> >>>> >>>> >>> >> > >
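[Editor's note: the dual-media-type setup Mike describes -- one specific type plus generic ones, with identical representations -- hinges on simple Accept-header selection. A rough sketch that ignores q-values; the specific type name comes from his post, the function name is hypothetical.]

```python
from typing import Optional

SUPPORTED = (
    "application/bot-work.xhtml",   # specific type, from the post above
    "application/xhtml+xml",        # generic fallbacks, identical payload
    "text/html",
)

def select_media_type(accept_header: str) -> Optional[str]:
    """Pick the first supported type the client accepts (no q-value handling)."""
    accepted = [part.split(";")[0].strip() for part in accept_header.split(",")]
    for media_type in SUPPORTED:
        if media_type in accepted or "*/*" in accepted:
            return media_type
    return None
```

A bot asks for the specific type; a browser's plain text/html Accept gets the very same representation under a generic label, which is what lets client devs debug the bot's resources in a browser.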
Glenn Block wrote: > > I am encouraged to hear you do this as I had arrived at a similar > conclusion.. > I'll be encouraged to hear you folks who do this, acknowledge that it is a clear-cut violation of REST to send any unregistered media type like that over the public Internet. Intranet, fine. REST mismatches aren't a sin, you just need to recognize them in your system, for what they are. No, I won't be getting down from my high horse on this any time soon. This issue is fundamental to REST. The worst and perhaps most common REST advice throws these unregistered media types around like there's nothing amiss; this must stop because it's at odds with the style. If you're tossing out a media type nobody's ever heard of in your REST advice, please note that your advice is intranet-specific, while the public Internet is (in REST) restricted to IANA-registered types. -Eric
Thinking a little bit more about the recent similar discussion, where some folks noted reservations about treating (200 | 304) as success and !(200 | 304) as failure. Another way to check (using GET or HEAD) would be to send 'If-Match: *' and treat a 412 response as failure. This removes the ambiguity over which 4xx code to expect (or whether it might be a 5xx) if for some reason you don't want to check !(200 | 304). -Eric
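[Editor's note: a sketch of how a client might interpret responses under the 'If-Match: *' variant above. Per RFC 2616, 'If-Match: *' matches any current representation, so 412 Precondition Failed unambiguously means none exists; the function name is hypothetical.]

```python
def exists_via_if_match(status: int) -> bool:
    """Interpret GET/HEAD sent with 'If-Match: *' on a resource."""
    if status == 412:
        return False   # precondition failed: no current representation
    if status in (200, 304):
        return True    # a current representation exists
    raise ValueError(f"unexpected status: {status}")
```

Compared with keying on 404, a 412 here can only mean the precondition failed, so other 4xx/5xx codes never masquerade as "available".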
Hello! I am really interested to find out where it says that on the public Internet the only permissible media types have to be IANA registered. Isn't it enough to say that the type is registered 'somewhere'? Isn't it completely normal already that some media types - even IANA registered ones - are not supported by all clients? So, what makes the IANA registration so special? It doesn't magically make all clients understand all those types. It's just a place where you can go to find the official definition of the type. And when you feel like it, you could possibly implement a proper handler for it. In that case: Isn't the real requirement that an explanation of the type should be available 'somewhere'? And if it's not an official IANA type, should it not suffice if you can provide a place where that definition can be retrieved? Maybe a link to the definition of your 'non standard' media type could be included every time you send an object with that media type? In that case, clients would be free to examine that definition so that they can start to make sense of it. For a public type, they would go to the IANA site and read up on it, in your particular case they go to your server and read about it. If your client is unable to understand that definition, then offering one of the standard types as an alternative (as mentioned by some people in this thread) seems to be reasonable. What am I missing? Juergen On Mon, 2010-08-23 at 02:18 -0600, Eric J. Bowman wrote: > > Glenn Block wrote: > > > > I am encouraged to hear you do this as I had arrived at a similar > > conclusion.. > > > > I'll be encouraged to hear you folks who do this, acknowledge that it > is > a clear-cut violation of REST to send any unregistered media type like > that over the public Internet. Intranet, fine. REST mismatches aren't > a sin, you just need to recognize them in your system, for what they > are. > > No, I won't be getting down from my high horse on this any time soon. 
> This issue is fundamental to REST. The worst and perhaps most common > REST advice throws these unregistered media types around like there's > nothing amiss; this must stop because it's at odds with the style. > > If you're tossing out a media type nobody's ever heard of in your REST > advice, please note that your advice is intranet-specific, while the > public Internet is (in REST) restricted to IANA-registered types. > > -Eric > > > > -- Juergen Brendel Architect, MuleSoft Inc. http://mulesoft.com
Surely it's more scalable and evolvable to allow types to have their eligibility within the system emerge naturally, according to their uptake. If I document a media type and publish that documentation on the web, does that not constitute 'registration' in some sense? Might that be more acceptable if we used URIs to identify media types? Cheers, Mike On Mon, Aug 23, 2010 at 9:18 AM, Eric J. Bowman <eric@...> wrote: > Glenn Block wrote: >> >> I am encouraged to hear you do this as I had arrived at a similar >> conclusion.. >> > > I'll be encouraged to hear you folks who do this, acknowledge that it is > a clear-cut violation of REST to send any unregistered media type like > that over the public Internet. Intranet, fine. REST mismatches aren't > a sin, you just need to recognize them in your system, for what they > are. > > No, I won't be getting down from my high horse on this any time soon. > This issue is fundamental to REST. The worst and perhaps most common > REST advice throws these unregistered media types around like there's > nothing amiss; this must stop because it's at odds with the style. > > If you're tossing out a media type nobody's ever heard of in your REST > advice, please note that your advice is intranet-specific, while the > public Internet is (in REST) restricted to IANA-registered types. > > -Eric
Juergen Brendel wrote: > > I am really interested to find out where it says that on the > public Internet the only permissible media types have to be > IANA registered. > Remembering that HTTP != REST, RFC 2616 says this: " Media-type values are registered with the Internet Assigned Number Authority (IANA [19]). The media type registration process is outlined in RFC 1590 [17]. Use of non-registered media types is discouraged. " The RFC makes no distinction between public/private use of HTTP. So RFC 2616 discourages the use of non-registered media types on intranets, as do I. > > Isn't it enough to say that the type is registered 'somewhere'? > No, because there is only one registration authority defined by HTTP, and that's IANA. > > Isn't it completely normal already that some media types - even > IANA registered ones - are not supported by all clients? > Of course. Which is why I use the term 'ubiquitous media types' when discussing REST -- being registered != being standardized, while being standardized != being registered. To clarify: an unregistered media type on your intranet is a de-facto standard on your intranet, IANA be damned. If it's traversing the Internet, however, REST requires that it be standardized, therefore registered. > > So, what makes the IANA registration so special? It doesn't magically > make all clients understand all those types. It's just a place where > you can go to find the official definition of the type. And when you > feel like it, you could possibly implement a proper handler for it. > That's exactly what *does* make it so special. If your traffic is traversing my public intermediary on the public Internet, then I have every right to insist that you don't use a private media type identifier. I need to be able to make sure that your chocolate doesn't get in my peanut butter, so to speak, and the only way I can do that is if I know where to look for a definition of your media type. Which is what a media type identifier _does_. 
By looking up the identifier in a registry, I can see what media type it is bound to (remember that a single media type may have multiple valid identifiers). That binding is the address of the specification of the media type. Without a registry to correlate a media type identifier to a spec, the media type identifier is semantically meaningless. If I see a media type identifier that isn't in the IANA registry, how do I know where to look, if I *do* want to implement a handler for it? > > In that case: Isn't the real requirement that an explanation of the > type should be available 'somewhere'? And if it's not an official > IANA type, should it not suffice if you can provide a place where > that definition can be retrieved? > No, and no. I'm sending 'application/xbel+xml' over the Internet. This is clearly a REST mismatch -- the media type being identified is standardized, but by Python not IETF, and defines no media type identifier. My use of 'application/xbel+xml' is ad-hoc. To determine what it means, you have to Google for 'xbel' and even then you're just guessing until you've validated the payload. The whole point of self-descriptive messaging is that you _don't_ have to set about Googling and introspecting, in order to confirm what spec a media type identifier points to. There's a registry for that... So the solution to my REST mismatch, is to write up XBEL as an IETF standard, such that it qualifies for the "standards tree" in the IANA registry of media type identifiers. I think that's too strict a requirement, but it isn't my call. Once it's done, though, my REST mismatch disappears because now, like all standards-tree media type identifiers, 'application/xbel+xml' will point to an IETF-standardized media type specification. (Yeah, I know, nothing I call a spec is really a spec, just go with it...) > > Maybe a link to the definition of your 'non standard' media type > could be included every time you send an object with that media type? 
> HTTP defines no such mechanism, HTTP only defines the IANA registry. Which, if you think about it, is quite the improvement over Gopher, where the nascent concept of a protocol-layer token engaging a codec was baked into the spec, requiring the spec to be versioned every time a new token needed creating. Although this did restrict Gopher to a uniform interface of only a very limited number of ubiquitous types. > > In that case, clients would be free to examine that definition so > that they can start to make sense of it. For a public type, they > would go to the IANA site and read up on it, in your particular case > they go to your server and read about it. > This mechanism just isn't in, and won't be added to, HTTP 1.1 -- which is not to say that such an extension mechanism isn't slated for some successor protocol or another, like HTTP 2 or Waka. Roy has dropped hints here and there that Waka will not be adopting IANA, or its associated media type identifiers. This would mean that, like Gopher, some token other than 'text/html' would be used to identify the family of HTML media types. > > If your client is unable to understand that definition, then offering > one of the standard types as an alternative (as mentioned by some > people in this thread) seems to be reasonable. > But the media type identifier must be understood by all participants in the communication! This includes intermediaries, whose owners have every right to insist that you not be able to bypass security restrictions they have implemented on, say, 'image/jpg' by tunneling that codec selection through some private media type identifier that the intermediary owner can't possibly agree to because it isn't any sort of standard, or even if it is, isn't contained in the IANA registry. The Internet is part of the public commons. 
Don't abuse that trust, especially for the sake of avoiding the anarchic scalability and serendipitous re-use that come from choosing ubiquitous media type identifiers, by sending private media type identifiers which are the antithesis of self-descriptive messaging. It isn't fair to anyone who wants to understand the nature of the data traversing their network, and it isn't fair to yourself if your goals correlate with those of REST. -Eric
> > It isn't fair to anyone who wants to understand the nature of the > data traversing their network, and it isn't fair to yourself if your > goals correlate with those of REST. > Exception: If the nature of your data is private, and your goal is to make sure that it's never cached on public intermediaries, there's a protocol for that: HTTPS, not HTTP with private media type identifier. -Eric
Mike Kelly wrote: > > Surely it's more scalable and evolveable to allow types to have their > eligibility within the system emerge naturally, according to their > uptake. > Of course. But as I've pointed out before, 999 out of 1,000 media type identifiers registered have exactly zero chance of ever becoming ubiquitous enough to be considered part of the uniform interface on the public Internet. Then there's Atom, an overnight success which filled a pressing need -- there just aren't that many pressing needs left. > > If I document a media type and publish that documentation on the web, > does that not constitute 'registration' in some sense? Might that be > more acceptable if we used URIs to identify media types? > No, because that isn't what RFC 2616 allows. Nothing in REST overrides the requirements of the chosen protocol. Even if you IANA-register this custom media type of yours, unless it becomes ubiquitous it just isn't part of a uniform interface on the public Internet. It's your unique-snowflake API designed around application-specific needs, instead of trading off such efficiencies for the generality of the uniform interface provided by standardized media types. -Eric
> > Then there's Atom, an overnight success which filled a pressing need > -- there just aren't that many pressing needs left. > One of which is a registered media type identifier for the ubiquitous XBEL media type. Which is why I think approval of application/xbel+xml is a foregone conclusion, once XBEL is rewritten as an IETF RFC -- it's already widely implemented using the much-less-desirable application/xml media type identifier. Until then, I'll live with the REST mismatch that's "discouraged" by RFC 2616. -Eric
<snip> I'll be encouraged to hear you folks who do this, acknowledge that it is a clear-cut violation of REST to send any unregistered media type like that over the public Internet. </snip> nope. <snip> the public Internet is (in REST) restricted to IANA-registered types. </snip> nope. <snip> No, I won't be getting down from my high horse on this any time soon. </snip> What it is that you and the horse you rode in on are doing is of no interest to me. mca http://amundsen.com/blog/ http://mamund.com/foaf.rdf#me On Mon, Aug 23, 2010 at 04:18, Eric J. Bowman <eric@...> wrote: > Glenn Block wrote: > > > > I am encouraged to hear you do this as I had arrived at a similar > > conclusion.. > > > > I'll be encouraged to hear you folks who do this, acknowledge that it is > a clear-cut violation of REST to send any unregistered media type like > that over the public Internet. Intranet, fine. REST mismatches aren't > a sin, you just need to recognize them in your system, for what they > are. > > No, I won't be getting down from my high horse on this any time soon. > This issue is fundamental to REST. The worst and perhaps most common > REST advice throws these unregistered media types around like there's > nothing amiss; this must stop because it's at odds with the style. > > If you're tossing out a media type nobody's ever heard of in your REST > advice, please note that your advice is intranet-specific, while the > public Internet is (in REST) restricted to IANA-registered types. > > -Eric >
Hello! Thank you for the detailed response, Eric. On Mon, 2010-08-23 at 04:13 -0600, Eric J. Bowman wrote: > But the media type identifier must be understood by all participants in > the communication! This includes intermediaries, whose owners [who] > can't possibly agree to [a proprietary type] because it isn't any > sort of standard, or even if it is, isn't contained in the IANA > registry. Interesting point about intermediaries. > The Internet is part of the public commons. Don't abuse that trust, > ... by sending private media type identifiers which are the > antithesis of self-descriptive messaging. From what I read here and elsewhere, it seems that it's not very helpful to say "application/xml" or "application/json", since that is not very descriptive. If I use XML or JSON in a particular manner, I would be tempted to say "application/xml+foo" or "application/json+foo". Yet, this - you say - shouldn't be done if we are using any part of the Internet. But is the meaningless "application/xml" (or .../json) the only alternative I have then? Likewise, there is the whole concept of using media types, rather than URIs, for versioning. I guess that's not allowed then either? Juergen
On Aug 23, 2010, at 10:41 PM, Juergen Brendel wrote: > > Hello! > > > From what I read here and elsewhere, it seems that it's not very helpful to say > "application/xml" or "application/json", since that is not very > descriptive. Right. application/xml just means something like: "Stuff this into an XML parser". > If I use XML or JSON in a particular manner, I would be > tempted to say "application/xml+foo" or "application/json+foo". What you should do is mint a new type if you define a new intention of processing/interpreting the payload. application/atomsvc+xml is intended to be processed and interpreted very differently from text/html. > Yet, > this - you say - shouldn't be done if we are using any part of the > Internet. Which is just plain wrong. Mint your own type if it makes sense to you (say, if you envision enough consumers and consumer scenarios that cannot be satisfied on a purely human basis (with HTML)). If your type is of use it will be adopted more and more. Take it to IANA for registration then if you like. If your type is not so interesting it will simply sort of die. The only real harm that you can do is if you publish and use some extensions or special syntax rules that cause clients to be coupled to your service instead of to a standard type. When the number of clients grows and you want to change anything, you are right there in good ol' RPC, tight-coupling land. > But is the meaningless "application/xml" (or .../json) the > only alternative I have then? No. Mint a type. Get people interested. Evolve the type for public good and plan for re-usability. > > Likewise, there is the whole concept of using media types, rather than URIs > for versioning. I guess that's not allowed then either? Very simple: if you evolve the syntax of a media type in a forward-compatible way, increase the version ID of the spec and/or the XML and keep the type name. If you make forward-incompatible changes, define a new type. 
Conneg between the old type and the new type is your API versioning mechanism. Jan
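Jan's conneg-as-versioning idea can be sketched roughly as follows. The media type names are hypothetical, and this is a minimal server-side negotiation only: real conneg also handles q-values, wildcards, and 406 responses.

```python
# Sketch of media-type-based API versioning via content negotiation.
# Hypothetical type names; q-values and wildcards omitted for brevity.

def negotiate(accept_header, available):
    """Pick the first client-requested type the server supports."""
    requested = [part.split(";")[0].strip()
                 for part in accept_header.split(",")]
    for media_type in requested:
        if media_type in available:
            return media_type
    return None  # in real HTTP, respond 406 Not Acceptable

AVAILABLE = ["application/vnd.example.foo.v2+xml",  # new, incompatible syntax
             "application/vnd.example.foo+xml"]     # original type

# An old client only knows the original type:
print(negotiate("application/vnd.example.foo+xml", AVAILABLE))
# A new client prefers the new type but accepts the old one:
print(negotiate("application/vnd.example.foo.v2+xml, "
                "application/vnd.example.foo+xml", AVAILABLE))
```

Both clients keep working against the same URI; the representation served is selected by the Accept header rather than a version number in the URI.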
mike amundsen wrote: > > <snip> > I'll be encouraged to hear you folks who do this, acknowledge that it > is a clear-cut violation of REST to send any unregistered media type > like that over the public Internet. > </snip> > nope. > Yes, absolutely yes. If you would like to explain how self-descriptive messaging works with opaque media type identifiers, knock yourself out, instead of giving a one-word rebuttal... > > <snip> > the public Internet is (in REST) restricted to IANA-registered types. > </snip> > nope. > Yes, absolutely yes. REST has a constraint called "self-descriptive messaging" which can't be met if nobody can look up your media type identifier. REST is not based on opaque, anything-goes strings as media type identifiers. REST *is* based on self-descriptive messaging where anyone can figure out what your media type identifier means, because its presence in the IANA registry makes it transparent. -Eric
This may be feeding a thread that doesn't need it, but.... On 08/23/2010 02:31 PM, Eric J. Bowman wrote: > > > mike amundsen wrote: > > > > <snip> > > I'll be encouraged to hear you folks who do this, acknowledge that it > > is a clear-cut violation of REST to send any unregistered media type > > like that over the public Internet. > > </snip> > > nope. > > > > Yes, absolutely yes. If you would like to explain how self-descriptive > messaging works with opaque media type identifiers, knock yourself out, > instead of giving a one-word rebuttal... > When I read the word "violation" I note that it has at least two different meanings. One in the context of relevant RFCs (such as for MIME, which you note merely discourages use of unregistered types) where actual conformance criteria exist, and the other in the context of REST. REST, as an architectural style, doesn't really have "violations" - there are approaches which look more like the style, and approaches that look less so. The word "violation" completely undercuts your argument, as REST (as described by Roy), as near as I can tell never embraces the notion that "conformance" exists. Nor can it, so far as I can tell, because as a style, it is at least two levels of abstraction removed from actual design/implementation. > > > > > <snip> > > the public Internet is (in REST) restricted to IANA-registered types. > > </snip> > > nope. > > > > Yes, absolutely yes. REST has a constraint called "self-descriptive > messaging" which can't be met if nobody can look up your media type > identifier. REST is not based on opaque, anything-goes strings as > media type identifiers. REST *is* based on self-descriptive messaging > where anyone can figure out what your media type identifier means, > because its presence in the IANA registry makes it transparent. 
> I thought of mentioning this before, but with the onslaught of mobile applications, we have the perfect situation for those applications to develop new media types without first getting an IANA registration. Specifications are frequently in development for *years* before the registration might be finalized, so the notion that early versions of those specifications cannot be used across the internet, to figure out how well/poorly they actually work, is just silly. I agree transparency is good. Your strawman seems to be that this must mean IANA registration, which is a separate question, and REST is certainly not restricted to those. Eric Johnson
Juergen Brendel wrote:
>
> From what I read here and elsewhere is that it's not very helpful to
> say "application/xml" or "application/json", since that is not very
> descriptive. If I use XMl or JSON in a particular manner, I would be
> tempted to say "application/xml+foo" or "application/json+foo".
>
I think you meant /foo+xml or /foo+json. As I've pointed out many
times before, there is no extensibility defined for application/json
like there is for application/xml -- without first changing the JSON
media type identifier document to allow for it, /foo+json is a non
sequitur -- you _can't_ register it.
Since application/*+json isn't and can't be defined, why would there be
any expectation that caches won't just ignore it?
You also can't go about minting application/foo+xml, because that
identifier string indicates the 'standards tree', i.e. 'foo' must refer
to some IETF RFC. You can register application/vnd.foo+xml and,
provided nobody else has registered it, you're good to go.
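The naming rules Eric describes here (the registration trees from the media type registration RFCs) can be illustrated with a rough classifier. This is a sketch only, not a full media type parser:

```python
# Rough sketch of the media-type registration trees discussed above:
# subtypes under "vnd." are vendor-tree, "prs." personal-tree, and a
# bare subtype like "foo+xml" claims the standards tree (an IETF spec).
def registration_tree(media_type):
    subtype = media_type.split("/", 1)[1]
    if subtype.startswith("vnd."):
        return "vendor"       # e.g. application/vnd.foo+xml -- fair game
    if subtype.startswith("prs."):
        return "personal"
    if subtype.startswith("x-") or subtype.startswith("x."):
        return "unregistered"
    return "standards"        # 'foo' must refer to an IETF standard

print(registration_tree("application/foo+xml"))      # claims standards tree
print(registration_tree("application/vnd.foo+xml"))  # vendor tree
```

This is why minting application/foo+xml on your own is out of bounds, while application/vnd.foo+xml is registrable by anyone who gets there first.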
>
> Yet, this - you say - shouldn't be done if we are using any part of
> the Internet. But is the meaningless "application/xml" (or .../json)
> the only alternative I have then?
>
I didn't say it shouldn't be done; I said it's a REST mismatch if you
do. The application/xml media type identifier isn't meaningless; it
means the payload is some random XML with a schema, and indicates that
links take the form of rdf:about, XLink, or XInclude. When this
condition evaluates to true, I use application/xml.
If I need to send a payload that's an annotated list of links which may
be loaded into a browser as bookmarks, the first thing I do is choose a
media type that's meant for such a task -- XBEL. Now, I just need to
figure out what media type identifier to send -- application/xml being
a horrible choice, because it doesn't tell anyone that the payload is a
hierarchical collection of annotated links as <bookmark href='{URL}'>.
To do that requires the minting of a new media type identifier, which
announces to the world that the payload has a root element of <folder>
and that links take the form of nested <bookmark href='{URL}'> elements.
So I'm sending application/xbel+xml, but since that isn't registered it
cannot be said to meet the self-descriptive messaging constraint. The
REST mismatch will clear when I follow up on registering the identifier.
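For readers unfamiliar with XBEL, the <bookmark href='{URL}'> structure described above looks like this. The document content is invented for illustration; the parsing is a minimal sketch using the standard library:

```python
import xml.etree.ElementTree as ET

# A minimal XBEL document: a <folder> hierarchy of annotated links,
# each link carried as <bookmark href="...">. (Invented example.)
XBEL = """<?xml version="1.0"?>
<xbel version="1.0">
  <folder>
    <title>Blogroll</title>
    <bookmark href="http://example.org/feed">
      <title>Example Feed</title>
    </bookmark>
    <bookmark href="http://example.com/blog">
      <title>Example Blog</title>
    </bookmark>
  </folder>
</xbel>"""

root = ET.fromstring(XBEL)
# Collect the annotated links, regardless of folder nesting depth:
links = [b.get("href") for b in root.iter("bookmark")]
print(links)
```

Nothing in the application/xml identifier tells an intermediary that this payload carries bookmarks; only a dedicated identifier like application/xbel+xml would announce that processing model.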
If I need to send a payload that's an associative array of integers, I
use application/json to let the whole world know that the numeric
strings contained in the payload (which lack decimal points) are to be
evaluated as integers. If JSON is being used properly, there's no need
for any subtypes -- at the protocol layer we just don't care about the
nature of the payload beyond how to decode it. Media type identifiers
shouldn't define the meaning of the payload, doing so introduces tight
coupling that REST is meant to avoid.
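The integer-array case above might look like the following sketch (the payload and header dict are hypothetical, just showing that generic application/json already conveys everything a decoder needs):

```python
import json

# The payload: an associative array of integers. application/json
# already tells any recipient how to decode this; no subtype needed.
payload = json.dumps({"apples": 3, "oranges": 7})
headers = {"Content-Type": "application/json"}

# Any generic JSON codec recovers the integers, with no out-of-band
# knowledge of what the payload "means":
decoded = json.loads(payload)
print(decoded)
```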
>
> Likewise, there is the whole concept using media types, rather than
> URIs for versioning. I guess that's not allowed then either?
>
The whole concept of versioning is antithetical to REST, via URI or
media type identifier. If the codec changes, then the media type
identifier needs to be changed, not versioned. If the codec doesn't
change, neither should the media type. Consider image types:
http://tech.groups.yahoo.com/group/rest-discuss/message/16267
-Eric
Jan Algermissen wrote: > > > Yet, this - you say - shouldn't be done if we are using any part of > > the Internet. > > Which is just plain wrong. > No, it isn't. Without some sort of registry like IANA, there's no way to avoid media type identifier *collisions*, by which I mean how can anybody know that application/foo+xml from example.org has the same meaning as application/foo+xml from example.com? There's a reason HTTP frowns on unregistered types and REST constrains against them. Messaging isn't self-descriptive if there's no record of your media type identifier in the IANA registry. *It can't be.* It is simply NOT the REST style to treat media type identifiers as opaque strings that only need to be understood by producers and consumers of data, when using the Internet, because you're taking advantage of an existing cache infrastructure which requires media type identifiers to mean the same thing regardless of origin domain. Unregistered media types carry no such guarantee not to vary according to origin domain. The whole purpose of the IANA registry of media type identifiers is to avoid exactly such collisions. > > Mint your own type if it makes sense to you (say, if you envision > enough consumers and consumer scenarios that can not be satisfied on > a purely human basis (with HTML)). > I wouldn't be that lenient; I'd say only mint a new media type identifier if no existing media type identifier describes how to process the payload. No existing media type identifier describes how to process XBEL as a hierarchical collection of annotated links, so I minted one. If I had overlooked some other identifier and minted one because it "makes sense to me" I'd be violating the uniform interface, which is based on the principle of generality. -Eric
Eric Johnson wrote: > > When I read the word "violation" I note that it has at least two > different meanings. One in the context of relevant RFCs (such as for > MIME, which you note merely discourages use of unregistered types) > where actual conformance criteria exist, and the other in the context > of REST. REST, as an architectural style, doesn't really have > "violations" > Sure it does. Using cookies to store application state clearly goes against REST. Whether that's called a mismatch, a violation, or a failure to implement a constraint is just debating semantics. Using an opaque, unregistered media type identifier can't even remotely be considered RESTful, because REST constrains messaging to be self-descriptive. > > - there are approaches which look more like the style, and approaches > that look less so. The word "violation" completely undercuts your > argument, as REST (as described by Roy), as near as I can tell never > embraces the notion that "conformance" exists. Nor can it, so far as > I can tell, because as a style, it is at least two levels of > abstraction removed from actual design/implementation. > To quote Roy: "What needs to be done to make the REST architectural style clear on the notion that hypertext is a constraint? In other words, if the engine of application state (and hence the API) is not being driven by hypertext, then it cannot be RESTful and cannot be a REST API. Period. Is there some broken manual somewhere that needs to be fixed?" Clearly, then, Roy believes that failing to meet a constraint (in this case, the hypertext constraint) means the system cannot be RESTful. I constantly emphasize that REST is a tool for guiding the long-term evolution of a system. A system may not meet all of REST's constraints when introduced, but REST provides the means to identify mismatches, which can then be dealt with over time (unless they aren't recognized as such). The long-term goal is conformance with the constraints of REST. 
> > > Yes, absolutely yes. REST has a constraint called "self-descriptive > > messaging" which can't be met if nobody can look up your media type > > identifier. REST is not based on opaque, anything-goes strings as > > media type identifiers. REST *is* based on self-descriptive > > messaging where anyone can figure out what your media type > > identifier means, because its presence in the IANA registry makes > > it transparent. > > > > I thought of mentioning this before, but with the onslaught of mobile > applications, we have the perfect situation for those applications to > develop new media types without first getting an IANA registration. > Specifications are frequently in development for *years* before the > registration might be finalized, so the notion that early versions of > those specifications cannot be used across the internet, to figure out > how well/poorly they actually work, is just silly. > I agree. I also never said anything along the lines of, "Thou shalt not use unregistered media types." The point is that until the media type identifier is common knowledge it isn't part of a uniform interface. Early adoption of Atom meant using an early form of media type identifier -- these still aren't RESTful to use. Once Atom was finalized, it included a new media type identifier; systems using that identifier, once it came to refer to a standard, never violated the self-descriptive messaging constraint. > > I agree transparency is good. Your strawman seems to be that this > must mean IANA registration, which is a separate question, and REST is > certainly not restricted to those. > REST unequivocally requires self-descriptive messaging. Simply put, this means that media type identifiers must be registered somewhere. In Gopher, identifiers are baked into the spec. In HTTP, the IANA registry is baked into the spec. Whatever protocol you choose to instantiate REST must support the notion of self-descriptive messaging. 
If you're using HTTP, the _only_ way to meet that constraint is by using IANA-registered media type identifiers, because no other mechanism is defined for correlating identifiers to media types. REST is not based on opaque identifiers only understood by producer and consumer, it is based on re-using identifiers that are commonly understood by all parties to the communication, including intermediaries. If your REST system uses HTTP over the Internet, then there is no exception to using IANA-registered types -- there is simply no other way to meet the self-descriptive messaging constraint. -Eric
Hey guys, I've been thinking about this for a long time and I think there is an existing media type which will be completely adequate for all the use cases you are likely to encounter. Topic Maps - http://www.topicmaps.org/ - see http://www.ontopia.net/topicmaps/materials/tao.html for the canonical description Has anyone other than me thought about this? TM already has all the concepts key to REST incorporated into the language. The problems I see with it are: * Its XML serialization is VERY verbose (there is a more terse text serialization though). * I'm not sure how many FOSS libraries there are for it (TM is a standard for many governments so I think there are some that are well written). But I know that I could use it to build a (nearly) arbitrarily complex self-coding system from it (at least within the bounds of the types of systems we build with REST constraints). Thoughts? Adam
Adam: It's not clear to me that topic map representations can express a wide range of application control information (e.g. define queries, the possible write operations for the current representation, etc.). Section 3.5 in the "TAO" link you provided alludes to some of these issues, but provides no references and I am unaware of any work in this area. Any chance you can provide links that might help me get a better understanding on how TopicMaps can express Read/Write semantics similar to the way it is done in HTML, VoiceXML, and other media-types? Thanks. mca http://amundsen.com/blog/ http://mamund.com/foaf.rdf#me On Fri, Aug 20, 2010 at 13:08, littlefyr <adam@...> wrote: > Hey guys, > > I've been thinking about this for a long time and I think there is an > existing media type which will be completely adequate for all the use cases > you are likely to encounter. > > Topic Maps - http://www.topicmaps.org/ - see > http://www.ontopia.net/topicmaps/materials/tao.html for the canonical > description > > Has any one other than me thought about this? TM already has all the > concepts key to REST incorporated into the language. > > The problems I see with it are: > > * Its XML serialization is VERY verbose (there is a more terse text > serialization though). > * I'm not sure how many FOSS libraries there are for it (TM is a standard > for many governments so I think there are some that are well written). > > But I know that I could use it to build a (nearly) arbitrarily complex > self-coding system from it (at least within the bounds of the types of > systems we build with REST constraints) > > Thoughts? > Adam
Hiya, On Sat, Aug 21, 2010 at 3:08 AM, littlefyr <adam@...> wrote: > Has any one other than me thought about this? TM already has all > the concepts key to REST incorporated into the language. Not sure I can agree with that statement. First, I suspect what you're talking about here is the TM Data Model and various implementations of it, and while it's true that there are smaller core ontologies and implementation wrappers that people have created which make a perfectly RESTful TM world, the TM standards themselves don't hold much RESTness in them. Having said that, I and a few others have individually created various RESTful interfaces to integrate with the TM paradigms, and they kick ass! The trouble is that you also need to understand basic identity management and ontology work in order to make much sense of the TM world, although I don't think a little education is much to ask when the solution to all your problems is offered. :) > The problems I see with it are: > * Its XML serialization is VERY verbose (there is a more terse text serialization though). There are more compact XML versions like TM/XML and CXTM, as well as TM/JSON, plus various notation languages like LTM, and they could all be very fast to serialize (especially TM/JSON is interesting as it also leverages more common web architectural concepts). > * I'm not sure how many FOSS libraries there are for it (TM is a standard for many governments so I think there are some that are well written). There's a small forest of them, from big and complex (like Ontopia) or simple (TinyTIM) Java, down to Python, Ruby, and PHP (embedded, or LAMP, or somewhere in the middle). Heck, there's even one in XSLT. :) > But I know that I could use it to build a (nearly) arbitrarily complex self-coding system from it (at least within the bounds of the types of systems we build with REST constraints) Thoughts? Yes, yes, and yes. 
I have various forms of ontologies that embed all of REST, and you can use that to even create self-assembled systems. The possibilities are endless, but require a bit of mental effort. Regards, Alex -- Project Wrangler, SOA, Information Alchemist, UX, RESTafarian, Topic Maps --- http://shelter.nu/blog/ ---------------------------------------------- ------------------ http://www.google.com/profiles/alexander.johannesen ---
This discussion of media type identifiers refers to traffic traversing the Internet only. I'm feeling all nostalgic for Gopher... the ISP I opened in 1994 had a gopherspace to fetch the customer agreement and such as text files for printing. It took me another couple of months to get httpd and a website up and running, despite having started that work well before attempting the Gopher (which took all of five minutes to deploy). I killed off my Gopher about a year later. The state of Gopher affairs in 2010 is not good; support was removed from IE 7, and the next Firefox release as of this writing kills it off, too. Gopher may be used to instantiate REST, with one big glaring mismatch, but the results are clearly "the REST style" otherwise. Then we'll see what happens if we try to Gopher an Atom system, to prove my point about opaque vs. transparent media type identifiers... My XBEL blogroll example would be fun to turn into a gopherspace that reads the blogroll.xbel file instead of a filesystem; after all, Gopher is just a hierarchical collection of annotated links. Updating the deployed Gopher from one that reads the filesystem, to one that reads an XBEL file, is an implementation detail hidden behind a uniform interface. So, Gopher meets the layered-system constraint. Identification of resources? Check. Manipulation of resources via representation? Check. URIs ending in .html aren't considered to be HTML, they're considered to be whatever media type they're tagged as, so a user agent might be instructed to treat it as raw text or transfer it as a binary file. It's this distinction that precludes many protocols from instantiating REST, although such protocols may be included in REST systems to the extent that they're compliant with the uniform interface. (FTP considers the type to be whatever the host OS has registered for the filename extension. While it may be included in a REST system to the extent allowed by standard methods, FTP cannot instantiate REST.) 
Gopher is a stateless client-server protocol. But it lacks caching, so it can't fully instantiate REST (client-only caching isn't the same thing as anarchic scalability). IMO, a REST mismatch like this is not as big a deal as a mismatch which goes *against* the style, but there can be no doubt that caching is a constraint -- REST says it is, and only defines one optional constraint, code on demand -- which you can do in Gopher by serving HTML with embedded javascript, btw, provided your client can distinguish a retrieved text representation as HTML. Hypertext constraint? Check. Application state is driven by hyperlink protocol headers. Where Gopher really falls short is its inability to differentiate between, say, Atom and HTML over the wire -- both are either binary files for transfer, or raw text for display. Which leaves self-descriptive messaging. Gopher defines a limited number of media type identifiers, which clients use to select a codec for processing the payload, just like HTTP. This decoupling is what allows clients and servers to evolve independently. Only those identifiers defined by Gopher are self-descriptive, so only those identifiers are part of Gopher's uniform interface. Unless and until the spec changes, it just doesn't account for serving Atom as Atom or HTML as HTML -- although an Atom feed could be refactored into links-as-protocol-headers. To extend Gopher to account for HTML vs. Atom over-the-wire would require the introduction of new, custom, unregistered, non-ubiquitous, private, nonstandard identifiers to point to these ubiquitous media types instead of '0' for both. It could be done, but without changing the spec to account for these new identifiers they just can't be considered part of the uniform interface of the Gopher protocol, which discourages such extensions while providing a mechanism for them and stressing that the results will only interoperate by chance, not design. 
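Gopher's uniform interface of baked-in type selectors (RFC 1436, section 4) can be sketched as a menu-line parser. The menu line below is a hypothetical example, and the type table is a small subset of the spec's selectors:

```python
# Sketch of Gopher's self-descriptive menu items (RFC 1436, section 4).
# Each menu line starts with a one-character type selector -- Gopher's
# entire "registry" of media type identifiers is baked into the spec.
GOPHER_TYPES = {
    "0": "text file",
    "1": "submenu (directory)",
    "9": "binary file",
    "g": "GIF image",
    "I": "image",
}

def parse_menu_line(line):
    """Split one tab-delimited Gopher menu line into its fields."""
    type_and_display, selector, host, port = line.split("\t")
    item_type = type_and_display[0]
    return {
        "type": GOPHER_TYPES.get(item_type, "unknown to the spec"),
        "display": type_and_display[1:],
        "selector": selector,
        "host": host,
        "port": int(port),
    }

item = parse_menu_line("0My blogroll\t/blogroll.xbel\texample.org\t70")
print(item["type"], item["selector"])
```

Any selector outside the spec's table is, to a standard client, simply unknown, which is the Gopher analogue of an unregistered media type identifier in HTTP.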
Using a private identifier that's only understood by nonstandard clients and servers couples those servers to those clients while eliminating the possibility of serendipitous re-use by anyone with a standard Gopher client. That isn't self-descriptive messaging, that's exactly the sort of library-based API style Roy compares to the network-based API REST style in 6.5.1: " A library-based API provides a set of code entry points and associated symbol/parameter sets so that a programmer can use someone else's code to do the dirty work of maintaining the actual interface between like systems, provided that the programmer obeys the architectural and language restrictions that come with that code. The assumption is that all sides of the communication use the same API, and therefore the internals of the interface are only important to the API developer and not the application developer. ... Why is this important? Because it differentiates a system where network intermediaries can be effective agents from a system where they can be, at most, routers. " See, were Gopher cacheable, not only would private identifiers restrict the service to nonstandard clients, it would also short-circuit the ability of intermediaries to participate in the communication beyond being routers, i.e. the HTTPS effect (tunneling, not encryption). So it is with non-ubiquitous media type identifiers in HTTP -- the messaging is not self-descriptive, because it relies on understanding nonstandard implementation details (library-based APIs) instead of common knowledge (network-based APIs) inherent in ubiquitous media types. So the tradeoffs involved in using Gopher instead of HTTP to serve up my blogroll are easy to identify; the useful knowledge REST gives me is what informs my choice of protocol. I'm not willing to sacrifice the anarchic scalability of HTTP caching, or the improved serendipitous re-use of serving a list of annotated links as payload as opposed to headers. 
Nor am I willing to release an application-specific client library that understands unregistered identifiers and couple my service to it, to use Gopher. My choice of the ubiquitous XBEL media type is RESTful, but there's no self-descriptive-messaging-compliant way to serve it over HTTP at this time. Which is a lot less of an issue than using a custom media type identifier, particularly an unregistered one, to refer to some media type nobody has ever heard of (vs. XBEL, which has specific handling widely implemented by many browsers and online bookmarking services). Serving ubiquitous XBEL as ubiquitous application/xml is also not the answer. XBEL has a specific processing model that's widely implemented, and is not reflected by using application/xml as an identifier. No other XML language triggers this processing model, so XBEL is crying out for a unique differentiator to reflect its real-world handling. The only way to RESTfully serve XBEL is to use a media type identifier which engages that codec. Once a registered identifier comes along, XBEL joins HTML, Atom, SVG and other existing markup languages capable of driving a hypertext API, as part of HTTP's uniform interface. Until then, despite being a ubiquitous media type, serving XBEL is a REST mismatch. See RFC 1436 (specifically section 4) for an example of an almost-RESTful protocol with a similar uniform interface for defining network-based APIs rather than library-based APIs... Registered identifiers pointing to obscure, application-specific processing models may meet the self-descriptive messaging constraint, but they don't meet the requirement of standardized types. On the Internet, only those media types with widely-deployed processing models and registered identifiers are part of HTTP's uniform interface, just as with Gopher, only 0-9, g, I and T are part of the uniform interface. Such types continue to evolve. Hopefully, application/xbel+xml will eventually be accepted into the IANA standards tree. 
Aside from the standards body, XBEL meets all the requirements. But in the here-and-now world of today, serving XBEL is unequivocally a REST mismatch. The goal of REST development (like RFC 1436.4) is to refactor your system such that it can utilize the uniform interface provided by your protocol of choice. If your starting position is to define a custom media type to send over the Internet, you aren't following the REST style. Not in Gopher, and not in HTTP. " A network-based API is an on-the-wire syntax, with defined semantics, for application interactions. A network-based API does not place any restrictions on the application code aside from the need to read/write to the network, but does place restrictions on the set of semantics that can be effectively communicated across the interface. " The uniform interface restriction is that the semantics of an HTTP interaction be transparently defined by ubiquitous media types, via the mechanism of the IANA registry of media type identifiers. Anything-goes, opaque strings do not effectively communicate semantics across the interface. -Eric
On Mon, Aug 23, 2010 at 10:31 PM, Eric J. Bowman <eric@...> wrote: > mike amundsen wrote: >> >> <snip> >> I'll be encouraged to hear you folks who do this, acknowledge that it >> is a clear-cut violation of REST to send any unregistered media type >> like that over the public Internet. >> </snip> >> nope. >> > > Yes, absolutely yes. If you would like to explain how self-descriptive > messaging works with opaque media type identifiers, knock yourself out, > instead of giving a one-word rebuttal... There's more than enough self-descriptiveness from other HTTP headers (Vary, ETag, Content-Location, Cache-Control, Link, etc.) to power most types of layered/intermediary mechanism; 'opaque' media type identifiers don't affect those. I'd be interested to understand exactly what you believe the costs of such identifiers are to self-descriptiveness, and what types of layered/intermediary mechanism would be affected. >> >> <snip> >> the public Internet is (in REST) restricted to IANA-registered types. >> </snip> >> nope. >> > > Yes, absolutely yes. REST has a constraint called "self-descriptive > messaging" which can't be met if nobody can look up your media type > identifier. REST is not based on opaque, anything-goes strings as > media type identifiers. REST *is* based on self-descriptive messaging > where anyone can figure out what your media type identifier means, > because its presence in the IANA registry makes it transparent. > You can't look something up without a registry? Pro-tip: http://www.google.com/search?q=%22application/hal%2Bxml%22 If crawlers were to provide a pagerank equivalent for media types, according to public usage, could we do away with the registry altogether? If not, why not? Cheers, Mike
On Mon, Aug 23, 2010 at 11:23 PM, Eric J. Bowman <eric@...> wrote: > Jan Algermissen wrote: >> >> > Yet, this - you say - shouldn't be done if we are using any part of >> > the Internet. >> >> Which is just plain wrong. >> > > No, it isn't. Without some sort of registry like IANA, there's no way > to avoid media type identifier *collisions*, by which I mean how can > anybody know that application/foo+xml from example.org has the same > meaning as application/foo+xml from example.com? Why do *collisions* matter that much? Evolution has its own mechanisms for dealing with this in the real world, why do they not apply here? > There's a reason HTTP frowns on unregistered types and REST constrains > against them. Messaging isn't self-descriptive if there's no record of > your media type identifier in the IANA registry. *It can't be.* The messages are *less* descriptive, not non-descriptive. "This is the media type (you don't recognize it)" is still descriptive, and besides - there's way more to HTTP messages than just the Content-Type. > It is simply NOT the REST style to treat media type identifiers as > opaque strings that only need to be understood by producers and > consumers of data, when using the Internet, because you're taking > advantage of an existing cache infrastructure which requires media type > identifiers to mean the same thing regardless of origin domain. Why would a cache care whether or not the origin server's notion of xyz media type conflicts with some other version? That wouldn't even matter if the origin server was contradicting /itself/ across multiple resources - let alone another server entirely. Cheers, Mike
Mike Kelly wrote: > > There's more than enough self-descriptiveness from other HTTP headers > (Vary, ETag, Content-Location, Cache-Control, Link, etc.) to power > most types of layered/intermediary mechanism; 'opaque' media type > identifiers don't affect those. > Yes, there is a lot more to the self-descriptive messaging constraint than media type identifiers. But it's fundamental to REST that you send Content-Type to allow intermediaries and user agents to identify the processing model of the payload, where the response includes a payload. This is not required by HTTP, but it is a REST constraint. Opaque media type identifiers do have an impact on intermediaries; many caches only concern themselves with ubiquitous media types and ignore everything else as not accounting for enough traffic to warrant caching. Not that caching is the only action an intermediary can take. If no intermediary has any clue about the processing model of a custom media type, then all it can be is a router, not a participant -- regardless of what other headers are sent. See REST 6.5.1. > > I'd be interested to understand exactly what you believe the costs of > such identifiers are to self-descriptiveness, and what types of > layered/intermediary mechanism would be affected. > No, I'm not going to rewrite Roy's thesis, which explains the benefits and tradeoffs of the uniform interface in depth: "REST enables intermediate processing by constraining messages to be self-descriptive: interaction is stateless between requests, standard methods and media types are used to indicate semantics and exchange information, and responses explicitly indicate cacheability." Intermediate processing is _not_ enabled _except_ through the use of stateless interactions consisting of standard methods and media types. The entire notion of anarchic scalability depends upon self-descriptive messaging, which by definition requires the use of standard media types, not just cache-control headers. 
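Eric's claim that many caches only bother with ubiquitous media types can be illustrated with a toy policy check. Everything here (the whitelist, the should_cache helper, the two-condition policy) is a hypothetical sketch for illustration, not the behavior of any particular cache implementation:

```python
# Toy cache admission policy: participate only for recognized, ubiquitous
# media types AND explicit freshness information. An unregistered type
# fails the first test no matter what other headers say.

CACHEABLE_TYPES = {"text/html", "text/css", "image/jpeg", "image/png",
                   "application/javascript"}

def should_cache(headers: dict) -> bool:
    ctype = headers.get("Content-Type", "").split(";")[0].strip().lower()
    cc = headers.get("Cache-Control", "")
    return ctype in CACHEABLE_TYPES and "no-store" not in cc

print(should_cache({"Content-Type": "image/jpeg",
                    "Cache-Control": "max-age=60"}))   # True
print(should_cache({"Content-Type": "application/vnd.example+xml",
                    "Cache-Control": "max-age=60"}))   # False: unknown type
```

The sketch makes the point that Cache-Control alone is not enough for an intermediary to become a participant; the media type identifier is part of the decision.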
> > You can't look something up without a registry? Pro-tip: > > http://www.google.com/search?q=%22application/hal%2Bxml%22 > First of all, please refer to RFC 4288. Your syntax is that of the standards tree, and will never be approved by IANA unless there's a corresponding RFC. You need to choose an appropriate tree, then take five whole minutes out of your day to register your identifier. Why is this such an insurmountable obstacle, that using Google is preferable? Yes, you can look anything up on Google, but Google isn't a registry, and a registry is required to avoid collisions where multiple parties are using the same identifier for wildly different purposes. Google can't keep anyone else from using your identifier for another purpose. The IANA registry exists for exactly that reason. > > If crawlers were to provide a pagerank equivalent for media types, > according to public usage, could we do away with the registry > altogether? If not, why not? > You're free to suggest any new mechanism as a successor to the IANA registry. But, in the real world of today, if you're using HTTP, NO, you can't do away with the IANA registry because it's an integral part of the protocol. Such suggestions are only relevant to discussions of successor protocols, like HTTP 2 or Waka. But, I think that's a horrible idea, since it does nothing to avoid collisions -- only allows a set-in-stone identifier's meaning to change over time according to the whims of the mob. Such anarchy is not conducive to the goals of a uniform interface, and is certainly at odds with the long-term stability that's a goal of REST. The self-descriptive messaging constraint of REST _cannot_ be met in HTTP over the Internet, _unless_ the media type is IANA-registered. BY DEFINITION. No amount of arguing with me can change this reality... Unless of course there's something wrong with my grasp of plain English: "REST enables intermediate processing by constraining messages to be self-descriptive... 
STANDARD MEDIA TYPES are used to indicate semantics and exchange information..." Seriously, people, what part of that am I somehow getting wrong? How else can one identify what STANDARD MEDIA TYPE is used, except through the IANA registry which correlates identifiers to media types, which is the only mechanism HTTP defines to determine the media type from any identifier sent over the Internet? Once again, how this is at all controversial just befuddles me. :-( -Eric
Mike Kelly wrote: > > > No, it isn't. Without some sort of registry like IANA, there's no > > way to avoid media type identifier *collisions*, by which I mean > > how can anybody know that application/foo+xml from example.org has > > the same meaning as application/foo+xml from example.com? > > Why do *collisions* matter that much? Evolution has its own mechanisms > for dealing with this in the real world, why do they not apply here? > Because we aren't talking about evolution, we're talking about RFC 2616. When I see a Content-Type header, the only thing I know to do is to look in the IANA registry to see what media type it correlates with, because that's the ONLY place RFC 2616 tells me to look. If it isn't there, it's an unknown and subject to collisions, so I ignore it. The goal in REST is not to have intermediaries ignore your payload, or be restricted to only caching it, if that... None of these problems occur when using registered identifiers to refer to standard media types... so tell me, why is it we want to *avoid* doing what the architectural style requires and HTTP strongly suggests? Less interoperability?!? Again, the pushback befuddles me, why we must debate against every single plain-as-day REST requirement... > > > There's a reason HTTP frowns on unregistered types and REST > > constrains against them. Messaging isn't self-descriptive if > > there's no record of your media type identifier in the IANA > > registry. *It can't be.* > > The messages are *less* descriptive, not non-descriptive. "This is the > media type (you don't recognize it)" is still descriptive, and besides > - there's way more to HTTP messages than just the Content-Type. > There is a world of difference between not recognizing a media type, and not being able to recognize it because it isn't registered. 
Without a registered media type, the message is NOT self-descriptive: "REST enables intermediate processing by constraining messages to be self-descriptive: interaction is stateless between requests, standard methods and media types are used to indicate semantics and exchange information, and responses explicitly indicate cacheability." ALL standard media types have corresponding identifiers in the IANA registry (except XBEL). How can it be any more plain-as-day, that the self-descriptive messaging constraint requires STANDARD MEDIA TYPES? If not present, the message isn't _less_ self-descriptive, it just plain _isn't_ self-descriptive, BY DEFINITION, not by opinion of Eric... Fine, you've coined a new identifier, and registered it even, and want to send it over the Internet. But unless and until it's adopted as a standard (allowing for evolution), it simply _cannot_ meet the self-descriptive messaging constraint, unless my English and Roy's English have radically different semantics... > > > It is simply NOT the REST style to treat media type identifiers as > > opaque strings that only need to be understood by producers and > > consumers of data, when using the Internet, because you're taking > > advantage of an existing cache infrastructure which requires media > > type identifiers to mean the same thing regardless of origin domain. > > Why would a cache care whether or not the origin server's notion of > xyz media type conflicts with some other version? That wouldn't even > matter if the origin server was contradicting /itself/ across multiple > resources - let alone another server entirely. > Dumb routers don't care. See REST 6.5.1 again. In order for any intermediary to take active part in communication, the key to anarchic scalability on the Web, it must agree on what the media type identifier means. How can this occur, if different origin servers are tagging different payloads with different processing requirements, using the same identifier? 
Isn't that the antithesis of self-descriptiveness? How can you seriously expect Web architecture not to collapse in smoking wreckage if anarchy prevails over any form of registry for identifiers, sacrificing the ability to easily tell identifiers apart by simple string comparison, in favor of semantics that are bound to what the most popular interpretation of the string was on the date it was implemented for a particular service? If you ask me, I'm all in favor of using simple string comparison to tell one media type identifier from another... KISS. -Eric
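The simple string comparison Eric favors does work, with one caveat worth noting: HTTP makes the type and subtype case-insensitive, and parameters such as charset ride along after a semicolon, so a careful comparison normalizes first. The helper below is purely illustrative:

```python
# Compare two media type identifiers the KISS way: normalize case and
# strip parameters, then do a plain string comparison of type/subtype.

def same_media_type(a: str, b: str) -> bool:
    def norm(value: str) -> str:
        return value.split(";")[0].strip().lower()
    return norm(a) == norm(b)

print(same_media_type("Text/HTML; charset=utf-8", "text/html"))       # True
print(same_media_type("application/foo+xml", "application/bar+xml"))  # False
```

Two normalized identifiers that compare equal are the same identifier; whether they mean the same *media type* everywhere is exactly what the registry guarantees.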
> > None of these problems occur when using registered identifiers to > refer to standard media types... so tell me, why is it we want to > *avoid* doing what the architectural style requires and HTTP strongly > suggests? > Believing that I'm wrong about this would require me to somehow agree with this statement: "Unregistered identifiers referring to unpublished media types are OK in REST." Which just doesn't square with REST's language, unless I don't know the meaning of the term "standard": ...standard methods and media types are used... ...information is transferred in a standardized form... ...an evolving set of standard data types... ...the standard data format of an encapsulated rendering engine... ...messages include standardized application semantics... And from 6.5.1: "The semantics are expressed by the combination of an object key and operation, which are object-specific rather than standardized across all objects." Opaque media type identifiers amount to object keys which are object-specific, rather than standardized across all objects. Only refactoring into standard data types decouples object from object key, because only then do the application semantics become described by the network interface, i.e. standardized. (Just like, if you want to utilize Gopher's uniform interface, you need to refactor your system's text output into a hierarchical collection of plain text files -- which don't have to be static files, they can identify stored procedures, just as in HTTP -- with implementation details hidden behind the uniform interface.) Unregistered identifiers referring to unpublished types lack the standardization required for the network interface to describe the application semantics, which results in a library-style API... is what Roy seems to be saying. What's a standard on your intranet is not the same thing as what's a standard on the Internet. 
On the Internet, RFC 2616 rules, and provides only one mechanism for correlating identifiers with standard media types -- the IANA registry: "Media-type values are registered with the Internet Assigned Number Authority... Use of non-registered media types is discouraged." Why would REST encourage us to do what HTTP discourages? REST development is all about leveraging exactly such best-practice advice in the standards, to gain the full benefits that come with them. Going against this recommendation of RFC 2616 is not only not justified in and of itself, but is also not justified by the self-descriptive messaging constraint which relies on this best-practice being followed when HTTP is used on the Internet. -Eric
On Tue, Aug 24, 2010 at 11:21 AM, Eric J. Bowman <eric@...> wrote: > Mike Kelly wrote: >> >> There's more than enough self-descriptiveness from other HTTP headers >> (Vary, ETag, Content-Location, Cache-Control, Link, etc.) to power >> most types of layered/intermediary mechanism; 'opaque' media type >> identifiers don't affect those. >> > > Yes, there is a lot more to the self-descriptive messaging constraint > than media type identifiers. Ok, good.. Then there was no need for you to ask how "self-descriptive messaging works with opaque media type identifiers", since you already know. > >> >> I'd be interested to understand exactly what you believe the costs of >> such identifiers are to self-descriptiveness, and what types of >> layered/intermediary mechanism would be affected. >> > > No, I'm not going to rewrite Roy's thesis, which explains the benefits > and tradeoffs of the uniform interface in depth: > > "REST enables intermediate processing by constraining messages to be > self-descriptive: interaction is stateless between requests, standard > methods and media types are used to indicate semantics and exchange > information, and responses explicitly indicate cacheability." > > Intermediate processing is _not_ enabled _except_ through the use of > stateless interactions consisting of standard methods and media types. > The entire notion of anarchic scalability depends upon self-descriptive > messaging, which by definition requires the use of standard media types, > not just cache-control headers. Whereabouts does that explain the costs of emergent (bottom-up) 'standardisation' vs. the strict (top-down) way of doing it? If the costs are marginal, then why bother with the pain and inefficiency associated with the latter? REST's position on this issue depends massively on what you interpret from the word 'standard': http://en.wiktionary.org/wiki/standardised "Designed in a standard manner _or_ according to an official standard." 
My point of view is that standard media types would emerge naturally anyway without the need for a registry, not that REST is achievable without standard media types. I'm trying to understand why you believe this could not happen, and/or why it is not desirable. >> >> You can't look something up without a registry? Pro-tip: >> >> http://www.google.com/search?q=%22application/hal%2Bxml%22 >> > > First of all, please refer to RFC 4288. Your syntax is that of the > standards tree, and will never be approved by IANA unless there's a > corresponding RFC. You need to choose an appropriate tree, then take > five whole minutes out of your day to register your identifier. Why is > this such an insurmountable obstacle, that using Google is preferable? > > Yes, you can look anything up on Google, but Google isn't a registry, > and a registry is required to avoid collisions where multiple parties > are using the same identifier for wildly different purposes. Google > can't keep anyone else from using your identifier for another purpose. > The IANA registry exists for exactly that reason. In such a system it would be highly unlikely for two media types with the same identifier to emerge to a level of 'standardisation' simultaneously.. maybe future REST systems should look to use URIs to identify media types and dereference specs, therefore avoiding this risk altogether? >> >> If crawlers were to provide a pagerank equivalent for media types, >> according to public usage, could we do away with the registry >> altogether? If not, why not? >> > > You're free to suggest any new mechanism as a successor to the IANA > registry. But, in the real world of today, if you're using HTTP, NO, > you can't do away with the IANA registry because it's an integral part > of the protocol. Such suggestions are only relevant to discussions of > successor protocols, like HTTP 2 or Waka. Ok, thanks.. I'm discussing REST, though. Funnily enough. 
> But, I think that's a horrible idea, since it does nothing to avoid > collisions -- only allows a set-in-stone identifier's meaning to change > over time according to the whims of the mob. Such anarchy is not > conducive to the goals of a uniform interface, and is certainly at odds > with the long-term stability that's a goal of REST. You seriously think "the mob" (constituents of the system, and those bearing the cost) are likely to create or allow those sorts of destabilizing changes? > The self-descriptive messaging constraint of REST _cannot_ be met in > HTTP over the Internet, _unless_ the media type is IANA-registered. BY > DEFINITION. No amount of arguing with me can change this reality... > Unless of course there's something wrong with my grasp of plain English: > > "REST enables intermediate processing by constraining messages to be > self-descriptive... STANDARD MEDIA TYPES are used to indicate semantics > and exchange information..." > > Seriously, people, what part of that am I somehow getting wrong? How > else can one identify what STANDARD MEDIA TYPE is used, except through > the IANA registry which correlates identifiers to media types, which is > the only mechanism HTTP defines to determine the media type from any > identifier sent over the Internet? > Because in REST terms it depends on what you mean by 'standard' and in practical HTTP terms, even though it conflicts with spec, there are other less formal ways of standardising a type that _will_ be used because they work and don't present any major issues. Cheers, Mike
Mike Kelly wrote: > > > Yes, there is a lot more to the self-descriptive messaging > > constraint than media type identifiers. > > Ok, good.. Then there was no need for you to ask how "self-descriptive > messaging works with opaque media type identifiers", since you already > know. > No, I don't know. As far as I know, it can't possibly. So if you disagree, please enlighten me as to how avoiding the IANA registry using HTTP over the Internet could possibly be self-descriptive. > > > No, I'm not going to rewrite Roy's thesis, which explains the > > benefits and tradeoffs of the uniform interface in depth: > > > > "REST enables intermediate processing by constraining messages to be > > self-descriptive: interaction is stateless between requests, > > standard methods and media types are used to indicate semantics and > > exchange information, and responses explicitly indicate > > cacheability." > > > > Intermediate processing is _not_ enabled _except_ through the use of > > stateless interactions consisting of standard methods and media > > types. The entire notion of anarchic scalability depends upon > > self-descriptive messaging, which by definition requires the use of > > standard media types, not just cache-control headers. > > Whereabouts does that explain the costs of emergent (bottom-up) > 'standardisation' vs. the strict (top-down) way of doing it? > REST doesn't need to care about that; it's a protocol concern. There are a whole slew of RFCs explaining media types and media type identifiers, and the requirements for admittance into the standards tree. There is a syntactical difference between emerging types which have experimental or vendor status. Once types from the non-standards trees have become sufficiently standardized as ordained by the IETF's rules and regulations, they may be admitted into the standards tree, at which point they become standards. 
> > If the costs are marginal, then why bother with the pain and > inefficiency associated with the latter? > Do you want a uniform-interface REST API or not? > > REST's position on this issue depends massively on what you interpret > from the word 'standard': > No, it doesn't. When discussing RFC 2616, what's meant by "standard" is quite clearly documented by a variety of other RFCs which have been agreed to as the way to do things, for many years now. By definition, there is nothing standardized about an unregistered identifier referring to an unpublished specification on the Internet based on the commonly understood meaning of "standard" documented in the pertinent RFCs. > > My point of view is that standard media types would emerge naturally > anyway without the need for a registry, not that REST is achievable > without standard media types. I'm trying to understand why you believe > this could not happen, and/or why it is not desirable. > But this has already been tried, and failed. Modern Gopher clients and servers understand 'h' as an identifier for HTML. But, all the spec says about 'h' is "reserved for future use". So where is it documented that 'h' correlates to HTML? Not in the spec. This is the point of an extensibility mechanism like the IANA registry, which does nothing to prevent the natural emergence of new standards, as proven by Atom and the fact that the IANA registry provides a mechanism for new standards to evolve from experimental to deployed status. So I'm not sure what problem you're trying to solve by doing away with the notion of a registry, where the solution doesn't just take us back to somewhere we've been. All the IANA registry does, is set a high bar between what's registered and what's standardized, but not impossibly high -- just high enough to keep the overall architecture from collapsing in a heap of non-interoperable complexity. 
The IANA registry isn't perfect, but we are stuck with it for the foreseeable future, and it's not been subject to any serious debate for over a decade, so again the pushback against it befuddles me... > > >> > >> You can't look something up without a registry? Pro-tip: > >> > >> http://www.google.com/search?q=%22application/hal%2Bxml%22 > >> > > > > First of all, please refer to RFC 4288. Your syntax is that of the > > standards tree, and will never be approved by IANA unless there's a > > corresponding RFC. You need to choose an appropriate tree, then > > take five whole minutes out of your day to register your > > identifier. Why is this such an insurmountable obstacle, that > > using Google is preferable? > > > > Yes, you can look anything up on Google, but Google isn't a > > registry, and a registry is required to avoid collisions where > > multiple parties are using the same identifier for wildly different > > purposes. Google can't keep anyone else from using your identifier > > for another purpose. The IANA registry exists for exactly that > > reason. > > In such a system it would be highly unlikely for two media types with > the same identifier to emerge to a level of 'standardisation' > simultaneously.. maybe future REST systems should look to use URIs to > identify media types and dereference specs, therefore avoiding this > risk altogether? > Collisions have nothing to do with standardisation or uptake. What little security architecture the Web has is based on media type identifiers. Various image formats have known security risks. I might want my intermediary to filter out any exploits before passing it on, or rejecting it, or whatever. None of which works if image/jpeg traffic originating from Google has a different codec, and therefore a different security profile, than image/jpeg traffic originating from Amazon. I don't know what "risk" you're referring to, as it only applies when the IANA registry is deliberately avoided. 
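The media-type-keyed security filtering Eric describes might look like the following sketch. The policy table and function names are invented for illustration; the point is that a per-type policy only works if an identifier like image/jpeg names the same codec, and thus the same attack surface, regardless of origin:

```python
# Hypothetical filtering intermediary: a per-media-type security policy.
# An unregistered identifier has no known security profile at all, so the
# intermediary cannot make any informed decision about it.

POLICY = {
    "image/jpeg": "scan",       # known decoder exploit surface: scan first
    "image/svg+xml": "reject",  # scriptable content: reject at this boundary
    "text/plain": "pass",
}

def decide(content_type: str) -> str:
    media_type = content_type.split(";")[0].strip().lower()
    return POLICY.get(media_type, "unknown")

print(decide("image/jpeg"))             # scan
print(decide("application/x-mystery"))  # unknown
```

If two origins used the same identifier for different formats, the "scan" step would be scanning the wrong thing, which is the collision problem in concrete terms.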
Register your identifier, and no intermediary will ever need to consider "what else it might mean" because there's a registry entry guiding everyone to one and only one associated spec, from then on, no collisions possible except by nonstandard implementations based on googling for the identifier instead of looking it up in IANA... > > >> If crawlers were to provide a pagerank equivalent for media types, > >> according to public usage, could we do away with the registry > >> altogether? If not, why not? > >> > > > > You're free to suggest any new mechanism as a successor to the IANA > > registry. But, in the real world of today, if you're using HTTP, > > NO, you can't do away with the IANA registry because it's an > > integral part of the protocol. Such suggestions are only relevant > > to discussions of successor protocols, like HTTP 2 or Waka. > > Ok, thanks.. I'm discussing REST, though. Funnily enough. > You're talking about "doing away with the registry" for media type identifiers, a concept that isn't mentioned anywhere in REST. REST has a self-descriptive messaging constraint that's instantiated in HTTP by, among other things, the IANA registry for media type identifiers. So you must be discussing HTTP. But, HTTPbis is a work-in-progress, the scope of which precludes eliminating, or providing an alternative to, the IANA registry. So my answer stands -- nothing to be done about it except in some successor protocol which obsoletes HTTP 1.1. > > > But, I think that's a horrible idea, since it does nothing to avoid > > collisions -- only allows a set-in-stone identifier's meaning to > > change over time according to the whims of the mob. Such anarchy > > is not conducive to the goals of a uniform interface, and is > > certainly at odds with the long-term stability that's a goal of > > REST. > > You seriously think "the mob" (constituents of the system, and those > bearing the cost) are likely to create or allow those sorts of > destabilizing changes? > Oh, yes. 
Exhibit A: HTML 5... ;-) With a registry, the media type identifiers I chose for projects I built back in the mid-90's mean exactly the same thing today as they did back then, and this transparency is exactly what allowed those representations to be cached in archive.org, where they still work with modern browsers today. Allowing an identifier to evolve over time to point to some other media type, solves what problem again? > > Because in REST terms it depends on what you mean by 'standard' and in > practical HTTP terms, even though it conflicts with spec, there are > other less formal ways of standardising a type that _will_ be used > because they work and don't present any major issues. > REST isn't about doing what works, i.e. unbounded creativity. REST is about a disciplined approach involving the application of constraints. You don't have to follow REST, but if you're trying to, then you need to follow the rules for evolving an identifier from the experimental to the standards tree of the IANA registry, as its associated media type evolves from a proposal to a specification -- playing semantics with the word "standard" is not a loophole. -Eric
> > Why would REST encourage us to do what HTTP discourages? REST > development is all about leveraging exactly such best-practice advice > in the standards, to gain the full benefits that come with them. > Let's say I have a system that isn't REST. In the here-and-now, its requirements square exactly with the benefits of REST. So, I decide to redevelop the system in the REST style. I want the new system to be online on the Internet by the end of 2010. If those are my goals, why would I waste time engaging in architectural astronuttery developing custom media types? The beauty of REST is that by refactoring into standard media types, I can deploy a system which gains all the benefits of REST -- without having to wait for standards approval, then hope for uptake, like I would if introducing a new media type. The state of the HTTP uniform interface in 2010 doesn't include any media type identifiers you introduce in 2010. If you are sitting down today and expecting to develop a system that's RESTful and deployable in a reasonable time frame, why not use standardized types? They're what gives you a uniform interface in the here-and-now, without any hypotheticals or hedging or grey areas or uncertainty. Engaging in a standardization effort to introduce a new media type should be its own endeavor, not part of developing in the REST style, which emphasizes the re-use of existing standards (until such time as new standards emerge). Pragmatically, unless your API really is a unique snowflake, there's no reason you can't re-use standardized types; and in fact, to do so is to develop in the REST idiom. Again, assuming we're talking about extending an interface over the Internet using HTTP. -Eric
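The "refactor into standard types" move Eric advocates can be made concrete. As a sketch, instead of minting a hypothetical custom type like application/myrecords+xml, the same data could be exposed as an Atom entry (application/atom+xml, RFC 4287) and understood by every feed-aware client and intermediary in the here-and-now. The record fields below are invented application data:

```python
# Expose an application record as an Atom entry instead of a custom
# XML vocabulary with an unregistered media type identifier.

import xml.etree.ElementTree as ET

ATOM = "http://www.w3.org/2005/Atom"

def to_atom_entry(record: dict) -> str:
    """Render a record as a minimal Atom entry (serve as application/atom+xml)."""
    ET.register_namespace("", ATOM)
    entry = ET.Element(f"{{{ATOM}}}entry")
    ET.SubElement(entry, f"{{{ATOM}}}title").text = record["title"]
    ET.SubElement(entry, f"{{{ATOM}}}id").text = record["uri"]
    ET.SubElement(entry, f"{{{ATOM}}}updated").text = record["updated"]
    return ET.tostring(entry, encoding="unicode")

xml = to_atom_entry({"title": "Report 42", "uri": "urn:example:42",
                     "updated": "2010-08-24T00:00:00Z"})
print(xml)
```

The implementation details stay hidden behind the uniform interface: consumers see only a standard media type, not the internal record structure.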
Can you guys please remove my email from the Cc field? I'm tired of reading some people giving their opinions like it was the God-given truth, with no evidence whatsoever... I can't take much help from all this, quite the contrary unfortunately. -- Melhores cumprimentos / Beir beannacht / Best regards, António Manuel dos Santos Mota
Hello!
Eric, I certainly appreciate the time you take to explain your point of
view patiently and in detail.
On Tue, 2010-08-24 at 04:46 -0600, Eric J. Bowman wrote:
> None of these problems occur when using registered identifiers to refer
> to standard media types... so tell me, why is it we want to *avoid*
> doing what the architectural style requires and HTTP strongly suggests?
> Less interoperability?!? Again, the pushback befuddles me, why we must
> debate against every single plain-as-day REST requirement...
I think one reason for the push-back could be that fine-grained,
specific media types can certainly have advantages. For example, in my
application I use JSON to return meta information about available
resources, but also to return raw data when you access these resources.
While currently I use application/json for both cases, this doesn't
actually feel quite right. As I keep following links, all I ever see is
that with the next link I will get application/json. But that doesn't
help self-descriptiveness at all. Sometimes I would like to let the
client know that the next link will return meta information (schemas and
other such things), while that other link yields raw data (maybe records
from a database).
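The ambiguity described above can be sketched concretely. In this
hypothetical exchange (the URIs, field names, and payloads are
invented for illustration), one response carries schema metadata and
the other raw records, yet nothing on the wire distinguishes them:

```python
# Two hypothetical responses from the same API: one carries schema
# metadata, the other raw records -- yet both advertise the exact same
# media type, so a client following links learns nothing from the
# Content-Type about what kind of representation comes next.

meta_response = {
    "headers": {"Content-Type": "application/json"},
    "body": '{"describes": "/records", "fields": ["id", "name"]}',
}

data_response = {
    "headers": {"Content-Type": "application/json"},
    "body": '[{"id": 1, "name": "Alice"}, {"id": 2, "name": "Bob"}]',
}

# Identical headers, different kinds of content: this is the
# self-descriptiveness gap being described.
assert meta_response["headers"] == data_response["headers"]
```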
If you are restricting yourself to IANA registered types, what do you
suggest is the best way to go about improving the situation I described?
Then there is this blog post by Roy Fielding ("REST APIs must be
hypertext-driven",
http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven).
There he writes:
A REST API should spend almost all of its descriptive effort in
defining the media type(s) used for representing resources and
driving application state, or in defining extended relation
names and/or hypertext-enabled mark-up for existing standard
media types.
Here he seems to imply that designing your own media type is OK
("spend ... effort in defining the media type(s)"). He also suggests
adding hypertext-enabled mark-up to existing standard types, but does so
with an 'or'. Both options are fine. So, defining your own types is OK
with Roy, it seems?
Further in the same post he writes:
A REST API should be entered with no prior knowledge beyond the
initial URI (bookmark) and set of standardized media types that
are appropriate for the intended audience (i.e., expected to be
understood by any client that might use the API).
Here he seems to be only concerned with server and client, not
intermediaries. He says "standardized media types that are appropriate
for the intended audience". While we can't be 100% certain, it certainly
sounds like he's talking about "standardized" in a way that makes sense
to the client (maybe using an internal registry or just the fact that
client and server code were written by the same person).
This seems to accommodate the case where I create a RESTful application
using my own (non IANA registered) media types, which is used across all
branch offices, where I know my clients and servers, but the traffic
traverses the public Internet.
I understand your points about intermediaries on the Internet. They
don't know what to do with your data if the content type is proprietary.
However, if you are forced to restrict yourself to IANA standardized
media types, even if maybe something more custom-designed would be more
appropriate, then you risk having those same intermediaries screw up
your data. For example, there are some proxies that will re-compress images
into lower-quality versions in order to reduce bandwidth usage. But
there was this one example (I think Roy himself did that?) where he used
a PNG to send a bit-array. "How brilliant! An IANA standard type used
for something so surprising!" we proclaim. But if that thing goes
through one of those compressing proxies ... well, your information is
lost.
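The failure mode can be simulated in a few lines. This is only a toy
sketch: bits are stored as black/white "pixel" values, and the
bandwidth-saving proxy is modelled as a simple smoothing pass (real
codecs differ, but the effect on bit-exact data is the same):

```python
# A toy simulation of the transcoding hazard: a bit-array encoded as
# black/white pixel values does not survive a lossy recompression,
# even though the "image" still looks essentially the same.
bits = [1, 0, 1, 1, 0, 0, 1, 0]
pixels = [255 if b else 0 for b in bits]

def lossy_recompress(px):
    # Average each pixel with its neighbours, as a lossy codec might.
    n = len(px)
    return [(px[max(0, i - 1)] + px[i] + px[min(n - 1, i + 1)]) // 3
            for i in range(n)]

# Without the proxy, decoding recovers the bits exactly...
assert [1 if p >= 128 else 0 for p in pixels] == bits

# ...after lossy transcoding, the information in exact pixel values
# is gone.
decoded = [1 if p >= 128 else 0 for p in lossy_recompress(pixels)]
```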
So, by being forced to use IANA standard types for everything we could
give those intermediaries a false sense of security as to what they can
do with the content.
Juergen
--
Juergen Brendel
RESTx - the fastest and easiest way to create RESTful web services
http://restx.org
At Tue, 24 Aug 2010 06:08:46 -0600, Eric J. Bowman wrote:

> Which just doesn't square with REST's language, unless I don't know
> the meaning of the term "standard":

Standard has lots of meanings. To take the IETF viewpoint, for example,
atom is not a standard (only an RFC).

best, Erik Hetzner
At Tue, 24 Aug 2010 04:46:38 -0600, Eric J. Bowman wrote:

> Because we aren't talking about evolution, we're talking about RFC 2616.
> When I see a Content-Type header, the only thing I know to do is to
> look in the IANA registry to see what media type it correlates with,
> because that's the ONLY place the RFC 2616 tells me to look.

So all of your code refuses to deal with
application/x-www-form-urlencoded?

best, Erik
On Tue, Aug 24, 2010 at 3:44 PM, Erik Hetzner <erik.hetzner@...> wrote:

> At Tue, 24 Aug 2010 06:08:46 -0600, Eric J. Bowman wrote:
>> Which just doesn't square with REST's language, unless I don't know the
>> meaning of the term "standard":
>
> Standard has lots of meanings. To take the IETF viewpoint, for example,
> atom is not a standard (only an RFC).

FWIW, Roy also defined what he understands standard to mean too[1].
He's also said that a particular specification need not necessarily be
a standard[2]. I know this forum has a tendency towards black and
white, but I take this[3] from Roy as our liberty to live in the gray a
little:

    "The degree to which the format chosen is a commonly accepted
    standard is less important than making sure that the sender and
    recipient agree to the same thing, and that's all I meant by an
    evolving set of standard data types."

--tim

[1] - http://roy.gbiv.com/untangled/2008/no-rest-in-cmis
[2] - http://tech.groups.yahoo.com/group/rest-discuss/message/6594
[3] - http://tech.groups.yahoo.com/group/rest-discuss/message/6613
Indeed, I'm more hoping that this discussion forum will explore the
gray areas more. What seem to be black or white/yes or no/up or down
choices rarely are. The interesting discussion points explore the
middle.

I'd rather ship imperfect software than never ship at all. I hope, as
I understand which imperfections are less problematic, that I can
advise my colleagues appropriately.

-Eric.

On 08/24/2010 01:17 PM, Tim Williams wrote:
> On Tue, Aug 24, 2010 at 3:44 PM, Erik Hetzner <erik.hetzner@...> wrote:
> > At Tue, 24 Aug 2010 06:08:46 -0600, Eric J. Bowman wrote:
> >> Which just doesn't square with REST's language, unless I don't know the
> >> meaning of the term "standard":
> >
> > Standard has lots of meanings. To take the IETF viewpoint, for example,
> > atom is not a standard (only an RFC).
>
> FWIW, Roy also defined what he understands standard to mean too[1].
> He's also said that a particular specification need not necessarily be
> a standard[2]. I know this forum has a tendency towards black and
> white, but I take it Roy this[3] as our liberty to live in the gray a
> little:
>
> "The degree to which the format chosen is a commonly accepted
> standard is less important than making sure that the sender and
> recipient agree to the same thing, and that's all I meant by an
> evolving set of standard data types."
>
> --tim
>
> [1] - http://roy.gbiv.com/untangled/2008/no-rest-in-cmis
> [2] - http://tech.groups.yahoo.com/group/rest-discuss/message/6594
> [3] - http://tech.groups.yahoo.com/group/rest-discuss/message/6613
The irony of this entire discussion, and the premise that a media type
used in what would be a RESTful application has to be registered in
IANA, is that it simply suggests that before text/html or
application/atom+xml were registered, systems that used them were not
RESTful. Really? Is that true?

Prescriptive guidance is prescriptive guidance. I sometimes think that
there is an effort to make it "law", and I'm not sure it helps us
understand the original rationale for the prescriptive guidance in the
first place.

Eb
Erik Hetzner wrote:
>
> > Because we aren't talking about evolution, we're talking about RFC
> > 2616. When I see a Content-Type header, the only thing I know to do
> > is to look in the IANA registry to see what media type it
> > correlates with, because that's the ONLY place the RFC 2616 tells
> > me to look.
>
> So all of your code refuses to deal with
> application/x-www-form-urlencoded?
>

No, what I meant to say was: when I see an identifier that I've never
heard of before. A close read of REST ch. 6 reveals that it's
impossible to build an HTTP system without REST mismatches;
application/x-www-form-urlencoded is a historical anomaly that's only
defined in the HTML and CGI specs. We "just know" what it means,
without a registry entry pointing to a spec, so yeah, it's technically
a REST mismatch. One that's been around for so long and is so widely
understood that it just isn't a big deal -- otherwise the 'x-' would've
been dropped ages ago...

Although it is curious that it isn't registered... huh.

-Eric
Juergen Brendel wrote:
>
> I think one reason for the push-back could be that fine-grained,
> specific media types can certainly have advantages. For example, in my
> application I use JSON to return meta information about available
> resources...
>
That sounds like more of a job for RDF than JSON, but I'd need way more
details about your problem space than I'm willing to read in an e-mail,
to pass judgment or offer alternatives... ;-)
>
> ...but also to return raw data when you access these resources. While
> currently I use application/json for both cases, this doesn't
> actually feel quite right. As I keep following links, all I ever see
> is that with the next link I will get application/json.
>
It also sounds like you're figuring out why JSON isn't the right media
type to use for driving a hypertext application -- I don't understand
what you mean by "following links" in JSON, because there's no
definition of "link" in JSON...
>
> But that doesn't help self-descriptiveness at all. Sometimes I would
> like to let the client know that the next link will return meta
> information (schemas and other such things), while that other link
> yields raw data (maybe records from a database).
>
Using HTML as your hypertext engine makes it very easy to define links
to metadata, and even their media type -- <link/>. That way, your
messaging is self-descriptive and your API is self-documenting (two
separate concepts often confused). You let the client know these
things not only by the media type identifier you send over the
wire, but also by the media type identifiers you include in your
content as @type.
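As a sketch of the client side of this, the snippet below parses a
hypothetical HTML representation and collects the advertised links
with their @type, so the media type of the next representation is
known before the link is followed (the URIs, rel values, and types
here are invented for illustration):

```python
from html.parser import HTMLParser

# A hypothetical HTML representation that advertises its metadata and
# raw-data links, each with an explicit media type in @type.
DOC = """<html><head>
<link rel="describedby" type="application/xml" href="/schema/records"/>
</head><body>
<a rel="alternate" type="text/csv" href="/records.csv">raw records</a>
</body></html>"""

class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # Collect <link> and <a> elements that declare a media type.
        if tag in ("link", "a"):
            d = dict(attrs)
            if "type" in d:
                self.links.append((d.get("rel"), d["type"], d.get("href")))

collector = LinkCollector()
collector.feed(DOC)
# collector.links now carries each link's media type in-band.
```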
There is no such thing as a self-documenting JSON API, because JSON
defines no hypertext controls.
Don't make me list again, all the wildly divergent back-end services
that HTML has proven itself capable of encapsulating. Is your API
*really* a unique snowflake where this time-tested, tried-and-true
design pattern that was the inspiration for REST in the first place,
must be dismissed as inadequate? Try it. It might just work, in which
case you'd be well on your way to a widely-interoperable system on the
Internet, thanks to the uniform interface.
>
> If you are restricting yourself to IANA registered types, what do you
> suggest is the best way to go about improving the situation I
> described?
>
I don't restrict myself to IANA registered types, hence the whole
discussion about how my use of application/xbel+xml is a REST mismatch.
Which hasn't stopped me from using it, in the least...
My suggestion is that whatever your back-end system may be, you wrap it
with standard hypertext to drive application state, most likely HTML.
Tools like XSLT allow you to generate that HTML content from your other
media types, while HTML instructs the client how to manipulate your
other media types.
Encapsulation is the way to go, not trying to use JSON as a replacement
for an actual hypertext language capable of driving application state.
REST development involves exactly such refactoring of a system to fit
the existing uniform interface, by re-using standardized media types as
they're intended -- JSON isn't intended as a replacement for HTML or
any other hypertext API language.
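The encapsulation idea can be sketched as follows: the back-end still
speaks JSON, but what drives application state over the wire is HTML
generated from it. The record fields and URI layout are hypothetical,
and plain string templating stands in for whatever tool (XSLT or
otherwise) would do this in practice:

```python
import json
from html import escape

def records_to_html(records_json):
    # Wrap back-end JSON records in a hypertext representation: each
    # record becomes a link the client can follow.
    records = json.loads(records_json)
    rows = "\n".join(
        '<li><a href="/records/{0}">{1}</a></li>'.format(
            r["id"], escape(r["name"]))
        for r in records)
    return "<html><body><ul>\n{0}\n</ul></body></html>".format(rows)

html_page = records_to_html('[{"id": 1, "name": "Alice"}]')
```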
>
> There he writes:
>
> A REST API should spend almost all of its descriptive effort
> in defining the media type(s) used for representing resources and
> driving application state, or in defining extended relation
> names and/or hypertext-enabled mark-up for existing standard
> media types.
>
> Here he seems to imply that designing your own media type is OK
> ("spend ... effort in defining the media type(s)").
>
This is a mis-read of Roy. The REST API of my demo project doesn't
need to spend *any* descriptive effort defining the media types used,
because I haven't introduced any new media types. Thus, all such
documentation is already "in-band".
My REST API spends all its out-of-band descriptive effort explaining
that I'm using the opaque string 'application/xbel+xml' as an
identifier for the XBEL media type, which if registered, removes this
description because now it's "in-band".
No custom media types = no descriptive effort defining them...
No extended link relations = no descriptive effort defining them...
No hypertext extensions = no descriptive effort defining them...
What Roy isn't doing, is recommending against re-using standardized
types -- that would mean Roy is contradicting his thesis, which he
doesn't do... What Roy is describing, is the proper way to evolve a
new media type, if and only if you really need to go that route -- to
make a point about the hypertext constraint, not to contradict best
practice.
>
> Further in the same post he writes:
>
> A REST API should be entered with no prior knowledge beyond
> the initial URI (bookmark) and set of standardized media types that
> are appropriate for the intended audience (i.e., expected to
> be understood by any client that might use the API).
>
> Here he seems to be only concerned with server and client, not
> intermediaries. He says "standardized media types that are appropriate
> for the intended audience". While we can't be 100% certain, it
> certainly sounds like he's talking about "standardized" in a way that
> makes sense to the client (maybe using an internal registry or just
> the fact that client and server code were written by the same person).
>
Roy is discussing the REST style in general. I keep qualifying my
assertions with "HTTP over the Internet" because I'm discussing that
particular instantiation of REST. If you're on an intranet, then your
custom media type is expected to be understood by any client that might
use the API. On the Internet, if you're designing for serendipitous re-
use and anarchic scalability, then you need to use a media type that's
generally understood by some class of deployed client. Quoting Roy:
"REST does not demand that everyone agree on a single format for the
exchange of data -- only that the participants in the communication
agree."
Using HTTP over the Internet makes potentially everybody a participant
in the communication, in which case you need to use standardized types,
unless of course your API really is a unique snowflake that can't be
refactored to use standardized types (which is much more of an edge
case than all the custom media types flying around in REST discussions
would seem to indicate).
>
> This seems to accommodate the case where I create a RESTful
> application using my own (non IANA registered) media types, which is
> used across all branch offices, where I know my clients and servers,
>
OK...
>
> but the traffic traverses the public Internet.
>
Not OK, unless you're using HTTPS. There is simply no way to square
the use of an unregistered identifier with REST if you're talking about
sending it over the Internet via HTTP.
>
> I understand your points about intermediaries on the Internet. They
> don't know what to do with your data if the content type is
> proprietary. However, if you are forced to restrict yourself to IANA
> standardized media types, even if maybe something more
> custom-designed would be more appropriate, then you risk those same
> intermediaries to screw up your data.
>
If you're using media types properly, this is a trivial concern when
compared to the entirely undefined treatment of unregistered
identifiers.
>
> For example, there are some proxies that will re-compress images into
> lower-quality versions in order to reduce bandwidth usage. But there
> was this one example (I think Roy himself did that?) where he used a
> PNG to send a bit-array. "How brilliant! An IANA standard type used
> for something so surprising!" we proclaim. But if that thing goes
> through one of those compressing proxies ... well, your information
> is lost.
>
I won't dispute the existence of accelerators out there. But I've
never heard of one taking the time to compress a black-and-white GIF,
unless the largest dimensions for that GIF in the markup are smaller
than the actual dimensions of the GIF, in which case it'll be resized.
Regardless, there are ways to head off this problem. The Content-MD5
header allows the payload to be validated. If it's been changed, the
client can repeat the request using 'Cache-Control: max-age=0' to
ensure the response comes from the origin server, bypassing any such
transcoding proxies. This also heads off any man-in-the-middle
exploits added to the GIF.
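The validation step described above can be sketched like this. Per RFC
1864, Content-MD5 is the base64-encoded MD5 digest of the entity-body;
the payload bytes below are placeholders, not real GIF data:

```python
import base64
import hashlib

def content_md5(body):
    # Content-MD5 is the base64-encoded MD5 digest of the entity-body
    # (RFC 1864, referenced by RFC 2616).
    return base64.b64encode(hashlib.md5(body).digest()).decode("ascii")

def payload_intact(body, content_md5_header):
    return content_md5(body) == content_md5_header

original = b"GIF89a...sparse-bit-array..."      # placeholder payload
header = content_md5(original)

# Suppose a transcoding proxy altered the payload in transit:
transcoded = b"GIF89a...recompressed..."
retry_headers = {}
if not payload_intact(transcoded, header):
    # Repeat the request end-to-end, bypassing caches and any
    # transcoding proxy on the way.
    retry_headers["Cache-Control"] = "max-age=0"
```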
>
> So, by being forced to use IANA standard types for everything we could
> give those intermediaries a false sense of security as to what they
> can do with the content.
>
No, not if the media type is being used properly. GIF files have a
known security profile, obvious to anyone who sees 'image/gif' as an
identifier. Using a GIF to hold a sparse-bit array doesn't change the
codec or anything else, from how every other GIF is handled. There may
be security considerations for the consumer of that data *after* the
image is decoded into an array, but this has no bearing on the security
profile of image/gif as understood by intermediaries.
A gateway filtering GIF images for known exploits is unaffected by the
post-decoding security considerations of a consumer. Defining a new
media type for a sparse-bit array would result in an identifier with an
unknown security profile. Using GIF but assigning it a private
identifier bypasses the Web's security architecture.
Using GIF means your system is leveraging the known security profile of
image/gif and the fact that intermediaries MAY filter out known
exploits, making your communications inherently more secure. The
origin server and any consumers can implement standard libraries to
filter out known exploits, in case the image/gif being transferred
wasn't exploit-filtered by chance somewhere and was compromised.
So I'm quite comfortable with using GIF as a data format for something
besides images meant for viewing, because there aren't any unknowns I
need to worry about -- like there are with any new media type until
it's been out there long enough to become a standard, and even then the
security profile won't be as well-known as that of image/gif due to the
maturity of that media type.
-Eric
>
> I can't take much help from all this, quite the contrary
> unfortunately.
>

Here's a piece of advice, whether you want it or not. This 'LISTEN'
method of yours sounds like a reverse GET. The RESTful solution is not
to violate the uniform interface by creating an unspecified new method
nobody has ever heard of. The RESTful solution would be to support an
evolving standard like rHTTP, which makes use of HTTP's 'Upgrade'
facility to reverse the direction of the transaction. The method would
still be GET, for a uniform interface.

-Eric

http://tools.ietf.org/html/draft-lentczner-rhttp-00
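The Upgrade-based alternative can be sketched as a raw request. The
'PTTH/0.9' token is the one used by the reverse-HTTP draft linked
above, but treat it (and the host) as illustrative rather than
authoritative:

```python
def upgrade_request(host, token="PTTH/0.9"):
    # An ordinary GET that offers to reverse the connection via the
    # Upgrade header, keeping the method uniform.
    return ("GET / HTTP/1.1\r\n"
            "Host: {0}\r\n"
            "Upgrade: {1}\r\n"
            "Connection: Upgrade\r\n"
            "\r\n").format(host, token).encode("ascii")

request = upgrade_request("example.com")
```

If the server agrees, it answers 101 (Switching Protocols) and the
roles on the connection reverse; if not, the request degrades to a
plain GET.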
Tim Williams wrote:
>
> > Standard has lots of meanings. To take the IETF viewpoint, for
> > example, atom is not a standard (only an RFC).
>

Yes, but we're only interested in how RFC 2616 (and the RFCs it refers
to) defines "standard" when it comes to media types -- which it defines
as RFCs, in the case of the "standards tree". So it is a fact, that
when using HTTP over the Internet, "standard media type" encompasses
Atom because it *is* an RFC (well, two RFCs), pointed to by the IANA
registry entry for the application/atom+xml identifier.

>
> FWIW, Roy also defined what he understands standard to mean too[1].
>

The context of that weblog entry is only to call out three major
corporations for releasing a spec and calling it a "standard" and the
industry rags reporting it as such, despite its lack of approval by any
standards body -- Roy's entire point there is that CMIS is an
industry-created specification, NOT a standard, unless and until it's
approved as an official standard by some standards body.

Registered media type identifiers support self-descriptive messaging.
But only standardized media types, meaning those which have been
approved by a standards body, support the uniform interface of HTTP on
the Internet -- and even then, only those which become ubiquitous
enough to be at least minimally deployed. If you're using an obscure
standard, you aren't getting any of the network-effect benefits of
REST like serendipitous re-use and anarchic scalability, which are
inherent in ubiquitous types.

Unless your system really is a unique snowflake requiring a new media
type, REST development is all about refactoring to fit the uniform
interface as it exists today. That way, you hit the sweet spot of the
Web that you miss entirely by sending unknown identifiers (registered
and/or standardized, or not). Serendipitous re-use and anarchic
scalability are only available on the HTTP Internet when you use
widely-deployed (what I call ubiquitous) types.
If you create a new media type, you engage in a waiting game to see if
it's ever adopted widely enough to gain any appreciable network-effect
benefits from intermediaries. Whereas the re-use of ubiquitous types
gives you immediate access to those benefits of REST... kinda the
point!

If you really do have a compelling need to create a new media type,
then you're likely not the only one with that problem, so go ahead and
post a spec and see who responds. If you're right not to use existing
ubiquitous types, then the waiting game will be short, and without
changing your system it goes from "REST mismatch" to "REST OK" at some
indeterminate point of uptake (the grey area of "increasing
RESTfulness" Roy refers to).

If you'd have been better off adapting your system to ubiquitous
types, you'll know it by the lack of uptake of your media type, and
the lack of network-effect benefits -- a standard that's only deployed
on your system and that of your corporate partners will never achieve
the network-effect benefits of the uniform interface: serendipitous
re-use and anarchic scalability.

Only those ubiquitous, standardized media types with registered
identifiers can be *expected* to yield REST's benefits right off the
bat. Straying away from ubiquitous types turns that solid expectation
of what happens with tried-and-true identifiers, into a total unknown.
Do you need the benefits of REST today? Then *don't* create custom
media types -- instead, refactor your system to leverage known knowns.

>
> He's also said that a particular specification need not necessarily be
> a standard[2]. I know this forum has a tendency towards black and
> white, but I take it Roy this[3] as our liberty to live in the gray a
> little:
>
> "The degree to which the format chosen is a commonly accepted
> standard is less important than making sure that the sender and
> recipient agree to the same thing, and that's all I meant by an
> evolving set of standard data types."
>

Once again, Roy is discussing REST in generic terms, while I am
translating that into hard-and-fast rules for instantiating REST over
the Internet via HTTP -- like only using IANA-registered types --
because that *is* a pragmatic black-and-white issue in said context.

    "Hence, both sender and recipient agree to a common registration
    authority (the standard) for associating media types with data
    format descriptions."

Since, on the Internet, "recipient" could mean anybody, if you're using
HTTP then you MUST agree to use IANA. Otherwise your messages aren't
self-descriptive to intermediaries (except those on your intranet).

If you only care about one class of recipient, say partner
corporations, and you don't care about serendipitous re-use or anarchic
scalability, then why aren't you using HTTPS... since you're basically
excluding intermediaries from being much besides dumb routers by
sending opaque identifiers over HTTP? In fact, why bother with REST at
all?

You cannot meet the self-descriptive messaging constraint using opaque
identifiers (meaning unregistered, or registered but not pointing to a
standard); it isn't even guaranteed that you'll meet the constraint
using ubiquitous identifiers, when using the Internet via HTTP. So the
starting point for meeting the self-descriptive messaging constraint
using HTTP over the Internet is the IANA registry, beyond any shadow
of a doubt.

-Eric
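The intermediary's position can be sketched in a few lines. The tiny
allow-list below merely stands in for the IANA registry (a real
intermediary would consult the actual registry data), and the vendor
identifier is hypothetical:

```python
# A stand-in for the common registration authority: an intermediary
# can only attach meaning to identifiers it can resolve here.
IANA_REGISTRY = {"text/html", "application/atom+xml", "image/gif",
                 "application/json"}

def media_type(content_type_header):
    # Strip parameters (e.g. "; charset=utf-8") and normalise case,
    # as media type names are case-insensitive.
    return content_type_header.split(";")[0].strip().lower()

def self_descriptive(content_type_header):
    return media_type(content_type_header) in IANA_REGISTRY

# A registered identifier is meaningful to any intermediary...
ok = self_descriptive("application/atom+xml; charset=utf-8")
# ...an unregistered one is opaque: the message isn't self-descriptive.
opaque = self_descriptive("application/x-vendor-claims")
```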
>
> If you really do have a compelling need to create a new media type,
> then you're likely not the only one with that problem, so go ahead and
> post a spec and see who responds. If you're right not to use existing
> ubiquitous types, then the waiting game will be short, and without
> changing your system it goes from "REST mismatch" to "REST OK" at some
> indeterminate point of uptake (the grey area of "increasing
> RESTfulness" Roy refers to).
>
> If you'd have been better off adapting your system to ubiquitous
> types, you'll know it by the lack of uptake of your media type, and
> the lack of network-effect benefits -- a standard that's only
> deployed on your system and that of your corporate partners will
> never achieve the network-effect benefits of the uniform interface:
> serendipitous re-use and anarchic scalability.
>

What I'm saying there is exactly what Roy means by: "There is
absolutely nothing wrong with that choice when it is made with eyes
wide open."

My problem is that folks on this list keep denying that such tradeoffs
exist, and recommending the creation of custom media types without
providing the knowledge necessary to make that choice with "eyes wide
open" about immediate vs. long-term benefits/consequences.

My advice remains: prototype your system using ubiquitous types --
only then will you have a clear picture of the limitations of those
types in the context of your problem. Without that knowledge you're
really just winging it on creating a new type, and ignoring the
cost-benefit analysis of fixing the inadequacies with a new type vs.
deploying a system that's "good enough" using ubiquitous types. Does
the cost of marshalling a standards effort and waiting to gain the
benefits of REST pale in comparison to the immediate benefits of REST
realized through the prototype's use of ubiquitous types?

You can't make such a choice with "eyes wide open" if you deny there's
even a REST mismatch to consider!

-Eric
>
> You can't make such a choice with "eyes wide open" if you deny there's
> even a REST mismatch to consider!
>

That Blinksale API thread does keep coming up, as folks try to convince
me that the entire REST community disagrees with me. So I just re-read
it all the way through, and this particular comment stood out:

http://tech.groups.yahoo.com/group/rest-discuss/message/6569

+1. It's interesting how, out of all the times that thread's been
quoted over the past four years, that post never seems to get
surfaced...

Why didn't Roy correct Mark on that, considering how they were both
active participants in the thread? Just askin'... ;-)

-Eric
--- In rest-discuss@yahoogroups.com, "Eric J. Bowman" <eric@...> wrote:
>
> >
> > You can't make such a choice with "eyes wide open" if you deny there's
> > even a REST mismatch to consider!
> >
>
> That Blinksale API thread does keep coming up, as folks try to convince
> me that the entire REST community disagrees with me. So I just re-read
> it all the way through, and this particular comment stood out:
>
> http://tech.groups.yahoo.com/group/rest-discuss/message/6569
>
> +1. It's interesting how, out of all the times that thread's been
> quoted over the past four years, that post never seems to get
> surfaced...
>
> Why didn't Roy correct Mark on that, considering how they were both
> active participants in the thread? Just askin'... ;-)
>
> -Eric
>
He does correct him. It's here: http://tech.groups.yahoo.com/group/rest-discuss/message/6594
Well technically, he corrected someone who was referencing another posting where Mark had made this assertion. I believe he is still correcting the same point.
Mark then asks for clarification:
http://tech.groups.yahoo.com/group/rest-discuss/message/6600
Which Roy provides:
http://tech.groups.yahoo.com/group/rest-discuss/message/6613
Unfortunately I think we are seeing evidence in this thread of what Roy said here:
"This is one of those gray areas of increasing RESTfulness that
will doubtless drive some people nuts."
I think both sides are making good points but there is no simple black and white answer.
Regards,
Andrew
On Aug 25, 2010, at 9:17 PM, wahbedahbe wrote:
>
> --- In rest-discuss@yahoogroups.com, "Eric J. Bowman" <eric@...> wrote:
>>
>>> You can't make such a choice with "eyes wide open" if you deny there's
>>> even a REST mismatch to consider!
>>
>> That Blinksale API thread does keep coming up, as folks try to convince
>> me that the entire REST community disagrees with me. So I just re-read
>> it all the way through, and this particular comment stood out:
>>
>> http://tech.groups.yahoo.com/group/rest-discuss/message/6569
>>
>> +1. It's interesting how, out of all the times that thread's been
>> quoted over the past four years, that post never seems to get
>> surfaced...
>>
>> Why didn't Roy correct Mark on that, considering how they were both
>> active participants in the thread? Just askin'... ;-)
>>
>> -Eric
>
> He does correct him. It's here:
> http://tech.groups.yahoo.com/group/rest-discuss/message/6594
>
> Well technically, he corrected someone

That's me :-)

> who was referencing another posting where Mark had made this
> assertion. I believe he is still correcting the same point.
> Mark then asks for clarification:
> http://tech.groups.yahoo.com/group/rest-discuss/message/6600
> Which Roy provides:
> http://tech.groups.yahoo.com/group/rest-discuss/message/6613

Funny. IIRC I think I had quoted that message very early on in this
thread :-) Good you dug that up again - I think Roy pretty much makes
everything very clear there.

Jan

> Unfortunately I think we are seeing evidence in this thread of what
> Roy said here:
> "This is one of those gray areas of increasing RESTfulness that
> will doubtless drive some people nuts."
>
> I think both sides are making good points but there is no simple
> black and white answer.
>
> Regards,
>
> Andrew
"wahbedahbe" wrote:
>
> Well technically, he corrected someone who was referencing another
> posting where Mark had made this assertion.
>

Actually, the context there is Mark's interview with Stefan, here:

http://www.infoq.com/articles/mark-baker-REST

>
> I believe he is still correcting the same point. Mark then asks for
> clarification [which] Roy provides:
>

By explaining the difference between the REST style and the Web
instantiation of REST. Mark's comments, given in the context of
feedback for an HTTP API on the Internet, were being misconstrued as
truisms of the style itself. Roy _isn't_ calling this wrong:

http://tech.groups.yahoo.com/group/rest-discuss/message/6569

>
> Unfortunately I think we are seeing evidence in this thread of what
> Roy said here: "This is one of those gray areas of increasing
> RESTfulness that will doubtless drive some people nuts."
>

Yes, I've been thinking about that, and would like to retract my prior
statement that Roy's being contradictory with his other statements
about there not being any degrees of REST -- you either meet the
constraints or you don't.

An approved-standard type with a registered identifier meets the
self-descriptive messaging constraint for HTTP on the Internet. How
RESTful the result is, depends on the proliferation of the standard --
due to the nature of REST as a network-based API, not a library-based
API.

>
> I think both sides are making good points but there is no simple
> black and white answer.
>

There is a black-and-white answer, and a gray area, for RESTful HTTP
on the Internet. The truism is that Content-Type MUST contain an
IANA-registered identifier that points to an *approved* standard.
Even a "Rand Paul" standard... he's the politician who's also a
board-certified ophthalmologist -- but only by his own certification
board, which has only ever certified him... nice work if you can get
it...

The gray area is a function of uptake, i.e. ubiquity.
Early adopters of Atom weren't violating the self-descriptive messaging constraint. These implementations have, over time and without needing to be changed, become one helluva lot more RESTful than they were initially. Why? Because the proliferation of Atom is what makes the interface uniform, over and above just being a standard. Nowadays, standard Atom libraries abound, it's in all the browsers that matter, there are feed readers and aggregation services -- all those things we call "network effects" which drive the serendipitous re-use and anarchic scalability we're presumably trying to harness with pragmatic REST development. -Eric
On 05.08.10 at 23:31, Eric J. Bowman wrote: > Juergen Brendel wrote: >> >>> What's important to remember here, is the importance of the initial >>> GET or HEAD request. "REST" APIs which "just know" how to pop a >>> stack are not hypertext driven. Whereas, say, Xforms allows you to >>> define a button, let's label it 'pop'. When the user selects 'pop' >>> as the state transition, hypertext informs the user agent to fetch >>> an Etag with HEAD, then uses that Etag to make a conditional >>> request. >> >> Out of curiosity (and since I'm not familiar with Xforms): How does >> the hypertext inform the user agent to fetch an Etag with HEAD? >> > > By specifying the HEAD method of a target URI, and using some Javascript > (a blackbox incurring a visibility penalty) to write that Etag into a > <header> element of the next submission, then calling that submission. > IOW, by applying the optional Code on Demand constraint. Is there a way in HTTP for the server to ensure that the client makes use of the ETag? Is it sufficient to respond with 412 (Precondition Failed), or maybe better 417 (Expectation Failed), if the client does not send an If-Match header? In general I wonder how a "delete button" for a browser should behave. Should it negotiate on the ETag, Last-Modified and so on or not? -billy.
On Aug 25, 2010, at 10:52 PM, meier@... wrote: > On 05.08.10 at 23:31, Eric J. Bowman wrote: >> Juergen Brendel wrote: >>> >>>> What's important to remember here, is the importance of the initial >>>> GET or HEAD request. "REST" APIs which "just know" how to pop a >>>> stack are not hypertext driven. Whereas, say, Xforms allows you to >>>> define a button, let's label it 'pop'. When the user selects 'pop' >>>> as the state transition, hypertext informs the user agent to fetch >>>> an Etag with HEAD, then uses that Etag to make a conditional >>>> request. >>> >>> Out of curiosity (and since I'm not familiar with Xforms): How does >>> the hypertext inform the user agent to fetch an Etag with HEAD? >>> >> >> By specifying the HEAD method of a target URI, and using some Javascript >> (a blackbox incurring a visibility penalty) to write that Etag into a >> <header> element of the next submission, then calling that submission. >> IOW, by applying the optional Code on Demand constraint. > > Is there a way in HTTP for the server to ensure that the client makes > use of the ETag? Is it sufficient to respond with 412 (Precondition > Failed), or maybe better 417 (Expectation Failed), if the client does not > send an If-Match header? Good question. It has been asked some time ago, but IIRC without a satisfying result. 412 is what I would do (417 is the answer to a use of the Expect header). > > In general I wonder how a "delete button" for a browser should behave. > Should it negotiate on the ETag, Last-Modified and so on or not? IMHO it should remember the ETag or Last-Modified and delete with a conditional request. However, it is unlikely that just anyone could DELETE any resource, so the user would usually 'own' the resource they DELETE anyhow. Jan > > -billy. > > > > ------------------------------------ > > Yahoo! Groups Links > > >
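Jan's suggestion can be sketched in code: require If-Match on DELETE and answer 412 otherwise. This is a minimal sketch only; the in-memory resource table, the path, and the ETag values are hypothetical stand-ins for a real server's storage.

```python
# Toy server-side check for a conditional DELETE, per the discussion
# above: no If-Match (or a stale one) gets 412 Precondition Failed.
RESOURCES = {"/documents/42": {"etag": '"v7"'}}

def handle_delete(path, headers):
    """Return an HTTP status code for a DELETE on `path`."""
    resource = RESOURCES.get(path)
    if resource is None:
        return 404                       # nothing there to delete
    if_match = headers.get("If-Match")
    if if_match is None:
        return 412                       # insist on a conditional request
    if if_match != resource["etag"]:
        return 412                       # client holds a stale ETag
    del RESOURCES[path]
    return 204                           # deleted, no content

print(handle_delete("/documents/42", {}))                    # 412 (no If-Match)
print(handle_delete("/documents/42", {"If-Match": '"v1"'}))  # 412 (stale ETag)
print(handle_delete("/documents/42", {"If-Match": '"v7"'}))  # 204 (deleted)
```

Note that this conflates "missing If-Match" and "stale If-Match" into one 412, which matches Jan's reading; a server could also distinguish the two in the response body.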
This is a newbie open-ended question. So if this is too generic I will read up more and come back. I am trying to understand how REST and metadata initiatives are related to each other. Why do you need Dublin Core/XMP etc.? Are these microformats?
> This is a newbie open-ended question. So if this is too generic I will read up more and come back. It is a big, open question, but I'll try to push you in the right direction. > I am trying to understand how REST and metadata initiatives are related to each other. Why do you need Dublin Core/XMP etc.? Are these microformats? On the surface, there is no direct connection except the Web architecture. REST is an architectural style for managing information resources. It has very little standard metadata associated with it other than what it inherits from HTTP. Dublin Core is a framework for describing publication metadata. It was produced by a bunch of librarians through the OCLC in Dublin, OH. It originally started around the Warwick Framework but was recast as the poster child for RDF along the way. Dublin Core is mostly used to describe authorship, subject designation, publication dates, etc. It is actually a more complicated framework that supports interoperability across metadata profiles, but for your purposes here, it is an RDF vocabulary for describing resources with standard metadata terms (dc:title, dc:subject, dc:creator, etc.) It would be used either directly as RDF:

http://bosatsu.net/index.html
http://purl.org/dc/terms/creator
http://purl.org/net/bsletten

This is a simple fact or "triple" connecting a document to an author (indicated by a 303 non-network-addressable resource) through the Dublin Core creator relationship. RDF statements follow a subject - predicate - object structure but can have many different serializations. In this case, both the subject and the relationship are global and resolvable: http://purl.org/dc/terms/creator This can resolve to both human-readable and machine-processable versions of the relationship. The data model allows you to use relationships from other vocabularies, so it makes it very easy to accumulate data from the Web. 
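As a concrete illustration, the triple above can be written out in N-Triples, the simplest line-based RDF serialization. This is just plain string formatting of the three URIs from the example; no RDF toolkit is assumed.

```python
# Format an RDF statement as a single N-Triples line:
#   <subject> <predicate> <object> .
def ntriple(subject, predicate, obj):
    return f"<{subject}> <{predicate}> <{obj}> ."

line = ntriple(
    "http://bosatsu.net/index.html",      # subject: the document
    "http://purl.org/dc/terms/creator",   # predicate: Dublin Core 'creator'
    "http://purl.org/net/bsletten",       # object: the author's WebID
)
print(line)
```

A real RDF library would also handle literals, blank nodes, and other serializations (RDF/XML, Turtle), but the subject/predicate/object shape is the same everywhere.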
People are now starting to weave RDF into XHTML, HTML, SVG, ODF, etc., generating it on the fly, exposing it natively as part of the Linked Data Project (http://linkedata.org). There are technologies that build on RDF such as SKOS and OWL to allow you to organize the terms and resources in new and interesting ways. You can then start to do certain types of inference over the data organized this way. One of the exciting parts is that you can organize other people's data the way you want to see it relatively easily. RDF and microformats serve similar goals (to describe documents and resources) but they have very different scopes. RDF has a data model associated with it and is largely intended to support global references and relationships. Microformats are intended to be simple, developer-friendly ways of encoding certain domains (events, people, reviews, organizations, etc.) The good news is that it is easy to convert microformats into a form that can be used with RDF, so it is all good metadata. XMP is based on an older version of RDF and was intended as a way of allowing Adobe's various partners to contribute tools in a document-processing framework and allowing them all to annotate a document, image, etc. with metadata (camera information, filters applied, etc.) It isn't super-widely used, but I think the adoption of RDFa by ODF is going to help spur interest here again. The excellent "RESTful Web Services Cookbook" and "REST in Practice" books touch upon the relationship between REST and Semantic Web technologies like RDF, but I am taking a much deeper dive in a book I am writing for Addison-Wesley called "Resource-Oriented Architectures: Building Webs of Data".
I feel this thread can take one more post :-) Just came across this very nice SO answer (first one) by Darrel: http://stackoverflow.com/questions/880881/rest-media-type-explosion Felt it somehow touches the issue we had here. Jan On Aug 25, 2010, at 10:02 PM, Jan Algermissen wrote: > > On Aug 25, 2010, at 9:17 PM, wahbedahbe wrote: > >> >> >> --- In rest-discuss@yahoogroups.com, "Eric J. Bowman" <eric@...> wrote: >>> >>>> >>>> You can't make such a choice with "eyes wide open" if you deny there's >>>> even a REST mismatch to consider! >>>> >>> >>> That Blinksale API thread does keep coming up, as folks try to convince >>> me that the entire REST community disagrees with me. So I just re-read >>> it all the way through, and this particular comment stood out: >>> >>> http://tech.groups.yahoo.com/group/rest-discuss/message/6569 >>> >>> +1. It's interesting how, out of all the times that thread's been >>> quoted over the past four years, that post never seems to get >>> surfaced... >>> >>> Why didn't Roy correct Mark on that, considering how they were both >>> active participants in the thread? Just askin'... ;-) >>> >>> -Eric >>> >> >> He does correct him. It's here: http://tech.groups.yahoo.com/group/rest-discuss/message/6594 >> Well technically, he corrected someone > > That's me :-) > >> who was referencing another posting where Mark had made this assertion. I believe he is still correcting the same point. >> Mark then asks for clarification: >> http://tech.groups.yahoo.com/group/rest-discuss/message/6600 >> Which Roy provides: >> http://tech.groups.yahoo.com/group/rest-discuss/message/6613 > > Funny. IIRC think I had quoted that message very early on in this thread :-) > > Good you dug that up again - I think Roy pretty much makes everything very clear there. > > Jan > >> >> Unfortunately I think we are seeing evidence in this thread of what Roy said here: >> "This is one of those gray areas of increasing RESTfulness that >> will doubtless drive some people nuts." 
Hi, in my opinion, server components in RESTful systems are typed in the sense that server components are selected for interaction based on such a 'type'. Examples:

- if the user goal is to buy a book, the networked application will be configured to use a server component that 'is an online book seller'
- if the user goal is to track updates to a bunch of blogs, the application will be configured to interact with server components that 'are atompub servers'
- if the user goal is to review open incident tickets, the application will be configured to interact with a server component that is a ticketing system

(not very accurate, but I hope you get the idea). I am having trouble making up my mind whether these 'types' exist a priori, in the absence of any notion of an application, or whether the 'types' only come into being because applications need some server-side component of some perceived type. This is especially interesting since the type of a server-side component can vary depending on the application. Suppose the application is to index a bunch of web sites; then the intended server-side component type is something like 'any'. It is, I think, also reasonable to ask whether there is some sort of primary application for a given server component that defines its type. Reactions to this brain dump are most welcome... Jan
Jan Algermissen wrote: > in my opinion, server components in RESTful systems are typed in the > sense that server components are selected for interaction based on such > a 'type'. > > Examples: > > - if the user goal is to buy a book, the networked application will be > configured to use a server component that 'is an online book seller' > - if the user goal is to track updates to a bunch of blogs the > application will be configured to interact with server components that > 'are atompub servers' > - if the user goal is to review open incident tickets the application > will be configured to interact with a server component that is a ticket > ing system. > > (not very accurate, but I hope you get the idea). > > I am having trouble to make up my mind whether these 'types' exist a > priori, in the absence of any notion of an application or whether the > 'types' only come into being because applications need some server side > component of some perceived type. > > This is especially interesting since the type of a server side > component can vary depending on the application. Suppose the > application is to index a bunch of web sites then the intended server > side component type is something like 'any'. > > It is, I think, also reasonable to ask whether there is some sort of > primary application for a given server component that defines its type. Type is an illusion. Bunchtype doubly so. It's difficult for 'types' based on goals to "come into being" since they do not, in fact, exist. Clients may perceive that they do exist, until Amazon morphs from an online book seller to an online marketplace for everything to an online rating system to an online community forum without ever flipping its 'type' bit from one state to another. Servers may believe that they exist, until a client tracks updates to a bunch of books instead of buying them. A goal is not merely the recognition of a domain, or even a range. "These apples are red" does not define a goal. 
"I will buy red apples" does, because it marks off the limits of an activity (purchasing) based on the range of a domain (red apples). Domain boundaries that have no relationship to activities are uninteresting; the set of red apples on its own is boring. The "primary application" could be defined as the set of state transition paths which the server developers desire most. But the degree to which clients actually follow those desired paths varies widely, and then the representations vary widely, and then the states and paths themselves vary widely, as the business changes. RESTful systems support this. Types and goals are merely momentary, mutable marketing. Robert Brewer fumanchu@...
Robert, On Aug 29, 2010, at 11:54 PM, Robert Brewer wrote: > Jan Algermissen wrote: >> in my opinion, server components in RESTful systems are typed in the >> sense that server components are selected for interaction based on > such >> a 'type'. >> >> Examples: >> >> - if the user goal is to buy a book, the networked application will be >> configured to use a server component that 'is an online book seller' >> - if the user goal is to track updates to a bunch of blogs the >> application will be configured to interact with server components that >> 'are atompub servers' >> - if the user goal is to review open incident tickets the application >> will be configured to interact with a server component that is a > ticket >> ing system. >> >> (not very accurate, but I hope you get the idea). >> >> I am having trouble to make up my mind whether these 'types' exist a >> priori, in the absence of any notion of an application or whether the >> 'types' only come into being because applications need some server > side >> component of some perceived type. >> >> This is especially interesting since the type of a server side >> component can vary depending on the application. Suppose the >> application is to index a bunch of web sites then the intended server >> side component type is something like 'any'. >> >> It is, I think, also reasonable to ask whether there is some sort of >> primary application for a given server component that defines its > type. > > Type is an illusion. Bunchtype doubly so. > > It's difficult for 'types' based on goals to "come into being" since > they do not, in fact, exist. Clients may perceive that they do exist, > until Amazon morphs from an online book seller to an online marketplace > for everything to an online rating system to an online community forum > without ever flipping its 'type' bit from one state to another. Servers > may believe that they exist, until a client tracks updates to a bunch of > books instead of buying them. 
> > A goal is not merely the recognition of a domain, or even a range. > "These apples are red" does not define a goal. "I will buy red apples" > does, because it marks off the limits of an activity (purchasing) based > on the range of a domain (red apples). Domain boundaries that have no > relationship to activities are uninteresting; the set of red apples on > its own is boring. > > The "primary application" could be defined as the set of state > transition paths which the server developers desire most. But the degree > to which clients actually follow those desired paths varies widely, and > then the representations vary widely, and then the states and paths > themselves vary widely, as the business changes. RESTful systems support > this. Types and goals are merely momentary, mutable marketing. Very well said! Like that a lot. Thanks. Jan > > > Robert Brewer > fumanchu@...
I have an API where a Document can be made private, shareable (e.g. via a blind URL), or published (it appears in lists of published documents).
The naive approach would be to update some "visibility" attribute. E.g.:
PUT /documents/guid { ..., visibility = "published" }
The problem I have with this is that the intent is lost. I see there being three quite distinct events: MakePrivate (or Hide), Share, and Publish.
Another approach would be to update, say, a state or visibility resource:
PUT /documents/guid/visibility { "published" }
This at least separates out the modification of visibility from the rest of the document, but it still hides the fact that I want to indicate that we "Published" the document, not that we changed some arbitrary state.
A perhaps kooky approach would be to model them as separate resources:
/documents/guid/published
/documents/guid/shared
Is there an idiomatic way to handle these situations? Do people generally just degenerate into PUTting values for every attribute?
Any insight or guidance greatly appreciated.
Cheers,
Simon
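[For what it's worth, the "kooky" sub-resource idea above can be sketched as a tiny dispatch: a PUT to each sub-resource URI performs one named transition, so the intent survives on the wire. The document store, IDs, and path names below are made up purely for illustration.]

```python
# Sketch: PUT /documents/{doc_id}/{subresource} performs a named
# visibility transition; any other sub-resource is a 404.
DOCUMENTS = {"guid-1": {"visibility": "private"}}

ALLOWED = {"private", "shared", "published"}   # the three distinct events

def handle_put(doc_id, subresource):
    """Return an HTTP status code for PUT /documents/{doc_id}/{subresource}."""
    doc = DOCUMENTS.get(doc_id)
    if doc is None:
        return 404                 # no such document
    if subresource not in ALLOWED:
        return 404                 # no such sub-resource
    doc["visibility"] = subresource
    return 204

print(handle_put("guid-1", "published"))   # 204
print(DOCUMENTS["guid-1"]["visibility"])   # published
```

The point of the sketch is only that the event name lives in the URI, so "Publish" is distinguishable from "change some arbitrary attribute" in server logs and intermediaries alike.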
>>>>> "Simon" == Simon <haruki_zaemon@...> writes:
Simon> A perhaps kooky approach would be to model them as separate
Simon> resources:
Simon> /documents/guid/published /documents/guid/shared
Simon> Is there an idiomatic way to handle these situations? Do
Simon> people generally just degenerate into PUTting values for
Simon> every attribute?
Simon> Any insight or guidance greatly appreciated.
I would go for the latter. But I think you're actually talking about
security here, as share seems to indicate you can have a list of
friends to share with???
But assuming just private and public: you have two collections, and
both return a set of documents.
You could do MOVE /private/guid /public/guid
as you want this atomic.
--
All the best,
Berend de Boer
Another possibility is to model this as three collections:

- /private-documents/
- /shared-documents/
- /published-documents/

You can POST URIs to one of these lists to add an existing document, DELETE to remove it. Your server can decide if a document can appear on more than one list at a time and respond to attempts to POST accordingly. mca http://amundsen.com/blog/ http://mamund.com/foaf.rdf#me On Sun, Aug 29, 2010 at 22:40, Berend de Boer <berend@pobox.com> wrote: >>>>>> "Simon" == Simon <haruki_zaemon@...> writes: > > Simon> A perhaps kooky approach would be to model them as separate > Simon> resources: > > Simon> /documents/guid/published /documents/guid/shared > > Simon> Is there an idiomatic way to handle these situations? Do > Simon> people generally just degenerate into PUTting values for > Simon> every attribute? > > Simon> Any insight or guidance greatly appreciated. > > I would go for the latter. But I think you're actually talking about > security here, as share seems to indicate you can have a list of > friends to share with??? > > But assuming just private and public: you have two collections, and > both return a set of documents. > > You could do MOVE /private/guid /public/guid > > as you want this atomic. > > -- > All the best, > > Berend de Boer > > > ------------------------------------ > > Yahoo! Groups Links > > > >
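[The three-collections idea sketched in miniature: membership in a list *is* the visibility state, so POSTing a document URI to the published list publishes it. Whether a URI may sit in several lists at once is the server's policy, as noted above; this toy version enforces exactly one list. All names are illustrative.]

```python
# Collection membership as state: three lists, POST to add, DELETE to remove.
LISTS = {"private": set(), "shared": set(), "published": set()}

def post_uri(list_name, doc_uri):
    """POST a document URI to a collection; moves it out of any other list."""
    if list_name not in LISTS:
        return 404
    for members in LISTS.values():
        members.discard(doc_uri)       # policy: exactly one list at a time
    LISTS[list_name].add(doc_uri)
    return 201

def delete_uri(list_name, doc_uri):
    """DELETE a document URI from a collection."""
    if list_name not in LISTS or doc_uri not in LISTS[list_name]:
        return 404
    LISTS[list_name].remove(doc_uri)
    return 204

post_uri("private", "/documents/guid")
print(post_uri("published", "/documents/guid"))   # 201
print(LISTS["private"])                           # set() -- moved, not copied
```

Note this bakes the "atomic move" Berend wanted into the POST itself, without needing a MOVE method.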
On 29 Aug, 2010, at 11:54 PM, Robert Brewer <fumanchu@...> wrote: > Types and goals are merely momentary, mutable marketing. Agreed. Nevertheless, they have an important role regarding the configuration of components when forming an application. When I want to search the Web, I am instructing the browser to connect to a server component by entering a URL in the location bar. When doing this, my intention is to connect to a server component that I presume will satisfy the intent I have. I am not just entering any URI. Usually I want to connect to a particular instance, but at times (as in the case of search) the presumed 'type' of the server component is more or less sufficient. This notion of type is the same notion of 'type' I use to organize my bookmarks. In that sense the bookmark categories in my browser constitute a set of 'server component types'. 'Type' of course meant as the presumed capability to satisfy some intent. I find it interesting to compare the role of 'type' when assembling applications in a REST-based architecture with the use of the names of components when assembling a unix command-line application. Suppose you configure a command line like this $ cat access.log | grep 404 > 404.log The choice of 'grep' is based on a 'type'-assumption connected to the name 'grep' (I have no idea which implementation of grep will actually be executed). Entering 'http://www.myshop.com' into the location bar when the intent is to purchase apples is conceptually the same binding of expectation (that the component will enable purchase of apples) to name (the URI). Jan > > > > Robert Brewer > fumanchu@... > > > ------------------------------------ > > Yahoo! Groups Links > > >
Ok. I was away and lost all the fun, got to the party late, and so... But I cannot resist putting my two cents into this.

1. I share most of Antonio's feelings in his rant. Sometimes I do not agree with his appreciations, but the points he made are valid. Actually, there are points I see on other threads as well.

2. REST, as I understand it, is a style. You can build it with whatever you want. We need to apply the "practicality" principle (do things in a practical way) and avoid the Golden Hammer Syndrome.

3. To me, standardization is more related to common understanding, of all participants in a networked application. The registering part is a way of doing it. There may be better or worse ways, all accomplishing the same goal.

4. REST is free to be used privately, or so I feel it. Only a few guys and me. There is nothing I can find that forces me to use the WEB in general and to take into account each and every one of the nodes that are connected to it.

5. Not all is written, yet. It is not true that the current "standards", official or ad hoc, are the only ones that will ever be, no new kids allowed.

6. If I use the web, that doesn't mean I'm entering the big and only one networked application, or that I should agree with all the existing nodes in the web on the way I send my messages (and the ones that will be in the future too!). And here (as with everything else) I can be wrong: I see the web more as a framework, a supporting implementation for trillions of networked, individual apps. There I can use others' services, create my own, be a global provider or simply build a small page for my family to see my baby's pictures. I see no practical use in having a full body of standards watching over me and punishing me if I post the pictures using my own, non-patented, 4D format. I see practical use in having that body register the common, most used, most practical, proven and approved formats we can use, so if I'm new in town, I can go there and pick the best for my app, if that one exists. 
And the best means the one that will support many of my quality-property needs, which MAY be: interoperability, readiness, compactness, legibility, security, etc.

7. Example? We work on a testing system that builds thousands of nodes, for a few minutes, to load test servers. That is a network of testing nodes, on the cloud, talking between them with proprietary, efficient formats. Once all that info is gathered, it is served in standard formats to clients. That is practical. RESTful? Maybe, but who cares? It is working and working fine.

Cheers! William Martinez. --- In rest-discuss@yahoogroups.com, mike amundsen <mamund@...> wrote: > > António: > > Not sure if this is exactly where you are heading, but here my POV: > > REST style is protocol-agnostic (not limited to HTTP) > REST style is not limited to Web or Internet usage (e. g. has > application for communication between autonomous devices in a closed > custom network) > REST style using HTTP over the Web is not limited to using the common > Browser for the "client" (e. g. desktop applications. console apps, > bots, etc.) > > Finally, the REST style is not the only interesting style for building > distributed network applications. > > mca > http://amundsen.com/blog/ > http://mamund.com/foaf#me > > Join me at #RESTFest 2010 Sep 17 & 18 > http://restfest.org > http://restfest.org/workshop > > ---------- Forwarded message ---------- > From: António Mota <amsmota@...> > Date: 2010/8/12 > Subject: Re: [rest-discuss] Atom feed vs. list of orders > To: "Eric J. Bowman" <eric@...> > Cc: Jan Algermissen <algermissen1971@...>, Peter Williams > <pezra@...>, Rest List <rest-discuss@yahoogroups.com> > > > I think there is a fundamental question that should be clarified > unequivocally by the experts on this list - I do have an opinion but > I'm not an expert so it's just that, an opinion. > > Is REST's realm - the problem-space where it should be applied, or where > it makes sense to apply it - exclusively the Web? 
Or it should, or it > can, be applied to the more general space of network-based software > architectures, thus including intranets (network-based apps that run > exclusively inside a company) and extranets (the use of private > networks and/or the public infrastructure of the internet to connect a > limited number of companies - considering limited does not equal > small)? > > Because if it is indeed only applicable to the Web - and please note > that Web != Internet - to making websites that are going to be used by > humans using a browser, most of the discussions here don't really make > sense. Otherwise, some people's assumptions when they deal with the > issues presented on this list are, to say the least, limited. Or plain > wrong, not to say the least. Now I do understand that one or the other > approach may have to do with each one's background; people whose work is > limited to the web may have a different point of view than others who > have worked across several areas and platforms over the years. > > For instance, what is the sense in saying "Media type identifiers > inform clients what codec or engine to use for deciphering the > payload" if my clients are *not* browsers? And also please someone > correct me if I'm wrong in saying "ubiquitous" != "standard"... > > While it is true that intermediaries don't look inside the msg to > perform their function, why is that true also for servers that are not > web-servers? If I have an application/mystuff+xml, all the > intermediaries understand what they need to understand - they read > this as application/xml. Why should the server be limited to this, > knowing that I, as an architect/designer, although I *do not* have > control over intermediaries, *do* have control over the server? The > coupling using application/xml or application/mystuff+xml is, from > this point of view, exactly the same. 
And I do see an advantage in using > "application/mystuff+xml" in content-negotiation *on the server side*, > because like that I can even put that content-negotiation in my > *server-side connector* - which I also control, thus relieving the > server from workload, improving balancing, implementing scalability > and effectively implementing layered design - but of course all this > is just implementation. > > Also, and all please excuse my rant but these things must be said... > why do some people on this list insist on treating people sometimes like > morons and sometimes like little kids in the classroom in front of the > master? Shouldn't we all consider the others as peers, even if the > knowledge of some is superior to that of others? Aren't we all > professionals? What's with the "you kids"? I'm trying not to say harsh > words, but do I have to publicize that I have developed software for the past > 30 years, 27 of them as a professional, 20 of them as an independent > consultant / contractor? That I started to do web sites in the > early 90's and I kept working on the web (although not exclusively) > until as recently as 2007, when I designed and implemented a web-site > login method using telephony - where you had to call a number to be > authorized to enter the site and you would be logged in until you hung > up the call. Should I also say that I did a web site for a chicken > delivery service - I didn't put that on my CV since I made it for a > friend and it was not a paid job - or actually it was, I got paid in > chickens... I even designed many animated GIFs myself... How's that > for publicity? I don't know, maybe it's just me that dislikes being > treated with this kind of disdain? > > Nevertheless, I really think that this list should clarify the > question I talked about above, because frankly if REST is only about the > web, I am plain wrong in my approach and it is better for me to > understand that now and move on to other technologies. 
And I think > others will benefit from that clarification also. > > > > On 12 August 2010 07:19, Eric J. Bowman <eric@...> wrote: > > > > > > > > Media type identifers inform clients what codec or engine to use for > > deciphering the payload. Nothing more. Clients are limited by how > > recent their codec/engine is for the given type, or how fully they > > implement that type. Media type identifiers are _not_ meant to say > > anything about the nature of the payload. Doing so would introduce > > coupling, violating the layered-system constraint, and result in more > > media types than anyone could possibly keep up with, defeating the > > whole purpose of self-descriptive messaging. > > > > > ------------------------------------ > > Yahoo! Groups Links >
If I'm using a 3-tier app with a RESTful, resource-oriented service in the middle tier accessed via HTTP, what is the best way to provide orthogonal resources to the UI tier?
An example of this would be a 'User' resource which has a field/property for a Country; now in the UI tier, when editing the User, I want to be able to pick from a drop-down and then update the resource via a PUT operation.
The question is how the Country list gets to the UI for editing the User: do I make two separate requests to the service, one for the Country resources and one for the User resource, or do I combine these into one request?
Ta
Ollie
Hey Ollie - On Thu, Sep 2, 2010 at 5:38 AM, <oliver.riches@...> wrote: > > > If I'm using a 3-tier app with a RESTful, resource-oriented service in > the middle tier accessed via HTTP, what is the best way to provide > orthogonal resources to the UI tier? > > An example of this would be a 'User' resource which has a field/property > for a Country. In the UI tier, when editing the User, I want to be able to > pick from a drop down and then update the resource via a PUT operation. > > The question is how does the Country list get to the UI for editing the > User? - do I make 2 separate requests to the service, one for the Country > resources and one for the User resource or do I combine these into 1 > request? > > Ta > > Ollie > However you want it, I suppose. I'm not sure there is a "right" answer here. You could have a resource that gets User(s) + list of countries or two separate resources for each. But I think you'll need to think about reuse and other business requirements that you might have. Eb
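One common hypermedia answer to Ollie's question is to keep the two resources separate but linked, so the UI can fetch (and cache) the country list independently of any one User. A minimal sketch in Python; every URI and field name here is invented for illustration, not taken from the thread:

```python
import json

# Hedged sketch: a 'User' representation that links to the Country list as a
# separate, cacheable resource instead of embedding it. All URIs and field
# names are made up for illustration.
user = {
    "name": "Ollie",
    "country": "GB",
    "links": {
        "self": "http://example.com/users/ollie",
        # The UI follows this link (once, then caches) to populate the drop-down.
        "countries": "http://example.com/countries",
    },
}

doc = json.dumps(user)
```

The trade-off Eb alludes to: embedding the list saves a round trip, while linking lets many User representations share one cached country list.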
Eb,
That's what I thought :)
Ollie Riches
RBS Global Banking & Markets
Office: +44 203 361 4071
________________________________
From: Eb [mailto:amaeze@...]
Sent: 02 September 2010 11:34
To: RICHES, Oliver, GBM
Cc: rest-discuss@yahoogroups.com
Subject: Re: [rest-discuss] Orthogonal resource concerns
Hey Ollie -
On Thu, Sep 2, 2010 at 5:38 AM, <oliver.riches@rbs.com<mailto:oliver.riches@...>> wrote:
If I'm using a 3-tier app with a RESTful, resource-oriented service in the middle tier accessed via HTTP, what is the best way to provide orthogonal resources to the UI tier?
An example of this would be a 'User' resource which has a field/property for a Country. In the UI tier, when editing the User, I want to be able to pick from a drop down and then update the resource via a PUT operation.
The question is how does the Country list get to the UI for editing the User? - do I make 2 separate requests to the service, one for the Country resources and one for the User resource, or do I combine these into 1 request?
Ta
Ollie
However you want it, I suppose. I'm not sure there is a "right" answer here. You could have a resource that gets User(s) + list of countries or two separate resources for each. But I think you'll need to think about reuse and other business requirements that you might have.
Eb
What do people consider best practices for versioning of RESTful web services? There seem to be two common approaches: stuff the versions into the URI somewhere and version the media types and use content negotiation. What are the pros and cons of each approach?
>>>>> "bryan" == bryan w taylor <bryan_w_taylor@...> writes:
bryan> What do people consider best practices for versioning of
bryan> RESTful web services? There seem to be two common
bryan> approaches: stuff the versions into the URI somewhere and
bryan> version the media types and use content negotiation. What
bryan> are the pros and cons of each approach?
I fail to understand this I think.
If you have a PUT of a resource with a field that is new, and the old
API without that field is still valid, you can somehow make up that
new field, right?
So why can the existing API not handle that new field?
So my point is that if the old API still has to work, you can always
differentiate between calls doing it the old/new way, and make up for
the differences transparently.
If things have to break, you can't use versioning anyway.
--
All the best,
Berend de Boer
--- In rest-discuss@yahoogroups.com, Berend de Boer <berend@...> wrote: > > >>>>> "bryan" == bryan w taylor <bryan_w_taylor@...> writes: > > bryan> What do people consider best practices for versioning of > bryan> RESTful web services? There seem to be two common > bryan> approaches: stuff the versions into the URI somewhere and > bryan> version the media types and use content negotiation. What > bryan> are the pros and cons of each approach? > > I fail to understand this I think. > > If you have a PUT of a resource with a field that is new, and the old > API without that field is still valid, you can somehow make up that > new field, right? I could default it to something reasonable when it's not present, like null, or "N/A" !?!? > So why can the existing API not handle that new field? Adding a field might be the kind of extensibility that does not result in a compatibility problem. Clients can ignore fields they don't understand and servers can default them if not provided. No problem. What about changes that cannot be backwards compatible? > So my point is that if the old API still has to work, you can always > differentiate between calls doing it the old/new way, and make up for > the differences transparently. Yeah, this "differentiate between calls doing it the old/new way" thing is what I'm asking about. > If things have to break, you can't use versioning anyway.
On Sep 3, 2010, at 11:16 AM, bryan_w_taylor wrote: > > > What about changes that cannot be backwards compatible? Create a new media type and use conneg. The conneg might redirect the new-version-understanding client to an entirely different URI space or it might simply return the new media type version. Jan > >> So my point is that if the old API still has to work, you can always >> differentiate between calls doing it the old/new way, and make up for >> the differences transparently. > > Yeah, this "differentiate between calls doing it the old/new way" thing is what I'm asking about. See above - use conneg. There is no API versioning issue because, guess what :-), the API is uniform. Jan > >> If things have to break, you can't use versioning anyway.
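Jan's "create a new media type and use conneg" suggestion can be sketched server-side. This is an illustrative Python fragment only: the media type names follow the thread, but the selection logic is simplified (it ignores q-values, which a real implementation would honor per RFC 2616):

```python
# Hedged sketch of server-side content negotiation between two versions of
# a custom media type. Ignores q-values for brevity; illustrative only.
V1 = "application/myformat+xml"
V2 = "application/myformat.v2+xml"

def select_variant(accept_header: str) -> str:
    # Reduce the Accept header to a list of bare media ranges.
    accepted = [part.split(";")[0].strip() for part in accept_header.split(",")]
    if V2 in accepted:
        return V2  # a new-version-understanding client asked for v2 explicitly
    if V1 in accepted or "*/*" in accepted:
        return V1  # old clients (and wildcard browsers) keep getting v1
    return ""      # nothing acceptable: the server would answer 406
```

Serving v1 to wildcard requests is a policy choice; a server could just as well prefer the newest version, which is exactly the browser behavior Bryan complains about later in the thread.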
On Fri, Sep 3, 2010 at 4:29 AM, bryan_w_taylor <bryan_w_taylor@...> wrote: > What do people consider best practices for versioning of RESTful web services? There seem to > be two common approaches: stuff the versions into the URI somewhere and version the media > types and use content negotiation. What are the pros and cons of each approach? http://barelyenough.org/blog/2008/05/versioning-rest-web-services/ http://tech.groups.yahoo.com/group/rest-discuss/message/13218 --tim
--- In rest-discuss@yahoogroups.com, Jan Algermissen <algermissen1971@...> wrote: > > > On Sep 3, 2010, at 11:16 AM, bryan_w_taylor wrote: > > > > What about changes that cannot be backwards compatible? > > Create a new media type and use conneg. > > The conneg might redirect the new-version-understanding client to an entirely different URI space or it might simply return the new media type version. Ok, so you've come down unequivocally on the side of new media type + content negotiation over version numbers in the URI. The pushback I often get on using content negotiation is that it's hard and not all clients can do it well. It's sort of the same reason why so many people give their variants URLs like http://example.com/mydoc.xml and http://example.com/mydoc.json . Browsers in particular don't give users good control of this. What do you think of this argument? In some sense, it seems like using http://example.com/v1/mydoc.xml vs http://example.com/v2/mydoc.xml are just a different manifestation of this. The different versions ARE representation variants of the same resource and giving them their own URLs allows clients with content negotiation impediments (CNIs?) the ability to cope. What's wrong with that?
OK, another vote on the side of content negotiation. Thanks for digging up that second link, btw, it was quite helpful. So was the first, but I'd seen it already. It seems that there are two approaches to versioning the media type: "application/myformat.v2+xml" vs "application/myformat+xml;version=2.0" I hadn't seen the latter. What do we call the part of the media type after the semicolon? Does using "version" here have semantics recognized by any RFC or spec? Can I assume clients understand that "application/myformat+xml;version=2.0" and "application/myformat+xml;version=3.0" are different? --- In rest-discuss@yahoogroups.com, Tim Williams <williamstw@...> wrote: > > On Fri, Sep 3, 2010 at 4:29 AM, bryan_w_taylor <bryan_w_taylor@...> wrote: > > What do people consider best practices for versioning of RESTful web services? There seem to > > be two common approaches: stuff the versions into the URI somewhere and version the media > > types and use content negotiation. What are the pros and cons of each approach? > > http://barelyenough.org/blog/2008/05/versioning-rest-web-services/ > http://tech.groups.yahoo.com/group/rest-discuss/message/13218 > > --tim >
> > In some sense, it seems like using http://example.com/v1/mydoc.xml vs http://example.com/v2/mydoc.xml are just a different manifestation of this. The different versions ARE representation variants of the same resource and giving them their own URLs allows clients with content negotiation impediments (CNIs?) the ability to cope. What's wrong with that? You'll end up in Link maintenance hell. Jan
On Fri, Sep 3, 2010 at 6:42 AM, bryan_w_taylor <bryan_w_taylor@...> wrote: > OK, another vote on the side of content negotiation. > > Thanks for digging up that second link, btw, it was quite helpful. So was the first, but I'd seen it already. > > It seems that there are two approaches to versioning the media type: > > "application/myformat.v2+xml" vs "application/myformat+xml;version=2.0" > > I hadn't seen the latter. What do we call the part of the media type after the semicolon? > Does using "version" here have semantics recognized by any RFC or spec? Media Type Parameter? http://www.w3.org/Protocols/rfc2616/rfc2616-sec3.html#sec3.7 http://tech.groups.yahoo.com/group/rest-discuss/message/15727 > Can I assume clients understand that "application/myformat+xml;version=2.0" and > "application/myformat+xml;version=3.0" are different? If your media type "application/myformat+xml" has documented support for the version parameter, I reckon you could assume that clients might program to it. --tim
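Tim's "media type parameter" answer can be demonstrated with the Python standard library. A small illustrative sketch; as Tim notes, the 'version' parameter carries no RFC-defined semantics of its own, so it only means something if the media type's own registration documents it:

```python
from email.message import Message

# Hedged sketch: pulling the 'version' parameter out of a media type using
# the stdlib MIME parser. Illustrative media type from the thread.
msg = Message()
msg["Content-Type"] = "application/myformat+xml; version=2.0"

media_type = msg.get_content_type()  # the type with parameters stripped
version = msg.get_param("version")   # '2.0', or None if absent
```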
--- In rest-discuss@yahoogroups.com, Jan Algermissen <algermissen1971@...> wrote: > > In some sense, it seems like using http://example.com/v1/mydoc.xml vs http://example.com/v2/mydoc.xml are just a different manifestation of this. The different versions ARE representation variants of the same resource and giving them their own URLs allows clients with content negotiation impediments (CNIs?) the ability to cope. What's wrong with that? > > You'll end up in Link maintenance hell. Because these aren't permalinks. I agree. But it seems like a painful choice to force clients to support content negotiation on my custom media types. What do you think if I try to split the baby and offer a permalink at http://example.com/mything that allows content negotiation if the client supports it and redirects to the variant that matches the requested media type, such as http://example.com/v2/mything . If they can't do content negotiation, or submit an invalid choice, we can redirect them to a document that enumerates all the versions. This actually gives me a URN (the permalink) to use to refer to the entity itself (the car, not the document about the car as TBL says). Representations that want to refer to my car should use its permalink URN.
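Bryan's "permalink that negotiates, then redirects" compromise can be sketched as a pure function from the Accept header to a redirect target. All URIs, media type names, and the choice of 302 below are illustrative assumptions, not from the thread:

```python
# Hedged sketch of a permalink that negotiates and redirects to a
# version-specific URI, falling back to an index of all versions.
VARIANTS = {
    "application/myformat+xml": "http://example.com/v1/mything",
    "application/myformat.v2+xml": "http://example.com/v2/mything",
}
INDEX = "http://example.com/mything/versions"  # enumerates all versions

def redirect_for(accept_header: str):
    """Map an Accept header to a (status, Location) pair for the permalink."""
    accepted = [p.split(";")[0].strip() for p in accept_header.split(",")]
    for media_type, uri in VARIANTS.items():
        if media_type in accepted:
            return 302, uri
    # Clients with "content negotiation impediments" land on the index.
    return 302, INDEX
```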
Comparing "application/myformat.v2+xml" vs "application/myformat+xml;version=2.0" I note that the former is shorter. Is there any offsetting advantage for the latter?
On Sep 3, 2010, at 1:08 PM, bryan_w_taylor wrote: > Comparing "application/myformat.v2+xml" vs "application/myformat+xml;version=2.0" > > I note that the former is shorter. Is there any offsetting advantage for the latter? The latter does not do what you want. The parameter might be ignored or get stripped on the way. Jan
On Sep 3, 2010, at 1:05 PM, bryan_w_taylor wrote: > > > --- In rest-discuss@yahoogroups.com, Jan Algermissen <algermissen1971@...> wrote: > >>> In some sense, it seems like using http://example.com/v1/mydoc.xml vs http://example.com/v2/mydoc.xml are just a different manifestation of this. The different versions ARE representation variants of the same resource and giving them their own URLs allows clients with content negotiation impediments (CNIs?) the ability to cope. What's wrong with that? >> >> You'll end up in Link maintenance hell. > > Because these aren't permalinks. I agree. But it seems like a painful choice to force clients to support content negotiation on my custom media types. Well, if your client says GET /foo HTTP/1.1 it better be speaking HTTP 1.1, eh? Jan > > What do you think if I try to split the baby and offer a permalink at http://example.com/mything that allows content negotiation if the client supports it and redirects to the variant that matches the requested media type, such as http://example.com/v2/mything . If they can't do content negotiation, or submit an invalid choice, we can redirect them to a document that enumerates all the versions. > > This actually gives me a URN (the permalink) to use to refer to the entity itself (the car, not the document about the car as TBL says). Representations that want to refer to my car should use its permalink URN.
On Fri, Sep 3, 2010 at 7:10 AM, Jan Algermissen <algermissen1971@...> wrote: > > On Sep 3, 2010, at 1:08 PM, bryan_w_taylor wrote: > >> Comparing "application/myformat.v2+xml" vs "application/myformat+xml;version=2.0" >> >> I note that the former is shorter. Is there any offsetting advantage for the latter? > > The latter does not do what you want. The parameter might be ignored or get stripped on the > way. Why? Have you seen mime parameters being stripped? and, "ignored" by whom (server? client?)? --tim
On Sep 3, 2010, at 1:28 PM, Tim Williams wrote: > On Fri, Sep 3, 2010 at 7:10 AM, Jan Algermissen <algermissen1971@...> wrote: >> >> On Sep 3, 2010, at 1:08 PM, bryan_w_taylor wrote: >> >>> Comparing "application/myformat.v2+xml" vs "application/myformat+xml;version=2.0" >>> >>> I note that the former is shorter. Is there any offsetting advantage for the latter? >> >> The latter does not do what you want. The parameter might be ignored or get stripped on the >> way. > > Why? Have you seen mime parameters being stripped? No, but I have heard intermediaries might do so. Since you never know what hangs out there in the middle, you should not rely on parameters in the Content-Type header. At least not special ones. > and, "ignored" by > whom (server? client?)? Intermediaries. I guess they might parse the mime information and then maybe not re-assemble it correctly before they pass it on. You'd have to ask ietf-http-wg list for answers by the pros I guess :-) Jan > > --tim
--- In rest-discuss@yahoogroups.com, Jan Algermissen <algermissen1971@...> wrote: > Well, if your client says > > GET /foo HTTP/1.1 > > it better be speaking HTTP 1.1, eh? Of course. It's more a question of whether I want to create a barrier to entry for clients that don't speak HTTP with great fluency. I'm imagining here an API over the web for the masses.
On Sep 3, 2010, at 1:34 PM, bryan_w_taylor wrote: > > > --- In rest-discuss@yahoogroups.com, Jan Algermissen <algermissen1971@...> wrote: > >> Well, if your client says >> >> GET /foo HTTP/1.1 >> >> it better be speaking HTTP 1.1, eh? > > Of course. It's more a question of whether I want to create a barrier to entry for clients that don't speak HTTP with great fluency. I'm imagining here an API over the web for the masses. Which user agents do you have in mind that do not support conneg? Jan
--- In rest-discuss@yahoogroups.com, Jan Algermissen <algermissen1971@...> wrote: > Which user agents do you have in mind that do not support conneg? Browsers. If you email me a link, intending to show me an issue in v1, I want to pull up the media type "application/myformat-v1+xml", but my stupid browser sends this: Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 The */* matches both v1 and v2, and the server is free to pick v2. So I can't pull up v1 in the browser. Also, we want to be tolerant of programmers who may not understand all this Accept header mumbo jumbo. We want the knowledge barrier to be as low as possible because it's good for business.
On Fri, Sep 3, 2010 at 7:29 AM, bryan_w_taylor <bryan_w_taylor@...> wrote: > --- In rest-discuss@yahoogroups.com, Jan Algermissen <algermissen1971@...> wrote: > >> Which user agents do you have in mind that do not support conneg? > > > Browsers. If you email me a link, intending to show me an issue in v1, I want to pull up the media type "application/myformat-v1+xml", but my stupid browser sends this: > Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 > > The */* matches both v1 and v2, and the server is free to pick v2. So I can't pull up v1 in the browser. > > Also, we want to be tolerant of programmers who may not understand all this Accept header mumbo jumbo. We > want the knowledge barrier to be as low as possible because it's good for business. I am not sure we really do want to allow people to mis-use HTTP-based services in a way that increases the maintenance costs of both the server and client. That being said, if you really feel such support is required, a media-type-based extension is a good way to provide it. For example, <http://example.com/foo.msv> for version 1 and <http://example.com/foo.msv2> for version 2. The use of extensions implies to the humans involved that these two resources are basically the same except for the format of their representations. Your server development platform might even provide this for free. (RoR does, for example.) It is worth noting that it is better, whenever possible, to extend the existing media type rather than creating a new incompatible version. However, that is not always possible. When a break with the past is required, creating new media types is the best available solution. Peter <http://barelyenough.org>
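Peter's extension-based variant URIs reduce to a simple extension-to-media-type table on the server. A sketch reusing his hypothetical `.msv`/`.msv2` extensions; the media type names and fallback behavior are assumptions for illustration:

```python
import os

# Hedged sketch of mapping URI extensions to media type versions, as
# Peter describes. Extensions and type names are invented for illustration.
EXTENSIONS = {
    ".msv": "application/myformat+xml",      # version 1
    ".msv2": "application/myformat.v2+xml",  # version 2
}

def media_type_for(path: str) -> str:
    ext = os.path.splitext(path)[1]
    # Unknown extensions fall back to a generic type here; a real server
    # might 404 or redirect instead.
    return EXTENSIONS.get(ext, "application/octet-stream")
```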
On Sep 3, 2010, at 3:29 PM, bryan_w_taylor wrote: > Also, we want to be tolerant of programmers who may not understand all this accepts header mumbo jumbo. We want the knowledge barrier needed to be as low as possible because it's good for business. Is that so?? Ah, gotcha: consulting business you must mean :-) What is your goal? Use REST because you want the guaranteed properties to be induced into your system? Or do you want to throw a couple of HTTP stacks at your system, tolerate that your devs interpret Web architecture as it fits their skills and then clean up the mess you likely end up with next year? I do not see the point in doing that. Why not use RMI or so in the first place? It's straight-forward for most problems at hand. Jan
Hi,
I have a requirement to send a bookmarkable url in an email to
clients. My application does not conform to principles of REST at this
time, but I want to know how to do this in a secure REST way so that
the browser need not log-in. Regeneration of password could be a use
case when the old one is lost. So in this case the user cannot login
until he answers some security questions but allowing a user to enter
a web application without logging in is a problem.
If I send a link in an email for regeneration of a password then that
link has to expire after a certain number of days. The URL should
become stale. How can I treat a particular URL as stale after a certain
period? I can store the time when the URL was generated in the
database, but all URLs will look the same in this case.
This seems to be a very common use case.
Thanks,
Mohan
"William Martinez Pomares" wrote: > > 2. REST, as I understand, is a style. You can build it with whatever > you want. We need to apply the "practicality" principle (do things in > a practical way) and avoid the Golden Hammer Syndrome. > No, you can't build whatever you want with REST. There are problem areas out there for which REST is not a solution. REST is what it is, not what it isn't. Where REST is impractical, the advice here always comes down to, "then don't use it." > > 4. REST is free to be used privately, or so I feel it. Only a few > guys and me. There is nothing I can find that forces me to use the > WEB in general and to take into account all and each of the nodes > that are connected to it. > Nobody has ever maintained otherwise. But, if you're sending HTTP over the Internet, well, that's the definition of the Web. If you don't care about anyone else re-using your system, then why bother with REST? If you do care, then you can't ignore all those other nodes out there, because REST is all *about* those other nodes out there. > > 5. Not all is written, yet. It is not true the current "standards", > official or ad hoc, are the only ones that will ever be, no new kids > allowed. > Nobody has ever maintained otherwise. > > 6. If I use the web, doesn't mean I'm entering the big and only one > networked application, that I should agree with all the existing > nodes in the web the way I send my messages (and the ones that will > be in the future too!). > Nobody has ever maintained otherwise. But, you cannot achieve the goals of REST without self-descriptive messaging. Which, on the Web, means agreeing to use IANA-registered standard media type identifiers. > > And here (as with everything else) I can be wrong: I see the web more > as a framework, a supporting implementation for trillions of > networked, individual apps. > Only a small subset of which may be considered RESTful. Being RESTful is neither a requirement for, nor a result of, using the Web. 
Nor is using the Web a requirement of REST. > > There I can use others' services, create my own, be a global provider > or simply build a small page for my family to see my baby's pictures. > I see no practical use to have a full body of standards watching over > me and punishing me if I post the pictures using my own, non-patent, 4D > format. > No standards body is going to punish you for this, nor is there any Web requirement to use IANA-registered standard media types. You simply won't achieve the goals of REST this way, because you're violating the self-descriptive messaging constraint. If the beneficial effects of self-descriptive messaging aren't required by your system, then why bother with a style that requires it? If the goals of your system are incongruous with the goals of REST, then just don't use REST -- there's nothing wrong with that! Just don't point to such a system and insist that it's REST, then respond to criticism by saying that REST is "just a style" and therefore its constraints may be freely ignored... because that's NOT REST. REST is design-by-constraint, not unbounded creativity. > > 7. Example? We work on a testing system that builds thousands of > nodes, for a few minutes, to load test servers. That is a network of > testing nodes, on the cloud, talking between them with proprietary, > efficient formats. > If the system's goals are incongruous to REST, then don't use REST. It is not a requirement of using the Web that you implement the constraints of REST. It *IS* a requirement of REST that your messaging be self-descriptive, which, on the Web, means IANA-registered standard types *are* required. > > Once all that info is gathered, it is served in standard formats to > clients. That is practical. RESTful? Maybe, but who cares? It is > working and working fine. > I certainly don't care. I don't know how folks get confused on this point. There is no best architectural style, only the architectural style that's best for your system. 
If your goals are incongruous with the goals of REST, then REST is clearly inappropriate to your needs. The only thing anyone here cares about is that the term REST be reserved for systems that actually follow the REST style -- which does NOT include HTTP-over-the-Internet systems which ignore the IANA registry, or use registered types that aren't actually standards. -Eric
Hello Eric! Ok, let's begin my answer by clarifying a couple of things: You are understanding things I may not have said, and I agree with most of your thoughts, but disagree with a small part. 1. "Nobody has ever maintained otherwise." Great, it seems everybody agrees, but I doubt it a little bit. Anyway, I was just stating what I think, not stating anyone else is saying the contrary, at least in this list. 2. I said: "REST, as I understand, is a style. You can build it with whatever you want" and you said: "No, you can't build whatever you want with REST.". Kind of different things, right? Same words, different order, different meaning. 3. You said: "Where REST is impractical, the advice here always comes down to, "then don't use it."". Agree completely. If you read all my posts, I have never said the contrary. 4. Disagree with several other things. a. REST is not only for reuse. If I don't care about reuse, I can still have a case for REST. b. REST is NOT about all the nodes out there, it is about my application. c. I may be wrong, but registering is not the only way of achieving self-descriptive language. Again, maybe Roy can tell me why, if it is, REST's only solution is registration. d. Not using IANA violates self-description? That is what all this discussion is about. e. I have never said a "style" means freely ignoring constraints. That is not the definition of style. Style means you can use other things, following the same principles. It means REST does not force the use of IANA and the Web. f. I'm not confused at all. I do promote the use of the right architectural style for the problem at hand. Read my posts. I have no problem if you tell me my system is not REST. That is what I mean here. Hope we are clear! William Martinez.
Just add a timestamp to the URL and sign/encrypt it (the timestamp) with a key known only to your web site. Then base64 encode the encrypted binary value and pass it as a query parameter. Check the timestamp on arrival. I haven't tried this tool, but it might be useful: http://www.codeproject.com/KB/aspnet/TamperProofQueryString.aspx There are lots of similar libraries. Google for secure query strings. /Jørn ----- Original Message ----- From: "Mohan Radhakrishnan" <radhakrishnan.mohan@...> To: <rest-discuss@yahoogroups.com> Sent: Saturday, September 04, 2010 8:02 PM Subject: [rest-discuss] Bookmarkable url's > Hi, > I have a requirement to send a bookmarkable url in an email to > clients. My application does not conform to principles of REST at this > time but I want to know how to do this in a secure REST way so that > the browser need not log-in. Regeneration of password could be a use > case when the old one is lost. So in this case the user cannot login > until he answers some security questions but allowing a user to enter > a web application without logging in is a problem. > > If I send a link in an email for regeneration of a password then that > link has to expire after a certain number of days. The url should > become stale. How can I treat a particular url stale after a certain > period ? I can store the time when the url was generated in the > database but all url's will look the same in this case. > > This seems to be a very common usecase. > > Thanks, > Mohan >
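Jørn's scheme can be sketched in a few lines of Python. This is an illustrative sketch only: the token layout ("user|expiry", base64, dot, hex HMAC), the secret, and the one-week default are all assumptions, and a real deployment would also bind the token to a single use:

```python
import base64
import hashlib
import hmac
import time

# Hedged sketch of a signed, expiring URL token. The secret must be known
# only to the web site; everything else about the layout is illustrative.
SECRET = b"known-only-to-the-web-site"

def make_token(user: str, ttl: int = 7 * 24 * 3600) -> str:
    payload = f"{user}|{int(time.time()) + ttl}".encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(payload).decode() + "." + sig

def check_token(token: str):
    """Return the user name if the token is genuine and fresh, else None."""
    try:
        b64, sig = token.rsplit(".", 1)
        payload = base64.urlsafe_b64decode(b64.encode())
    except ValueError:
        return None
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # signature mismatch: tampered or forged
    user, _, expires = payload.decode().rpartition("|")
    if int(expires) < time.time():
        return None  # the URL has gone stale
    return user
```

The resulting token goes in the query string of the emailed link (e.g. a hypothetical https://example.com/reset?token=...), and the server calls check_token before showing the security questions.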
"William Martinez Pomares" wrote: > > 2. I said: "REST, as I understand, is a style. You can build it with > whatever you want" and you said: "No, you can't build whatever you > want with REST.". Kind of different things, right? Same words, > different order, different meaning. > Sorry about that -- I read it as "you can build with it..." which isn't what you wrote. > > a. REST is not only for reuse. If I don't care reuse, I can still > have a case for REST. > But re-use is the entire basis for REST: "By applying the software engineering principle of generality to the component interface, the overall system architecture is simplified and the visibility of interactions is improved." You can't have a uniform interface, otherwise. > > b. REST is NOT about all the nodes out there, is about my > application. > REST is an explanation of an "Internet-scale distributed hypermedia system" known as the Web. While REST may be applied to other domains, if you're using the Web, you can't ignore all those other nodes out there, because those other nodes are what make the Web "Internet scale." If none of the nodes between producer and consumer are capable of understanding your messaging, then your application is *not* capable of achieving "Internet scale," therefore your application cannot achieve the goals of REST -- such an architecture may be considered REST-like, or inspired by REST, but cannot be considered REST. > > c. I may be wrong, but registering is not the only way of achieving > self-descriptive language. Again, maybe Roy can tell me why, if it > is, REST only solution is registration. > Please don't separate my point from the context in which it was made. On the Web, i.e. using HTTP over the Internet, there is simply no alternative to using IANA-registered, standardized types. This does *not* mean that REST's only solution to self-descriptive messaging is the IANA registry -- it *does* mean there is no other way to achieve self-descriptive messaging on the Web. 
> > d. Non using IANA violates self-description? > On the Web, if your system is following REST, you MUST use IANA- registered standardized types, otherwise your messaging is *not* self- descriptive for the "Internet-scale distributed hypermedia system" you're supposedly targeting by pursuing REST using HTTP over the Internet. > > Style means you can use other things, following the same principles. > True. > > It means do not force to use IANA and WEB for REST. > Nobody ever said it did. If you *are* using the Web, then yes, REST *does* force you to use IANA-registered, standardized types. Because that's the only mechanism HTTP defines, and that everybody agrees to, for the most important aspect of self-descriptive messaging -- exposing the processing model of the payload without having to introspect. -Eric
Hi again, Eric. I feel we are saying the same thing, only that I tend to talk in general terms and you in Web terms. I need to think about reuse -- interesting point, but I'm not totally convinced yet. I also need to revisit the idea of REST as an 'explanation of an "Internet-scale distributed hypermedia system" known as the Web'. I feel that was not the idea in the dissertation; it is a little more than just an explanation. But that may be a gray area. Will keep in touch. William. --- In rest-discuss@yahoogroups.com, "Eric J. Bowman" <eric@...> wrote: > > "William Martinez Pomares" wrote: > > > > 2. I said: "REST, as I understand, is a style. You can build it with > > whatever you want" and you said: "No, you can't build whatever you > > want with REST.". Kind of different things, right? Same words, > > different order, different meaning. > > > > Sorry about that -- I read it as "you can build with it..." which isn't > what you wrote. > > > > > a. REST is not only for reuse. If I don't care reuse, I can still > > have a case for REST. > > > > But re-use is the entire basis for REST: > > "By applying the software engineering principle of generality to the > component interface, the overall system architecture is simplified and > the visibility of interactions is improved." > > You can't have a uniform interface, otherwise. > > > > > b. REST is NOT about all the nodes out there, is about my > > application. > > > > REST is an explanation of an "Internet-scale distributed hypermedia > system" known as the Web. While REST may be applied to other domains, > if you're using the Web, you can't ignore all those other nodes out > there, because those other nodes are what make the Web "Internet scale." 
> > If none of the nodes between producer and consumer are capable of > understanding your messaging, then your application is *not* capable of > achieving "Internet scale," therefore your application cannot achieve > the goals of REST -- such an architecture may be considered REST-like, > or inspired by REST, but cannot be considered REST. > > > > > c. I may be wrong, but registering is not the only way of achieving > > self-descriptive language. Again, maybe Roy can tell me why, if it > > is, REST only solution is registration. > > > > Please don't separate my point from the context in which it was made. > On the Web, i.e. using HTTP over the Internet, there is simply no > alternative to using IANA-registered, standardized types. This does > *not* mean that REST's only solution to self-descriptive messaging is > the IANA registry -- it *does* mean there is no other way to achieve > self-descriptive messaging on the Web. > > > > > d. Non using IANA violates self-description? > > > > On the Web, if your system is following REST, you MUST use > IANA-registered standardized types, otherwise your messaging is *not* > self-descriptive for the "Internet-scale distributed hypermedia system" > you're supposedly targeting by pursuing REST using HTTP over the > Internet. > > > > > Style means you can use other things, following the same principles. > > > > True. > > > > > It means do not force to use IANA and WEB for REST. > > > > Nobody ever said it did. If you *are* using the Web, then yes, REST > *does* force you to use IANA-registered, standardized types. Because > that's the only mechanism HTTP defines, and that everybody agrees to, > for the most important aspect of self-descriptive messaging -- exposing > the processing model of the payload without having to introspect. > > -Eric >
On Sun, Sep 5, 2010 at 7:19 PM, Eric J. Bowman <eric@...> wrote: > "William Martinez Pomares" wrote: > >> >> It means do not force to use IANA and WEB for REST. >> > > Nobody ever said it did. If you *are* using the Web, then yes, REST > *does* force you to use IANA-registered, standardized types. Because > that's the only mechanism HTTP defines, and that everybody agrees to, > for the most important aspect of self-descriptive messaging -- exposing > the processing model of the payload without having to introspect. .. what? The "most important" aspects of self-descriptive messaging on the web are derived directly from the HTTP; i.e. the control data - particularly the headers that make caching possible like Cache-Control and Vary. Most xhtml representations on the web are served with a *wrong* (not even just opaque!) content-type, and yet that seems to have scaled ok. Cheers, Mike
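The caching machinery Mike points to never needs the entity body: Vary, together with the request headers it names, determines the cache key. A rough sketch of that selection, not any particular cache's implementation:

```python
# Sketch: an intermediary builds its cache key from control data alone
# (method, URI, and the request headers named by Vary). The entity body
# is never consulted. Simplified; real caches handle much more.

def cache_key(method, uri, request_headers, vary_header):
    """Build a cache key; request_headers is assumed lowercase-keyed."""
    fields = sorted(f.strip().lower() for f in vary_header.split(",") if f.strip())
    varying = tuple((f, request_headers.get(f, "")) for f in fields)
    return (method, uri, varying)

# Under 'Vary: Accept', two requests differing only in Accept get
# distinct cache entries:
k1 = cache_key("GET", "/conneg/", {"accept": "text/html"}, "Accept")
k2 = cache_key("GET", "/conneg/", {"accept": "application/atom+xml"}, "Accept")
```

This is why a cache can store and serve a representation whose media type it has never heard of -- which is the substance of Mike's objection.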
On Sep 5, 2010, at 9:48 PM, Mike Kelly wrote: > > The "most important" aspects of self-descriptive messaging on the web > are derived directly from the HTTP; i.e. the control data - > particularly the headers that make caching possible like Cache-Control > and Vary. Yep, exactly. (Do not forget the method and status codes :-) Jan > > Most xhtml representations on the web are served with a *wrong* (not > even just opaque!) content-type, and yet that seems to have scaled ok. > > Cheers, > Mike
The Self-Descriptive Messaging constraint applies as much to the representation format as it does to the transfer protocol; Section 5.1.2 spells it out. "Most important" is a pretty subjective attribute -- most important to/for what? But I think it's fair to say that self-descriptive representation formats are not unimportant as they yield key benefits (again see 5.1.2 -- but they're important for data encapsulation and evolvability). While a lot of xhtml is served as text/html instead of application/xhtml+xml, I don't think that discounts the value of the IANA registry. HTML and XHTML have pretty much the same semantics just different syntax. And the entire industry is aware that the two are often confused and code is written to cope with this. You couldn't serve XHTML as application/foo and expect things to always work even with the amount of content sniffing that goes on. (Well I suppose I haven't tried but I certainly wouldn't have high hopes...) It gets back to the key requirement being that the client, server and intermediaries are all able to understand each other. If you want to ensure the broadest interoperability across the entire web, then ya, you pretty much have to go with IANA registered standards. If your "system" is a smaller community on the open web then you are likely supplementing the IANA registry with another "registry" of sorts specific to the community (e.g. a site that defines some yet-to-be-standardized formats that the community agrees on). It all depends on the scope of your system. Andrew --- In rest-discuss@yahoogroups.com, Mike Kelly <mike@...> wrote: > > On Sun, Sep 5, 2010 at 7:19 PM, Eric J. Bowman <eric@...> wrote: > > "William Martinez Pomares" wrote: > > > >> > >> It means do not force to use IANA and WEB for REST. > >> > > > > Nobody ever said it did. If you *are* using the Web, then yes, REST > > *does* force you to use IANA-registered, standardized types. 
Because > > that's the only mechanism HTTP defines, and that everybody agrees to, > > for the most important aspect of self-descriptive messaging -- exposing > > the processing model of the payload without having to introspect. > > .. what? > > The "most important" aspects of self-descriptive messaging on the web > are derived directly from the HTTP; i.e. the control data - > particularly the headers that make caching possible like Cache-Control > and Vary. > > Most xhtml representations on the web are served with a *wrong* (not > even just opaque!) content-type, and yet that seems to have scaled ok. > > Cheers, > Mike >
Mike Kelly wrote: > > > Nobody ever said it did. If you *are* using the Web, then yes, REST > > *does* force you to use IANA-registered, standardized types. > > Because that's the only mechanism HTTP defines, and that everybody > > agrees to, for the most important aspect of self-descriptive > > messaging -- exposing the processing model of the payload without > > having to introspect. > > The "most important" aspects of self-descriptive messaging on the web > are derived directly from the HTTP; i.e. the control data - > particularly the headers that make caching possible like Cache-Control > and Vary. > No. REST _requires_ "a shared understanding of data types with metadata." If nobody understands your data type then the constraint is violated, regardless of any other control data that may be present, regardless of protocol. REST doesn't _require_ an HTTP response message with a payload (or not, considering HEAD) to have _any_ response headers besides Status, Content-Type, and one or the other of Content-Length or 'Transfer-Coding: chunked'. Gopher also supports self-descriptive messaging by using media type identifiers. This holds true for *any* protocol that has a concept of resource vs. representation. If the server doesn't specify a processing model, the entity is *meaningless* no matter how much other control data is present, unless clients and intermediaries resort to introspection -- which clearly violates REST. So, yes, the absolutely-beyond-any-shadow-of-doubt, most-important aspect of RESTful self-descriptive messaging, regardless of protocol, is the media type identifier, for any response where such identifier is required. This is yet another of those fundamental issues which SHOULD NOT require any debate, so yet again, I'm befuddled by the pushback. > > Most xhtml representations on the web are served with a *wrong* (not > even just opaque!) content-type, and yet that seems to have scaled ok. 
> Most XHTML is served as text/html, which is allowed; what folks get wrong are the rules for serving XHTML as text/html, detailed in XHTML 1.0 Appendix C. The text/html identifier triggers a processing model that works for the payload, regardless of its conformity to Appendix C. -Eric
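Eric's earlier claim about the minimal header set -- status line, Content-Type, and either Content-Length or chunked framing -- can be made concrete by assembling such a response by hand. A sketch; the payload is an arbitrary example:

```python
# Sketch: the minimal self-descriptive HTTP response described in this
# thread -- nothing beyond status, Content-Type, and framing. The body
# content is an arbitrary example.

body = "<p>hello</p>\n"
minimal_response = (
    "HTTP/1.1 200 OK\r\n"
    "Content-Type: text/html\r\n"
    f"Content-Length: {len(body.encode('utf-8'))}\r\n"
    "\r\n"
    + body
)
```

Everything a recipient needs to pick a processing model is present, even though no cache-control data is; cacheability then falls back to the implicit rules discussed further down the thread.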
"wahbedahbe" wrote: > > The Self-Descriptive Messaging constraint applies as much to the > representation format as it does to the transfer protocol; Section > 5.1.2 spells it out. "Most important" is a pretty subjective > attribute -- most important to/for what? > Most important to understanding the nature of the payload, which is what this discussion is all about. In any system with a notion of resource vs. representation, it's essential that producers be able to communicate the intended processing model to consumers. Sometimes I serve (X)HTML as text/plain, if my intent is that the entity be displayed rather than rendered. Absent any clear description of producer intent, payloads have zero meaning in any system where the resource/representation dichotomy exists (i.e. RESTful systems). > > But I think it's fair to say that self-descriptive representation > formats are not unimportant as they yield key benefits (again see > 5.1.2 -- but they're important for data encapsulation and > evolvability). > You lost me there. REST only requires that messaging be self-descriptive, not the data format itself. -Eric
> > This is yet another of those fundamental issues which SHOULD NOT > require any debate, so yet again, I'm befuddled by the pushback. > Please refer to Roy's MIME-respect w3c note... "In Web architecture, communication between agents consists of exchanging messages with predefined syntax and semantics: a shared expectation of how each message's control data and payload (representation data and metadata) will be interpreted by the recipient." http://www.w3.org/2001/tag/doc/mime-respect.html The key words being "predefined... shared expectation." Of _course_ this is the "most important aspect" of self-descriptive messaging, the entire style is predicated on it! -Eric
--- In rest-discuss@yahoogroups.com, "Eric J. Bowman" <eric@...> wrote: > > "wahbedahbe" wrote: > > > > The Self-Descriptive Messaging constraint applies as much to the > > representation format as it does to the transfer protocol; Section > > 5.1.2 spells it out. "Most important" is a pretty subjective > > attribute -- most important to/for what? > > > > Most important to understanding the nature of the payload, which is > what this discussion is all about. In any system with a notion of > resource vs. representation, it's essential that producers be able to > communicate the intended processing model to consumers. Sometimes I > serve (X)HTML as text/plain, if my intent is that the entity be > displayed rather than rendered. > > Absent any clear description of producer intent, payloads have zero > meaning in any system where the resource/representation dichotomy > exists (i.e. RESTful systems). Fair enough -- just think that Mike was using "most important" quite differently. > > > > > But I think it's fair to say that self-descriptive representation > > formats are not unimportant as they yield key benefits (again see > > 5.1.2 -- but they're important for data encapsulation and > > evolvability). > > > > You lost me there. REST only requires that messaging be self- > descriptive, not the data format itself. > > -Eric > It is fair for you to call me out on that -- I suppose I'm inventing terminology without explaining it, though I've used it before without getting questions so I assumed it was clear. I could say a "standardized and registered" format if it weren't for the fact that this is only required to meet the self-descriptive messaging constraint in certain contexts (as discussed at length in this thread). As I've been using it: a format used in the payload of a message (along with a corresponding media type value in the headers), is "self-descriptive" if it allows the entire message to be self-descriptive. i.e. 
it's a format with an assigned media type and specification capable of being understood (e.g. it's listed in the system's registry or equivalent) by all parties in the system. Open to other suggestions for terminology. Andrew
"William Martinez Pomares" wrote: > > XML and JSOn? Requesting a representation in the URL? > This is one of REST's finer points. Although there is no "late binding of representation to resource constraint" it doesn't hurt to think of it like there is. What's actually being violated is the identification of resources constraint. "Cool URIs Don't Change" isn't a normative REST reference, because URIs are opaque, but that's no reason not to follow it... Every resource of interest must be assigned a URI. Where multiple variant representations of the same resource exist, they need their own unique URIs, given their nature as discrete resources. I've harped on enough about assigning URIs to variants; what we have here is the opposite, and just as critical, problem -- failing to assign the negotiated URI. My online demo, initial conneg-free variant, has that very REST mismatch itself. Instead of resources whose representations vary by media type, what I have in effect is a different "channel" for each media type. Compare and contrast these two variants: http://charger.bisonsystems.net/xmltest/index.xht http://charger.bisonsystems.net/xmltest/index.axm Notice the filename extensions (my upcoming parameter example, and Google's use of the query string to select format, are bike-shed colors) have to be reflected in the links embedded in each representation -- why I say "channels". This aliasing impacts scalability -- each "channel" has links which are coupled to the media type of the response... which is totally un-cool. Thus, sharing a link commits the recipient to understanding a specific media type. Ideally, sharing a link allows the user agent to negotiate for whichever available media type it understands best. IOW, if I share a link that's coupled to application/xhtml+xml, it's useless to IE users, whereas sharing a link to a negotiated resource allows IE to receive application/xml: http://charger.bisonsystems.net/conneg/ No "channels" there! 
Each variant uses the _same_ links, allowing links to be shared independent of user-agent concerns. See the decoupling? William's concern is spot-on, and Google ought to know better. If I want to link to a representation of a specific media type, I use the URI I assigned to the variant: http://charger.bisonsystems.net/conneg/;type=xhtm http://charger.bisonsystems.net/conneg/;type=xml http://charger.bisonsystems.net/conneg/;type=txml http://charger.bisonsystems.net/conneg/;type=html (Yeah, I need to fix my XSLT so the "type menu", etc. works...) But, doing so is an edge case. The general case is to link to the negotiated resource, which allows the server to determine the best match for the requesting client. *That's* what the identification of resources constraint is all about -- not simply "use URIs," which seems to be a common misunderstanding of the constraint. If a system provides multiple variant representations of a resource, it is NOT REST if it fails to provide _both_ negotiated URIs _and_ variant URIs. Meeting the identification of resources constraint is what allows for "service discovery" -- try this in a shell: # curl -I -H 'Accept: application/atom+xml' http://charger.bisonsystems.net/conneg/ Without the negotiated resource, there would be no way to service Atom-only requests -- unless the variant URI is known in advance, the response must be 406 (perhaps with an Alternates: header). With both negotiated and variant URIs, the 301-redirect is possible -- voila, service discovery, courtesy of late binding of representation to resource. What's important is that there exist negotiated URIs for general-purpose linking; variant URIs as identifiers only, or for special-purpose linking; and the ability to vary selection headers for the purpose of service discovery at negotiated URIs. What's the theme in that last paragraph? 
Identification of resources, especially negotiated resources, which are one of the power features of REST we can tell Google's missing out on just by looking at their URI allocation scheme. I again point to the BBC website as an example of extensive conneg running at Web scale (if not Google scale). -Eric
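The Accept-driven redirect Eric demonstrates with curl can be sketched server-side: pick the variant URI that best matches the Accept header, then 301 to it. A deliberately simplified negotiator -- no wildcards, minimal q-value handling -- with a variant table loosely following the demo's URI scheme (the exact URIs here are assumptions):

```python
# Sketch of server-side content negotiation: map an Accept header onto
# a variant URI (the target of a 301), or None (-> 406). Wildcards and
# most q-value subtleties are deliberately omitted.

def negotiate(accept, variants):
    """variants: {media_type: variant_uri}."""
    best_uri, best_q = None, 0.0
    for part in accept.split(","):
        fields = [f.strip() for f in part.split(";")]
        mtype, q = fields[0], 1.0
        for f in fields[1:]:
            if f.startswith("q="):
                try:
                    q = float(f[2:])
                except ValueError:
                    pass  # malformed q-value: keep the default
        if mtype in variants and q > best_q:
            best_uri, best_q = variants[mtype], q
    return best_uri

# Hypothetical variant table in the style of the demo:
VARIANTS = {
    "application/xhtml+xml": "/conneg/;type=xhtm",
    "application/atom+xml": "/conneg/index.atom",
}
```

With both negotiated and variant URIs in place, the negotiated URI is what gets shared in links, and this selection step is what the 301 makes visible.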
I'm still zeroing in on having all this stuff work, but it's mostly
there save for a few last bugs, then I'll make the XSLT work right,
especially for IE (broken outright). Mostly I'm testing with 'curl -I'
and 'curl -I -b', but it's easier to get a feel for the system using
Firefox/IE + HttpWatch -- I couldn't have built this without HttpWatch.
Here's a list of the features I've added in to the demo under /conneg/:
Negotiated resources:
All ending in '/' or '/{#}' or '/comment-{#}' Vary: Cookie, Accept
/conneg/skin/csi.xsl Vary: Cookie
/conneg/skin/style.css Vary: Cookie
negotiating the fixed style references allows skin switching without
requiring the negotiated URIs to have variants for those, also
auth-based conneg pending for Xforms (coming soon) support resources
Pagination:
All ending in '/', i.e. /conneg/?page=2
All index.atom, i.e. /conneg/index.atom?page=2
page != # returns 400; no pagination is 404, ?page=1 301-cancels
requesting a page for other resources is (rather, should be) 400
Server-side XPointer service:
All ending in '/{#}.atom' allow //@thr:count, i.e.:
/conneg/2006/aug/09/11.atom?xptr=(//@thr:count)
returns application/json with entity = {#}
requesting an xptr for other resources, or unapproved xptr, is 400
Matrix URIs:
All negotiated resources except CSS/XSLT allow parameters in this order,
preceding query string:
/conneg/;view=(main|xfm|reset);type=(xhtm|xml|html|txml|txt|reset)
/conneg/;type=xhtm;view=xfm would first 301-redirect to
/conneg/;view=xfm;type=xhtm
or may be reset outright:
/conneg/;reset (uses 307 not 303)
(I call this 'curli', meaning command-line-interface for URLs)
Variant Cookies:
type= set after Accept-based negotiation, or can bypass Accept conneg
view= only set if matrix URI ;view=xfm
(view|type)=reset unsets one or the other or both cookies, or does
nothing if view=reset and no view cookie is present (the default)
curli=true forces a 200 response when a cookie is changed, instead of a
304 response, which is why negotiated resources must-revalidate
(returning to a cached resource may still require a reload; seems
some browsers ignore must-revalidate with Vary: Cookie when the cookie
changes, I'll be filing some bug reports)
Content-Location:
The only reason the view cookie isn't restricted to /conneg/skin/ is so
that C-L always reflects how to bookmark what the user is currently
viewing, i.e. /conneg/;view=main;type=xhtm?page=2&xptr=(foo) is the
well-formed syntax assuming some future combination of page and xptr.
Content-Disposition:
negotiated resources translate ;type= into filename extensions, all C-D
URIs are 404s...
---------------
Although, it is possible by design to directly dereference the variant
XSLT and CSS files driving skin-switching (the feature responsible for
all my cookie madness).
OK, folks, have at it! Let me know of any bugs you find, or attempt to
persuade me that my use of cookies (or anything else I'm doing) violates
REST somehow... bearing in mind those cookies have nothing to do with
storing application state on the server.
-Eric
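The matrix-URI canonicalisation described above -- ';view=' before ';type=', with mis-ordered requests 301-redirected to the canonical form -- can be sketched like so. Parameter names follow the demo; the parsing itself is my own simplification, not the demo's actual code:

```python
# Sketch: parse matrix-URI parameters and reorder them into the demo's
# fixed ';view=...;type=...' order. A server would 301-redirect any
# request whose path differs from canonical(path).

def parse_matrix(path):
    base, *segs = path.split(";")
    params = {}
    for seg in segs:
        if "=" in seg:
            k, v = seg.split("=", 1)
            params[k] = v
    return base, params

def canonical(path):
    """Rebuild the path with parameters in canonical order."""
    base, params = parse_matrix(path)
    tail = "".join(f";{k}={params[k]}" for k in ("view", "type") if k in params)
    return base + tail

# /conneg/;type=xhtm;view=xfm  -->  /conneg/;view=xfm;type=xhtm
```

Canonicalising before anything else keeps each logical resource at exactly one URI, which is what makes the Vary-based caching described above workable.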
Also, service discovery, as discussed here: http://tech.groups.yahoo.com/group/rest-discuss/message/16450 Which will eventually be implemented at the domain root, the whole /xmltest/ vs. /conneg/ dichotomy is an affectation of the demo. But, it will work for any negotiated resource. The assumption is that any resource only has one Atom variant, otherwise this method of service discovery doesn't work any better than using a hideous well-known URI. (The purist in me thinks 'Accept: text/robots' should 301-redirect to my robots.txt -- or whatever else I choose to call it -- file, which may be on a path 'shorter' than the requested hierarchy.) BTW, I'm not claiming my demo is entirely RESTful, I just meant to say I see nothing unRESTful about my use of cookies. In addition to the REST mismatch I already mentioned (XBEL), I've added another -- which I'll get around to asking about if nobody gets it, as I'm currently stumped. -Eric
On Sun, Sep 5, 2010 at 10:47 PM, Eric J. Bowman <eric@...> wrote: > Mike Kelly wrote: >> >> > Nobody ever said it did. If you *are* using the Web, then yes, REST >> > *does* force you to use IANA-registered, standardized types. >> > Because that's the only mechanism HTTP defines, and that everybody >> > agrees to, for the most important aspect of self-descriptive >> > messaging -- exposing the processing model of the payload without >> > having to introspect. >> >> The "most important" aspects of self-descriptive messaging on the web >> are derived directly from the HTTP; i.e. the control data - >> particularly the headers that make caching possible like Cache-Control >> and Vary. >> > > No. REST _requires_ "a shared understanding of data types with > metadata." If nobody understands your data type then the constraint is > violated, regardless of any other control data that may be present, > regardless of protocol. I am aware of that. Unfortunately, that says nothing about the scope/ubiquity of that shared understanding, or how that should be established and controlled over time. Self-descriptiveness/visibility is a spectrum in which systems land according to many different components, one of which is ubiquity and/or standardisation of media types. Suggesting that HTTP messages that contain non-standard media types entirely violate the self-descriptiveness constraint to a degree that they cannot be considered "RESTful" is ridiculous, Eric. Such messages still benefit from a more than significant proportion of existing Web infrastructure such as client and server libraries, as well as the best example of intermediate processing of self-descriptive messages; _caching_. > REST doesn't _require_ an HTTP response message > with a payload (or not, considering HEAD) to have _any_ response headers > besides Status, Content-Type, and one or the other of Content-Length or > 'Transfer-Coding: chunked'. Really? Caching is a REST constraint. 
It is a form of layering that relies on self-descriptiveness of messages. http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm#sec_5_1_4 "Cache constraints require that the data within a response to a request be implicitly or explicitly labeled as cacheable or non-cacheable" A lack of cache-control headers is an implicit form of self-descriptiveness, and fulfills a fundamental REST constraint. So it is quite important, actually. > Gopher also supports self-descriptive messaging by using media type > identifiers. This holds true for *any* protocol that has a concept of > resource vs. representation. If the server doesn't specify a > processing model, the entity is *meaningless* no matter how much other > control data is present, unless clients and intermediaries resort to > introspection -- which clearly violates REST. No.. the server *can* specify a processing model - by specifying a custom type identifier (i.e. non-IANA-registered) in the Content-Type header. Why should an intermediary like a cache or a reverse proxy router need to introspect the entity at all? What are these elusive, yet apparently invaluable and ubiquitous, web intermediaries that process the entity body (and therefore *actually rely* on its self-descriptiveness)?! > So, yes, the absolutely-beyond-any-shadow-of-doubt, most-important > aspect of RESTful self-descriptive messaging, regardless of protocol, > is the media type identifier, for any response where such identifier is > required. This is yet another of those fundamental issues which SHOULD > NOT require any debate, so yet again, I'm befuddled by the pushback. The value you are (correctly) attributing to the media type identifier is derived from its incorporation into a given protocol, such as HTTP, for the purposes of negotiating a representation - its role is as an identifier first and foremost, and it makes very little difference whether that identifier is opaque or otherwise. 
This is why clients and servers can happily negotiate non-standard media type representations via HTTP, and intermediaries can cache them. Cheers, Mike
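Mike's point about implicit labeling can be illustrated as the cacheability decision an intermediary makes from control data alone, loosely following the HTTP/1.1 rules. A simplification -- real caches consider far more directives and headers:

```python
# Sketch: decide cacheability from control data only, in the spirit of
# RFC 2616 sec. 13. Greatly simplified: explicit labels win, otherwise
# certain status codes are implicitly cacheable by default.

def is_cacheable(status, headers):
    cc = {d.strip().lower()
          for d in headers.get("Cache-Control", "").split(",") if d.strip()}
    if "no-store" in cc or "private" in cc:
        return False  # explicitly labeled non-cacheable
    if "public" in cc or any(d.startswith("max-age=") for d in cc) or "Expires" in headers:
        return True   # explicitly labeled cacheable
    # Implicit labeling: a response with no cache-control data at all is
    # still "labeled", by status code.
    return status in {200, 203, 206, 300, 301, 410}
```

Note that nowhere in this decision does the media type appear -- which is precisely why the two sides of this thread keep talking past each other about "most important".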
After I specifically asked for my name to be removed from CC, since I'm no longer interested in this discussion or even this list (and also considering your insults to me in your last message), it strikes me that you are trying to drag me into this again, in your usual style, with a completely out-of-context answer with no meaning whatsoever, or at least a ridiculous meaning if seen in the original context. I can only understand your message as another attempt to start another flame war that allows you to say ridiculous things without the burden of justifying them or answering questions about them. So let me be clear once again. I am not interested in your opinions. In fact, to be perfectly clear, I think you are a fraud and an obstacle to learning REST on this list, which is why I'm going to unsubscribe from it. Note that I don't think you're intentionally a fraud, I still believe in your good will, but I do consider that your limited experience and limited capacity for thinking prevent you from understanding other use cases for REST besides the limited ones you are used to. I already lost time and patience following one of your supposed pieces of "advice" - the one that all representations should have a URI, for which there is no foundation at all, besides one single mention in RFC 2616 taken completely out of context. So I have had enough of your style of "advice" - taking things out of context and wrongly generalizing them, as is the case with this IANA mumbo-jumbo. So, again, please don't try to drag me into useless flames, please don't insult me again, and please don't give me "advice" that I do not want. I have nothing more to do on this list. 2010/8/25 Eric J. Bowman <eric@...> > > > > > I can't take much help from all this, quite the contrary > > unfortunately. > > > > Here's a piece of advice, whether you want it or not. This 'LISTEN' > method of yours sounds like a reverse GET. 
The RESTful solution is not > to violate the uniform interface by creating an unspecified new method > nobody has ever heard of. > > The RESTful solution would be to support an evolving standard like > rHTTP, which makes use of HTTP's 'Upgrade' facility to reverse the > direction of the transaction. The method would still be GET, for a > uniform interface. > > -Eric > > http://tools.ietf.org/html/draft-lentczner-rhttp-00
--- In rest-discuss@yahoogroups.com, Peter Williams <pezra@...> wrote: > > On Fri, Sep 3, 2010 at 7:29 AM, bryan_w_taylor <bryan_w_taylor@...> wrote: > > --- In rest-discuss@yahoogroups.com, Jan Algermissen <algermissen1971@> wrote: > > > >> Which user agents do you have in mind that do not support conneg? > > > > > > Browsers. If you email me a link, intending to show me an issue in v1, I want pull up the media type "application/myformat-v1+xml", but my stupid browser sends this: > > Accept:text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 > > > > The */* matches both v1 and v2, and the server to pick v2. So I can't pull up v1 in the browser. > > > > Also, we want to be tolerant of programmers who may not understand all this accepts header mumbo jumbo. We > > want the knowledge barrier needed to be as low as possible because it's good for business. > > I am not sure we really do want to allow people to mis-use HTTP base > services in a way that increases the maintenance costs of both the > server and client. That being said, if you really feel such support > is required a media type based extension is a good way to provide it. > For example, <http://example.com/foo.msv> for version 1 and > <http://example.com/foo.msv2> for version 2. The use of extensions > implies to the humans involved that these two resources are basically > the same except for the the format of their representations. Your > server development platform might even provide this for free. (RoR > does, for example.) > > It is worth noting that it is better, whenever possible, to extend the > existing media type rather than creating a new incompatible version. > However, that is not always possible. When a break with the past is > required creating media types is the best available solution. > > Peter > <http://barelyenough.org> > I disagree that this is a misuse of HTTP. 
HTTP has a concept of a variant which is a representation of a resource that may not be subject to content negotiation. http://example.com/foo.msv http://example.com/foo.msv2 are variants. So are: http://example.com/msv/foo http://example.com/msv2/foo or for that matter: http://example.com/v1/foo http://example.com/v2/foo How you expose your variant is simply an implementation detail. -jOrGe W.
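Jorge's point -- that how a variant is exposed is an implementation detail -- can be sketched as a routing table that maps any of those URI shapes onto the same (resource, version) pair. The media type flavours echo Bryan's hypothetical 'application/myformat' example; the routing itself is my own sketch:

```python
import re

# Sketch: several URI shapes, one underlying resource. Each pattern
# resolves to the same (name, version) pair, so the choice among them
# is purely an implementation detail, as Jorge argues.
PATTERNS = [
    re.compile(r"^/(?P<name>\w+)\.msv(?P<v>\d*)$"),  # /foo.msv, /foo.msv2
    re.compile(r"^/msv(?P<v>\d*)/(?P<name>\w+)$"),   # /msv/foo, /msv2/foo
    re.compile(r"^/v(?P<v>\d+)/(?P<name>\w+)$"),     # /v1/foo, /v2/foo
]

def resolve(path):
    """Return (resource_name, version) or None for an unknown shape."""
    for pat in PATTERNS:
        m = pat.match(path)
        if m:
            version = int(m.group("v") or "1")  # bare '.msv' means v1
            return m.group("name"), version
    return None
```

Whichever shape the server advertises, clients see ordinary opaque URIs; only the server's router knows they converge.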
Mike Kelly wrote: > > > No. REST _requires_ "a shared understanding of data types with > > metadata." If nobody understands your data type then the > > constraint is violated, regardless of any other control data that > > may be present, regardless of protocol. > > I am aware of that. Unfortunately, that says nothing about the > scope/ubiquity of that shared understanding, or how that should be > established and controlled over time. > Right -- REST defers the specifics to the protocol. HTTP explicitly defines the IANA registry and "discourages" unregistered types. Since no other mechanism is defined for the Web, REST's meaning in that context is clear -- you MUST use some standardized media type that exists in the IANA registry, otherwise nobody will understand your data type. Ubiquity is neither established nor controlled over time -- scalability is anarchic, and re-use is serendipitous. If the anal-retentive sysadmins out there haven't explicitly allowed the type you're using, it will at best be ignored. You can't control them, but you can control the ubiquity of the identifiers you choose, giving your traffic the best chance to scale. > > Self-descriptiveness/visibility is a spectrum in which systems land > according to many different components, one of which is ubiquity > and/or standardisation of media types. > Self-descriptiveness is simple. Any standard pointed to by any registered identifier will do, on the Web, as far as REST is concerned. The fact that anybody can follow their nose from the registry to the standard is what makes for visibility, the ubiquity of the media type has no impact there. HTML is no more visible where messaging is concerned, than a Rand Paul standard with a registered identifier. > > Suggesting that HTTP messages that contain non-standard media types > entirely violate the self-descriptiveness constraint to a degree that > they cannot be considered "RESTful" is ridiculous, Eric. 
No, what's ridiculous is banking on none of the sysadmins whose intermediaries your traffic traverses believing any of the 1,001 reasons not to cache, or even to block, unknown media types.

Don't separate my point from its context -- what I'm saying applies to the Web. The only place *anyone* operating an intermediary knows to look in order to figure out what standard a type is associated with is the IANA registry, by definition. "Visibility" doesn't mean you can look it up on Google, it means _follow the spec_. Saying otherwise is tilting at windmills.

> Such messages still benefit from a more than significant proportion
> of existing Web infrastructure such as client and server libraries,
> as well as the best example of intermediate processing of
> self-descriptive messages; _caching_.

You're assuming that all intermediaries your traffic traverses are merely caches? What about antivirus gateways, which aren't limited to SMTP traffic anymore? Or a plain old SOCKS gateway in a corporate firewall, or any other proxy, not to mention the thriving market for ISP/corporate accelerator products which do things like predictive DNS lookups and link prefetching?

The deployed infrastructure of the Web includes all sorts of devices. One thing holds true for *all* of them: configuration by media type. Sysadmins have been known to block all Java or JavaScript traffic over HTTP when hardening networks or systems; how can you bank on them allowing unknowns when they're filtering by knowns?

Even if we do confine the discussion to caches, why bother caching any but the ubiquitous types that make up 99% of your traffic? Squid, for example, has the configuration directives 'req_mime_type' and 'rep_mime_type', which are definable Access Control Lists. I've never heard of an intermediary which lacks such a configuration option. You're making huge assumptions if you think your opaque-identifier responses are cached, or even allowed to pass.
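For concreteness, the Squid directives named above are used in ACLs like the following. This is a hypothetical hardening sketch: 'req_mime_type' and 'rep_mime_type' are real Squid ACL types, but the particular blocking policy shown is illustrative only.

```
# squid.conf sketch (illustrative policy, not a recommendation):
# deny replies with opaque/suspect types, per the tunneling concern above.
acl SuspectTypes rep_mime_type -i ^application/octet-stream$ ^application/x-
http_reply_access deny SuspectTypes

# A stricter variant allow-lists reply types instead, at the cost of
# also blocking anything that omits Content-Type entirely:
acl KnownTypes rep_mime_type -i ^text/ ^image/ ^application/xhtml\+xml
# http_reply_access deny !KnownTypes
```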
O'Reilly's "Squid" book even gives an example of blocking unknown types to prevent tunneling. I'd prefer to follow REST, where I'm banking on the ubiquity of my identifier to ensure caching, not on the presence of any control data which intermediaries may never otherwise consider. I don't want some end-user's paranoid antivirus gateway blocking all outgoing MIME types except for a few ubiquitous identifiers on an allow list. I want to take advantage of the deployed infrastructure of accelerators with predictive lookahead algorithms for a small number of ubiquitous identifiers. And so on and so forth -- of *course* this is what REST advocates on the Web; this still shouldn't be controversial at all.

> > REST doesn't _require_ an HTTP response message
> > with a payload (or not, considering HEAD) to have _any_ response
> > headers besides Status, Content-Type, and one or the other of
> > Content-Length or 'Transfer-Encoding: chunked'.
>
> Really? Caching is a REST constraint. It is a form of layering that
> relies on self-descriptiveness of messages.

Careful -- there's no REST constraint being broken by setting certain responses to be uncacheable, or by excluding cache-control headers. Gopher doesn't meet this constraint, because Gopher traffic is inherently uncacheable. There's nothing inherently uncacheable about HTTP traffic (except for responses that are explicitly defined as uncacheable), allowing the system to decide what's best on a resource-by-resource basis. The nature of the deployed architecture is that such traffic may just get cached anyway, the only solution to which is to use HTTPS.

> "Cache constraints require that the data within a response to a
> request be implicitly or explicitly labeled as cacheable or
> non-cacheable"
>
> A lack of cache-control headers is an implicit form of
> self-descriptiveness, and fulfills a fundamental REST constraint. So
> it is quite important, actually.

This is a protocol concern, not a REST concern.
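The "implicitly or explicitly labeled" distinction quoted above can be sketched as a tiny classifier over response headers. This is a deliberate oversimplification for illustration; real HTTP caching logic (RFC 2616, section 13) has many more cases, and the function below is hypothetical.

```python
def cache_label(headers):
    """Classify a response's cacheability label from its headers.

    A simplified sketch of 'implicitly or explicitly labeled', not
    full RFC 2616 logic (no Vary, no validators, no status codes).
    """
    cc = headers.get("Cache-Control", "").lower()
    if "no-store" in cc or "no-cache" in cc:
        return "explicitly non-cacheable"
    if "max-age" in cc or "public" in cc or "Expires" in headers:
        return "explicitly cacheable"
    # No cache-control metadata at all: the label is implicit, and
    # what a given cache does with it is a matter of configuration.
    return "implicitly labeled"

print(cache_label({"Cache-Control": "no-store"}))      # explicitly non-cacheable
print(cache_label({"Cache-Control": "max-age=3600"}))  # explicitly cacheable
print(cache_label({}))                                 # implicitly labeled
```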
I agree with you that a lack of any cache-control headers in an HTTP response means it is implicitly labeled as uncacheable, and that this violates no REST constraint. And I agree with you that the presence or absence of cache-control headers is part of self-descriptive messaging. But I still maintain that the most important aspect of self-descriptive messaging on the Web is to use registered identifiers pointing to standardized types, because Content-Type's value determines whether any intermediary will even evaluate any other headers, with a degree of certainty that just doesn't otherwise exist.

> > Gopher also supports self-descriptive messaging by using media type
> > identifiers. This holds true for *any* protocol that has a concept
> > of resource vs. representation. If the server doesn't specify a
> > processing model, the entity is *meaningless* no matter how much
> > other control data is present, unless clients and intermediaries
> > resort to introspection -- which clearly violates REST.
>
> No.. the server *can* specify a processing model - by specifying a
> custom type identifier (i.e. non-IANA-registered) in the Content-Type
> header.

Not without violating REST, it can't, on the Web. You simply can't bank on any intermediary doing anything predictable where opaque identifiers are concerned. Rejecting unknown identifiers at the firewall is a standard hardening technique which may be implemented using any HTTP proxy, the most ubiquitous example being Squid.

> Why should an intermediary like a cache or a reverse proxy router need
> to introspect the entity at all?

You're apparently unfamiliar with the Web accelerator market. The most famous example, which wrought havoc and destruction in its wake and is no longer available, was the infamous "Google Web Accelerator" product. Antivirus gateways will introspect images of types with known compromises.
The point is, you can't know, but you can play to the deployed architecture by sticking with ubiquitous identifiers *because* you can't know.

> What are these elusive, yet apparently invaluable and ubiquitous, web
> intermediaries that process the entity body (and therefore *actually
> rely* on its self-descriptiveness)?!

This Wikipedia page lists a whole bunch of accelerators: http://en.wikipedia.org/wiki/Web_accelerator There's a list of pricey big-vendor alternatives here: http://www.infoworld.com/ifwclassic/weblog//tcdaily/archives/2008/06/load_balancers.html

F5, Foundry, Juniper, Cisco, etc. all play in this market, and all are configurable by media type. Why would anyone who is spending all that money want to waste their resources caching content with opaque identifiers at the expense of content with ubiquitous identifiers, when the latter accounts for 99% of traffic? That's quite a deployed architecture out there, and it's mostly configured to ignore all but a handful of ubiquitous types.

> The value you are (correctly) attributing to the media type identifier
> is derived from its incorporation into a given protocol, such as HTTP,
> for the purposes of negotiating a representation - its role is as an
> identifier first and foremost, and it makes very little difference
> whether that identifier is opaque or otherwise. This is why clients
> and servers can happily negotiate non-standard media type
> representations via HTTP, and intermediaries can cache them.

But it makes all the difference in the world. The Web's security architecture is based on the IANA registry. Opaque identifiers are most likely treated just like opaque HTTP methods -- ignored, if not taken for tunneling. When using ubiquitous identifiers, the percentage of intermediaries *unlikely* to participate in the communication beyond just acting as routers is statistically insignificant.
When using opaque identifiers, the percentage of intermediaries *likely* to participate in the communication beyond just acting as routers is likewise statistically insignificant.

Self-descriptiveness is easily met by using IANA-registered identifiers pointing to standardized types. Roy's "gray area of increasing RESTfulness" is all about the ubiquity of the chosen identifier -- anarchic scalability and serendipitous re-use are achieved by going with the flow of the deployed architecture, on the Web. The only certainty with that deployed architecture is that ubiquitous identifiers will fail to cache on only an insignificant number of caches. No such certainty exists with opaque identifiers, which don't even meet the self-descriptive messaging constraint -- which, on the Web, requires registered identifiers pointing to published standards.

-Eric
On Tue, Sep 7, 2010 at 12:07 AM, Eric J. Bowman <eric@...> wrote: > Mike Kelly wrote: >> >> > No. REST _requires_ "a shared understanding of data types with >> > metadata." If nobody understands your data type then the >> > constraint is violated, regardless of any other control data that >> > may be present, regardless of protocol. >> >> I am aware of that. Unfortunately, that says nothing about the >> scope/ubiquity of that shared understanding, or how that should be >> established and controlled over time. >> > > Right -- REST defers the specifics to the protocol. HTTP explicitly > defines the IANA registry and "discourages" unregistered types. Since > no other mechanism is defined for the Web, REST's meaning in that > context is clear -- you MUST use some standardized media type that > exists in the IANA registry "MUST [not]"? That seems a slightly strange interpretation of the term "discourages". Maybe it's a typo in 2616 and what they actually meant was "completely forbids" - I don't know Eric; you're the expert. Cheers, Mike
Mike Kelly wrote: > > "MUST [not]"? That seems a slightly strange interpretation of the term > "discourages". > It's interpreting the spec in terms of REST constraints; HTTP != REST. You've utterly failed to explain why REST would "encourage" the IANA registry to be ignored over the Web. HTTP's wording is cognizant of non-Web usage, is all. > > Maybe it's a typo in 2616 and what they actually meant was "completely > forbids" - I don't know Eric; you're the expert. > Stop arguing against the spec, then being an ass to those telling you to follow it, OK? The last thing REST does is "encourage" you to do things that HTTP "discourages" where the Web is concerned. -Eric
On Tue, Sep 7, 2010 at 10:17 AM, Eric J. Bowman <eric@...> wrote: > Mike Kelly wrote: >> >> "MUST [not]"? That seems a slightly strange interpretation of the term >> "discourages". >> > > It's interpreting the spec in terms of REST constraints; HTTP != REST. > You've utterly failed to explain why REST would "encourage" the IANA > registry to be ignored over the Web. HTTP's wording is cognizant of > non-Web usage, is all. > >> >> Maybe it's a typo in 2616 and what they actually meant was "completely >> forbids" - I don't know Eric; you're the expert. >> > > Stop arguing against the spec, then being an ass to those telling you > to follow it, OK? The last thing REST does is "encourage" you to do > things that HTTP "discourages" where the Web is concerned. I'm not arguing against the spec, I'm arguing against your dubious interpretation of it, particularly in context of REST. I don't think I've claimed that REST encourages anything of the sort, but it also doesn't *prevent* you from doing something that is merely discouraged - which is exactly what you are implying when you state "you MUST use some standardized media type that exists in the IANA registry". Using a non-registered media type identifier is not violating any REST constraint, or HTTP stipulation. By choosing to do something that HTTP discourages you are incurring risk, but that doesn't render your application "not REST". Cheers, Mike
Mike Kelly wrote: > > I'm not arguing against the spec, I'm arguing against your dubious > interpretation of it, particularly in context of REST. > This has nothing to do with me. Quoting Roy, "Self-descriptive means that the type is registered and the registry points to a specification and the specification explains how to process the data according to its intent." On the Web, only one such registry exists -- if your identifier isn't in IANA, it isn't registered, by definition of HTTP. On your intranet, nobody cares whether your specification is standardized, let alone registered with IANA, for REST or HTTP (which, even then, still "discourages" the practice). > > I don't think I've claimed that REST encourages anything of the sort, > but it also doesn't *prevent* you from doing something that is merely > discouraged - which is exactly what you are implying when you state > "you MUST use some standardized media type that exists in the IANA > registry". > Still has nothing to do with me. Quoting Roy, "REST components communicate by transferring a representation of a resource in a format matching one of an evolving set of standard data types." And, "A standard is an approved measure against which multiple independent organizations have agreed (by choice or by force) to have their products tested for compliance." On the Web, your _registered_ identifier MUST point to an approved standard, by definition of REST -- where in REST do people get the idea that unstandardized media types are OK? > > Using a non-registered media type identifier is not violating any REST > constraint, or HTTP stipulation. By choosing to do something that HTTP > discourages you are incurring risk, but that doesn't render your > application "not REST". > Once again, don't separate my point from its context, which is the Web. 
What you say may be true within the confines of an intranet, but it is clearly not the case on the Web, where only one registry exists for identifiers, and any standards body will do for types. If you're doing REST on the Web, you MUST use an IANA-registered identifier which corresponds to a standardized type, because no other alternatives are defined by HTTP or allowed by REST. Which has everything to do with what Roy says, and nothing to do with what I say, so please stop trying to personalize this, and explain to me why Roy is wrong -- those quotes above couldn't be more clear. -Eric
I just don't get it: What is the point of hammering on this either/or distinction regarding standardization? Mike was spot on with his comments. Why not simply leave it at that and try not to have the last word?? Jan On 07 Sep, 2010,at 01:12 PM, "Eric J. Bowman" <eric@...> wrote: > Mike Kelly wrote: > > > > I'm not arguing against the spec, I'm arguing against your dubious > > interpretation of it, particularly in context of REST. > > > > This has nothing to do with me. Quoting Roy, "Self-descriptive means > that the type is registered and the registry points to a specification > and the specification explains how to process the data according to its > intent." > > On the Web, only one such registry exists -- if your identifier isn't > in IANA, it isn't registered, by definition of HTTP. On your intranet, > nobody cares whether your specification is standardized, let alone > registered with IANA, for REST or HTTP (which, even then, still > "discourages" the practice). > > > > > I don't think I've claimed that REST encourages anything of the sort, > > but it also doesn't *prevent* you from doing something that is merely > > discouraged - which is exactly what you are implying when you state > > "you MUST use some standardized media type that exists in the IANA > > registry". > > > > Still has nothing to do with me. Quoting Roy, "REST components > communicate by transferring a representation of a resource in a format > matching one of an evolving set of standard data types." And, "A > standard is an approved measure against which multiple independent > organizations have agreed (by choice or by force) to have their > products tested for compliance." > > On the Web, your _registered_ identifier MUST point to an approved > standard, by definition of REST -- where in REST do people get the idea > that unstandardized media types are OK? > > > > > Using a non-registered media type identifier is not violating any REST > > constraint, or HTTP stipulation. 
By choosing to do something that HTTP > > discourages you are incurring risk, but that doesn't render your > > application "not REST". > > > > Once again, don't separate my point from its context, which is the Web. > What you say may be true within the confines of an intranet, but it is > clearly not the case on the Web, where only one registry exists for > identifiers, and any standards body will do for types. If you're doing > REST on the Web, you MUST use an IANA-registered identifier which > corresponds to a standardized type, because no other alternatives are > defined by HTTP or allowed by REST. > > Which has everything to do with what Roy says, and nothing to do with > what I say, so please stop trying to personalize this, and explain to > me why Roy is wrong -- those quotes above couldn't be more clear. > > -Eric
algermissen1971 wrote: > > I just don't get it: What is the point of hammering on this either/or > distinction regarding standardization? > Because, when the context is the Web, it is _bad_ advice to _not_ point this out: http://tech.groups.yahoo.com/group/rest-discuss/message/6569 In that context, recommending the custom media type is _wrong_, yet that is the prevalent advice in the community. It is not OK to advise against self-descriptive messaging, it only causes confusion. > > Mike was spot on with his comments. Why not simply leave it at that > and try not to have the last word?? > Because I'm right; why should I let the last word call my position "dubious"? Get real. In the context of the feedback sought, the Web, it is absolutely vital to point this out, because self-descriptive messaging isn't an optional constraint: http://tech.groups.yahoo.com/group/rest-discuss/message/6569 In that context, the custom media type solution _violates_ REST because it is _not_ self-descriptive. Why should I back down, when it's fundamentally correct, yet taboo to the point of being flamed into not posting, to point that out? REST is what it is, not what it isn't, and REST is not about sending opaque identifiers over the Internet with HTTP -- that is not at all what is meant by "standard types". Again quoting Roy: "If we want to call one more RESTful than the other, then we have to take the goal of evolution into account. I would say it is more RESTful to use a specific standard type when applicable or to define a new type that is specific to a given purpose AND intended to be standardized for that application type (i.e., proprietary types are less RESTful than industry-wide standard types, but new standard types are not less RESTful than old standard types). But that is really only my personal preference, since the style does not constrain REST-based architectures to a single standard." 
Why is there endless pushback against the notion that Roy's personal preferences regarding the instantiation of REST-the-style are reflected in HTTP and Web architecture? Why is it confusing, to the point of telling me to shut up already, to insist that opaque identifiers aren't self-descriptive on the Web? They aren't, which makes them incompatible with REST on the Web. Why can't that _fact_ be the last word?

-Eric
On 07 Sep, 2010,at 02:50 PM, "Eric J. Bowman" <eric@...> wrote: > > Because I'm right; Doh....has it ever occurred to you, that you might in fact *not* be right? Jan > why should I let the last word call my position > "dubious"? Get real. In the context of the feedback sought, the Web, > it is absolutely vital to point this out, because self-descriptive > messaging isn't an optional constraint: > > http://tech.groups.yahoo.com/group/rest-discuss/message/6569 > > In that context, the custom media type solution _violates_ REST because > it is _not_ self-descriptive. Why should I back down, when it's > fundamentally correct, yet taboo to the point of being flamed into not > posting, to point that out? > > REST is what it is, not what it isn't, and REST is not about sending > opaque identifiers over the Internet with HTTP -- that is not at all > what is meant by "standard types". Again quoting Roy: > > "If we want to call one more RESTful than the other, then we have > to take the goal of evolution into account. I would say it is more > RESTful to use a specific standard type when applicable or to define > a new type that is specific to a given purpose AND intended to be > standardized for that application type (i.e., proprietary types are > less RESTful than industry-wide standard types, but new standard > types are not less RESTful than old standard types). But that is > really only my personal preference, since the style does not > constrain REST-based architectures to a single standard." > > Why is there endless pushback against the notion that Roy's personal > preferences regarding the instantiation of REST-the-style are reflected > in HTTP and Web architecture? Why is it confusing to the point of > telling me to shutup already, to insist that opaque identifiers aren't > self-descriptive on the Web? They aren't! Making them incompatible > with REST on the Web. Why can't that _fact_ be the last word? > > -Eric
On Tue, Sep 7, 2010 at 8:06 AM, algermissen1971 <algermissen1971@...> wrote: > On 07 Sep, 2010,at 02:50 PM, "Eric J. Bowman" <eric@...> wrote: > > Because I'm right; > > Doh....has it ever occurred to you, that you might in fact *not* be right? And, even if you think you are, http://xkcd.com/386/
algermissen1971 wrote:
> > Because I'm right;
>
> Doh... has it ever occurred to you that you might in fact *not* be
> right?

Of course it has, Jan, I'm not an idiot. I keep asking folks to explain to me how it could be possible that using nonstandardized types is congruous with REST's emphasis on standardized types. Or to explain how an identifier being used on the Web that isn't in the IANA registry is congruous with Roy's clear explanation of self-descriptiveness: "Self-descriptive means that the type is registered and the registry points to a specification and the specification explains how to process the data according to its intent."

Absent such an explanation, I'll not be shouted down, because I've backed up my assertions by quoting directly from REST, Roy's explanations of REST, Squid configuration files, explanations of various intermediaries which do things like prefetch DNS lookups (which won't work if your identifier doesn't distinguish between links that need lookups and those, like namespace URIs, which do not), and so on and so forth.

Recommending the opaque identifier in this context (http://tech.groups.yahoo.com/group/rest-discuss/message/6569), instead of pointing out exactly what Mark pointed out, goes against not only REST but also pragmatism.

-Eric
Bob Haugen wrote: > > And, even if you think you are, http://xkcd.com/386/ > Yes, very funny, but there's also a saying, "pick your battles" and this is the only one I'm engaged in. Obviously, I consider it important to evangelize on the issue, because I don't want to see REST subsumed by an increasingly prevalent and irrational insistence that using unregistered, unstandardized types on the Web is somehow congruous with the style. -Eric
On Tue, Sep 7, 2010 at 9:39 AM, Eric J. Bowman <eric@...> wrote:
>
> Bob Haugen wrote:
> >
> > And, even if you think you are, http://xkcd.com/386/
> >
>
> Yes, very funny, but there's also a saying, "pick your battles" and
> this is the only one I'm engaged in. Obviously, I consider it
> important to evangelize on the issue, because I don't want to see REST
> subsumed by an increasingly prevalent and irrational insistence that
> using unregistered, unstandardized types on the Web is somehow
> congruous with the style.
>
> -Eric

Doesn't mass adoption lead to standardization? And are you saying that, before types were registered with IANA but were already in use on the "web", the systems that used them were not REST? I've asked this question before and I'm not sure I got an answer.
Eb wrote:
> Doesn't mass adoption lead to standardization?

Mass adoption of a standard leads to ubiquity, i.e. Roy's "gray area of increasing RESTfulness." On the Web, self-descriptiveness is met by registering an identifier which points to a standard, which is the token for admission to the gray area.

> Are you saying the types being used on the "web" before they were
> registered with IANA, and the systems that used them, were not REST?
> I've asked this question before and I'm not sure I got an answer.

Actually, I think I attempted it but got it wrong... Self-descriptiveness was met with Atom (on the Web, i.e. HTTP over the Internet) the second its first media type identifier was approved by IANA, because it pointed to an IETF-sanctioned standardization process from the get-go. Token of admittance in hand, Atom entered the "gray area" and is now as RESTful as anything else due to its ubiquity. Initially, though, Atom wasn't "in browsers" or in server-side CMS software like WP, or allowed to pass PUT and DELETE traffic through most firewalls.

This ubiquity is what allows the network effects of serendipitous re-use and anarchic scalability. There's no guarantee your opaque identifier will be allowed to pass PUT and DELETE traffic through firewalls, when Atom had to evolve this ability through the adoption of Atom Protocol, and even HTML and Atom aren't guaranteed passage. The degree of certainty is an order of magnitude higher for ubiquitous types, though. But this isn't a FUD argument.

Consider the practical matter of DNS lookahead caching -- Google did that study last year where they introduced 1/10th sec. latency to some transactions, and calculated the cost to their bottom line. Which led them to release their own desktop DNS accelerator. This product type relies on ubiquitous identifiers which re-use standard methods and link relations, to determine which links to ignore.
How can such a product be expected to know what a "link" is, or what method it may relate to, (or how to handle fragments, for other types of intermediary), or whether it's just an identifier, or resolve relative URIs if it doesn't know the local lingo for a base URI? Now that it's out there, DNS lookup acceleration technology is yet another serendipitous re-use of ubiquitous media types, leading to even greater anarchic scalability on today's Web than existed a decade ago. How does that technology work, if it can't determine what a link is, or whether or not the data type uses standard link relations, because the identifier you're sending isn't both registered and ubiquitous? Using ubiquitous identifiers on the Web taps your system into that deployed architecture, decreasing overall latency for end-users, especially for systems spanning (sub)domains. Some of those systems may sniff your content and decide it's just like HTML anyway, but you can't bank on that like you can on re-using ubiquitous types. If and when a new standard comes along which fills a large enough need that it's widely and quickly adopted, its RESTfulness will increase as its recognition by the deployed Web architecture increases, which is anarchic and serendipitous. -Eric
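The dependency described above can be made concrete with a sketch of the decision a prefetching intermediary faces: it can extract link targets from a type whose processing model it knows (here, text/html anchors via Python's stdlib parser), while an opaque identifier gives it nothing to act on. The "application/x-acme" type below is a hypothetical opaque identifier.

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect anchor href targets, as a DNS-prefetching proxy might."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

def prefetch_candidates(content_type, body):
    """Return link targets worth a lookahead DNS lookup.

    Only types with a known processing model yield candidates; an
    opaque identifier leaves the intermediary nothing to work with.
    """
    if content_type == "text/html":
        collector = LinkCollector()
        collector.feed(body)
        return collector.links
    return []

html = '<a href="http://example.com/a">a</a>'
print(prefetch_candidates("text/html", html))           # ['http://example.com/a']
print(prefetch_candidates("application/x-acme", html))  # []
```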
I must admit, as a lurker, I am having trouble understanding why people take issue with the content of Eric's argument. It appears solid to me. All I can see are personal attacks and a lack of detailed, supported arguments countering it. Can someone at least counter Eric's argument without the high-school snapbacks? It appears to me that this discussion is in danger of actually creating some foundation on one of the most difficult-to-grok parts of ReST, and all I am seeing in response is group backlash.

From Eric's argument, the standard media types constraint of ReST is fulfilled, over HTTP, by registering new media types with IANA. In the context of HTTP, this is what a standardized media type is. Now, can anyone come back with a solid, convincing argument that this is not the case? Some of us have to explain ReST to less techy, or less patient, junior developers :)

(For the record, I understand some of the backlash. Eric's argument style is... forceful :). And don't get me wrong, it would make my life easier as a developer NOT to have to register my MIME types with IANA, but Eric makes a very persuasive case for the benefits of doing so, of which only one (fulfilling the standardized media types constraint of ReST) is ReST-based.)
On 09/07/2010 08:31 AM, omarshariffdontlikeit wrote:
> From Eric's argument, the standard media types constraint of ReST is
> fulfilled, over HTTP, by registering new media types with IANA. In
> the context of HTTP, this is what a standardized media type is.

So far as I understand from the premise of REST, a media type has to be self-describing. IANA registration is neither necessary nor sufficient to be self-describing. And if by "standardized" you mean IANA registration, that's a very narrow view of standards, given that there are many standards organizations in the world.

To the "sufficient" question: if I have a document that I return as XML (application/xml), what really matters to the consumer of that document is whether I can associate a schema with it. If I can't, then even though the media type in use is well-defined and registered with IANA, absent some form of schema (DTD, XML Schema, RelaxNG) the client I wrote really cannot assume that the meanings are the same. Likewise, a standard HTML-based web application that happens to use what are informally called "web 2.0" techniques -- code on demand via lots of JavaScript -- at some point crosses the bounds from being a wonderfully RESTful application to something slightly different, because the reliance on the code makes it difficult for non-web-browser clients to make serendipitous use of the same data. Again, I might be using all standard media types, but that doesn't mean I get REST.

As to the "necessary" question: was the Atom specification any less valid and self-describing the day before it was registered with IANA than it was the day after? It was self-describing before and after. The internet has few baked-in central control points -- DNS might be the only one, really -- and even with respect to REST, we have a combination of standards organizations including the IETF, W3C, IANA, and OASIS that all contribute "standards" that help form the web.
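To make the "sufficient" point above concrete, the difference is roughly whether an application/xml payload carries a schema association. The namespace, schema file, and element names below are hypothetical illustrations.

```xml
<!-- Content-Type: application/xml only says "this is XML syntax".      -->
<!-- The namespace and schemaLocation hint are what let a consumer find -->
<!-- a processing model; without them, element meanings stay private.   -->
<order xmlns="http://example.com/ns/orders"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://example.com/ns/orders orders.xsd">
  <item sku="ABC-123" quantity="2"/>
</order>
```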
If China, with its hundreds of millions of internet users, decides that it can't be bothered to register a media type with IANA, it could easily be that *every* relevant Chinese client of that media type readily understands it. That sounds like a feature of the internet, not a bug.

I also like to think that REST makes sense to think about in the context of enterprise software. So to me, the question is, "self-describing, but for what scope?" Maybe this is a small scope, such as the software my company writes and all of our customers. Yet my company can then get the benefits of following a RESTful architecture.

> Now, can anyone come back with a solid, convincing argument that this
> is not the case?

Hopefully, the above did that. I'd also like to point out that there's a wide gulf between the architectural style of REST, the architecture of an application, the design of the application, and finally, the actual implementation. As someone who is far to the architectural side of the chasm, I'm more than happy to let the implementation stray a little bit from architectural and design purity, particularly if it means that the application ships and people get to benefit from it. To that end, I see REST as an aspirational endpoint. I'm not going to deny fellow architects the label of "REST" if, for example, they got everything right but used a few non-IANA-registered media types for a few small portions of their application. On the other hand, I will get annoyed with individual developers who can't be bothered to ask about, or understand, what they're being asked to do, and simply do what they're used to.

Some pages have links to other pages, and some pages might be endpoints (with merely incidental links, like "back"). I suggest that in a RESTful world, the pages that link to other pages should strongly favor standardized (and registered) media types.
When you get to the "endpoint" pages that don't contribute to the hypertext state of an application (PDF, Flash, video), then the media type is much more wide open, and it should be. > > Some of us have to explain ReST to less techy, or less patient, junior > developers :) > > (For the record, I understand some of the backlash. Eric's argument > style is... forceful :). And don't get me wrong, it would make my life > easier as a developer NOT to have to register my mime types with IANA, > but, from Eric's arguments, he makes a very persueasive case of the > benefits of doing so, of which, only one (fullfilling standardized > media types constaint of ReST) is ReST based.) > Of course, any good developer should reuse what is already available, and appropriate. On the other hand, given a hammer, don't turn everything into a nail. Figuring out when to make a new media type, and when to reuse an existing one might actually be an extremely difficult call. And in the end, the answer depends on your scope. -Eric. > >
<snip> ... don't get me wrong, it would make my life easier as a developer NOT to have to register my mime types with IANA, but, from Eric's arguments, he makes a very persueasive case of the benefits of doing so, of which, only one (fullfilling standardized media types constaint of ReST) is ReST based.) </snip>

Yes, Eric makes some good arguments for the value of using media types that have been registered with the IANA, and that is the interesting point here. However, the claim that failing to use only IANA-registered media types when implementing a solution that uses the HTTP protocol is a "clear-cut violation" of REST is false:

1) In the HTTP spec, there simply is no requirement that media types MUST, SHOULD, or even MAY (using the language of RFC documents) be registered before they are used.

2) Fielding's dissertation lists a number of constraints that identify the REST style. Using only media types that have been pre-registered with the IANA is not among them.

Finally, this continued focus on some _proof_ that there is a _requirement_ to do so is (to me) a waste of time and a distraction. It would be far more interesting (to me, at least) to discuss the _reason_ behind the advantages of media-type registries for distributed network applications. That is a discussion I'd be more than willing to join.

mca
http://amundsen.com/blog/
http://mamund.com/foaf.rdf#me

Join me at #RESTFest 2010 Sep 17 & 18
http://restfest.org
http://restfest.org/workshop

> On Tue, Sep 7, 2010 at 11:31, omarshariffdontlikeit
> <omarshariffdontlikeit@...> wrote:
>> I must admit, as a lurker, I am having trouble understanding why people
>> have an issue with the content of Eric's argument. It appears solid to me.
>>
>> All I can see are personal attacks and a lack of detailed, supported
>> arguments countering Eric's argument.
>>
>> Can someone at least counter Eric's argument without the high school
>> snapbacks? It appears to me that this discussion is in danger of actually
>> creating some foundation on one of the most difficult-to-grok parts of
>> ReST, and all I am seeing in response is group backlash.
>>
>> From Eric's argument, the standard media types constraint of ReST is
>> fullfilled, over HTTP, by registering new media types with IANA. In the
>> context of HTTP, this is what a standardized media type is.
>>
>> Now, can anyone come back with a solid, convincing argument that this is
>> not the case?
>>
>> Some of us have to explain ReST to less techy, or less patient, junior
>> developers :)
>>
>> (For the record, I understand some of the backlash. Eric's argument style
>> is... forceful :). And don't get me wrong, it would make my life easier as
>> a developer NOT to have to register my mime types with IANA, but, from
>> Eric's arguments, he makes a very persueasive case of the benefits of
>> doing so, of which, only one (fullfilling standardized media types
>> constaint of ReST) is ReST based.)
On Sat, Sep 4, 2010 at 7:54 PM, jorgeluisw99 <jorgeluisw99@...> wrote:
>
> http://example.com/foo.msv
> http://example.com/foo.msv2
>
> are variants. So are:
>
> http://example.com/msv/foo
> http://example.com/msv2/foo
>
> or for that matter:
>
> http://example.com/v1/foo
> http://example.com/v2/foo
>
> How you expose your variant is simply an implementation detail.

Agreed. All of those are the same from a REST perspective. However, they have quite different implications for developers accessing the service. I prefer the implications of avoiding file extensions when providing non-negotiable resources: it encourages client developers to think in terms of the media types provided.

Peter
<http://barelyenough.org>
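[Editor's note: the alternative Peter alludes to - one URI, with the variant conveyed by media type rather than by extension or path segment - can be sketched as server-driven content negotiation. This is an illustrative sketch only; the media type names and the `negotiate` helper are hypothetical, not from the thread.]

```python
# Sketch: versioning via the Accept header on a single URI (/foo),
# instead of minting /v1/foo and /v2/foo. Media type names are made up.
REPRESENTATIONS = {
    "application/vnd.example.foo.v1+xml": "<foo version='1'/>",
    "application/vnd.example.foo.v2+xml": "<foo version='2'/>",
}

def negotiate(accept_header: str):
    """Return (status, content_type, body) for a GET on /foo."""
    for offered in accept_header.split(","):
        # Drop parameters such as ';q=0.9' before the lookup.
        mt = offered.split(";")[0].strip()
        if mt in REPRESENTATIONS:
            return (200, mt, REPRESENTATIONS[mt])
    return (406, None, None)  # no acceptable variant
```

A client that asks for the v2 type gets the v2 representation from the same URI; a client that can't name any known type gets 406 rather than a guess.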
On Tue, Sep 7, 2010 at 11:31 AM, omarshariffdontlikeit <omarshariffdontlikeit@...> wrote:
>
> (For the record, I understand some of the backlash. Eric's argument style
> is... forceful :). And don't get me wrong, it would make my life easier as a
> developer NOT to have to register my mime types with IANA, but, from Eric's
> arguments, he makes a very persueasive case of the benefits of doing so, of
> which, only one (fullfilling standardized media types constaint of ReST) is
> ReST based.)
>

I don't believe anyone is questioning the benefits. It's definitely a best practice, much as brushing your teeth after every meal is. But the notion under discussion is that if you're doing REST over the Web (HTTP + Internet) - I still have trouble even understanding that - and your type is not registered with IANA (as if registering with IANA were the only form of standardization), then it is not REST. If two parties agree on a standard and use it over the web prior to registering it with IANA, I don't see how that disqualifies the solution as REST (even if it violates a best practice).
Yahoo says there are 103 messages in this thread. The discussion is circular and will never end.

May I suggest starting a new thread with an appropriate title, to focus exclusively on the IANA registry issue, where each person who has a different position states it clearly and succinctly, and thereafter we refer back to that thread as a FAQ? I think it would be best if somebody with moderator-type skills summarizes all of the contradictory positions at the end of the thread, so we don't get into a who-gets-the-last-word fight.

I could start one, but I don't really have a position other than wanting to shortcut permathreads.
On Tue, Sep 7, 2010 at 8:50 AM, Eric J. Bowman <eric@...> wrote:
>
> REST is what it is, not what it isn't, and REST is not about sending
> opaque identifiers over the Internet with HTTP -- that is not at all
> what is meant by "standard types". Again quoting Roy:
>
> "If we want to call one more RESTful than the other, then we have
> to take the goal of evolution into account. I would say it is more
> RESTful to use a specific standard type when applicable or to define
> a new type that is specific to a given purpose AND intended to be
> standardized for that application type (i.e., proprietary types are
> less RESTful than industry-wide standard types, but new standard
> types are not less RESTful than old standard types). But that is
> really only my personal preference, since the style does not
> constrain REST-based architectures to a single standard."
>
> Why is there endless pushback against the notion that Roy's personal
> preferences regarding the instantiation of REST-the-style are reflected
> in HTTP and Web architecture? Why is it confusing to the point of
> telling me to shutup already, to insist that opaque identifiers aren't
> self-descriptive on the Web? They aren't! Making them incompatible
> with REST on the Web. Why can't that _fact_ be the last word?
>

Section 6.3.2 talks about self-descriptive messages, but I don't see anything in there about media types. Am I missing something?

If I want to expose my invoice service in a RESTful way, do I need to create a new format and try to get it registered and standardized? Is it then impossible for me to write my "RESTful" service in less than a few years? That stinks.

What am I giving up as a consequence of using a custom format? So far it seems that I'm just giving up the right to call it RESTful.

--
David
blog: http://www.traceback.org
twitter: http://twitter.com/dstanek
Sorry for my ignorance, but in all this discussion about standard media types, is application/vnd.abc+xml considered standard or non-standard? It is XML, which is standard, but it is also a certain schema which is ... well, what is it?

Thanks.

/Jørn

----- Original Message -----
From: "David Stanek" <dstanek@...>
To: "Eric J. Bowman" <eric@...>
Cc: "algermissen1971" <algermissen1971@...>; "Mike Kelly" <mike@...>; "William Martinez Pomares" <wmartinez@...>; <rest-discuss@yahoogroups.com>
Sent: Wednesday, September 08, 2010 2:19 AM
Subject: Re: [rest-discuss] Re: REST, HTTP, Web, Internet [was Atom feed vs. list of orders]

> <snip> ... </snip>
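[Editor's note: Jørn's question above is about the structure of media type names. In IANA's scheme, `vnd.` marks the vendor registration tree (registrable, but vendor-defined), and a `+xml` suffix declares the underlying syntax without saying anything about the schema. A minimal, illustrative parser - not an RFC-complete one, and the function name is made up:]

```python
# Sketch: classify a media type by registration tree and +suffix.
# Not a full media-type parser; parameters (;charset=...) are ignored.
def classify_media_type(media_type: str) -> dict:
    """Split a media type into top-level type, registration tree, suffix."""
    full_type, _, subtype = media_type.partition("/")
    base, _, suffix = subtype.rpartition("+")
    if not base:                  # no '+' present
        base, suffix = subtype, ""
    if base.startswith("vnd."):
        tree = "vendor"           # registrable with IANA, vendor-controlled
    elif base.startswith("prs."):
        tree = "personal"
    elif base.startswith("x."):
        tree = "unregistered"     # explicitly outside the registry
    else:
        tree = "standards"
    return {"type": full_type, "tree": tree, "suffix": suffix}
```

So application/vnd.abc+xml is a vendor-tree type whose payload syntax is XML; whether it is "standard" depends on whether that vendor identifier has actually been registered and points at a spec.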
Eric Johnson wrote:
>
> So far as I understand from the premise of REST, a media type has to
> be self-describing.
>

No, there is no requirement that a media type be self-describing. REST has a self-descriptive *messaging* constraint.

>
> IANA registration is neither necessary, nor sufficient to be
> self-describing. And if by "standardized", you mean IANA
> registration, that's a very narrow view of standards, given that
> there are many standards organizations in the world.
>

IANA is not a standards body, it's a registry, which recognizes standards published by any entity that claims to be a standards body. On the Web, IANA registration is absolutely a requirement -- how can you square your claim that it isn't with Roy's clear definition of the term?

"Self-descriptive means that the type is registered and the registry points to a specification and the specification explains how to process the data according to its intent."

Can that *be* any more clear? Is there some alternate registry for the Web that I'm not aware of? If your media type isn't registered, and you're sending it over the Web, you're not even _attempting_ to be RESTful.

>
> To the "sufficient" question: if I have a document that I return as
> XML (application/xml), what really matters to the consumer of that
> document - can I associate a schema with the document?
>

That doesn't make your messaging self-descriptive, because you're requiring introspection to determine the nature of the document. I need to know the nature of the data by looking at the headers, not (perhaps) unzipping your content and sniffing it first -- application/xml is *not* self-descriptive, as has been (correctly) pointed out on this list a hundred times...

>
> If it isn't, even though the media type in use is well-defined and
> registered with IANA, absent some form of schema (DTD, XML Schema,
> RelaxNG), the client I wrote really cannot assume that the meanings
> are the same.
>

No, self-descriptiveness does not require introspection of the content to determine its processing model; it depends on registered media type identifiers. If I send XHTML as text/plain, you need to treat it as text/plain, not introspect it and determine that its schema means it should be treated as XHTML -- that's sniffing, and sniffing has nothing to do with REST.

>
> As to the necessary question: Was the "Atom" specification any less
> valid and self-describing the day before it was registered with IANA
> than it was the day after? It was self-describing before and after.
>

No, it was not. Identifiers that aren't listed in *any* registry can't *possibly* meet Roy's definition of self-descriptiveness:

"Self-descriptive means that the type is registered and the registry points to a specification and the specification explains how to process the data according to its intent."

Before Atom's identifier was registered in IANA, it was registered... where? If the identifier isn't in a registry, then it can't point to a specification, so it can't *possibly* meet Roy's definition.

>
> The internet has few baked in central control points - DNS might be
> the only one, really - even with respect to REST, we have a
> combination of standards organizations including the IETF, W3C, IANA,
> and OASIS that all contribute "standards" to help the web form.
>

All of which are recognized by the IANA registry, which is a registry, not a standards body -- merely a means to point identifiers to descriptive documents, published by *any* entity that claims to be releasing a "standard" by that entity's own (and not IANA's) definition. Could the bar *be* any lower?

>
> If China, with its hundreds of millions of internet users, decides
> that it can't be bothered to register a media type with IANA, yet it
> could easily be that *every* relevant Chinese client of that
> media-type might readily understand it. That sounds like a feature
> of the internet, not a bug.
>

I have no problem with that. However, in _reality_ there is no alternate registry to IANA, not here, not in China, not anywhere on the planet. So discussions about theoretical alternatives to IANA are just that -- theoretical. On the Web, today, there exists only the IANA registry, therefore there is no other means to meet Roy's requirement:

"Self-descriptive means that the type is registered and the registry points to a specification and the specification explains how to process the data according to its intent."

Sure, if the Chinese alternative to IANA comes along, it would be a legitimate registry for meeting the self-descriptiveness constraint. But, here's a list of *actual* registries which *exist* today on the Web:

IANA

Sure, alternatives are possible, but pragmatically, that's _all_ there is. Even if there were some other registry, the wording of RFC 2616 would still "discourage" its use.

>
> I also like to think that REST make sense to think about in the
> context of enterprise software. So to me, the question is,
> "self-describing, but for what scope?" Maybe this is a small scope,
> such as the software my company writes, and all of our customers.
> Yet, my company can then get the benefits of following a RESTful
> architecture.
>

If you're sending HTTP over the Internet, then potential participants in the communication include folks beyond your or your partners' organizational boundary. If your identifier is opaque, then these intermediaries can't be participants, but merely dumb routers, which is the result we're presumably trying to avoid by using REST in the first place.

> > Now, can anyone come back with a solid, convincing argument that
> > this is not the case?
>
> Hopefully, the above did that.
>

Not even close. You've failed to do anything but speculate that maybe some alternative registry will emerge. A valid point, but moot, since no such alternative _has_ emerged. Therefore, in order to meet Roy's requirement for self-descriptiveness:

"Self-descriptive means that the type is registered and the registry points to a specification and the specification explains how to process the data according to its intent."

there exist, on the Web, exactly ZERO legitimate alternatives to the IANA registry, in practice. Where else are media type identifiers registered for anyone to look up their associated standards? Nowhere, severely restricting the options for meeting the constraint, in practice.

>
> To that end, I see REST as an aspirational endpoint.
>

I say that myself, all the time. However:

>
> I'm not going to deny fellow architects the label of "REST" if, for
> example, they got everything right, but used a few
> non-standard-IANA-registered media-types for a few small portions of
> their application.
>

If there's no intent to standardize a type or register an identifier, then there's no effort being made at self-descriptiveness. This isn't just a mismatch, this is actively thwarting the goals of REST. Using an unregistered identifier for an unstandardized type is only congruous with REST if there is an intent to change that situation -- what I'm arguing against here is the refusal to admit there's even a problem.

>
> Some pages have links to other pages, and some pages might be
> endpoints (with merely incidental links, like "back"). I suggest
> that in a RESTful world, the pages that link to other pages should
> strongly favor standardized (and registered) media types. When you
> get to the "endpoint" pages that don't contribute to the hypertext
> state of an application (PDF, Flash, video), then the media type is
> much more wide open, and it should be.
>

No, REST says nothing about self-descriptiveness being optional, or more or less important based on resource type. Self-descriptive messaging is a constraint, and those who choose to ignore this _fact_ simply are not following the REST style, and should not call their results REST. I am opposed to attaching the REST label where no effort is being made to follow the style.

>
> Of course, any good developer should reuse what is already available,
> and appropriate. On the other hand, given a hammer, don't turn
> everything into a nail. Figuring out when to make a new media type,
> and when to reuse an existing one might actually be an extremely
> difficult call.
>

100% agreed. My argument is against the cavalier usage of unregistered, unstandardized types without any mention whatsoever, and outright denial of the existence, of any constraint issues or tradeoffs associated with the practice. The bar to registering an identifier is purposefully low, yet folks act like it's an insurmountable obstacle, then pretend it isn't required. NOT REST!

-Eric
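[Editor's note: Eric's "treat it by its declared type, never sniff" point can be made concrete with a small sketch. The handler table and function names below are hypothetical, purely to illustrate header-driven dispatch by an intermediary.]

```python
# Sketch: an intermediary dispatches on the declared Content-Type
# identifier alone; it never inspects the payload to guess a type.
HANDLERS = {
    "application/xhtml+xml": lambda body: ("render-as-xhtml", body),
    "text/plain":            lambda body: ("render-as-text", body),
}

def dispatch(content_type: str, body: bytes):
    # Strip parameters such as '; charset=utf-8' before the lookup.
    identifier = content_type.split(";")[0].strip().lower()
    handler = HANDLERS.get(identifier)
    if handler is None:
        # Unknown identifier: the message is opaque to this component,
        # so it can only act as a "dumb router" and pass bytes along.
        return ("pass-through", body)
    return handler(body)
```

Note that XHTML markup sent as text/plain is rendered as text here - by design: the declared identifier, not the payload's apparent schema, decides the processing model.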
http://tech.groups.yahoo.com/group/rest-discuss/message/6613

mca
http://amundsen.com/blog/
http://mamund.com/foaf.rdf#me

On Wed, Sep 8, 2010 at 01:05, Eric J. Bowman <eric@...> wrote:
> <snip> ... </snip>
mike amundsen wrote: > > Yes, Eric makes some good arguments for the value of using media types > that have been registered w/ the IANA and that is the interesting > point here. However, to make claims that failure to use only > IANA-registered media-types when implementing a solution that uses the > HTTP protocol is a "clear-cut violation" of REST is false. > That is not my position. Please don't separate my point from its context -- using HTTP over the Internet, i.e. the Web. If you're using the Web, there are _no_ alternate registries to consider, only IANA. > > 1) In the HTTP spec, there simply is no requirement that media-types > MUST, SHOULD, or even MAY (using the language of RFC documents) be > registered before they are used. > HTTP != REST. HTTP doesn't require the Content-Type header be sent at all -- REST does, to meet the self-descriptive messaging constraint -- and Roy's mime-respect w3c note considers it best practice for origin servers to omit Content-Type when it isn't definitively known. Here's what RFC 2616 does say about media type identifier registration: "Media-type values are registered with the Internet Assigned Number Authority (IANA). The media type registration process is outlined in RFC 1590. Use of non-registered media types is discouraged." While it's valid to point out that "discouraged" is nonnormative RFC language, it's also valid for me to point out that encouraging folks to do exactly as HTTP discourages while dismissing the notion that doing so has consequences, is bad advice. It's also valid for me to point out that the language, "ARE REGISTERED WITH IANA," is unambiguous. I tend to believe I'm right about this, since my advice is to do exactly what HTTP defines an RFC process to govern and encourages folks to use. Is there some other, non-theoretical registry in common use on the Web that I'm not aware of? Or some other RFC which extends 2616 to define an alternate registry? 
Or is your position that it's a bug for HTTP not to say, "MAY be registered with IANA"? > > 2) Fielding's dissertation lists a number of constraints that identify > the REST style. Using only media types that have been pre-registered > with the IANA is not among them. > Self-descriptive messaging is one of the four sub-constraints which make up the uniform interface constraint: "In order to obtain a uniform interface, multiple architectural constraints are needed to guide the behavior of components. REST is defined by four interface constraints: identification of resources; manipulation of resources through representations; self-descriptive messages; and, hypermedia as the engine of application state." So I don't see how anyone can arrive at a uniform interface unless their messaging is self-descriptive, which Roy defines as: "Self-descriptive means that the type is registered and the registry points to a specification and the specification explains how to process the data according to its intent." Is there some sort of hulu y'all are dancing around that constraint? Nobody has provided a rational explanation as to how unregistered identifiers, over the Web for which only the IANA registry is defined, constitutes self-descriptiveness. > > Finally, this continued focus on some _proof_ that there is a > _requirement_ to do so is (to me) a waste of time and a distraction. > How is it any more of a time-wasting distraction than discussing, say, the hypermedia constraint? I'm waiting for rational proof that this *isn't* a requirement, none has been forthcoming. The fact that this shouldn't be controversial at all is what keeps me going. Just provide rational proof that it's OK to disregard the self-descriptive messaging constraint and call the results REST and I'll shutup... > > It would be far more interesting (to me, at least) to discuss the > _reason_ behind the advantages of media-type registries for > distributed network applications. 
That is a discussion I'd be more > than willing to join. > We already discussed that at length in this thread. My terminology is "collisions" of media type identifiers. If domain A publishes a spec with the identifier of application/vnd.orders+xml, and domain B publishes a different spec with the identifier of application/vnd.orders+xml, and domain C re-uses that identifier, how does anyone operating an intermediary on the Web know to which spec domain C is referring? If the answer is to introspect the content and make the determination by looking at the schema, then that's a wrong answer, because that is not what is meant by self-descriptive messaging. If the answer is to use search services and some sort of opaque metric to determine whether domain A's or domain B's spec is the most popular, then that's a wrong answer, because self-descriptive messaging is meant as a solution to that exact problem, also. OTOH, using a registry avoids such collisions, by explicitly associating identifiers with data types. There's still plenty of room for error in selecting the appropriate identifier, but using opaque identifiers that aren't in IANA over the Web _obviously_ doesn't even begin to be self-descriptive, by definition, because it forces either introspection or googling to determine the processing model for the payload. As with Gopher, or any other protocol with a resource/representation dichotomy, being able to associate an identifier with a processing model is the primary requirement of self-descriptiveness. Doing this on a publicly-deployed infrastructure like the Web is going to require some sort of registry to avoid collisions, so the constraint only makes sense to me, as does HTTP's "encouragement" to use IANA. -Eric
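[Editor's note: Eric's intermediary argument can be sketched in a few lines of Python. This is a hypothetical illustration, not anything from the thread; the handler table and media type names are made up. The point is that an intermediary decides from the headers alone, with no content sniffing: a type it can resolve to a spec can be processed, while an unregistered identifier forces it to degrade to a dumb router.]

```python
# Sketch of an intermediary that can only add value for media types it can
# resolve to a known specification; everything else passes through opaquely.
# The handler table and type names here are hypothetical illustrations.

KNOWN_TYPES = {
    "application/atom+xml": "RFC 4287",    # registry entry points to a spec
    "application/xhtml+xml": "W3C XHTML",  # likewise
}

def intermediary_action(content_type: str) -> str:
    """Decide what an intermediary can do with a payload, from headers alone."""
    media_type = content_type.split(";")[0].strip().lower()
    spec = KNOWN_TYPES.get(media_type)
    if spec is None:
        # Two vendors may both mint application/vnd.orders+xml for different
        # specs; with no registry entry there is no way to tell which spec is
        # meant, so the intermediary degrades to a dumb router.
        return "pass-through (opaque)"
    return f"process per {spec}"

print(intermediary_action("application/atom+xml; charset=utf-8"))
# prints "process per RFC 4287"
print(intermediary_action("application/vnd.orders+xml"))
# prints "pass-through (opaque)"
```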
Eb wrote: > > If two parties agree on a standard and use it over the web without pre > registering it in IANA, I don't see how this disqualifies the > solution as not being REST (even if it's violating a best practice). > Unless those two parties are using HTTPS, the purpose of using the Web, presumably, is to take advantage of serendipitous re-use and anarchic scalability. These desirable properties cannot result from opaque identifiers. If your system cannot achieve the desirable properties of REST, doesn't that indicate a problem? When using HTTP over the Internet, the entire world is potentially a participant, not just the two parties exchanging data. In order for any intermediaries to reliably function as any more than dumb routers, the messaging must be self-descriptive. That way, any intermediary capable of participating in the communication is enabled to behave as more than just a dumb router. If the identifiers are opaque, then the system uses a library-based API. REST's uniform interface is a network-based API based on a shared understanding of standardized types. If no other participants, besides sender and recipient, understand the data type, then the interface is not uniform, not a network-based API, and NOT REST. See Chapter 6. -Eric
David Stanek wrote: > > If I want to expose my invoice service in a RESTful way do I need to > create a new format and try to get it registered and standardized? > Is it then impossible for me to write my "RESTful" service in less > than a few years? That stinks. > Please don't separate my point from its context. None of what I say applies to an intranet, so you can't generalize your statement like that. If by "expose" you mean you want to create an invoice service as a Web API over the public Internet, then, yes. But why do you think you need to create a media type at all? I've never seen an invoice service that couldn't be cleanly described using Atom + (X)HTML. It's tabular data; use <table>. Annotate using RDFa + GoodRelations. There's simply no need to define a media type for this purpose. > > What am I giving up as a consequence of using a custom format? So far > it seems that I'm just giving up the right to call it RESTful. > You're giving up on a uniform interface, if by custom format you mean something you don't intend for standardization and don't intend to register with IANA. But, even if you *do* go down that road, it isn't clear that you'll ever get the full benefits of REST: since your data doesn't inherently need its own media type, I doubt it will become ubiquitous. The power of REST is that by re-using ubiquitous types like Atom + XHTML to define an invoice-service payload, you get all the benefits of Internet scale *overnight* thanks to the uniform interface. -Eric
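[Editor's note: as a rough illustration of the "reuse Atom + XHTML" suggestion above, here is a minimal Python sketch. The invoice fields are hypothetical; it renders line items as an XHTML <table>, which could then be carried as the content of an Atom entry rather than minting a new media type.]

```python
# Sketch of the "reuse ubiquitous types" approach: render invoice line items
# as an XHTML <table> (suitable for embedding in an Atom entry's content
# element). The invoice fields here are hypothetical.
import xml.etree.ElementTree as ET

XHTML = "http://www.w3.org/1999/xhtml"

def invoice_table(items):
    """Build an XHTML table element from a list of line-item dicts."""
    table = ET.Element(f"{{{XHTML}}}table")
    header = ET.SubElement(table, f"{{{XHTML}}}tr")
    for col in ("item", "qty", "price"):
        ET.SubElement(header, f"{{{XHTML}}}th").text = col
    for item in items:
        row = ET.SubElement(table, f"{{{XHTML}}}tr")
        for col in ("item", "qty", "price"):
            ET.SubElement(row, f"{{{XHTML}}}td").text = str(item[col])
    return table

# Serialize with XHTML as the default namespace.
ET.register_namespace("", XHTML)
table = invoice_table([{"item": "widget", "qty": 2, "price": "9.50"}])
print(ET.tostring(table, encoding="unicode"))
```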
Jørn Wildt wrote: > > Sorry for my ignorance, but in all this discussion about standard > media types, is application/vnd.abc+xml considered standard or > non-standard? It is XML, which is standard, but it is also a certain > schema which is ... well what is it? > Well, first, application/vnd.abc+xml isn't a media type, it's a media type *identifier*. XBEL is a standard media type, but XBEL has no registered identifier, so serving XBEL isn't self-descriptive. I don't have enough context to answer your question. Is this type being used on an intranet, where the IANA registry is irrelevant, and it's agreed to by all parties to the transaction? Then there's no REST mismatch. You can serve XBEL self-descriptively on an intranet, by assigning it an identifier that everyone on that intranet agrees to. If the context is HTTP over the Internet, where only the IANA registry is defined, application/vnd.abc+xml can't be considered self-descriptive because it doesn't point to anything, since no registry entry exists. RFC 3023 says hosts MAY decide to fall back to application/xml as a processing model, but you can't bank on this behavior, and the whole point of self-descriptiveness is that the origin server is specifying the processing model with no such ambiguity in the first place. Unless and until application/vnd.abc+xml has an entry in IANA which points to a spec, it will not be self-descriptive using HTTP over the Internet, any more than my using application/xbel+xml (not registered) over the Web is self-descriptive (it isn't). -Eric
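[Editor's note: for readers wondering how identifiers like application/vnd.abc+xml differ syntactically from standards-tree ones, here is a small Python sketch of the registration-tree prefixes defined in RFC 4288 and the +xml suffix convention from RFC 3023 that Eric mentions. Syntax alone proves nothing about actual registration; that still requires looking the identifier up in the IANA registry itself.]

```python
# Rough classifier for media type identifiers, per the registration trees in
# RFC 4288 (standards tree, vnd., prs., x.) and the +xml suffix convention
# of RFC 3023. Classifying the syntax says nothing about whether an IANA
# entry actually exists -- that has to be looked up in the registry.

def classify(identifier: str) -> dict:
    _, _, subtype = identifier.partition("/")
    if subtype.startswith("vnd."):
        tree = "vendor"
    elif subtype.startswith("prs."):
        tree = "personal"
    elif subtype.startswith("x."):
        tree = "unregistered"
    else:
        tree = "standards"
    return {
        "tree": tree,
        # RFC 3023: a receiver MAY fall back to generic XML processing for
        # +xml subtypes, but that fallback is optional, not self-description.
        "xml_fallback": subtype.endswith("+xml"),
    }

print(classify("application/vnd.abc+xml"))
# prints {'tree': 'vendor', 'xml_fallback': True}
```

Note that application/rss+xml, discussed later in this thread, has standards-tree syntax despite (at the time) having no registry entry, which is exactly the mismatch Eric objects to.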
mike amundsen wrote: > > http://tech.groups.yahoo.com/group/rest-discuss/message/6613 > Meaning what? I've carefully explained my point by quoting from that post; you can't just quote that post back at me as "refutation". Roy is drawing a distinction between what conclusions may be drawn when using HTTP over the Internet, and what conclusions may be back-ported to REST-the-style. It contradicts nothing I've said about using the Web, and certainly doesn't indicate that this: "Self-descriptive means that the type is registered and the registry points to a specification and the specification explains how to process the data according to its intent." ...means that using IANA-registered identifiers is somehow optional on the Web. It just means that the IANA registry is implementation-specific, not a requirement of the style. If you're using the Web implementation of REST, IANA-registered identifiers are _not_ optional. -Eric
On Sep 8, 2010, at 7:44 AM, mike amundsen wrote: > http://tech.groups.yahoo.com/group/rest-discuss/message/6613 Right on. The funny (erm...disappointing) thing is that this particular thread and specifically this posting has been referenced several times in this discussion. Roy pretty much says it all there - why, oh, why won't this thread come to a halt? Jan > > mca > http://amundsen.com/blog/ > http://mamund.com/foaf.rdf#me > > > > > On Wed, Sep 8, 2010 at 01:05, Eric J. Bowman <eric@...> wrote: >> Eric Johnson wrote: >>> >>> So far as I understand from the premise of REST, a media type has to >>> be self-describing. >>> >> >> No, there is no requirement that a media type be self-describing. REST >> has a self-descriptive *messaging* constraint. >> >>> >>> IANA registration is neither necessary, nor sufficient to be >>> self-describing. And if by "standardized", you mean IANA >>> registration, that's a very narrow view of standards, given that >>> there are many standards organizations in the world. >>> >> >> IANA is not a standards body, it's a registry, which recognizes >> standards published by any entity that claims to be a standards body. >> On the Web, IANA registration is absolutely a requirement -- how can >> you square your claim that it isn't with Roy's clear definition of the >> term? >> >> "Self-descriptive means that the type is registered and the registry >> points to a specification and the specification explains how to process >> the data according to its intent." >> >> Can that *be* any more clear? Is there some alternate registry for the >> Web that I'm not aware of? If your media type isn't registered, and >> you're sending it over the Web, you're not even _attempting_ to be >> RESTful. >> >>> >>> To the "sufficient" question: if I have a document that I return as >>> XML (application/xml), what really matters to the consumer of that >>> document - can I associate a schema with the document? 
>>> >> >> That doesn't make your messaging self-descriptive, because you're >> requiring introspection to determine the nature of the document. I >> need to know the nature of the data by looking at the headers, not >> (perhaps) unzipping your content and sniffing it first -- application/xml is *not* self-descriptive, as has been (correctly) pointed out on >> this list a hundred times... >> >>> >>> If it isn't, even though the media type in use is well-defined and >>> registered with IANA, absent some form of schema (DTD, XML Schema, >>> RelaxNG), the client I wrote really cannot assume that the meanings >>> are the same. >>> >> >> No, self-descriptiveness does not require introspection of the content >> to determine its processing model, it's dependent on registered media >> type identifiers. If I send XHTML as text/plain, you need to treat it >> as text/plain, not introspect it and determine that its schema means it >> should be treated as XHTML -- that's sniffing, and sniffing has nothing >> to do with REST. >> >>> >>> As to the necessary question: Was the "Atom" specification any less >>> valid and self-describing the day before it was registered with IANA >>> than it was the day after? It was self-describing before and after. >>> >> >> No, it was not. Identifiers that aren't listed in *any* registry can't >> *possibly* meet Roy's definition of self-descriptiveness: >> >> "Self-descriptive means that the type is registered and the registry >> points to a specification and the specification explains how to process >> the data according to its intent." >> >> Before Atom's identifier was registered in IANA, it was registered... >> where? If the identifier isn't in a registry, then it can't point to a >> specification, so it can't *possibly* meet Roy's definition. 
>> >>> >>> The internet has few baked in central control points - DNS might be >>> the only one, really - even with respect to REST, we have a >>> combination of standards organizations including the IETF, W3C, IANA, >>> and OASIS that all contribute "standards" to help the web form. >>> >> >> None of which aren't recognized by the IANA registry, which is a >> registry, not a standards body. Merely a means to point identifiers to >> descriptive documents, published by *any* entity that claims to be >> releasing a "standard" by the entity's own (and not IANA's) definition. >> Could the bar *be* any lower? >> >>> >>> If China, with its hundreds of millions of internet users, decides >>> that it can't be bothered to register a media type with IANA, yet it >>> could easily be that *every* relevant Chinese client of that >>> media-type might readily understand it. That sounds like a feature >>> of the internet, not a bug. >>> >> >> I have no problem with that. However, in _reality_ there is no >> alternate registry to IANA, not here, not in China, not anywhere on the >> planet. So discussions about theoretical alternatives to IANA are just >> that -- theoretical. On the Web, today, there exists only the IANA >> registry, therefore there is no other means to meet Roy's requirement: >> >> "Self-descriptive means that the type is registered and the registry >> points to a specification and the specification explains how to process >> the data according to its intent." >> >> Sure, if the Chinese alternative to IANA comes along, it would be a >> legitimate registry for meeting the self-descriptiveness constraint. >> But, here's a list of *actual* registries which *exist* today on the >> Web: >> >> IANA >> >> Sure, alternatives are possible, but pragmatically, that's _all_ there >> is. Even if there were some other registry, the wording of RFC 2616 >> would still "discourage" its use. 
>> >>> I also like to think that REST makes sense to think about in the >>> context of enterprise software. So to me, the question is, >>> "self-describing, but for what scope?" Maybe this is a small scope, >>> such as the software my company writes, and all of our customers. >>> Yet, my company can then get the benefits of following a RESTful >>> architecture. >>> >> >> If you're sending HTTP over the Internet, then potential participants >> in the communication include folks beyond your or your partners' >> organizational boundary. If your identifier is opaque, then these >> intermediaries can't be participants, but merely dumb routers, which is >> the result we're presumably trying to avoid by using REST in the first >> place. >> >>> >>>> Now, can anyone come back with a solid, convincing argument that >>>> this is not the case? >>>> >>> >>> Hopefully, the above did that. >>> >> >> Not even close. You've failed to do anything but speculate that maybe >> some alternative registry will emerge. A valid point, but moot, since >> no such alternative _has_ emerged. Therefore, in order to meet Roy's >> requirement for self-descriptiveness: >> >> "Self-descriptive means that the type is registered and the registry >> points to a specification and the specification explains how to process >> the data according to its intent." >> >> There exist, on the Web, exactly ZERO legitimate alternatives to the >> IANA registry, in practice. Where else are media type identifiers >> registered for anyone to look up their associated standards? Nowhere, >> severely restricting the options for meeting the constraint, in >> practice. >> >>> >>> To that end, I see REST as an aspirational endpoint. >>> >> >> I say that myself, all the time. However: >> >>> >>> I'm not going to deny fellow architects the label of "REST" if, for >>> example, they got everything right, but used a few non-standard-IANA-registered media-types for a few small portions of their application. 
>>> >> >> If there's no intent to standardize a type or register an identifier, >> then there's no effort being made at self-descriptiveness. This isn't >> just a mismatch, this is actively thwarting the goals of REST. Using >> an unregistered identifier for an unstandardized type is only congruous >> with REST if there is an intent to change that situation -- what I'm >> arguing against here is the refusal to admit there's even a problem. >> >>> >>> Some pages have links to other pages, and some pages might be >>> endpoints (with merely incidental links, like "back"). I suggest >>> that in a RESTful world, the pages that link to other pages should >>> strongly favor standardized (and registered) media types. When you >>> get to the "endpoint" pages that don't contribute to the hypertext >>> state of an application (PDF, Flash, video), then the media type is >>> much more wide open, and it should be. >>> >> >> No, REST says nothing about self-descriptiveness being optional, or >> more important / less important based on resource type. Self- >> descriptive messaging is a constraint, and those who choose to ignore >> this _fact_ simply are not following the REST style, and should not >> call their results REST. I am opposed to attaching the REST label >> where no effort is being made to follow the style. >> >>> >>> Of course, any good developer should reuse what is already available, >>> and appropriate. On the other hand, given a hammer, don't turn >>> everything into a nail. Figuring out when to make a new media type, >>> and when to reuse an existing one might actually be an extremely >>> difficult call. >>> >> >> 100% agreed. My argument is against the cavalier usage of unregistered, >> unstandardized types without any mention whatsoever, and outright >> denial of the existence, of any constraint issues or tradeoffs >> associated with the practice. 
The bar to registering an identifier is >> purposefully low, yet folks act like it's an insurmountable obstacle, >> then pretend it isn't required. NOT REST! >> >> -Eric
Bob Haugen wrote: > > May I suggest starting a new thread with an appropriate title to focus > exclusively on the IANA registry issue, where each person who has a > different position states their position clearly and succinctly, and > thereafter we refer back to that thread as a FAQ? > I've been mulling the idea of suggesting on http-wg that "discouraged" be replaced with the normative "SHOULD NOT". The implication would be that HTTP implementations not using IANA would be considered, at best, "conditionally compliant". I assume there's some logic behind not stigmatizing intranet HTTP implementations like that, but I'm curious to know what it is. Still, I think it's wrong to encourage what the spec explicitly discourages. -Eric
Jan Algermissen wrote: > > Roy pretty much says it all there - why, oh, why won't this thread > come to a halt? > What, specifically, do you think Roy is saying that makes it OK to use opaque identifiers on the Web and call it self-descriptive? Do you mean this? "The problem is that I can't say 'REST requires media types to be registered' because both Internet media types and the registry controlled by IANA are a specific architecture's instance of the style." This says nothing to refute my assertion that the specific instance of REST known as the Web _does_ require media types to be IANA-registered. Yeah, it might get replaced someday, but that day is nowhere on the horizon, so at the present time, on the Web, it absolutely goes against self-descriptive messaging to do what HTTP "discourages" by passing opaque identifiers. -Eric
On Wed, Sep 8, 2010 at 7:46 AM, Eric J. Bowman <eric@...> wrote: > Bob Haugen wrote: >> >> May I suggest starting a new thread with an appropriate title to focus >> exclusively on the IANA registry issue, where each person who has a >> different position states their position clearly and succinctly, and >> thereafter we refer back to that thread as a FAQ? >> > > I've been mulling the idea of suggesting on http-wg that "discouraged" > be replaced with the normative "SHOULD NOT". The implication would be > that HTTP implementations not using IANA would be considered, at best, > "conditionally compliant". I assume there's some logic behind not > stigmatizing intranet HTTP implementations like that, but I'm curious > to know what it is. Still, I think it's wrong to encourage what the > spec explicitly discourages. > How certain are you that "not stigmatizing intranet HTTP implementations" is the one and only justification for that wording? Why don't you go off and suggest that to http-wg first and see what they come back with, instead of pointlessly dragging this 'debate' out? Cheers, Mike
Mike Kelly wrote: > > How certain are you that "not stigmatizing intranet HTTP > implementations" is the one and only justification for that wording? > I didn't mean to suggest it as a justification. What I said was that that's the result of not following a SHOULD or doing stuff you SHOULD NOT; that's by definition of how those normative terms affect RFC 2616's definition of conditional vs. full compliance. I did say I was curious as to the rationale behind the decision, because I know the spec authors aren't careless with their wording. Deliberately avoiding a SHOULD/SHOULD NOT there is a conscious decision for the use/non-use of IANA for media type identifiers to have no impact on conditional vs. full compliance. -Eric
Mike Kelly wrote: > > Why don't you go off and suggest that to http-wg first and see what > they come back with, instead of pointlessly dragging this 'debate' > out? > Seeing as how you're the one who revived it by insisting that opaque identifiers are somehow self-descriptive on the Web, why don't you explain how googling for media type identifiers is self-descriptive? I don't think it's pointless to insist that you back up your position. -Eric
At the risk of not being able to walk past the flock of pigeons without throwing in the cat, I ask the following question: "As RSS is not registered with the IANA, do we still call it an opaque media type even though it has high ubiquity of use and its definition is very widely understood?" Regards, Alan Dean On Wed, Sep 8, 2010 at 07:37, Jan Algermissen <algermissen1971@...> wrote: > > On Sep 8, 2010, at 7:44 AM, mike amundsen wrote: > > http://tech.groups.yahoo.com/group/rest-discuss/message/6613 > Right on. > The funny (erm...disappointing) thing is that this particular thread and specifically this posting has been referenced several times in this discussion. > Roy pretty much says it all there - why, oh, why won't this thread come to a halt? > Jan
I've always interpreted the following passage to mean that Waka might not re-use MIME at all, let alone the IANA syntax for identifiers... "The problem is that I can't say 'REST requires media types to be registered' because both Internet media types and the registry controlled by IANA are a specific architecture's instance of the style -- they could just as well be replaced by some other mechanism for metadata description." ...meaning there's no coupling in REST of "media type" to MIME. Apparently, others have interpreted it to mean you may "just as well" ignore the IANA registry on the Web -- which can't be the correct interpretation because it doesn't square with anything else Roy has said about self-descriptiveness. The reality of the Web is that the IANA registry hasn't "just as well been replaced" by anything, nor has that been suggested as being within the scope of work for HTTPbis, which is what I meant by saying there's no replacement for IANA on the Web anywhere on the horizon. -Eric
Alan Dean wrote: > > At the risk of not being able to walk past the flock of pigeons > without throwing in the cat, I ask the following question: > > "As RSS is not registered with the IANA, do we still call it an > opaque media type even though it has high ubiquity of use and its > definition is very widely understood?" > I wish there were a date on this, March of what year? What is the status? I don't know... http://workbench.cadenhead.org/news/2937/requesting-mime-media-type-rss ...but that pretty well sums up the arguments for registering a media type identifier for RSS on the Web, including "because Atom has one." Can anyone remember RSS being seriously discussed as an alternative to Atom on this list over the last five years, though? I attribute the disparity to Atom's ubiquitous *and* self-descriptive identifiers. If and when application/rss+xml is accepted into IANA's standards tree (judging by the syntax, also a higher bar to clear), it's self-descriptive from that point forwards. Again, when the context is HTTP over the Internet, i.e. the Web. Until then, what registry is it in? None, which only makes it unRESTful on the Web. On my LAN, I'm the registration authority, not IANA, so I can meet self-descriptiveness using any identifier I please, and a standard is whatever I say it is. But I'm not at liberty to do this with a public Web API... or with an identifier that uses RFC-defined standards-tree syntax. So no, the ubiquity of RSS doesn't make it self-descriptive. The presence of a registered identifier for RSS in *whatever* registry the participants in the communication agree on makes it self-descriptive. On the Web, the only registry is IANA, so until it actually is/was approved, application/rss+xml is/was an opaque identifier, because its syntax leads me to assume it's an approved standard from an outfit recognized by IANA (the RFC-mandated standards-tree requirements) -- if that isn't the case, how is it self-descriptive? Where do I look it up? -Eric
On Wed, Sep 8, 2010 at 2:21 AM, Eric J. Bowman <eric@...> wrote: > Jørn Wildt wrote: >> >> Sorry for my ignorance, but in all this discussion about standard >> media types, is application/vnd.abc+xml considered standard or >> non-standard? It is XML, which is standard, but it is also a certain >> schema which is ... well what is it? >> > > Well, first, application/vnd.abc+xml isn't a media type, it's a media > type *identifier*. XBEL is a standard media type, but XBEL has no > registered identifier, so serving XBEL isn't self-descriptive. > > I don't have enough context to answer your question. Is this type > being used on an intranet, where the IANA registry is irrelevant, and > it's agreed to by all parties to the transaction? Then there's no REST > mismatch. You can serve XBEL self-descriptively on an intranet, by > assigning it an identifier that everyone on that intranet agrees to. Is agreement by all parties to the transaction, on an intranet, all that's required? Or, is there an implied requirement that one must set up an internal "registry" on an intranet to be truly RESTful? --tim
On Wed, Sep 8, 2010 at 2:46 AM, Eric J. Bowman <eric@...> wrote: > Bob Haugen wrote: >> >> May I suggest starting a new thread with an appropriate title to focus >> exclusively on the IANA registry issue, where each person who has a >> different position states their position clearly and succinctly, and >> thereafter we refer back to that thread as a FAQ? >> > > I've been mulling the idea of suggesting on http-wg that "discouraged" > be replaced with the normative "SHOULD NOT". The implication would be > that HTTP implementations not using IANA would be considered, at best, > "conditionally compliant". I assume there's some logic behind not > stigmatizing intranet HTTP implementations like that, but I'm curious > to know what it is. Still, I think it's wrong to encourage what the > spec explicitly discourages. You keep saying things like "encourage" and "cavalier"; I think it's important to point out that most folks here would readily agree that the use of existing, registered Media Type specifications is encouraged and desirable. No one is actively encouraging what the spec discourages - instead, we're saying that if you know what you're doing and there is shared understanding among all participants, it's ok. The contention seems to be this: we would encourage developers to use registered media types, and you say they must use registered media types. I'm interested in what system properties[1] are negatively affected under the following scenarios: - suppose I use a well-known, widely used, but un-registered specification (e.g. OpenSearch) - suppose I use a roll-my-own format, but specify it and it's readily discoverable via google by all would-be clients. --tim [1] - http://en.wikipedia.org/wiki/List_of_system_quality_attributes
Looks like the FCC released a bunch of web APIs. I couldn't find any mention of REST (maybe I didn't look hard enough), which I found surprising given that it's the coolest label around; but glancing over the documentation, it makes sense that they didn't (and hopefully the omission was on purpose). I wonder how useful these APIs are. http://arstechnica.com/web/news/2010/09/calling-all-developers-fcc-releases-apis-for-key-databases.ars , http://reboot.fcc.gov/developer/license-view-api
On 7 September 2010 18:23, Bob Haugen <bob.haugen@...> wrote: > > > > Yahoo says there are 103 messages in this thread. The discussion is > circular and will never end. > > May I suggest starting a new thread with an appropriate title to focus > exclusively on the IANA registry issue, For me all this is now too little, too late, because soon I have to start working on areas other than REST, but funnily enough, now that you mention it, I felt compelled to actually read what the RFCs say - I'm the kind of guy that follows the motto "when everything fails, read the manual"... [RFC2046 - Media Types] A media type value beginning with the characters "X-" is a private value, to be used by consenting systems by mutual agreement. Any format without a rigorous and public definition must be named with an "X-" prefix, and publicly specified values shall never begin with "X-". [RFC4288 - Media Type Specifications and Registration Procedures] For convenience and symmetry with this registration scheme, subtype names with "x." as the first facet may be used for the same purposes for which names starting in "x-" are used. These types are unregistered, experimental, and for use only with the active agreement of the parties exchanging them. (...) Types in this tree MUST NOT be registered. So, even if anyone with good sense will agree that such things should be used as a last resort, and by no means encouraged, doesn't this mean that even the RFCs foresee the use of media types that MUST NOT be registered (no ambiguity here whatsoever), with the only constraint that they start with an 'x', and that we have been wasting our time with this discussion? And thanks to Michael Schuerig for this nice post, which in my opinion exposes all this debate in such a simple and clear way that more than 100 posts in this list could not... http://www.schuerig.de/michael/blog/
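[Editorial aside: the facet rules quoted above from RFC 2046/4288 can be sketched as a small classifier. This is a hedged illustration, not code from the thread; the function name and return labels are my own.]

```python
def classify_media_type(identifier: str) -> str:
    """Classify a media type identifier by its registration facet,
    following the naming rules quoted above from RFC 2046/4288."""
    if "/" not in identifier:
        return "invalid"
    # Keep only the subtype, dropping any parameters (e.g. "; charset=...").
    subtype = identifier.split("/", 1)[1].split(";", 1)[0].strip()
    if subtype.startswith(("x-", "x.")):
        return "experimental"  # unregistered by definition; MUST NOT be registered
    if subtype.startswith("vnd."):
        return "vendor"        # registrable in the vendor tree
    if subtype.startswith("prs."):
        return "personal"      # registrable in the personal tree
    return "standards"         # standards tree; needs a recognized standards body

for mt in ("application/x.mytopic+xml",
           "application/vnd.abc+xml",
           "application/atom+xml"):
    print(mt, "->", classify_media_type(mt))
```

Under these rules, the 'x' types António quotes are precisely the ones that can never appear in the IANA registry, which is the crux of the disagreement that follows.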
António Mota wrote: > > And thanks to Michael Schuerig for this nice post, that in my opinion > exposes all this debate in such a simple and clear way that more than > 100 post in this list could not... > YOU are NOT entitled to bitch about the number of posts in this thread, given your sociopathic propensity to constantly take this thread, and any other thread I participate in, off-topic with your asinine, worthless, personalized attacks against my character -- yeah, that really helps others understand REST, and doesn't at all contribute to a lack of conciseness. Good grief. I thought you left, anyway? -Eric
Tim Williams wrote: > > Is agreement by all parties to the transaction, on an intranet, all > that's required? Or, is there an implied requirement that one must > set up an internal "registry" on an intranet to be truly RESTful? > The registry on my LAN exists entirely within my mind, as the only sysadmin. Ideally, any other sysadmin ought to be able to figure out what my identifiers mean, like if I die in a car crash tomorrow, because it's written down *somewhere* what they stand for. It's an incredibly low bar, except where the Web is concerned, and even then the only real difficulty is if you're targeting the "standards tree". -Eric
Tim Williams wrote: > > You keep saying things like "encourage" and "cavalier", I think it's > important to point out that most folks here would readily agree that > the use of existing, registered Media Type specifications is > encouraged and desirable. > Then how come, every time I suggest that HTML is a perfectly acceptable media type to use in REST, the responses come pouring in stating that such a view isn't serious, and that "real REST developers" mint new identifiers for every resource type they encounter? The prevailing view in the REST community has become one where it just isn't considered rational to suggest using HTML, by those who then turn around and recommend opaque identifiers that can't possibly be self-descriptive on the Web, because they aren't IANA-registered. > > No one is actively encouraging what the spec discourages - instead, > we're saying that if you know what you're doing and there is shared > understanding on all participants, it's ok. > I wish that's what folks were saying, then I'd have made my point. But, every time I point out that opaque identifiers fail to meet the self-descriptive messaging constraint, I'm shouted down by those who refuse to believe that any tradeoffs exist when they're used -- I have to agree with Roy, that if you're going into that decision with your eyes wide open, fine. However, when folks refuse to admit, and actively shout down those who point out, that there is a tradeoff, how can those reading their advice be making an informed, eyes-wide-open decision? That's what I mean by "cavalier" -- it *does* make a difference, and not a trivial one, either. On the Web, "participant" includes the entire deployed infrastructure, not just sender and recipient. 
If you stick to ubiquitous identifiers whose processing rules have long been understood by the deployed infrastructure, you immediately gain the benefits of REST, which you can't bank on if the deployed infrastructure simply ignores your content because it's never encountered your identifier before. So the ability of ubiquitous identifiers to scale, is an order of magnitude (if not more) greater than exists with opaque identifiers. This is reality, and it's important to point that out to anyone expecting their meant-for-public-consumption Web API to achieve Internet scale, like Mark pointed out to the Blinksale folks: http://tech.groups.yahoo.com/group/rest-discuss/message/6569 Telling Blinksale to g'head with an opaque identifier, without advising that it needs to be registered and standardized to meet the constraints of REST, would be encouraging them to shoot themselves in the foot, as far as their stated goals are concerned. > > The contention seems to be this: we would encourage developers to use > registered media types and you say they must use registered media > types. > I keep bending over backwards to explain that my point is only valid when the context is the Web. When that context is the Web, yes, you MUST use registered identifiers, as there is no other defined means of meeting the self-descriptive messaging constraint in that context. If your API is only meant for limited redistribution and you don't care about scaling, then by all means use HTTPS and don't worry about it. But if you're expecting *anybody* else to re-use your system, including shared intermediary caches, you must play by the established rules for the Web, i.e. the only registry that anybody (and, in fact, everybody) has agreed to is IANA. And isn't that at least 80% of what we discuss regarding REST? Publishing Web APIs meant for general consumption over the public Internet? 
That is the one thing all the so-called REST APIs out there on the Web have in common -- they all attempt to be Internet-scale solutions, which will never happen without self-descriptive messaging. > > I'm interested in what system properties[1] are negatively affected > under the following scenarios: > REST itself provides all the explanations necessary regarding the tradeoffs of the uniform interface. Without self-descriptive messaging, none of the benefits of the uniform interface will be realized, because the interface won't be uniform. > > - suppose I use a well-known, widely used, but un-registered > specification (e.g. OpenSearch) > Last I looked, OpenSearch results are just an Atom extension, which means re-using the ubiquitous, registered application/atom+xml identifier -- the fact that the content happens to be a search result is an implementation detail that's between sender and recipient, and is not required to be exposed over the wire in Content-Type, because it doesn't affect the intended processing model for the payload. > > - suppose I use a roll-my-own format, but specify it and it's readily > discoverable via google by all would-be clients. > How do you ensure, without a registry (which Google isn't), that nobody else will ever use the same identifier to mean something else? This collision problem is exactly what the registry concept is meant to solve. Forcing sysadmins configuring intermediaries to Google in order to determine intent (which is an unrealistic expectation if the goal is serendipitous re-use) is exactly the problem self-descriptive messaging is meant to avoid: "Self-descriptive means that the type is registered and the registry points to a specification and the specification explains how to process the data according to its intent." Google, or any other search engine, is not a substitute. A self-descriptive message is one where the identifier's meaning is unambiguous. 
The IANA registry is the means for achieving this on the Web -- search results aren't unambiguous, like registry entries. -Eric
I understand that you think the world revolves around you and that you suffer some kind of delusional paranoia, but no, I don't give a rat's ass about you and your limited intelligence. Once again, when things don't go your way, you resort to insult. That's typical of some kind of mentalities, I guess... On 8 Sep 2010 20:30, "Eric J. Bowman" <eric@...> wrote: António Mota wrote: > > And thanks to Michael Schuerig for this nice post, that in my opinion > exp... YOU are NOT entitled to bitch about the number of posts in this thread, given your sociopathic propensity to constantly take this thread, and any other thread I participate in, off-topic with your asinine, worthless, personalized attacks against my character -- yeah, that really helps others understand REST, and doesn't at all contribute to a lack of conciseness. Good grief. I thought you left, anyway? -Eric
António Mota wrote: > > So, even if anyone with good sense will agree that such things should > be used as a last resort, and by no means encouraged, doesn't this > mean that even the RFCs foresee the use of media types that MUST NOT > be registered (no ambiguity here whatsoever), with the only constraint > that they start with an 'x', and we have been wasting our time > with this discussion? > Keeping my answer polite: Those RFCs have nothing to do with the self-descriptive messaging constraint. HTTP doesn't even require you to send Content-Type. REST requires messaging to be self-descriptive, not HTTP, not the rules for the IANA registry: "Self-descriptive means that the type is registered and the registry points to a specification and the specification explains how to process the data according to its intent." If the identifier isn't in a registry, it isn't self-descriptive, even though it's still possible to be fully-compliant with HTTP. So, no, I'm not here to waste anybody's time with irrelevancies. -Eric
On the web, all good citizens (clients, UAs, servers, intermediaries...) should respect, or comply with, the RFCs I quoted, right? Maybe it's not a MUST, but at least a SHOULD. So, if they are good citizens, they should be aware that those RFCs, with which they should comply, allow the existence of media types not registered with IANA that start with an 'x'. And if they know of their existence, they should know what to do with them. For instance, treat application/x.mytopic+xml as if it was application/xml. Now, how could those messages comply with the self-descriptive constraint of REST, which implies that the type is registered and the registry points to a specification and etc...? Because the "consenting systems" that reached a "mutual agreement", the "parties exchanging them" that reached an "active agreement", took care of that. For such an agreement, a specification must exist, and a place, or registry, where to store it and find it when necessary, right? So, a non-IANA-registered type like "application/x.mytopic+xml" is self-descriptive to intermediaries that comply with RFC 2046 - because such types are defined there - and is self-descriptive to the "consenting systems" and the "parties exchanging them" because they so agreed. All this is the way I see it, my interpretation only. I'm not saying that it must be like that, or even that it should be like that. And of course the web citizens are not obligated to be "good". And this, my point of view, is not to encourage such behaviour. It should be discouraged, but it is possible to do it when necessary, while maintaining the constraints of REST. On 8 Sep 2010 21:28, "Eric J. Bowman" <eric@...> wrote: António Mota wrote: > > So, even if anyone with good-sense will agree that such things should > be ... Keeping my answer polite: Those RFCs have nothing to do with the self-descriptive messaging constraint. HTTP doesn't even require you to send Content-Type. 
REST requires messaging to be self-descriptive, not HTTP, not the rules for the IANA registry: "Self-descriptive means that the type is registered and the registry points to a specification and ... If the identifier isn't in a registry, it isn't self-descriptive, even though it's still possible to be fully-compliant with HTTP. So, no, I'm not here to waste anybody's time with irrelevancies. -Eric
Nathan wrote: > > As Mike pointed out, Roy has already discussed this at length: > http://tech.groups.yahoo.com/group/rest-discuss/message/6613 > Which I nominate as the most-misunderstood post Roy has ever made. All Roy is saying, is that REST-the-style has no concept of MIME or IANA, which does not mean that IANA-registered identifiers aren't required by the Web architecture, in order to meet self-descriptiveness. Requiring IANA-registered identifiers on the Web doesn't preclude evolution of new types; in fact, we've witnessed the adoption of Atom as a standard with registered identifiers, well after REST was written, as just one example. -Eric
António Mota wrote: > > So, if they are good citizens, they should be aware that those RFCs, > with which they should comply, allow the existence of > media types not registered with IANA that start with an 'x'. And if > they know of their existence they should know what to do with them. > No, what it means is that unregistered identifiers from the experimental tree aren't self-descriptive. The *only* exception to this that I can think of is that artifact of a bygone age, application/x-www-form-urlencoded. > > For instance, treat > > application/x.mytopic+xml > > as if it was > > application/xml > Yes, according to RFC 3023, you MAY do that, but it's still an opaque identifier, and you can't bank on that behavior. For example, serving XBEL as application/xml, or treating application/xbel+xml as application/xml, is not self-descriptive because XBEL defines the <bookmark> element as a link. How can any intermediary infer this by treating the payload as application/xml? Your argument implies that application/xml is a valid identifier in REST systems, which everyone including Roy has said it isn't. The identifier must specify the proper processing model; if intermediaries have to fall back to application/xml, then they've obviously not understood that the processing model is different for your type. Intermediaries MAY treat application/atom+xml as application/xml if they don't grok Atom, but what good does that do? > > Now, how could those messages comply with the self-descriptive > constraint of REST, which implies that the type is registered and the > registry points to a specification and etc... ? Because the > "consenting systems" that reached a "mutual agreement", the "parties > exchanging them" that reached an "active agreement" took care of > that. For such an agreement a specification must exist and a place, or > registry, where to store it and find it when necessary, right? > No intermediary can reach a "mutual agreement" about an unregistered type. 
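[Editorial aside: the RFC 3023 fallback Eric refers to can be sketched as below. This is an illustration of the rule, not code from the thread; the function name is my own. The point it makes concrete is that the fallback preserves only generic XML handling, not type-specific semantics such as XBEL's <bookmark>-is-a-link rule.]

```python
def xml_fallback(media_type: str):
    """If a processor doesn't recognize a '+xml' subtype, RFC 3023
    says it MAY treat the payload as generic XML. Returns the fallback
    identifier, or None when no XML fallback applies. Note what is
    lost: generic XML processing cannot recover type-specific
    semantics (e.g. that XBEL's <bookmark> element is a link)."""
    base = media_type.split(";", 1)[0].strip().lower()  # drop parameters
    if base.endswith("+xml") or base in ("application/xml", "text/xml"):
        return "application/xml"
    return None

xml_fallback("application/x.mytopic+xml")  # falls back: well-formedness only
xml_fallback("application/octet-stream")   # no XML fallback applies
```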
Which is why, on the Web, self-descriptive messaging requires exactly what Roy says it requires: "Self-descriptive means that the type is registered and the registry points to a specification and the specification explains how to process the data according to its intent." Since X identifiers are by definition not registrable, they are not, by definition, self-descriptive, because they do not, by definition, point to a specification. If only the sender and recipient agree on the meaning, then it's just as Roy describes in Chapter 6: a library-based API, not a network-based API, therefore not a uniform interface. > > So, a non-IANA-registered type like "application/x.mytopic+xml" is > self-descriptive to intermediaries that comply with RFC 2046 - because > such types are defined there - and is self-descriptive to the "consenting > systems" and the "parties exchanging them" because they so agreed. > Reducing any intermediary to behaving as a dumb router is exactly the situation REST tries to avoid by requiring self-descriptive messaging. See Chapter 6. You can't say that "application/x.mytopic+xml" is registered by pointing to RFC 2046's statement that it's unregistrable. That doesn't make any sense. > > And also this my point of view is not to encourage such behaviour. It > should be discouraged but it is possible to do it when necessary. > While maintaining the constraints of REST. > Sending opaque identifiers that aren't in any registry meets self-descriptiveness, how? Where do I go to look up the meaning? The existence of unregistered experimental identifiers isn't proof that unregistered identifiers are self-descriptive. Not for the Web, where the deployed infrastructure is entirely geared towards the common understanding of a limited number of ubiquitous, registered identifiers. -Eric
Nathan wrote: > > Wow did you really just snip everything interesting, of value, and > specific to the ^^subject line^^ to get back to the angels on pins > bit of the previous thread? > Yes, because I don't understand how we can debate the merits of using unregistered identifiers vis-a-vis self-descriptive messaging, when the majority is still insisting that unregistered identifiers are self-descriptive. I'm not trying to piss you off; just trying to understand your position, specifically: > > This issue is orthogonal to the "making up a new media type for every > resource encountered" discussion, that's a best practice thing I hope > we could discuss. > If we assume that opaque identifiers are somehow self-descriptive, then there's room to have this debate. I don't understand how they can be, so I can't see the issue as orthogonal, rather than central. > > Eric, FACT is that using IANA registered media types on the web is a > SHOULD, using a non registered type is "discouraged" - do we really > need to take this to the TAG and to the IETF http working group and > waste their time with something which is widely understood. > The WebArch document states that we SHOULD send identifiers, but says nothing about whether such identifiers need to be registered. HTTP says not using IANA is "discouraged", which I think is quite clear; however, judging from the pushback, I disagree that it's "widely understood" to mean what it says. > > This issue is orthogonal to the "making up a new media type for every > resource encountered" discussion, that's a best practice thing I hope > we could discuss. > That's exactly the discussion I've been trying to have all year. But we keep coming back to the assumption that application/foo+xml is just as self-descriptive as application/xhtml+xml, on the Web. I'm not actually _trying_ to get folks upset with me, I just don't understand how this is seen as an orthogonal concern? -Eric
António Mota wrote: > > "To be registered" in REST does not mean to be registered in IANA, so > IANA-unregistered types can be registered elsewhere, it being sufficient > that the "consenting systems" and "parties exchanging them" agree where > that place is. > I never said that it did. I have specifically stated that on the Web, only the IANA registry exists, therefore any identifier not in that registry cannot be self-descriptive. If your system's goal is to scale on the Web, then you'll need an identifier whose understanding isn't restricted to those systems you know about, but one whose correlation with a standard is documented in the only registry those systems you don't know about can refer to. > > Intermediaries are not an interested party in this agreement > Yes, they are, if you're attempting to scale a system on the Web. If you don't care about scaling, or serendipitous re-use, which come from using self-descriptive messaging, then I suppose this is irrelevant. But since the common case is to attempt to scale by taking advantage of caches and such (the existing deployed infrastructure), intermediaries must be considered as potential participants in the communication. > > it's enough for them that they know that possible IANA-unregistered > types - easily identified by the 'x' - can exist, and what to do with > them, like treating "application/x.mytopic+xml" as "application/xml". > No, because that's resorting to guesswork, while self-descriptive messaging is all about not resorting to guesswork. Knowing that a type is unregistered isn't the same as knowing its processing model. Knowing that a type is based on XML isn't the same as knowing its processing model. If an identifier is registered, *anybody* can determine precisely what the intended processing model is -- this is critical to the style. 
If the goal of your system is to scale on the Web, then any intermediary must be able to deduce precisely the intended processing model, without needing to look beyond the IANA registry. -Eric
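[Editorial aside: the two positions in this exchange reduce to which registries a participant consults when resolving an identifier. The sketch below is a hypothetical illustration; both registry tables and their entries are examples, not real registry contents.]

```python
# A public, IANA-style registry versus a private agreement between
# consenting parties. All entries are illustrative.
PUBLIC_REGISTRY = {
    "application/atom+xml": "RFC 4287",
    "application/xhtml+xml": "RFC 3236",
    "text/html": "RFC 2854",
}
PRIVATE_AGREEMENT = {
    "application/x.mytopic+xml": "spec agreed between sender and recipient",
}

def spec_for(identifier, registries):
    """Return the specification an identifier points to in the first
    registry that knows it, or None -- meaning the message is opaque
    to this participant."""
    for registry in registries:
        if identifier in registry:
            return registry[identifier]
    return None

# Eric's point: a public-Web intermediary consults only the public registry,
# so the experimental type is opaque to it.
spec_for("application/x.mytopic+xml", [PUBLIC_REGISTRY])
# António's point: consenting parties also consult their private agreement,
# so the same identifier resolves for them.
spec_for("application/x.mytopic+xml", [PUBLIC_REGISTRY, PRIVATE_AGREEMENT])
```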
Nathan wrote: > > This issue is orthogonal to the "making up a new media type for every > resource encountered" discussion, that's a best practice thing I hope > we could discuss. > Shorter version of my point: It would be great if we could stipulate that any theoretical identifiers under discussion are assumed to represent a proposed standard and are intended to be IANA-registered for use on the Web, _then_ have that debate about re-use vs. evolving new types. But, that stipulation fundamentally changes the nature of that debate, as without it, all things may not otherwise be considered equal unless we limit the discussion to non-Internet uses of HTTP. Unless you were suggesting that we leave HTTP out of it and only consider REST-the-style. But, I don't think that's realistic, since the best example I can give of how the preference to re-use scales better than the preference to create, is to point to the deployed Web infrastructure that's geared around the shared understanding of a limited number of ubiquitous types. While I appreciate your effort to take the debate in a different direction, I hope you understand my concerns. -Eric
Eric, What is the most succinct but relatively complete version of your position on this issue? I'm really hoping that we can avoid another 100-message permathread, get all of the conflicting positions on the table, have somebody nicely summarize, and then just point to this thread until something significantly changes in the HTTP universe. All of the possible arguments have already been stated, over and over. It is not productive to go through them all again. Thanks, Bob Haugen
On Wednesday, September 8, 2010, Eric J. Bowman <eric@...> wrote: > > I never said that it did. I have specifically stated that on the Web, > only the IANA registry exists, therefore any identifier not in that > registry cannot be self-descriptive. If your system's goal is to scale > on the Web, then you'll need an identifier whose understanding isn't > restricted to those systems you know about, but one whose correlation > with a standard is documented in the only registry those systems you > don't know about can refer to. The RFCs I quoted clearly say that IANA-unregistered types can exist, identifiable by the 'x'. Where does it say that those 'x' types cannot be registered elsewhere, namely by the parties that agreed on them, thus fulfilling the registration aspect of self-descriptiveness? > >> >> Intermediaries are not an interested party in this agreement >> > > Yes, they are, No they aren't, because (quoting myself) >> it's enough for them that they know that possible IANA-unregistered >> types - easily identified by the 'x' - can exist, and what to do with >> them, like treating "application/x.mytopic+xml" as "application/xml". >> > > No, because that's resorting to guesswork, No it's not, because application/xml is an IANA-registered type, and so intermediaries can know about its security considerations, they can cache it, they can do a lot of the things with it that they normally do. They don't know its processing model and so they can't prefetch links and the like? True, but that's the trade-off that Roy also talked about. Do you want quotes about this? while self-descriptive > messaging is all about not resorting to guesswork. Knowing that a type > is unregistered isn't the same as knowing its processing model. Again, being unregistered in IANA like the x-types doesn't imply they cannot be registered elsewhere, namely by the interested parties that agreed on their definition. Knowing > that a type is based on XML isn't the same as knowing its processing > model. 
If an identifier is registered, *anybody* can determine > precisely what the intended processing model is -- this is critical to > the style. No, what is critical to the style is that the interested parties determine precisely what the intended processing model is - and that *can or can not* be anybody. If the goal of your system is to scale on the Web, then any > intermediary must be able to deduce precisely the intended processing > model, without needing to look beyond the IANA registry. Do you really believe that intermediaries look up the IANA registry every time they see a media type they don't already know? Let's say I register a type application/gatosapato+xml where I defined <miau> to indicate a hypermedia transition. Do you think that when an intermediary sees that, it will prefetch the target of <miau>? If not, what is the difference between an IANA-registered application/gatosapato+xml and an IANA-unregistered one - registered instead at Gatos&Sapatos.com with the agreement of all parties interested in Gatos and in Sapatos - whose identifier is application/x-gatosapato+xml? -- António Mota
Bob Haugen wrote: > > What is the most succinct but relatively complete version of your > position on this issue? > You'd have to look back further than this thread for background; I've spent all of 2010 advocating for the re-use of ubiquitous identifiers, and in particular, using HTML and Atom (which is easily extended), in opposition to the notion of creating a new identifier for every implementation of every resource type out there. And being called an idiot, incessantly and from all sides, because of it. I have neither fancy degrees nor corporatist experience, but am merely a self-glorified Web developer/host/ISP who is incapable of seeing REST outside the confines of my experience, therefore I am the one responsible for all the confusion as to what constitutes a self-descriptive message, and may as well be ignored, if not asked to just stop even trying to make a valid point... Well, if your alternative to re-using HTML/Atom is based on your self-glorified credentials and experience, and has led you to believe that minting endless opaque identifiers for use on the Web is congruous with the REST style, then perhaps you _should_ be listening to the benefit of my particular knowledge and experience, because my messaging is self-descriptive and yours isn't, as far as the reality of the deployed infrastructure is concerned. Not to personalize this or anything... but apparently even my thick skin to Internet flaming has its limits, and I've definitely reached that point today. ;-) Succinctly? Minting new types and identifiers willy-nilly incurs substantial cost which will not be recovered any time soon, whereas re-use, even of such mundane markup as HTML, gives immediate payback without incurring any of those costs. Shouldn't even be controversial. -Eric
On Wed, Sep 8, 2010 at 7:40 PM, Eric J. Bowman <eric@...> wrote: > > Succinctly? Minting new types and identifiers willy-nilly incurs > substantial cost which will not be recovered any time soon, whereas re- > use, even of such mundane markup as HTML, gives immediate payback > without incurring any of those costs. Shouldn't even be controversial. > > -Eric > You get no argument from me (and possibly most others) on this point. However, suggesting that doing so (on the Web) for whatever reasons is not REST is where things become sticky and hard to swallow, because I have personally yet to see why it is so. You've demonstrated why it is (or may be) "bad" to do so, but being bad, in my opinion, doesn't nullify the possible RESTfulness of an application. I think reasonable people can agree to disagree, and maybe this is just one of those places where we do so.
I've been fighting the overuse of ATOM in all things for a long time, and I stand by this position. Reusing a media type only makes sense if you can map the model that this media type provides in a way that clients understanding such a media type, without the extensions, can still do something meaningful with it. If they can't, then it's a no-go. If no state transfer can be operated on the existing media type, by carrying the opaque extensions without understanding them, then the media type reuse is a fallacy and a hindrance to innovation. It's not a matter of black and white, and it certainly isn't about fetching anything that has a series of records in an Atom feed, or any data in a microformat in HTML. It's a cost/benefit analysis that you ought to do, balancing the benefits for clients not understanding the embedding, and still doing something useful with the document, against the cost of specializing and minting a new media type. Seb ________________________________________ From: rest-discuss@yahoogroups.com [rest-discuss@yahoogroups.com] on behalf of Eric J. Bowman [eric@...] Sent: 09 September 2010 00:40 To: Bob Haugen Cc: REST-Discuss Group Discussion Subject: Re: [rest-discuss] Re: To use registered media-types or not?
António Mota wrote: > > The RFCs I quoted clearly say that unregistered IANA types can exist, > identifiable by the 'x'. Where does it say that those 'x' types > cannot be registered elsewhere, namely by the parties that agreed on > them, thus fulfilling that registered aspect of self-descriptiveness? > It doesn't. But if you care one whit about anybody else, particularly an intermediary (or Google), re-using your content (like, to pre-cache DNS lookups, or store it in cache) on the Web, then the only registry anybody and everybody agrees on is IANA, which by virtue of not listing them, declares them to be "experimental" as opposed to "standardized," thus rendering them non-self-descriptive. > > No it's not, because application/xml is an IANA-registered type, and so > intermediaries can know about its security level, they can cache it, > they can do a lot of things with it that they normally do. They don't > know its processing model and so they can't prefetch links and the > like? True, but that's the trade-off that Roy also talked about. Do > you want quotes about this? > The unRESTfulness of application/xml is an established fact within this community, which has been subject to plenty of debate, with the final word on the subject coming from Roy. If anything, this is off-topic to this thread. My every assertion isn't based on some whim. Registered and ubiquitous, yes, but my demo's conneg result of serving application/xhtml+xml to any browser it can is RESTful, while its serving application/xml to IE is indisputably a REST mismatch, no matter how pragmatic it is for me to do so. > > Again, being unregistered in IANA like the x-types doesn't imply they > cannot be registered elsewhere, namely by the interested parties that > agreed on their definition. 
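The conneg result Eric mentions — serving `application/xhtml+xml` to any browser that advertises support for it, and something else otherwise — can be sketched as a simple Accept-header check. This is a hypothetical illustration, not his actual implementation, and it deliberately ignores q-value weighting for brevity:

```python
def choose_type(accept_header):
    """Pick a response media type from the client's Accept header.
    Ignores q-values; a real implementation would weigh them."""
    accepted = [part.split(";", 1)[0].strip()
                for part in accept_header.split(",")]
    if "application/xhtml+xml" in accepted:
        return "application/xhtml+xml"
    # Fallback for clients (like old IE) that never list XHTML explicitly.
    return "text/html"

# A Firefox-style Accept header advertises XHTML support explicitly:
print(choose_type("application/xhtml+xml,text/html;q=0.9,*/*;q=0.8"))
```

The fallback choice is the contested design point: Eric's demo fell back to `application/xml` for IE and he calls that a REST mismatch; `text/html` here is simply one safe alternative.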
> But if you're expecting it to scale on the deployed infrastructure of the Web, you're in for a rude awakening -- nobody else will ever be an interested party, making your API library-based, not network-based, as discussed in REST Chapter 6. > > No, what is critical to the style is that interested parties determine > precisely what the intended processing model is - that *can or can > not* be anybody. > If your goal is to scale on the deployed infrastructure of the Web, then you must assume that *everybody* is a potential participant, which is what is meant by serendipitous re-use and anarchic scalability -- if you make it possible for random folks to ramp your system up to Internet scale by using ubiquitous identifiers, they will do just that, and you don't even have to ask... > > > If the goal of your system is to scale on the Web, then any > > intermediary must be able to deduce precisely the intended > > processing model, without needing to look beyond the IANA registry. > > Do you really believe that intermediaries look up the IANA registry > every time they see a media type they don't already know? > Do you really believe that any sysadmin at Google, or responsible for some intermediary's configuration at some ISP, is going to spend one second of their time searching for the meaning of experimental, opaque identifiers, rather than just excluding those types wholesale? The only thing looking anything up in the IANA registry is people, and those people don't even have time to do that -- rather, they just configure for the limited number of ubiquitous types whose well-known processing models account for 100% of the traffic they care to cache or otherwise interact with. > > Let's say I register a type application/gatosapato+xml where I > defined <miau> to indicate a hypermedia transition. Do you think > that when an intermediary sees that it will prefetch the target of > <miau>? > No, I think that's the *last* thing you can expect them to do. 
The most you can expect is to be treated as application/xml, which in practice means that, *if* an intermediary bothers with it at all (beyond caching, which will be reliable), it will either determine to treat it as HTML or, if that doesn't work, consider links to be XInclude, rdf:about, or XLink. Is that really your processing model? Then how would any intermediary determine that <miau> has anything to do with linking? Better, from the sysadmin-at-large-on-the-Web reality of things, to simply ignore your content, if not block it as an attempt at tunneling, particularly on PUT or POST. Although, anything ending +xml will probably cache OK, but caching is hardly the be-all and end-all of what Web intermediaries do these days. > > If not, what is the difference between an IANA-registered > application/gatosapato+xml and an IANA-unregistered one - but registered at > Gatos&Sapatos.com with the agreement of all parties interested in > Gatos and in Sapatos - whose identifier is > application/x-gatosapato+xml? > That still comes down to the goals of your system. If you expect Google or anyone else to re-use your content in a serendipitous fashion aimed at anarchic scalability, you'll need to stick with ubiquitous identifiers, because it's unlikely to occur otherwise. You can at least meet the self-descriptive messaging constraint by registering an identifier, but the one you propose is unlikely to be approved (its syntax is that of the standards tree); the correct syntax would be application/vnd.gatosapato+xml, which has a very low bar for approval. If you only care about caching, that would be good enough. But if you want your links understood as links, you'll need to stick to well-known processing models, unless you expect your vnd. type to be adopted widely enough to ever be more than a blip on the radar. 
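The distinction Eric draws — standards tree versus vendor (`vnd.`) tree versus unregistered `x-`/`x.` names — comes straight from the media-type registration rules (RFC 4288 at the time of this thread). A small sketch classifying the thread's hypothetical `gatosapato` identifiers by tree, based only on the subtype prefix:

```python
def registration_tree(media_type):
    """Classify a media type by its registration tree, per the
    subtype-prefix naming conventions of RFC 4288."""
    subtype = media_type.split("/", 1)[1].split(";", 1)[0].strip()
    if subtype.startswith(("x-", "x.")):
        return "unregistered"   # experimental, no IANA registration possible
    if subtype.startswith("vnd."):
        return "vendor"         # vendor tree: low bar for IANA approval
    if subtype.startswith("prs."):
        return "personal"
    return "standards"          # standards tree: requires a standards body

print(registration_tree("application/gatosapato+xml"))      # standards
print(registration_tree("application/vnd.gatosapato+xml"))  # vendor
print(registration_tree("application/x-gatosapato+xml"))    # unregistered
```

This makes Eric's point mechanical: `application/gatosapato+xml` claims the standards tree and would be rejected, while the `vnd.` form is registrable by one vendor alone.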
Again, my context is the Web, not your intranet/extranet -- if the only consumer that you care about is a partner corporation, then you should be using HTTPS to traverse the Internet, which makes IANA irrelevant to your needs. -Eric
Nathan wrote: > > Eric J. Bowman wrote: > > Nathan wrote: > >> This issue is orthogonal to the "making up a new media type for > >> every resource encountered" discussion, that's a best practise > >> thing I hope we could discuss. > >> > > > > Shorter version of my point: It would be great if we could > > stipulate that any theoretical identifiers under discussion are > > assumed to represent a proposed standard and are intended to be > > IANA-registered for use on the Web, _then_ have that debate about > > re-use vs. evolving new types. But, that stipulation fundamentally > > changes the nature of that debate, as without it, all things may > > not otherwise be considered equal unless we limit the discussion to > > non-Internet uses of HTTP. > > Yes, I (personally) think it would be most useful to add that > constraint to the conversation and agree that it fundamentally > changes the nature of the debate - well, think it's more than clear > it does by now! > Thank you, that's pretty much all I needed to hear from anyone, to step off and shutup for a while. > > To perhaps hit media types 'only in REST-the-style' then it would be > my understanding that a shared understanding of the media type by all > parties is needed, else the messages cannot be understood and the > whole thing is pointless. How that understanding is formulated is not > part of REST, however logic and practicalities would probably point > us right back to a registry of media types and each one with a spec. > This point is exactly why I discuss Gopher. The 'h' identifier is widely understood to mean HTML, but that understanding isn't defined anywhere but within the common, late-era client and server libraries. While Gopher is inherently unRESTful due to its lack of caching, it does represent a uniform interface with a network-based API -- right up until you use 'h' and relegate yourself to a library-based API, by breaking the self-descriptive messaging constraint. 
IOW, Gopher's lack of a registry illustrates exactly why we likely need some sort of registry, for any protocol that separates resource from representation, which is a basic requirement for instantiating REST. -Eric
Nathan wrote: > > sadly, +1 too - personally consider it 'bad-practise' in all but the > "plan to go web-scale and IANA-registered with it" scenario; more > realistically though is the "is RESTful" / "not RESTful" badge really > that important, and if so does it do anybody any good to go labelling > implementations containing known bad-practises as "is RESTful" and > therefore conveying to the web community that X app, by implication > is 'good practise' and all RESTy?? > Given the immediate-need Web-scale goals of the Blinksale API, is this merely best practice: http://tech.groups.yahoo.com/group/rest-discuss/message/6569 Or does forging ahead with application/vnd.blinksale.person+xml with no intent to ever publish a standard, outright violate REST? I believe the thesis is clear in its repeated emphasis on variations of the word "standard" when discussing self-descriptive messaging and the uniform interface. If there's no intent to ever publish or register, then there's zero chance of meeting the self-descriptiveness constraint on the Web. On an intranet, this simply doesn't matter. On the Web, the advice Mark gives to Blinksale is absolutely vital to their goals. REST is a tool for long-term planning by identifying and working to resolve mismatches. It's one thing to use an identifier before it's standardized, if the system evolves towards REST without needing to be changed. But it's another thing to use an opaque identifier which works against that evolution, since it will never be self-descriptive. Don't the real-world implications of this difference, on the deployed Web infrastructure, prove that this point of distinction between what's REST and what isn't, is both well-defined and vital, as opposed to opinion? 
Calling it opinion is, to me, like calling it coincidental that Web architecture reflects Roy's personal preferences on the issue, as stated here: http://tech.groups.yahoo.com/group/rest-discuss/message/6613 I'm certainly not going to call an intranet system unRESTful if it doesn't have a registry, you guys are right, that doesn't do anyone any good. But in the context of the Web, there is only one way to meet the self-descriptiveness constraint, so it can't be considered an esoteric or trivial distraction. OK, I'll shutup *soon*... -Eric
Sebastien Lambla wrote: > > I've been fighting the overuse of ATOM in all things for a long time, > and I stand by this position. > Whereas I believe too few ever even consider extending or re-using Atom. See, now, that's the sort of thing I have no problem agreeing to disagree on -- it's a matter of preference, with no vital make-or-break REST constraint at stake. :-) -Eric
Again, I really think it's useless to try to argue with you, and I really hope others will have their say. You insist on using words like serendipity, anarchic scalability and the like almost like political slogans, without taking into account the concept behind the words. You insist that REST on the Web is about *anybody* when Roy talks about "participants in the communication", RFC2046 about "consenting systems" and RFC4288 about "parties exchanging them", which are clearly not *anybody* nor *everybody*. Well, I'm beaten by fatigue, have it your way... I hope other people can form their own opinions independently of your opinion and mine too. On 9 Sep 2010 01:14, "Eric J. Bowman" <eric@...> wrote: António Mota wrote: > > The RFCs I quoted clearly say that unregistered IANA types can exist, > ide... It doesn't. But if you care one whit about anybody else, particularly an intermediary (or Google), re-using your content (like, to pre-cache DNS lookups, or store it in cache) on the Web, then the only registry anybody and everybody agrees on is IANA, which by virtue of not listing them, declares them to be "experimental" as opposed to "standardized," thus rendering them non-self-descriptive. > > No it's not, because application/xml is an IANA-registered type, and so > intermediaries can kn... The unRESTfulness of application/xml is an established fact within this community, which has been subject to plenty of debate, with the final word on the subject coming from Roy. If anything, this is off-topic to this thread. My every assertion isn't based on some whim. Registered and ubiquitous, yes, but my demo's conneg result of serving application/xhtml+xml to any browser it can is RESTful, while its serving application/xml to IE is indisputably a REST mismatch, no matter how pragmatic it is for me to do so. > > Again, being unregistered in IANA like the x-types doesn't imply they > cannot be registered e... 
Just an off-topic question: why are there so many quotes of messages I never saw? Did I miss messages, or were they private messages? Another question: the title of this thread refers only to IANA-registered types, right? On 9 Sep 2010 02:17, "Eric J. Bowman" <eric@...> wrote: Sebastien Lambla wrote: > > I've been fighting the overuse of ATOM in all things for a long time, ... Whereas I believe too few ever even consider extending or re-using Atom. See, now, that's the sort of thing I have no problem agreeing to disagree on -- it's a matter of preference, with no vital make-or-break REST constraint at stake. :-) -Eric
Eb wrote: > On Wed, Sep 8, 2010 at 7:40 PM, Eric J. Bowman <eric@...> wrote: > >> Succinctly? Minting new types and identifiers willy-nilly incurs >> substantial cost which will not be recovered any time soon, whereas re- >> use, even of such mundane markup as HTML, gives immediate payback >> without incurring any of those costs. Shouldn't even be controversial. >> >> -Eric > > You get no argument from me (and possibly most others) on this point. +1 > However, suggesting that doing so (on the Web) for whatever reasons is not > REST is where things become sticky and hard to swallow because I am > personally yet to see why it is so. You've demonstrated why it is (or may > be) "bad" to do so, but being bad, in my opinion, doesn't nullify > the possible RESTfulness of an application. sadly, +1 too - personally consider it 'bad-practise' in all but the "plan to go web-scale and IANA-registered with it" scenario; more realistically though, is the "is RESTful" / "not RESTful" badge really that important, and if so does it do anybody any good to go labelling implementations containing known bad-practises as "is RESTful" and therefore conveying to the web community that X app, by implication, is 'good practise' and all RESTy?? > I think reasonable people can agree to disagree and maybe this is just one > of those places where we do so. nicely put :)
Eric J. Bowman wrote: > Tim Williams wrote: >> You keep saying things like "encourage" and "cavalier", I think it's >> important to point out that most folks here would readily agree that >> the use of existing, registered Media Type specifications is >> encouraged and desirable. >> > > Then how come, every time I suggest that HTML is a perfectly acceptable > media type to use in REST, the responses come pouring in stating that > such a view isn't serious, and that "real REST developers" mint new > identifiers for every resource type they encounter? > > The prevailing view in the REST community has become one where it just > isn't considered rational to suggest using HTML, by those who then turn > around and recommend opaque identifiers that can't possibly be self-descriptive > on the Web, because they aren't IANA-registered. Yes, key point, would make an excellent discussion! FWIW I wholeheartedly agree that not only is HTML a rational choice, but a very good choice for a RESTful system. >> No one is actively encouraging what the spec discourages - instead, >> we're saying that if you know what you're doing and there is shared >> understanding on all participants, it's ok. >> > > I wish that's what folks were saying, then I'd have made my point. > But, every time I point out that opaque identifiers fail to meet the > self-descriptive messaging constraint, I'm shouted down by those who > refuse to believe that any tradeoffs exist when they're used -- I have > to agree with Roy, that if you're going into that decision with your > eyes wide open, fine. > > However, when folks refuse to admit, and actively shout down those who > point out, that there is a tradeoff, how can those reading their advice > be making an informed, eyes-wide-open decision? That's what I mean by > "cavalier" -- it *does* make a difference, and not a trivial one, > either. > > On the Web, "participant" includes the entire deployed infrastructure, > not just sender and recipient. 
If you stick to ubiquitous identifiers > whose processing rules have long been understood by the deployed > infrastructure, you immediately gain the benefits of REST, which you > can't bank on if the deployed infrastructure simply ignores your > content because it's never encountered your identifier before. Yes, also getting back to the key point here and well worth discussing - diverging into a pedantic angels-on-pins discussion about whether a media type must be registered with IANA is always going to be fruitless. As Mike pointed out, Roy has already discussed this at length: http://tech.groups.yahoo.com/group/rest-discuss/message/6613 and this remains why using non-registered media types is discouraged, and at best can only ever be discouraged (because you need to allow for evolution). You make some good points here; it would be good to see these two particularly discussed further, imho, far more fruitful for everyone involved to get some community agreement on 'best-practise' hearing all sides for and against :) Best, Nathan ps: sorry for changing the subject line, but hopefully it'll refresh the conversation to positives.
--- In rest-discuss@yahoogroups.com, Eb <amaeze@...> wrote: > > On Tue, Sep 7, 2010 at 11:31 AM, omarshariffdontlikeit < > omarshariffdontlikeit@...> wrote: > > > > > > > (For the record, I understand some of the backlash. Eric's argument style > > is... forceful :). And don't get me wrong, it would make my life easier as a > > developer NOT to have to register my mime types with IANA, but, from Eric's > > arguments, he makes a very persuasive case of the benefits of doing so, of > > which only one (fulfilling the standardized media types constraint of ReST) is > > ReST based.) > > > > > > > > I don't believe anyone is questioning the benefits. It's definitely a best > practice such as brushing your teeth after every meal is. But I think the > notion that if you're doing REST over the Web (HTTP + Internet) - I still > have challenges even understanding that - and your type is not registered in > IANA (as it would seem that registering in IANA is the only form of > standardization), it is not REST is where there is a discussion. If two > parties agree on a standard and use it over the web before registering it in > IANA, I don't see how this disqualifies the solution as not being REST (even > if it's violating a best practice). > Standardization != Registration Standardization is generally needed when people have differing implementations/APIs for the same thing - a media type doesn't really need to be standardized unless this is the case. Registration however is an entirely different matter, registration puts things in the global (or internet wide) scope, and is there to both stop naming collisions (we both say our spec is text/foo) and to provide a registry of media types + specs which one can consult in the various contexts where you'd want to do this. As for using only IANA registered media types, this certainly isn't a constraint of REST, nor of HTTP. 
REST mentions the use of Media Types for obvious reasons, and then goes on to recognise "Some of the namespaces are governed by separate Internet standards and shared by multiple protocols (e.g., URI schemes [21], media types [48]" - i.e. IANA. It does not constrain to using registered media types though, and cannot. HTTP and HTTPbis both specify "Use of non-registered media types is discouraged.", not prohibited, not MUST NOT, not even SHOULD NOT. Use of non-registered media types cannot be stopped, primarily because the specs need to allow new media types to be used, and to go through registration - a good example would be opensearch's custom media type which is pending registration but widely used. However, like Mike Amundsen and a few others I'd be very keen to see this conversation move on to why it's discouraged, and moreover some guidance as to when is a good time to start working on a new media type vs when's a bad time. Best (and hello, first post!), Nathan
Eric J. Bowman wrote: > Nathan wrote: >> As Mike pointed out, Roy has already discussed this at length: >> http://tech.groups.yahoo.com/group/rest-discuss/message/6613 >> Wow did you really just snip everything interesting, of value, and specific to the ^^subject line^^ to get back to the angels-on-pins bit of the previous thread? > Which I nominate as the most-misunderstood post Roy has ever made. All > Roy is saying, is that REST-the-style has no concept of MIME or IANA, > which does not mean that IANA-registered identifiers aren't required by > the Web architecture, in order to meet self-descriptiveness. http://www.w3.org/TR/webarch/#internet-media-type Eric, FACT is that using IANA registered media types on the web is a SHOULD, using a non-registered type is "discouraged" - do we really need to take this to the TAG and to the IETF http working group and waste their time with something which is widely understood. This issue is orthogonal to the "making up a new media type for every resource encountered" discussion, that's a best practise thing I hope we could discuss. Nathan
Eric J. Bowman wrote: > Yes, because I don't understand how we can debate the merits of using > unregistered identifiers vis-a-vis self-descriptive messaging, when the > majority is still insisting that unregistered identifiers are self- > descriptive. I'm not trying to piss you off; just trying to understand > your position, specifically: My position specifically, is as follows: - Use of non IANA-registered media types on the Web is discouraged - One should (re-)use existing registered media types wherever possible - Creating a new media-type should only be considered when the media-type is required for a new protocol (and where existing media types cannot be used) or where the new media-type brings benefits at a web scale not found in another existing registered media-type (such as when application/exi was introduced for Efficient XML Interchange) - The precise wording around "discouraged" in the HTTP spec is to allow new media-types to be tested and used "in the wild" whilst conveying that you should re-use existing media types wherever possible. More generally, my preference goes to augmenting existing media types by extension, such as @data- attributes in HTML, or HTML+RDFa (which keeps the text/html media type). Underlying my position is some (perhaps false) reasoning on REST to bring my understanding in-line with modern / future web architecture and paradigm shifts on the web. 
For instance, where at the time of writing the norm was to have server-side applications and a 'web of documents', things like multiple browser contexts were considered (I believe) unRESTful - an example of this would be the iframe in a document, because the user agent could not control or reassemble its own application state. Bringing this forward to more recent (web 2.0) times, the same could be said for XMLHttpRequest - however, with the advent of advanced user-agent-implemented APIs including client-side storage, it's become possible to have 100% client-side applications where the user agent is both an IO layer (HTTP as the interface) and a presentation layer (browser context), and where the application state to be considered is that of the client-side application, which is now outside of HTTP and thus REST. Interestingly this positions the web in a data-tier position, where the data is the resources named by URIs, each of which has a current state; that state is represented within and transferred via HTTP messages, and likewise the state of a resource is manipulated with HTTP messages and usage of the HTTP verbs. Back to the point in hand, this also encourages a more modular approach to application coding, where each specific module only needs to understand part of a message. An analogy may be a user agent processing a single HTML document, where different parts of said HTML document are processed and understood by very different modules doing different tasks: one may hook in to understand <script>s of a certain type, another may hook in to dereference embedded links, another to instantiate DOM event listeners, and so forth. Thus re-using, augmenting and extending existing media types (like HTML) makes perfect sense to me from all angles, including REST and its constraints. The rest of your reply is important, but with my beliefs as stated, I can't really debate them with you. 
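The "web as data tier" idea above - resources named by URIs, each with a current state that is transferred and manipulated through a small, uniform set of verbs - can be sketched as a toy in-memory model (hypothetical names, not from the thread):

```python
# Toy in-memory model of the idea above: resources are named by URIs,
# each has a current state, and that state is transferred and
# manipulated through a small, uniform set of verb-shaped operations.

class ResourceStore:
    def __init__(self):
        self._state = {}  # URI -> current representation

    def get(self, uri):
        """Transfer a representation of the resource's current state."""
        return self._state.get(uri)

    def put(self, uri, representation):
        """Replace the resource's state with the supplied representation."""
        self._state[uri] = representation

    def delete(self, uri):
        """Remove the resource; subsequent GETs find no state."""
        self._state.pop(uri, None)

store = ResourceStore()
store.put("/people/1", {"name": "Nathan"})
print(store.get("/people/1"))  # {'name': 'Nathan'}
```

The point of the sketch is only the shape of the interface: every resource is addressed the same way and manipulated through the same few operations, which is what the client-side application then builds on.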
The only thing I can add that may be of some use, is that I personally invariably think of REST within the context of the current deployed (HTTP-based) web, and thus everything I say about it is in this context; whereas perhaps some of the replies you have received are REST-specific, leaving out the web context, and thus stand true, if perhaps a little irrelevant for the majority of common uses. >> This issue is orthogonal to the "making up a new media type for every >> resource encountered" discussion, that's a best-practice thing I hope >> we could discuss. >> > > If we assume that opaque identifiers are somehow self-descriptive, then > there's room to have this debate. I don't understand how they can be, > so I can't see the issue as orthogonal, rather than central. > >> Eric, FACT is that using IANA-registered media types on the web is a >> SHOULD, using a non-registered type is "discouraged" - do we really >> need to take this to the TAG and to the IETF HTTP working group and >> waste their time with something which is widely understood. >> > > The WebArch document states that we SHOULD send identifiers, but says > nothing about whether such identifiers need to be registered. HTTP > says not using IANA is "discouraged", which I think is quite clear; > however, judging from the pushback, I disagree that it's "widely > understood" to mean what it says. AFAICT it (AWWW) points (heavily) to the (re-)use of Internet Media Types, as in RFC 2046, as in IANA-registered - I won't debate this, but I'm quite sure that if push came to shove, both the HTTPbis working group and the TAG would confirm this to be 'best practice' and that it should be widely understood. >> This issue is orthogonal to the "making up a new media type for every >> resource encountered" discussion, that's a best-practice thing I hope >> we could discuss. >> > > That's exactly the discussion I've been trying to have all year. 
But > we keep coming back to the assumption that application/foo+xml is just > as self-descriptive as application/xhtml+xml, on the Web. I'm not > actually _trying_ to get folks upset with me, I just don't understand > how this is seen as an orthogonal concern? Perhaps it's not orthogonal, yet an investigation into the +/- of creating new media types vs re-using existing ones may bring the community to a unified understanding and consensus - hopefully aligning with the HTTP spec and web arch pointers. Regards, Nathan
Eric J. Bowman wrote: > Nathan wrote: >> This issue is orthogonal to the "making up a new media type for every >> resource encountered" discussion, that's a best-practice thing I hope >> we could discuss. >> > > Shorter version of my point: It would be great if we could stipulate > that any theoretical identifiers under discussion are assumed to > represent a proposed standard and are intended to be IANA-registered > for use on the Web, _then_ have that debate about re-use vs. evolving > new types. But, that stipulation fundamentally changes the nature of > that debate, as without it, all things may not otherwise be considered > equal unless we limit the discussion to non-Internet uses of HTTP. Yes, I (personally) think it would be most useful to add that constraint to the conversation, and agree that it fundamentally changes the nature of the debate - well, I think it's more than clear it does by now! > Unless you were suggesting that we leave HTTP out of it and only > consider REST-the-style. But, I don't think that's realistic, since > the best example I can give of how the preference to re-use scales > better than the preference to create, is to point to the deployed Web > infrastructure that's geared around the shared understanding of a > limited number of ubiquitous types. While I appreciate your effort to > take the debate in a different direction, I hope you understand my > concerns. I wasn't suggesting that we leave HTTP and the web out of it; however, perhaps it would be an easy hit to consider it only in REST-the-style first, or in a forked thread, then move on to HTTP and the Web as we commonly think of it. To consider media types 'only in REST-the-style', my understanding is that a shared understanding of each media type by all parties is needed, else the messages cannot be understood and the whole thing is pointless. 
How that understanding is formulated is not part of REST; however, logic and practicalities would probably point us right back to a registry of media types, each one with a spec. I would also like to hear from people who'd consider it 'a good thing' to mint new media types without the aforementioned stipulation, generally in the scenario: "I'm making a domain-specific application for my client, I'll make a new media type for it (or for each resource type in the domain)". And what benefits they see in this (google tells me people do this, with examples, so I'm interested to hear). Finally, yes, I understand your concerns, and to some extent share them - but I also feel that to get anywhere we'd have to clear it up, as you've attempted with the above stipulations, or simply move on and address it at the end after weighing up all the pros and cons. Regards, Nathan
On 09/08/2010 09:01 PM, Eric J. Bowman wrote: > Nathan wrote: > >> sadly, +1 too - personally consider it 'bad practice' in all but the >> "plan to go web-scale and IANA-registered with it" scenario; more >> realistically though, is the "is RESTful" / "not RESTful" badge really >> that important, and if so does it do anybody any good to go labelling >> implementations containing known bad practices as "is RESTful" and >> therefore conveying to the web community that X app, by implication, >> is 'good practice' and all RESTy?? >> >> > Given the immediate-need Web-scale goals of the Blinksale API, is this > merely best practice: > > http://tech.groups.yahoo.com/group/rest-discuss/message/6569 > > Or does forging ahead with application/vnd.blinksale.person+xml with no > intent to ever publish a standard, outright violate REST? > > I believe the thesis is clear in its repeated emphasis on variations of > the word "standard" when discussing self-descriptive messaging and the > uniform interface. If there's no intent to ever publish or register, > then there's zero chance of meeting the self-descriptiveness constraint > on the Web. On an intranet, this simply doesn't matter. On the Web, > the advice Mark gives to Blinksale is absolutely vital to their goals. > > REST is a tool for long-term planning by identifying and working to > resolve mismatches. It's one thing to use an identifier before it's > standardized, if the system evolves towards REST without needing to be > changed. But it's another thing to use an opaque identifier which works > against that evolution, since it will never be self-descriptive. > > Don't the real-world implications of this difference, on the deployed > Web infrastructure, prove that this point of distinction between what's > REST and what isn't, is both well-defined and vital, as opposed to > opinion? 
Calling it opinion is, to me, like calling it coincidental > that Web architecture reflects Roy's personal preferences on the issue, > as stated here: > > http://tech.groups.yahoo.com/group/rest-discuss/message/6613 > > I'm certainly not going to call an intranet system unRESTful if it > doesn't have a registry, you guys are right, that doesn't do anyone any > good. But in the context of the Web, there is only one way to meet the > self-descriptiveness constraint, so it can't be considered an esoteric > or trivial distraction. > > OK, I'll shut up *soon*... > > -Eric > I also concur that the type should be standardized, but couldn't it be standardized between two parties who decide to use the web and don't want to use https and don't use IANA (for whatever reasons)? Yes, "reach" is severely hampered, but that's a design choice, is it not? The style allows for significant reach, but if I choose not to leverage it fully by not (for example) having GETable resources, and hence limiting caching, would it not be REST? Is every type registered on IANA understood by every intermediary we have today? I realize this is not the point, but reach is not obtained by just registering either (though it potentially simplifies it). Anyway, I completely get what you're saying and see merit to it. I'm just not sold on such a strong stance.
António Mota wrote: > Just an off-topic question: why are there so many quotes of messages I never > saw? Did I lose messages, or were they private messages? That would be because I'm new and "Your message must be approved by the group owner before being sent to the group.", but by the powers of cc on the OP the replies make it to the list whilst my mails don't. not a PITA at all, yahoo groups ftw :| > Another question, the title of this thread refers only to IANA-registered > types, right? yes :) Internet Media Types as per RFC 2046, registered with IANA, as in: http://www.iana.org/assignments/media-types/ Given that AWWW states using IANA-registered media types as best practice, HTTP discourages using media types that are not IANA-registered, and REST says that media types are shared by multiple protocols and governed by IANA - then I think it's fair to say the subject implies that when we say registered, we mean IANA-registered. + I set the subject, and yup, that's what I meant. Best, Nathan
Ah ok, thanks for clarifying. Because as I was pointing out in another thread, x-types MUST NOT be registered in IANA, but there's nothing preventing them from being registered elsewhere... On Thursday, September 9, 2010, Nathan <nathan@...> wrote: > António Mota wrote: > > Just an off-topic question: why are there so many quotes of messages I never > saw? Did I lose messages, or were they private messages? > > > That would be because I'm new and "Your message must be approved by the group owner before being sent to the group.", but by the powers of cc on the OP the replies make it to the list whilst my mails don't. > > not a PITA at all, yahoo groups ftw :| > > > Another question, the title of this thread refers only to IANA-registered > types, right? > > > yes :) > > Internet Media Types as per RFC 2046, registered with IANA, as in: > http://www.iana.org/assignments/media-types/ > > Given that AWWW states using IANA-registered media types as best practice, HTTP discourages using media types that are not IANA-registered, and REST says that media types are shared by multiple protocols and governed by IANA - then I think it's fair to say the subject implies that when we say registered, we mean IANA-registered. > > + I set the subject, and yup, that's what I meant. 
> > Best, > > Nathan > > -- Melhores cumprimentos / Beir beannacht / Best regards António Manuel dos Santos Mota Contacts: http://card.ly/amsmota
Eb wrote: > > I also concur that the type should be standardized but couldn't it be > standardized between two parties who decide to use the web and don't > want to use https and don't use IANA (for whatever reasons). > Is that a uniform interface, then, meaning a decoupled network-based API, i.e. REST? Or is it an application-specific library-based API coupling two implementations together, i.e. NOT REST? > > Yes, "reach" is severely hampered but that's a design choice is it > not? > Nothing in REST leads me to believe that nonstandardized types are congruous with the style. This isn't a design choice -- it's some other architecture. The only defined mechanism for indicating what standardized type you're using on the Web, is the IANA registry. This "reach" is only important insofar as what architecture you choose to follow. REST is pretty unequivocal about requiring standardized types, so choosing an opaque identifier that isn't ever intended to point to anything, seems to me like a decision not to use REST. Still speaking in the context of the Web. -Eric
"Eric J. Bowman" wrote: > > Eb wrote: > > > > I also concur that the type should be standardized but couldn't it > > be standardized between two parties who decide to use the web and > > don't want to use https and don't use IANA (for whatever reasons). > > > > Is that a uniform interface, then, meaning a decoupled network-based > API, i.e. REST? Or is it an application-specific library-based API > coupling two implementations together, i.e. NOT REST? > Bear in mind, that's a rhetorical question meant to illustrate the point I'm driving at, I think we know my answer... See REST Chapter 6 in its entirety, but specifically 6.5.1. -Eric
On 09/08/2010 10:03 PM, Eric J. Bowman wrote: > Eb wrote: > >> I also concur that the type should be standardized but couldn't it be >> standardized between two parties who decide to use the web and don't >> want to use https and don't use IANA (for whatever reasons). >> >> > Is that a uniform interface, then, meaning a decoupled network-based > API, i.e. REST? Or is it an application-specific library-based API > coupling two implementations together, i.e. NOT REST? > > >> Yes, "reach" is severely hampered but that's a design choice is it >> not? >> >> > Nothing in REST leads me to believe that nonstandardized types are > congruous with the style. This isn't a design choice -- it's some > other architecture. The only defined mechanism for indicating what > standardized type you're using on the Web, is the IANA registry. > > This "reach" is only important insofar as what architecture you choose > to follow. REST is pretty unequivocal about requiring standardized > types, so choosing an opaque identifier that isn't ever intended to > point to anything, seems to me like a decision not to use REST. Still > speaking in the context of the Web. > > -Eric > Standardization is required. I need to sleep on whether registering with IANA is the only way of standardizing when using the Web, besides IANA being a control mechanism (which has its merits). I'm not sure why this is a requirement for the "internet" and not the "intranet". What about these environments is inherently different that permits the intranet to use whatever?
Eb wrote: > On 09/08/2010 10:03 PM, Eric J. Bowman wrote: >> Eb wrote: >> >>> I also concur that the type should be standardized but couldn't it be >>> standardized between two parties who decide to use the web and don't >>> want to use https and don't use IANA (for whatever reasons). >>> >>> >> Is that a uniform interface, then, meaning a decoupled network-based >> API, i.e. REST? Or is it an application-specific library-based API >> coupling two implementations together, i.e. NOT REST? >> >> >>> Yes, "reach" is severely hampered but that's a design choice is it >>> not? >>> >>> >> Nothing in REST leads me to believe that nonstandardized types are >> congruous with the style. This isn't a design choice -- it's some >> other architecture. The only defined mechanism for indicating what >> standardized type you're using on the Web, is the IANA registry. >> >> This "reach" is only important insofar as what architecture you choose >> to follow. REST is pretty unequivocal about requiring standardized >> types, so choosing an opaque identifier that isn't ever intended to >> point to anything, seems to me like a decision not to use REST. Still >> speaking in the context of the Web. >> >> -Eric >> > > > Standardization is required. Need to sleep over when registering in > IANA is the only way of standardizing when using the Web besides IANA > being a control mechanism (which has its merits). I'm not sure why this > is a requirement for the "internet" and not the "intranet". What about > these environments is inherently different that permits the intranet to > use whatever? This isn't really related to scoping to internet/intranet/web, between two private parties or the whole world, if you're using HTTP (anywhere) then: "HTTP uses Internet Media Types [RFC2046] in the Content-Type and Accept header fields" ... "Media-type values are registered with the Internet Assigned Number Authority (IANA)." Not sure how much clearer you can get than that.
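The HTTP text quoted above names the Content-Type and Accept header fields as the places where Internet Media Types appear on the wire. As a rough, deliberately simplified sketch (exact matches and the */* wildcard only; q-values and type/* ranges are omitted, so this is not a complete implementation of the HTTP negotiation rules), server-side matching of an Accept header against available types might look like:

```python
# Rough sketch of server-side negotiation over an Accept header.
# Deliberately simplified: exact matches and the */* wildcard only;
# q-values and type/* ranges from the HTTP spec are ignored.

def negotiate(accept_header, available):
    """Pick the first acceptable media type from `available`, or None."""
    # Split "a/b;q=0.9, c/d" into bare media types ["a/b", "c/d"].
    wanted = [part.strip().split(";")[0] for part in accept_header.split(",")]
    for w in wanted:
        if w == "*/*":
            return available[0]
        if w in available:
            return w
    return None

print(negotiate("application/xhtml+xml, text/html", ["text/html"]))  # text/html
```

The sketch works precisely because both sides name the same registered identifiers; with a made-up type the intermediaries and clients doing this matching would have nothing shared to match against.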
Eb wrote: > > Standardization is required. Need to sleep over when registering in > IANA is the only way of standardizing when using the Web besides IANA > being a control mechanism (which has its merits). I'm not sure why > this is a requirement for the "internet" and not the "intranet". > What about these environments is inherently different that permits > the intranet to use whatever? > My test bench for REST development is confined within my LAN. All possible participants are known to me, and I am the only sysadmin. I can tell just by looking, what any identifier refers to, and I can keep that registry in my head. Messaging is self-descriptive to all participants, because I'm the sysadmin who configured them all. A standard is whatever I say is a standard. On the Web, a standard is only what HTTP says is a standard, which HTTP defers to RFC 2048, which says a standard is anything in the "standards tree" or perhaps the "vendor tree," which means anything the IANA registry says is a standard is what constitutes a standard, and I have no say in the matter like I do on my LAN. From REST: "The important point... is that REST does capture all of those aspects of a distributed hypermedia system that are considered central to the behavioral and performance requirements of the Web, such that optimizing behavior within the model will result in optimum behavior within the deployed Web architecture." REST insists that standardization is not an orthogonal concern, and the real-world results of sticking with standardized types on the Web is an optimization of the model which results in optimized behavior on the deployed infrastructure of the Web. Kinda leads me to believe that using opaque identifiers goes against the style, whereas registered identifiers pointing to approved standards is central to it, as far as the Web is concerned. -Eric
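The standards-tree / vendor-tree distinction Eric draws from the registration RFCs is visible in the subtype's facet prefix. A heuristic classifier sketch (prefix conventions only; actual registration status can of course only be confirmed against the IANA registry itself):

```python
# Heuristic classifier (sketch only) for media type registration trees:
# the 'vnd.' facet marks the vendor tree, 'prs.' the personal tree, and
# 'x-'/'x.' subtypes are unregistered by convention; everything else is
# treated here as the standards tree. Real status requires checking the
# IANA registry -- the prefix alone proves nothing.

def media_type_tree(media_type):
    subtype = media_type.split("/", 1)[1]
    if subtype.startswith("vnd."):
        return "vendor"
    if subtype.startswith("prs."):
        return "personal"
    if subtype.startswith(("x-", "x.")):
        return "unregistered"
    return "standards"

print(media_type_tree("application/vnd.blinksale.person+xml"))  # vendor
```

By these conventions the Blinksale type discussed later in the thread sits in the vendor tree, which is registrable with IANA, whereas an x- type never is.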
Nathan wrote: > > This isn't really related to scoping to internet/intranet/web, > between two private parties or the whole world, if you're using HTTP > (anywhere) then: > > "HTTP uses Internet Media Types [RFC2046] in the Content-Type and > Accept header fields" ... "Media-type values are registered with the > Internet Assigned Number Authority (IANA)." > > Not sure how much clearer you can get than that. > Right, I thought we were discussing your earlier question: "[M]ore realistically though is the 'is RESTful' / 'not RESTful' badge really that important, and if so does it do anybody any good to go labelling implementations..." What I'm saying is realistically, no, it doesn't matter at all unless we're discussing the Web, in which case yeah, it's really important. -Eric
Eric J. Bowman wrote: > Nathan wrote: >> This isn't really related to scoping to internet/intranet/web, >> between two private parties or the whole world, if you're using HTTP >> (anywhere) then: >> >> "HTTP uses Internet Media Types [RFC2046] in the Content-Type and >> Accept header fields" ... "Media-type values are registered with the >> Internet Assigned Number Authority (IANA)." >> >> Not sure how much clearer you can get than that. >> > > Right, I thought we were discussing your earlier question: "[M]ore > realistically though is the 'is RESTful' / 'not RESTful' badge really > that important, and if so does it do anybody any good to go labelling > implementations..." > > What I'm saying is realistically, no, it doesn't matter at all unless > we're discussing the Web, in which case yeah, it's really important. Hi Eric, I've come to a point where I'm a little confused about what you mean by the above. Is it one or more of the following? Being RESTful only matters when discussing the Web (doesn't matter on say a LAN). The RESTfulness of a REST system only matters when discussing the Web (doesn't matter on say a LAN). Using IANA-registered media types only matters when discussing REST on the Web (doesn't matter on say a LAN). Or "other". Can you clarify? Best, Nathan
Nathan, a month ago I asked this on this list:
Is REST's realm - the problem-space where it should be applied, or where
it makes sense to apply it - exclusively the Web? Or should it, or can
it, be applied to the more general space of network-based software
architectures, thus including intranets (network-based apps that run
exclusively inside a company) and extranets (the use of private
networks and/or the public infrastructure of the internet to connect a
limited number of companies - considering limited does not equal
small)?
Guess what: after 20 replies on that thread, a new thread started with an
answer to that question that now has 138 messages, and this thread, which
originated on that second thread, contains by now 25 messages, and we are
still debating the issue...
I certainly don't believe that there is a REST for the public Internet and
another REST for private intranets; I think what is true for one should
also be true for the other. We can't even speak of "two
applications/implementations of REST" with regard to internet/intranet,
because the architecture is the same. That is not to say that the
"trade-offs" you are allowed to make are the same ("allowed" in the sense of
keeping it RESTful).
On 9 September 2010 08:38, Nathan <nathan@...> wrote:
>
> [...]
>
On 09 Sep 2010, at 10:48 AM, António Mota <amsmota@...> wrote:
>
>
> Nathan, a month ago I asked this on this list:
>
> Is REST realm - the problem-space where it should be applied, or where
> it makes sense to apply it - exclusively the Web? Or it should, or it
> can, be applied to the more general space of network-based software
> architectures, thus including intranets (network based apps that runs
> exclusively inside a company) and extranets (the use of private
> networks and/or the public infrastructure of the internet to connect a
> limited number of companies - considering limited does not equal
> small)?
See last paragraph of
http://tech.groups.yahoo.com/group/rest-discuss/message/15819
Jan
>
Can you clarify, Jan? I don't understand what you're pointing at:
The fact is that most people write message queues for systems
> that are more operational than informational -- i.e., they are
> doing something, usually at a high rate of speed, that isn't
> intended to be viewed as an information service, except in
> the form of an archive or summary of past events. Would a
> more RESTful message queue have significant architectural
> properties that outweigh the trade-off on performance, or
> would it be better to use a tightly coupled eventing protocol
> and merely provide the resulting archive and summaries via
> normal RESTful interaction? That kind of question needs to
> be answered by an architect familiar with all of the design
> constraints for the proposed system.
>
Are you saying that the answer to my question is:
That kind of question needs to
> be answered by an architect familiar with all of the design
> constraints for the proposed system.
>
If yes, my question is not about a specific system, it's a general one, and
I see now that the words "should" should not be present(!). Let me rephrase
then:
Is REST realm - the problem-space where it MAY be applied (where it makes
> sense to apply it) exclusively the Web? Or it MAY be applied to the more
> general space of network-based software architectures, thus including
> intranets (network-based apps that run exclusively inside a company) and
> extranets (the use of private networks and/or the public infrastructure of
> the internet to connect a limited number of companies - considering limited
> does not equal small)?
>
2010/9/9 algermissen1971 <algermissen1971@...>
>
>
> On 09 Sep, 2010,at 10:48 AM, Antnio Mota <amsmota@...> wrote:
>
>
>
> *Nathan, a month ago I asked this on this list:
>
> *Is REST realm - the problem-space where it should be applied, or where
> it makes sense to apply it - exclusively the Web? Or it should, or it
> can, be applied to the more general space of network-based software
> architectures, thus including intranets (network based apps that runs
> exclusively inside a company) and extranets (the use of private
> networks and/or the public infrastructure of the internet to connect a
> limited number of companies - considering limited does not equal
> small)?
>
>
> See last paragraph of
>
> http://tech.groups.yahoo.com/group/rest-discuss/message/15819
>
> Jan
>
>
>
>
>
>
> *Guess what, after 20 replies on that thread, a new thread started with a
> answer to that question that has now 138 messages, and this thread that
> originated on that second thread that contains by now 25 messages, we are
> still debating the issue...
>
> I certainly don't believe that there is a REST for the public Internet and
> another REST for the private Intranets, I think what is true for one should
> also be true for the other. We can't even speak of "two
> applications/implementations of REST" in regarding to internet/intranet,
> because the architecture is the same. That is not to say that the
> "trade-offs" you are allowed to do are the same ("allowed" in the sense of
> keeping it RESTfull). *
>
>
> On 9 September 2010 08:38, Nathan <nathan@...> wrote:
>
>>
>>
>> Eric J. Bowman wrote:
>> > Nathan wrote:
>> >> This isn't really related to scoping to internet/intranet/web,
>> >> between two private parties or the whole world, if you're using HTTP
>> >> (anywhere) then:
>> >>
>> >> "HTTP uses Internet Media Types [RFC2046] in the Content-Type and
>> >> Accept header fields" ... "Media-type values are registered with the
>> >> Internet Assigned Number Authority (IANA)."
>> >>
>> >> Not sure how much clearer you can get than that.
>> >>
>> >
>> > Right, I thought we were discussing your earlier question: "[M]ore
>> > realistically though is the 'is RESTful' / 'not RESTful' badge really
>> > that important, and if so does it do anybody any good to go labelling
>> > implementations..."
>> >
>> > What I'm saying is realistically, no, it doesn't matter at all unless
>> > we're discussing the Web, in which case yeah, it's really important.
>>
>> Hi Eric,
>>
>> I've come to a point where I'm a little confused about what you mean by
>> the above. It's one or more of the following.
>>
>> Being RESTful only matters when discussing the Web (doesn't matter on
>> say a LAN).
>>
>> The RESTfulness of a REST system only matters when discussing the Web
>> (doesn't matter on say a LAN).
>>
>> Using IANA-registered media types only matters when discussing REST on
>> the Web (doesn't matter on say a LAN).
>>
>> Or "other".
>>
>> Can you clarify?
>>
>> Best,
>>
>> Nathan
>>
>>
>
>
>
>
>
On 09 Sep, 2010,at 12:00 PM, António Mota <amsmota@...> wrote:
> Can you clarify, Jan? I don't understand what you're pointing at:
>
> The fact is that most people write message queues for systems
> that are more operational than informational -- i.e., they are
> doing something, usually at a high rate of speed, that isn't
> intended to be viewed as an information service, except in
> the form of an archive or summary of past events. Would a
> more RESTful message queue have significant architectural
> properties that outweigh the trade-off on performance, or
> would it be better to use a tightly coupled eventing protocol
> and merely provide the resulting archive and summaries via
> normal RESTful interaction? That kind of question needs to
> be answered by an architect familiar with all of the design
> constraints for the proposed system.
He is basically saying: if what you expose is not intended to be consumed in an 'information systems' style, and is not intended for large-scale integration, then the performance trade-off induced by REST is probably not desired. (But you need a knowledgeable architect to make that decision.)
Example: if you have a system that needs to receive high-speed, small measurement events (say the pitch or speed of a plane), REST is likely not the best integration style. However, if you expose the history of events, or a report of errors, for consumption, REST likely is.
In my opinion it also very much depends on the anticipated number of clients (exactly one, a few, very many), the control you have over them, and how much it hurts (in terms of resources, lost business transactions, etc.) to shut down or evolve the server.
Jan
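[Editorial note] Jan's split between a tightly coupled, high-rate ingest path and a RESTfully exposed archive can be sketched roughly as below. This is a minimal illustration, not anything from the thread: the event fields (pitch, speed), the buffer, and the resource shape are all invented for the sketch.

```python
import json
from collections import deque

# Hypothetical event buffer. In Jan's example this would be fed by a
# tightly coupled, high-rate eventing protocol (UDP telemetry, a message
# queue, ...), NOT by one HTTP request per event.
events = deque(maxlen=10000)

def record_event(pitch, speed):
    """Ingest path: optimized for rate, deliberately not RESTful."""
    events.append({"pitch": pitch, "speed": speed})

def event_history_representation():
    """Read path: the archive/summary exposed as an ordinary resource.

    A server would return this body for a GET on some history URI with a
    self-descriptive Content-Type (application/json here), and could
    attach normal HTTP caching metadata (ETag, Cache-Control).
    """
    body = json.dumps({"count": len(events), "events": list(events)})
    return {"Content-Type": "application/json"}, body

record_event(pitch=2.5, speed=480)
record_event(pitch=2.7, speed=478)
headers, body = event_history_representation()
print(headers["Content-Type"], json.loads(body)["count"])  # application/json 2
```

The design point is exactly Jan's: the ingest side trades REST's properties for throughput, while the archive side regains cacheability and uniform-interface benefits because it is just a representation served over plain HTTP.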
>
>
> Are you saying that the answer to my question is:
>
> That kind of question needs to
> be answered by an architect familiar with all of the design
> constraints for the proposed system.
>
>
> If yes, my question is not about a specific system, it's a general one, and I see now that the word "should" should not have been present(!). Let me rephrase, then:
>
> Is REST's realm - the problem-space where it MAY be applied (where it makes sense to apply it) - exclusively the Web? Or MAY it be applied to the more general space of network-based software architectures, thus including intranets (network-based apps that run exclusively inside a company) and extranets (the use of private networks and/or the public infrastructure of the internet to connect a limited number of companies - considering that limited does not equal small)?
>
>
>
>
>
> 2010/9/9 algermissen1971 <algermissen1971@...>
>
>
>
> On 09 Sep, 2010,at 10:48 AM, António Mota <amsmota@...> wrote:
>
>>
>>
>> Nathan, a month ago I asked this on this list:
>>
>> Is REST's realm - the problem-space where it should be applied, or where
>> it makes sense to apply it - exclusively the Web? Or should it, or can
>> it, be applied to the more general space of network-based software
>> architectures, thus including intranets (network-based apps that run
>> exclusively inside a company) and extranets (the use of private
>> networks and/or the public infrastructure of the internet to connect a
>> limited number of companies - considering that limited does not equal
>> small)?
>
> See last paragraph of
>
> http://tech.groups.yahoo.com/group/rest-discuss/message/15819
>
> Jan
>
>
>
>
>>
>>
>>
>> Guess what: after 20 replies on that thread, a new thread started with an answer to that question that now has 138 messages, and the thread that originated from that second thread now contains 25 messages; we are still debating the issue...
>>
>> I certainly don't believe that there is a REST for the public Internet and another REST for the private Intranets; I think what is true for one should also be true for the other. We can't even speak of "two applications/implementations of REST" with regard to internet/intranet, because the architecture is the same. That is not to say that the "trade-offs" you are allowed to make are the same ("allowed" in the sense of keeping it RESTful).
>>
>>
>> On 9 September 2010 08:38, Nathan <nathan@...> wrote:
>>
>>
>>
>> Eric J. Bowman wrote:
>> > Nathan wrote:
>> >> This isn't really related to scoping to internet/intranet/web,
>> >> between two private parties or the whole world, if you're using HTTP
>> >> (anywhere) then:
>> >>
>> >> "HTTP uses Internet Media Types [RFC2046] in the Content-Type and
>> >> Accept header fields" ... "Media-type values are registered with the
>> >> Internet Assigned Number Authority (IANA)."
>> >>
>> >> Not sure how much clearer you can get than that.
>> >>
>> >
>> > Right, I thought we were discussing your earlier question: "[M]ore
>> > realistically though is the 'is RESTful' / 'not RESTful' badge really
>> > that important, and if so does it do anybody any good to go labelling
>> > implementations..."
>> >
>> > What I'm saying is realistically, no, it doesn't matter at all unless
>> > we're discussing the Web, in which case yeah, it's really important.
>>
>> Hi Eric,
>>
>> I've come to a point where I'm a little confused about what you mean by
>> the above. It's one or more of the following.
>>
>> Being RESTful only matters when discussing the Web (doesn't matter on
>> say a LAN).
>>
>> The RESTfulness of a REST system only matters when discussing the Web
>> (doesn't matter on say a LAN).
>>
>> Using IANA-registered media types only matters when discussing REST on
>> the Web (doesn't matter on say a LAN).
>>
>> Or "other".
>>
>> Can you clarify?
>>
>> Best,
>>
>> Nathan
>>
>>
>>
>>
>>
>
>
Anyway, this was a side note; maybe we should not disperse, and should stick to this thread's question: when using HTTP over the public Internet infrastructure, to be RESTful MAY we use non-IANA-registered media types, or, on the contrary, to be RESTful MUST one stick to IANA-registered media types?
2010/9/9 algermissen1971 <algermissen1971@...> > > > > He is basically saying: If what you expose is not intended to be consumed > in an 'information systems style' and if it is not intended for large scale > integration then the performance trade-off induced by REST is probably not > desired. (But you need to get a knowledgeable architect to make that > decision). > > Example: if you have a system that needs to receive events in the form of > high-speed, small measuring events (say the pitch or speed of a plane) REST > is likely not the best integration style. However, if you expose the history > of events or a report of errors for consumption, REST likely is. > > In my opinion it also very much depends on the anticipated number of > clients (exactly one, a few, very many), the control you have over them and > how much it hurts (in terms of resources, lost business transactions etc) to > shut down or evolve the server. > > So you're saying that it MAY be used on intranet/extranet, right?
[resending - Webmailer sort of ate the message I guess] On 09 Sep, 2010,at 12:22 PM, António Mota <amsmota@...> wrote: > > > So you're saying that it MAY be used on intranet/extranet, right? Well, yes - of course I do. Check the title of my blog: http://www.nordsc.com/blog/ :-) Jan
On 09/08/2010 10:40 PM, Nathan wrote: > > This isn't really related to scoping to internet/intranet/web, between > two private parties or the whole world, if you're using HTTP > (anywhere) then: > > "HTTP uses Internet Media Types [RFC2046] in the Content-Type and > Accept header fields" ... "Media-type values are registered with the > Internet Assigned Number Authority (IANA)." > > Not sure how much clearer you can get than that. > I think you can see why I asked the question I asked, because you've asked Eric a follow-up question. If HTTP is being used, there is a conclusion that can be arrived at which says that, for your app to be RESTful, its types must be registered with IANA. I think António asks this also very explicitly (although scoping it to the "public internet" only).
Nathan wrote: > > > Right, I thought we were discussing your earlier question: "[M]ore > > realistically though is the 'is RESTful' / 'not RESTful' badge > > really that important, and if so does it do anybody any good to go > > labelling implementations..." > > > > What I'm saying is realistically, no, it doesn't matter at all > > unless we're discussing the Web, in which case yeah, it's really > > important. > > Hi Eric, > > I've come to a point where I'm a little confused about what you mean > by the above. It's one or more of the following. > > Being RESTful only matters when discussing the Web (doesn't matter on > say a LAN). > > The RESTfulness of a REST system only matters when discussing the Web > (doesn't matter on say a LAN). > > Using IANA-registered media types only matters when discussing REST > on the Web (doesn't matter on say a LAN). > > Or "other". > > Can you clarify? > Being RESTful only matters when it's appropriate to the system; I'm not a purist. If REST is the goal, then all its constraints always matter. If REST on the Web is the goal, then self-descriptiveness requires using the IANA registry to point an identifier at a standard, by definition. That is not "my" interpretation, and there is no room for "alternate" interpretations where the Web is concerned. A case may be made (what does "discouraged" mean) for alternate interpretations in other contexts, i.e. intranets or new protocols that aren't HTTP, but _not_ for HTTP over the Internet, i.e. the Web. Which is why I haven't let all these attempts to identify such exceptions in other contexts sway me one bit from my assertion that on the Web, REST requires you to use IANA-registered identifiers pointing to approved standards.
I'm only interested in helping folks with REST development for the "common case of the Web", and in that context, my advice on this matter is not an opinion, and there's no rational reason for it to have led to months of nonstop debate based on Google being the same thing as a registry, etc., in an effort to get me to admit that this is not a black-and-white truism. -Eric
Eb wrote: > > If HTTP is being used, there is a conclusion that can be arrived at > which says that, for your app to be RESTful, its types must be registered with > IANA. > Exactly. REST requires standardized types. HTTP defines a standardized type as having an entry in the IANA registry. REST requires self-descriptive messaging, defined as having a registered identifier. HTTP only defines the IANA registry. Therefore, while REST in general says nothing about IANA-registered MIME types, it does require them in any HTTP instantiation of REST, which is of vital importance when developing for the "common case" of the Web. I am being RESTful in my re-use of the XBEL standard to serve a standalone blogroll. Unfortunately, that standard has no registered identifier. Therefore, as there exists no other mechanism besides the IANA registry to self-descriptively indicate to the world that I'm using the XBEL standard, my messaging is NOT self-descriptive when I serve XBEL -- my choices are the opaque string application/xbel+xml or the not-self-descriptive application/xml. I choose the former. This is absolutely a REST mismatch, but hardly an insoluble problem. Once I've taken the trouble to register the identifier, serving XBEL will no longer be a REST mismatch, with no changes to my system required. Until such registration is approved, however, my messaging can't be considered self-descriptive, because my identifier is opaque, and this carries with it the real-world consequence of having such messages ignored (beyond being routed) by the deployed infrastructure of the Web -- obvious proof that it isn't RESTful, by definition. So my choice of XBEL clearly incurs costs (i.e. marshalling a new standardization effort around XBEL to assign it both a media type identifier and an XML namespace identifier) which I avoided entirely when I was using HTML to mark up my blogroll.
After registration of an identifier is approved, any benefit from my choice as compared to when I was using HTML is dependent on uptake (Roy's "gray area of increasing RESTfulness"). Which I assume there will be, because the ubiquity of XBEL in many browsers, as well as a plethora of online bookmark-organizing-and-sharing services, means that XBEL's processing model is already widely understood. I simply lack any means of specifying that processing model over-the-wire, i.e. in a Content-Type HTTP header, which is absolutely a fundamental requirement of REST. If serving a standardized type like XBEL over HTTP can't be self-descriptive because there's no registered identifier, then how can any nonstandardized type with no registered identifier be self-descriptive? How can such a nonstandardized type be considered RESTful, even if it is given a registered identifier to make it at least self-descriptive? Standardization is central to the concept of a uniform-interface, network-based API. Use of data types for which standardization is never intended is some other architecture I'm not familiar with, and all I require from those advocating such solutions is a point-by-point rebuttal of Roy's thesis. -Eric
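[Editorial note] The mechanics Eric describes, naming the processing model over-the-wire solely via the Content-Type header, can be sketched as below. This is an assumed, minimal setup, not Eric's actual server: the `/blogroll` path and the XBEL body are invented for illustration; only the `application/xbel+xml` identifier comes from the thread.

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Illustrative XBEL blogroll body (XBEL 1.0 syntax, content made up).
XBEL_BODY = b"""<?xml version="1.0"?>
<xbel version="1.0">
  <bookmark href="http://example.org/"><title>Example</title></bookmark>
</xbel>
"""

class BlogrollHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/blogroll":
            self.send_response(200)
            # The Content-Type header is the only over-the-wire place the
            # processing model can be named -- which is why registration of
            # the identifier matters for self-descriptiveness.
            self.send_header("Content-Type", "application/xbel+xml")
            self.send_header("Content-Length", str(len(XBEL_BODY)))
            self.end_headers()
            self.wfile.write(XBEL_BODY)
        else:
            self.send_error(404)

    def log_message(self, *args):  # keep the demo quiet
        pass

# Bind to an ephemeral port and fetch the resource once.
server = HTTPServer(("127.0.0.1", 0), BlogrollHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
with urllib.request.urlopen(
        "http://127.0.0.1:%d/blogroll" % server.server_address[1]) as resp:
    content_type = resp.headers["Content-Type"]
print(content_type)  # application/xbel+xml
```

Until the identifier is registered, a recipient that doesn't already know `application/xbel+xml` sees only an opaque string here, which is exactly the mismatch Eric describes.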
On 09/10/2010 11:01 AM, Eric J. Bowman wrote: > Eb wrote: > >> If HTTP is being used, there is conclusion that can be arrived that >> says that for your app to be RESTful, its types must be registered in >> IANA. >> >> > Exactly. REST requires standardized types. HTTP defines a > standardized type as having an entry in the IANA registry. REST > requires self-descriptive messaging, defined as having a registered > identifier. HTTP only defines the IANA registry. Therefore, while > REST in general says nothing about IANA-registered MIME types, it does > require them in any HTTP instantiation of REST, which is of vital > importance when developing for the "common case" of the Web. > > I am being RESTful in my re-use of the XBEL standard to serve a > standalone blogroll. Unfortunately, that standard has no registered > identifier. Therefore, as there exists no other mechanism besides the > IANA registry to self-descriptively indicate to the world that I'm > using the XBEL standard, my messaging is NOT self-descriptive when I > serve XBEL -- my choices are the opaque string application/xbel+xml or > the not-self-descriptive application/xml. I choose the former. > > This is absolutely a REST mismatch, but hardly an insoluble problem. > Once I've taken the trouble to register the identifier, serving XBEL > will no longer be a REST mismatch, with no changes to my system > required. Until such registration is approved, however, my messaging > can't be considered self-descriptive because my identifier is opaque, > and this carries with it the real-world consequence of having such > messages ignored (beyond being routed) by the deployed infrastructure > of the Web -- obvious proof that it isn't RESTful, by definition. > > So my choice of XBEL clearly incurs costs (i.e. marshalling a new > standardization effort around XBEL to assign it both a media type > identifier and an XML namespace identifier) which I avoided entirely > when I was using HTML to mark up my blogroll. 
After registration of an > identifier is approved, any benefit from my choice as compared to when > I was using HTML is dependent on uptake (Roy's "gray area of increasing > RESTfulness"). > > Which I assume there will be, because the ubiquity of XBEL in many > browsers as well as a plethora of online bookmark-organizing-and- > sharing services means that XBEL's processing model is already widely > understood. I simply lack any means of specifying that processing > model over-the-wire, i.e. in a Content-Type HTTP header, which is > absolutely a fundamental requirement of REST. > > If serving a standardized type like XBEL over HTTP can't be self- > descriptive because there's no registered identifier, then how can any > nonstandardized type with no registered identifier be self-descriptive? > How can such a nonstandardized type be considered RESTful, even if it > is given a registered identifier to make it at least self-descriptive? > Standardization is central to the concept of a uniform interface, > network-based API. > > Use of data types for which standardization is never intended, is some > other architecture I'm not familiar with, and all I require from those > advocating such solutions is a point-by-point rebuttal of Roy's thesis. > > -Eric > If this is the case, I don't see why the requirement would change for REST behind the firewall (if HTTP is being used). That is, as long as HTTP is being used (wherever), IANA registration is required. Eb -- blog: http://eikonne.wordpress.com twitter: http://twitter.com/eikonne
On Thu, 9 Sep 2010 11:18:50 +0100 António Mota wrote: > > When using HTTP over the public Internet infrastructure, to be > RESTful we MAY use non-IANA-registered media types > No. REST absolutely requires self-descriptive messaging. Roy again: "Self-descriptive means that the type is registered and the registry points to a specification and the specification explains how to process the data according to its intent." If there's no other registry on the Web but IANA, how can any identifier not IANA-registered be in a registry, and therefore self-descriptive? > > to be RESTful one MUST stick to those IANA-registered media types? > Also no, that would preclude evolution of new types. The only point I've been trying to make is that if you aren't going to re-use a standard, then you must have the intent to publish whatever you create, and IANA-register an identifier for it, otherwise it won't ever be self-descriptive. This is pushback against the prevalent notion in the community that the first step in REST development is to create a new data type without worrying about publishing it or registering an identifier for it -- and then flaming me incessantly when I state the obvious: on the Web, that sort of messaging isn't self-descriptive and is NOT the REST style. -Eric
Eb wrote: > > If this is the case, I don't see why the requirement would change for > REST behind the firewall (if HTTP is being used). That is, as long > as HTTP is being used (wherever), IANA registration is required. > I agree with you, but the problem is I can't say that with black-and-white, no-room-for-alternate-interpretation certainty, because of the non-normative "discouraged" wording of HTTP. If that were a SHOULD NOT, the case would be stronger, because the effect would be that I could say such implementations are only conditionally compliant with HTTP, when truly RESTful systems are fully compliant with HTTP. The fact is, on an intranet, though "discouraged" by HTTP, alternate registries are possible without violating REST's self-descriptive messaging constraint. The Web, however, only has one registry: IANA. -Eric
Eric J. Bowman wrote: > Nathan wrote: >> Hi Eric, >> >> I've come to a point where I'm a little confused about what you mean >> by the above. It's one or more of the following. >> >> Being RESTful only matters when discussing the Web (doesn't matter on >> say a LAN). >> >> The RESTfulness of a REST system only matters when discussing the Web >> (doesn't matter on say a LAN). >> >> Using IANA-registered media types only matters when discussing REST >> on the Web (doesn't matter on say a LAN). >> >> Or "other". >> >> Can you clarify? > > Being RESTful only matters when it's appropriate to the system; I'm not > a purist. If REST is the goal, then all its constraints always matter. > If REST on the Web is the goal, then self-descriptiveness requires > using the IANA registry to point an identifier at a standard, by > definition. > > That is not "my" interpretation, and there is no room for "alternate" > interpretations where the Web is concerned. A case may be made (what > does "discouraged" mean) for alternate interpretations in other > contexts, i.e. intranets or new protocols that aren't HTTP, but _not_ > for HTTP over the Internet, i.e. the Web. > > Which is why I haven't let all these attempts to identify such > exceptions in other contexts sway me one bit from my assertion that on > the Web, REST requires you to use IANA-registered identifiers pointing > to approved standards. > > I'm only interested in helping folks with REST development for the > "common case of the Web", and in that context, my advice on this matter > is not an opinion, and there's no rational reason for it to have led to > months of nonstop debate based on Google being the same thing as a > registry, etc. in an effort to get me to admit that this is not a black-and-white truism. So, my takeaway is: If you are practising REST on the web, then IANA-registered media types must be used. If you are practising REST and the protocol is HTTP, then IANA-registered media types must be used.
My personal, common-sense-based understanding is to swap the word must for should in both of the above, and to add an additional, broader-scope statement. If you are doing anything that requires media types on anything that uses the Internet Protocol, then use IANA-registered Internet Media Types in every situation other than when developing new, to-be-IANA-registered media types. Agree? Best, Nathan
Nathan wrote: > > Underlying my position is some (perhaps false) reasoning on REST to > bring my understanding in line with modern / future web architecture > and paradigm shifts on the web. > > For instance, where at the time of writing the norm was to have > server-side applications and a 'web of documents', things like > multiple browser contexts were considered (I believe) unRESTful - an > example of this would be the iframe in a document, because the user > agent could not control or reassemble its own application state... > I consider <iframe> to be bad practice, but not unRESTful like <frame>. > > ...and where the application state to be considered is that of the > client-side application, which is now outside of HTTP and thus REST. > In REST, application state does reside in the client, and is not required to reflect resource state. IOW, mashups are OK, and you can even use FTP, etc. to create that mashup. While I don't believe that client-side storage is inherently unRESTful, I do believe that the WebSockets API, specifically, is entirely unRESTful. > > Interestingly this positions the web in a data-tier position where > data is the resources named by URIs, which each have a current state; > that state is represented within and transferred via HTTP messages, > and likewise the state of a resource is manipulated with HTTP messages > and usage of the HTTP verbs. > I prefer to think of a URI as identifying a stored procedure which returns some data, rather than identifying the data returned. But what you're saying is exactly what Roy is trying to get across in REST Chapter 6.2.2, "Manipulating Shadows." > > Back to the point in hand, this also encourages a more modular > approach to application coding where each specific module only needs > to understand part of a message.
An analogy may be a user agent > processing a single HTML document, where different parts of said HTML > document are processed and understood by very different modules doing > different tasks: one may hook in to understand <script>s of a certain > type, another may hook in to dereference embedded links, another to > instantiate DOM event listeners, and so forth. Thus re-using, > augmenting and extending existing media types (like HTML) makes > perfect sense to me from all angles, including REST and its > constraints. > Very well put. -Eric
On 09/10/2010 11:36 AM, Eric J. Bowman wrote: > The fact is, on an intranet, though "discouraged" by HTTP, alternate > registries are possible without violating REST's self-descriptive > messaging constraint. The Web, however, only has one registry: IANA. > > -Eric > Are we making the distinction (between internet and intranet), or is it the RFC? And if it's the RFC, which one, please? Thanks. -- blog: http://eikonne.wordpress.com twitter: http://twitter.com/eikonne
Nathan wrote: > > If you are practising REST on the web then IANA registered media > types must be used. > > If you are practising REST and the protocol is HTTP then IANA > registered media types must be used. > > My personal, common sense based understanding is to swap the word > must to should in both of the above and add an additional broader > scope statement. > I still think the first is a MUST (the Web has no other registry, period), the second is a SHOULD (based on HTTP "discouraging" but not forbidding other registries, a choice REST doesn't care about). > > If you are doing anything that requires media types on anything that > uses the Internet Protocol then use IANA Registered Internet Media > Types in every situation other than when developing new, to be IANA > registered media types. > I agree with that, if "Internet Protocol" is changed to "public Internet," otherwise it's too strict -- an intranet instantiation of REST which ignores IANA may still meet all REST constraints and still be considered fully-compliant with HTTP, due to the deliberate choice of "discouraged". Though discouraged, you MAY have some other registry on an intranet; on the Web, claiming any registry but IANA applies is a fallacy. -Eric
2010/9/10 Eric J. Bowman <eric@...> > > No. REST absolutely requires self-descriptive messaging. Roy again: > > "Self-descriptive means that the type is registered and the registry > points to a specification and the specification explains how to process > the data according to its intent." > > If there's no other registry on the Web but IANA, how can any > identifier not IANA-registered be in a registry, and therefore self- > descriptive? > > No, let me also quote Roy: "Self-descriptive means that the type is registered and the registry points to a specification and the specification explains how to process the data according to its intent." Nothing in Roy's quote says that the registry MUST be IANA. Both RFC2046 and RFC4288 clearly say that there are IANA-unregistered types that start with X-. There is nothing to prevent those X- types from being registered privately. So, if I register in my company the X- media types I want, and all the interested parties agree with that, I have a registry that points to a specification, and the specification explains how to process the data according to its intent. So I have self-descriptive media types not registered with IANA. If you're going again to talk about the intermediaries and the slogans "serendipity", "anarchic scaling" and the like, let me remind you that none of those are *constraints* of a REST system. The only constraint related to the intermediaries is cache - and that is obtained by having the intermediaries treat application/x.mytopic+xml as if it were application/xml. Please note that things like pre-caching, accelerators and other features of intermediaries *are not* REST constraints; cache is. Also, if you're going to object with what you called "collisions", it is true that they can happen with regular, non-X- types if they are not registered with IANA, but that won't happen if they are of X- type.
Because the registration of an X- type outside IANA assumes pre-agreement between the interested parties, so if they agree to use them they also know where they are registered. So, the only collision that could eventually happen is on intermediaries, but if you're using an X- type that won't happen either, because they will treat application/x.mytopic+xml as if it were application/xml. Also, if you're invoking the "everybody" or "anybody" argument, please let me state again that the Web is as much about "everybody" or "anybody" as it is, as Roy, RFC2046 and RFC4288 say, about "participants in the communication" (Roy), "consenting systems" (RFC2046) and "parties exchanging" (RFC4288). I hope you don't feel that RFCs are important for some things and not for others. Finally, let me say also that I find it very strange that you think there are some REST/HTTP rules for the internet and others for the intranets, when their architecture is the same.
Eb wrote:
> Are we making the distinction (between internet and intranet) or is it the RFC, and if it's the RFC, which, please?

My belief is that the RFC is authored in such a way as to not limit HTTP's use to the Internet. If RFC 2616 were only considering the Web, there would be plenty more MUSTs where it currently says SHOULD (or "discouraged"). It's up to those reading the spec to determine the importance of anything in the spec within the context of their system.

Web architecture is its own instantiation of HTTP, where rules which may be harmlessly relaxed in other contexts are critical to the Web.

-Eric
On 09/10/2010 12:17 PM, Eric J. Bowman wrote:
> Eb wrote:
>> Are we making the distinction (between internet and intranet) or is it the RFC, and if it's the RFC, which, please?
>
> My belief is that the RFC is authored in such a way as to not limit HTTP's use to the Internet. If RFC 2616 were only considering the Web, there would be plenty more MUSTs where it currently says SHOULD (or "discouraged"). It's up to those reading the spec to determine the importance of anything in the spec within the context of their system.
>
> Web architecture is its own instantiation of HTTP, where rules which may be harmlessly relaxed in other contexts are critical to the Web.
>
> -Eric

Fair enough, but why should it even matter (except that you refer to how limited your system will be on the Internet, but I think that's a choice, once again)? And I think it's a stretch to say that in the application of REST on the Internet IANA must be used, but not on the intranet. I think this needs to be an "all or none" situation.

This has been very educational for me. I think I'm done with this now. :-)

--
blog: http://eikonne.wordpress.com
twitter: http://twitter.com/eikonne
António Mota wrote:
> Nothing in Roy's quote says that registry MUST be IANA.

No, but it does say there must be a registry, and only one registry is defined for the Web.

> Both RFC 2046 and RFC 4288 clearly say that there are IANA-unregistered types that start with X-

Nonstandardized types aren't used in the REST style. Only standardized types may be registered, not experimental types.

> There is nothing anywhere to prevent those X- types from being registered privately.

Except there is only one registry on the Web, IANA. Re-stating your argument over and over won't change the reality of the situation.

> So, if I register in my company the X- media types I want and all the interested parties agree with that, I have a registry that points to a specification and the specification explains how to process the data according to its intent. So I have self-descriptive media types not registered with IANA.

Are you talking about the Web? If you're talking intranet, you're agreeing with me (please don't let that give you a heart attack). Otherwise, what you describe is an application-specific, library-based API coupling implementations together, which is not the REST style. REST defines a decoupled, uniform-interface, network-based API as being based on standardized, registered types -- design by constraint vs. unbounded creativity.

> If you're going to talk again about the intermediaries and the slogans "serendipity", "anarchic scaling" and the like, let me remind you that none of those are *constraints* of a REST system.

Of course not, they're desirable effects of REST. Which you will NOT realize by NOT following REST, i.e. by using opaque identifiers and nonstandardized types. Do you have some other explanation of those terms, besides my usage which comes from Roy's thesis?
> The only constraint related to the intermediaries is cache - and that is obtained by having the intermediaries treat application/x.mytopic+xml as if it were application/xml.

No, the self-descriptive messaging constraint is required, that's why it's called a constraint.

> Please note that things like pre-caching, accelerators and other features of intermediaries *are not* REST constraints; cache is.

REST's cache constraints are irrelevant if your messaging fails to be self-descriptive. There is nothing self-descriptive about application/xml, and REST is all about unambiguously specifying your intended processing model. Specifying application/foo+xml is a good way to not get cached by any cache configured based on the shared understanding of a limited number of ubiquitous types and ignoring *all* undefined types.

> Also, if you're going to object with what you called "collisions", it is true that they can happen with regular, non-X- types if they are not registered with IANA, but that won't happen if they are of X- type.

That's why they're called "experimental" and not "standardized".

> Because the registration of an X- type outside IANA assumes a pre-agreement between the interested parties, so if they agree to use them they also know where they are registered. So, the only collision that could eventually happen is on intermediaries, but if you're using an X- type that won't happen either, because they will treat application/x.mytopic+xml as if it were application/xml.

So, even if the chair of the W3C's TAG signs up here and tells you that I'm right (Nathan), you still insist I must be wrong? On the Web, registered media types are those in the IANA registry, not experimental, unregistered types. If you don't believe anyone who tries telling you this, then what's the point of further answering your questions?
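[Editor's note: the cache behaviour Eric describes can be illustrated with a toy decision function - a shared cache configured around a whitelist of ubiquitous types simply declines to store anything it doesn't recognise, so an undefined type like application/foo+xml falls through. The whitelist and function below are hypothetical, not any real cache's defaults.]

```python
# Toy model of a conservatively configured shared cache: it stores only
# responses whose media type it has been explicitly told about, and
# ignores everything else.

CACHEABLE_TYPES = {"text/html", "text/css", "image/png", "application/xml"}

def should_cache(content_type: str) -> bool:
    """Cache only media types the operator has explicitly whitelisted."""
    mime = content_type.split(";")[0].strip().lower()
    return mime in CACHEABLE_TYPES

print(should_cache("application/xml"))      # recognised type
print(should_cache("application/foo+xml"))  # undefined type: not cached
```

Real caches (e.g. ones honouring Cache-Control alone) need not behave this way; the sketch only makes the "ignoring *all* undefined types" configuration concrete.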
> Also, if you're invoking the "everybody" or "anybody" argument, please let me state again that the Web is as much about "everybody" or "anybody" as it is, as Roy, RFC 2046 and RFC 4288 say, about "participants in the communication" (Roy), "consenting systems" (RFC 2046) and "parties exchanging" (RFC 4288).

If nobody outside the sender and intended recipient can participate in your communications over the Web, then the failure on your part is that you've created an application-specific, library-based API to couple implementations together. REST defines a decoupled, uniform-interface, network-based API based on standardized types with registered identifiers -- THAT'S THE STYLE -- not experimental, unregistered types.

> Finally, let me also say that I find it very strange that you think there are some REST/HTTP rules for the Internet and others for the intranets, when t

Of course there are. REST isn't limited to the Web. HTTP isn't limited to the Web. If we were required to make everything match exactly how the Web behaves, the protocols wouldn't be re-usable. But _of course_ the requirements vary from one instantiation to another.

-Eric
On Fri, Sep 10, 2010 at 9:41 AM, Nathan <nathan@...> wrote:
> So, my take away is:
>
> If you are practising REST on the web then IANA registered media types must be used.
>
> If you are practising REST and the protocol is HTTP then IANA registered media types must be used.

Only if you buy Eric's interpretation of REST. The consensus view is quite different.

If an existing, IANA-registered media type specifies the semantics and processing model you need, then you should use that media type. If no registered media type exists with the semantics and processing model you need, then you should roll your own. When you create your own media type, registering it in the vendor or personal tree is a Good Thing. Doing so does not change the descriptiveness of the media type name.

To reason by analogy, just because you happen to not know a word does not make the word any less descriptive. Using a more general concept when you really mean something specific is less descriptive, though. For example, "motorcycle" is more descriptive than "mode-of-transport", even if you have never seen the word "motorcycle" before. If you want a motorcycle you should ask for that. You should not ask for a mode-of-transport and hope you get what you want. Likewise, if a client needs a representation with a particular structure or semantics, the client should ask for that. It should not ask for a generic representation and hope that it will get what it needs.

Using a media type name in HTTP that is not understood by every imaginable participant does *not* change the self-descriptiveness of the message. However, the more obscure the media type you choose, the more likely it is that some participant will not understand the description. In my experience this has never caused any practical problems, but it could conceivably do so.

Peter
<http://barelyenough.org>
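[Editor's note: the vendor and personal trees Peter mentions refer to the naming scheme of RFC 4288, where registered subtypes are prefixed `vnd.` (vendor) or `prs.` (personal), while `x.`/`x-` marks unregistered experimental types. A hypothetical classifier - the example type names are made up - makes the distinction concrete:]

```python
# Classify a media type's subtype into the RFC 4288 registration trees.

def registration_tree(media_type: str) -> str:
    """Return which RFC 4288 tree a media type's subtype belongs to."""
    subtype = media_type.split(";")[0].strip().lower().split("/", 1)[1]
    if subtype.startswith("vnd."):
        return "vendor"        # registered on behalf of a vendor
    if subtype.startswith("prs."):
        return "personal"      # registered by an individual
    if subtype.startswith("x.") or subtype.startswith("x-"):
        return "unregistered"  # experimental, never IANA-registered
    return "standards"         # standards tree (IETF or other SDO)

print(registration_tree("application/vnd.example.motorcycle+xml"))  # vendor
```

On Peter's view, `application/vnd.example.motorcycle+xml` in a Content-Type header stays descriptive even to parties who have never seen it; on Eric's view, only the standards-tree entries in the IANA registry qualify.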
"No, but it does say there must be a registry, and only one registry is defined for the Web" Can you please say where it is defined that there MUST NOT be another registry, when both RFC allows the existence of non IANA registered types? Where it says that I or any private partie MUST NOT create a registry of those types that are of interest to some agreeing parties? Otherwise, the simple fact of you repeating it doesen't make it true. On 10 Sep 2010 18:19, "Eric J. Bowman" <eric@...> wrote: Antnio Mota wrote: > > Nothing in Roy quote says that registry MUST be IANA. > No, but it does say there must be a registry, and only one registry is defined for the Web. > > Both RFC2046 and RFC4288 clearly say that there are IANA unregistered > types that start with X... Nonstandardized types aren't used in the REST style. Only standardized types may be registered, not experimental types. > > There's nowhere nothing to prevent thos X- types to be registered > privatly. > Except there is only one registry on the Web, IANA. Re-stating your argument over and over won't change the reality of the situation. > > So, if I register in my company the X-media-types I want and all the > interested parties agre... Are you talking about the Web? If you're talking intranet, you're agreeing with me (please don't let that give you a heart attack). Otherwise, what you describe is an application-specific library-based API coupling implementations together, which is not the REST style. REST defines a decoupled, uniform interface, network-based API as being based on standardized, registered types -- design by constraint vs. unbounded creativity. > > If you're going again to talk about the intermediaries and the slogans > "serendipity", "anarch... Of course not, they're desirable effects of REST. Which you will NOT realize by NOT following REST, i.e. by using opaque identifiers and nonstandardized types. 
Do you have some other explanation of those terms, besides my usage which comes from Roy's thesis?

> The only constraint related to the intermediaries is cache - and that is obtained by...

No, the self-descriptive messaging constraint is required, that's why it's called a constraint.

> application/x.mytopic+xml as if it was application/xml. Please note that things like...

REST's cache constraints are irrelevant if your messaging fails to be self-descriptive. There is nothing self-descriptive about application/xml, and REST is all about unambiguously specifying your intended processing model. Specifying application/foo+xml is a good way to not get cached by any cache configured based on the shared understanding of a limited number of ubiquitous types and ignoring *all* undefined types.

> Also, if you're going to object with what you called "collisions", it is true that they can h...

That's why they're called "experimental" and not "standardized".

> Because the registration of an X- type outside IANA assumes a pre-agreement between the inter...

So, even if the chair of the W3C's TAG signs up here and tells you that I'm right (Nathan), you still insist I must be wrong? On the Web, registered media types are those in the IANA registry, not experimental, unregistered types. If you don't believe anyone who tries telling you this, then what's the point of further answering your questions?

> Also, if you're invoking the "everybody" or "anybody" argument, please let me state again tha...

If nobody outside the sender and intended recipient can participate in your communications over the Web, then the failure on your part is that you've created an application-specific, library-based API to couple implementations together. REST defines a decoupled, uniform-interface, network-based API based on standardized types with registered identifiers -- THAT'S THE STYLE -- not experimental, unregistered types.
> Finally, let me say also that I found very strange that you think that are some REST/HTTP r...

Of course there are. REST isn't limited to the Web. HTTP isn't limited to the Web. If we were required to make everything match exactly how the Web behaves, the protocols wouldn't be re-usable. But _of course_ the requirements vary from one instantiation to another.

-Eric
António Mota wrote:
> "No, but it does say there must be a registry, and only one registry is defined for the Web"
>
> Can you please say where it is defined that there MUST NOT be another registry, when both RFCs allow the existence of non-IANA-registered types? Where does it say that I or any private party MUST NOT create a registry of those types that are of interest to some agreeing parties?

Asked and answered, many, many times. Including:

"Given that AWWW states using IANA registered media types as best practise, HTTP discourages using media types that are not IANA registered, and REST says that media types are shared by multiple protocols and governed by IANA - then I think it's fair to say the subject infers that when we say registered we mean IANA registered."
http://tech.groups.yahoo.com/group/rest-discuss/message/16526

If you desire to create some alternate registry, and expect anybody to agree to it, then by all means sign up for http-wg and suggest it there; otherwise doing so will continue to be "discouraged" and the Web will continue to have only one registry, IANA. REST on the Web is about the reality of the Web, not what might theoretically be allowed if the Web architecture were different.

-Eric
Peter Williams wrote:
> Only if you buy Eric's interpretation of REST.

I agree that the consensus view is different, but I don't agree that using standardized types with registered identifiers on the Web is merely an "interpretation" of REST.

> When you create your own media type, registering it in the vendor or personal tree is a Good Thing. Doing so does not change the descriptiveness of the media type name.

How can that possibly be true? REST requires standardized types with registered identifiers. Unstandardized types that have no registered identifier do NOT meet the definition of self-descriptiveness: "Self-descriptive means that the type is registered and the registry points to a specification and the specification explains how to process the data according to its intent."

Why do folks insist that Roy is wrong, by attributing that requirement to me? If your identifier isn't in the IANA registry, nobody on the Web can correlate it with anything, which seems to be the exact opposite of self-descriptiveness. If I'm wrong about this, then how come nobody will explain *why* I'm wrong, but instead everybody resorts to insisting that this is only "my" viewpoint?

> To reason by analogy, just because you happen to not know a word does not make the word any less descriptive.

Self-descriptive messaging requires that the processing intent of the payload be clearly defined for all to see. Opaque identifiers on the Web are not defined at all, so how are they self-descriptive? Guesswork? Self-descriptive messaging is not about resorting to guesswork. If you need to create a new media type, then you need to register an identifier for it, otherwise it won't be self-descriptive, by Roy's definition I keep quoting. Roy says "Self-descriptive means that the type is registered..." and there is no other registry for the Web, so how can an opaque identifier used on the Web be RESTful?

-Eric
Hi Eric,

On 09/10/2010 07:58 AM, Eric J. Bowman wrote:
> Being RESTful only matters when it's appropriate to the system, I'm not a purist.

Really? Then how come it seems like just about every response on this and related threads comes back with something to the effect of IANA registration is the *only* way to go if you want to be RESTful?

> If REST is the goal, then all its constraints always matter. If REST on the Web is the goal, then self-descriptiveness requires using the IANA registry to point an identifier at a standard, by definition.

As someone who responded to a previous point that I'm just arguing over semantics [2], I'm utterly befuddled that you've missed the semantics of your own statement. You're saying two different things. That constraints "matter" DOES NOT imply any requirement. It just means, well, that they matter. As someone who has worked for years on developing standards at the IETF, W3C, and OASIS, this semantic confusion isn't even a close call, and the semantics do matter.

Here's what really frustrates me. I joined this mailing list to discuss the gaps, the corner cases, the gray areas, the possible areas of confusion. I'm here to have my assumptions challenged. Instead, I'm seeing a large flurry of responses that ignore subtlety, deny different ways of thinking about the web, and deny the existence of gray areas. For the majority of the traffic, I'm not seeing my assumptions challenged, I'm seeing reality challenged, and that isn't nearly as interesting.

On top of that, specifications get it wrong. Maybe the RFCs are wrong? A few years back, I saw someone point out an error in the HTTP specification, almost six years after it was published. Maybe Roy is wrong? Maybe the W3C is wrong with some of its TAG findings? How do we know, except to challenge assumptions, and follow those discussions to their conclusions? Instead, you seem determined to shut off debate.
> That is not "my" interpretation, and there is no room for "alternate" interpretations, where the Web is concerned.

Well, er, except, there is. That's the beauty of the chaos that is the web. There are about two billion people [1] connected to it, and all of them are entitled to their own interpretations. If they see a better way, let's welcome them, and learn from them.

> A case may be made (what does "discouraged" mean) for alternate interpretations in other contexts, i.e. intranets or new protocols that aren't HTTP, but _not_ for HTTP over the Internet, i.e. the Web.

As someone who follows issues surrounding computer security closely, the distinction between intranet/internet is a convenient short-hand, but doesn't actually exist in practice. Many "closed" systems send their traffic over the internet, sometimes via VPN, sometimes not, sometimes over HTTP, and sometimes not.

> Which is why I haven't allowed all these attempts to identify such exceptions in other contexts sway me one bit from my assertion that on the Web, REST requires you to use IANA-registered identifiers pointing to approved standards.
>
> I'm only interested in helping folks with REST development for the "common case of the Web",

OK, if you're only interested in that, then why don't you let the rest of us who are interested in other scenarios actually talk about those other scenarios, without barging in and telling us that what we're doing is wrong for a different use-case that you're interested in? I'm specifically not interested in your use case, so I've actually come to conclude that your responses are mostly worth ignoring. Repeating them over and over with respect to your scenario won't change my mind, whereas addressing my scenarios might.

> and in that context, my advice on this matter is not an opinion,

Really? Advice implies opinion.
> and there's no rational reason for it to have led to months of nonstop debate based on Google being the same thing as a registry, etc. in an effort to get me to admit that this is not a black-and-white truism.

Agreed - you don't have to admit to anything. If you disagree with what we're saying as it applies to your particular use case, you don't need to keep repeating yourself. Can you instead ask yourself whether or not the question at hand applies to your use-case, or to a different use case? Or, if the question posed is unclear on the use-case, ask to clarify that, rather than assuming it is your target?

As near as I can tell, many people on this list have been trying, nicely, to say that we might mostly agree with your position with respect to your scenario, but that for some of us, our scenarios are not yours. If the question on the table applies to a different use case, can you please discuss from that alternate perspective, or simply hold your peace?

-Eric.

[1] http://www.internetworldstats.com/stats.htm
[2] http://tech.groups.yahoo.com/group/rest-discuss/message/16372
Eric Johnson wrote:
> Really? Then how come it seems like just about every response on this and related threads comes back with something to the effect of IANA registration is the *only* way to go if you want to be RESTful?

Because the alternative is to say, "unregistered identifiers are self-descriptive", which is irrational, given that Roy has said, "Self-descriptive means that the type is registered..."

I'm not a purist, because I don't insist that REST be used when REST isn't appropriate. I'm only being rational when I insist that self-descriptive messaging requires registered identifiers -- if REST on the Web is a system's goal, then it's irrational to use identifiers Roy has declared not to be self-descriptive.

> That constraints "matter" DOES NOT imply any requirement. It just means, well, that they matter. As someone who has worked for years on developing standards at the IETF, W3C, and OASIS, this semantic confusion isn't even a close call, and the semantics do matter.

This isn't what Roy has said regarding whether systems that disregard REST constraints may be labeled REST:

"What needs to be done to make the REST architectural style clear on the notion that hypertext is a constraint? In other words, if the engine of application state (and hence the API) is not being driven by hypertext, then it cannot be RESTful and cannot be a REST API. Period. Is there some broken manual somewhere that needs to be fixed?"

Substitute self-descriptive messaging, or any other constraint, for hypermedia there. Otherwise, we can call anything REST, even systems which have nothing to do with standardized data types with registered identifiers. How does it help anyone achieve the goals of REST on the Web, to not point out when constraints are being violated?

> On top of that, specifications get it wrong. Maybe the RFCs are wrong? A few years back, I saw someone point out an error in the HTTP specification, almost six years after it was published.
> Maybe Roy is wrong? Maybe the W3C is wrong with some of its TAG findings? How do we know, except to challenge assumptions, and follow those discussions to their conclusions? Instead, you seem determined to shut off debate.

No, I share the same goals as you. The folks shutting off this debate are the ones who keep bringing it back to a matter of me, personally. Like, for example, stating that it's only "my" interpretation of REST where "self-descriptive messaging means that the type is registered," when that's a direct Roy quote. If Roy's wrong, then by all means folks, enlighten me as to why, instead of insinuating that this is only "my" requirement.

> > That is not "my" interpretation, and there is no room for "alternate" interpretations, where the Web is concerned.
>
> Well, er, except, there is. That's the beauty of the chaos that is the web. There are about two billion people [1] connected to it, and all of them are entitled to their own interpretations. If they see a better way, let's welcome them, and learn from them.

In general, I agree with you. However, Roy is an authority on what constitutes REST, and Roy says "self-descriptive messaging means that the type is registered." As with any hard science, I require falsifiability -- tell me why that statement is wrong, not that I'm wrong to insist on it. I don't see how it amounts to falsification, to attribute that point to me, then declare that it's OK to disregard it. Is there really some alternate interpretation of REST besides Roy's?

> As someone who follows issues surrounding computer security closely, the distinction between intranet/internet is a convenient short-hand, but doesn't actually exist in practice. Many "closed" systems send their traffic over the internet, sometimes via VPN, sometimes not, sometimes over HTTP, and sometimes not.

Of course.
But for any "open" system attempting Internet scale by following REST on the Web, those payoffs are achieved by playing by the rules (registering in IANA), not looking for loopholes and expecting REST's promises to hold true. > > > I'm only interested in helping folks with REST development for the > > "common case of the Web", > > > > OK. if you're only interested in that, then why don't you let the rest > of us who are interested in other scenarios, actually talk about those > other scenarios, without barging in and telling us that what we're > doing is wrong for a different use-case that you're interested in? > I've only "barged in" to make this point, where the context has been "open" systems targeted at the Web. I've consistently stated that this is not a big deal otherwise, IOW, I have been limiting my comments to the "common case of the Web" use-case I'm interested in. When the rest of you start talking about how standardization of data types has nothing to do with the style, on the Web, then *somebody* has to point out that such systems are really some other style that's distinctly different from REST, otherwise the thread would be off-topic. > > I'm specifically not interested in your use case, so I've actually > come to conclude that your responses are mostly worth ignoring. > Repeating them over and over with respect to your scenario won't > change my mind, whereas addressing my scenarios might. > If you're talking about using nonstandardized types with unregistered identifiers without using the Web, you'll have no worries from me -- the only thing I've ever said in such a case is, why are you bothering with REST if you're not interested in any of the benefits from the style, and have no intention to follow its constraints? We'd both have each other on ignore. 
For those folks who do give a damn about Internet scale, it's essential to point out the difference between tightly-coupled, library-based APIs and REST's uniform interface, because optimization within the model leads to real-world benefits which *are* the goal in such scenarios.

> Really? Advice implies opinion.

My advice for those looking for Internet scale in their distributed hypermedia systems for the Web, i.e. the "common case," is to follow REST. That's an opinion. It is not an opinion, but rather an authoritative, normative reference, to point out that Roy's definition of the term self-descriptive requires the identifier to be registered.

> If the question on the table applies to a different use case, can you please discuss from that alternate perspective, or simply hold your peace?

Where have I done otherwise? If you look back far enough in this thread, what touched me off was the guy from Microsoft being led to believe that nonstandardized, unregistered types are congruous with the REST style, when his context is the "common case of the Web."

-Eric
Indulge me Eric, can you point me to an "official" document, like an RFC, an IETF doc, a W3C doc, where it says that, regarding media-type registries, "there can be only one", like that old movie...

Specifically, since the RFCs clearly state that there *are* IANA-unregistered media types, where is it stated that other parties can't take the initiative of designing and maintaining a registry of the media types they designed?

If this was asked and answered many, many times, it would be easy for you to enlighten me, right? Thanks in advance for that.

On 10 Sep 2010 19:01, "Eric J. Bowman" <eric@...> wrote:

António Mota wrote:
> "No, but it does say there must be a registry, and only one registry is de...

Asked and answered, many, many times. Including:

"Given that AWWW states using IANA registered media types as best practise, HTTP discourages using ...
http://tech.groups.yahoo.com/group/rest-discuss/message/16526

If you desire to create some alternate registry, and expect anybody to agree to it, then by all means sign up for http-wg and suggest it there; otherwise doing so will continue to be "discouraged" and the Web will continue to have only one registry, IANA.

REST on the Web is about the reality of the Web, not what might theoretically be allowed if the Web architecture were different.

-Eric
António Mota wrote:
> Indulge me Eric, can you point me to an "official" document, like an RFC, an IETF doc, a W3C doc, where it says that, regarding media-type registries, "there can be only one", like that old movie...

RFC 2616 does not preclude some other registry being defined as an extension of RFC 2616. No other RFC exists which defines any such alternate registry for the Web, however. So it's just a reflection of reality to state that only one registry exists on the Web.

> Specifically, since the RFCs clearly state that there *are* IANA-unregistered media types, where is it stated that other parties can't take the initiative of designing and maintaining a registry of the media types they designed?

It isn't. If you wish to define such a registry, get it in the RFC for HTTP, otherwise its use will always be "discouraged", on the Internet or not. Then maybe the AWWW document will evolve to include it, instead of saying you SHOULD use the IANA registry for the Web. Otherwise, I don't see how any alternate registry is viable for HTTP, Web or not, as it wouldn't be defined and nobody would be encouraged to use it, if they'd even be aware of its existence. But, that's the process, and the entire Internet is a result of the RFC process.

-Eric
http://www.ietf.org/rfc/rfc2046.txt
Multipurpose Internet Mail Extensions
(MIME) Part Two:
Media Types
"In order to ensure that the set of such values is developed in an
orderly, well-specified, and public manner, MIME sets up a
registration process which uses the Internet Assigned Numbers
Authority (IANA) as a central registry for MIME's various areas of
extensibility. The registration process for these areas is
described in a companion document, RFC 2048."
http://www.w3.org/TR/webarch/#URI-registration
Architecture of the World Wide Web, Volume One
"When designing a new data format, the preferred mechanism to promote
its deployment on the Web is the Internet media type (see
Representation Types and Internet Media Types (§3.2)). Media types
also provide a means for building new information applications, as
described in future directions for data formats (§4.6)."
http://tools.ietf.org/html/draft-ietf-httpbis-p3-payload-11#section-2.3
HTTP/1.1, part 3: Message Payload and Content Negotiation
"HTTP uses Internet Media Types [RFC2046] in the Content-Type
(Section 6.9) and Accept (Section 6.1) header fields in order to
provide open and extensible data typing and type negotiation."
It doesn't really matter if "there can only be one" because we only use
one, and only one registry is specified and encouraged by all parties;
moreover they are very clear on the reason why there is only one.
Best,
Nathan
António Mota wrote:
> Indulge me Eric, can you point me to an "official" document, like an RFC, an
> IETF doc, a W3C doc, where it says that, regarding media-type registries,
> "there can be only one", like that old movie...
>
> Specifically, since the RFCs clearly state that there *are* IANA-unregistered
> media types, where is it stated that other parties can't take the
> initiative of designing and maintaining a registry of the media types they
> designed?
>
> If this was AAA many many times, it would be easy for you to
> enlighten me, right?
>
> Thanks in advance for that.
>
> On 10 Sep 2010 19:01, "Eric J. Bowman" <eric@...> wrote:
>
> António Mota wrote:
>> "No, but it does say there must be a registry, and only one registry
>> is de...
> Asked and answered, many, many times. Including:
>
>
> "Given that AWWW states using IANA registered media types as best
> practise, HTTP discourages using ...
> http://tech.groups.yahoo.com/group/rest-discuss/message/16526
>
> If you desire to create some alternate registry, and expect anybody to
> agree to it, then by all means sign up for http-wg and suggest it
> there, otherwise doing so will continue to be "discouraged" and the Web
> will continue to only have one registry, IANA.
>
> REST on the Web is about the reality of the Web, not what might
> theoretically be allowed if the Web architecture was different.
>
> -Eric
>
Alas, at least for me, the signal vs. noise threshold has now been
crossed. Bye.

-Eric.

On 09/10/2010 12:14 PM, Eric J. Bowman wrote:
> Eric Johnson wrote:
>> Really? Then how come it seems like just about every response on this
>> and related threads comes back with something to the effect of IANA
>> registration is the *only* way to go if you want to be RESTful?
>>
> Because the alternative is to say, "unregistered identifiers are
> self-descriptive", which is irrational, given that Roy has said,
> "Self-descriptiveness means that the type is registered..."
>
> I'm not a purist, because I don't insist that REST be used when REST
> isn't appropriate. I'm only being rational when I insist that
> self-descriptive messaging requires registered identifiers -- if REST
> on the Web is a system's goal, then it's irrational to use identifiers
> Roy has declared not to be self-descriptive.
>
>> That constraints "matter" DOES NOT imply any requirement. It just
>> means, well, that they matter. As someone who has worked for years on
>> developing standards at the IETF, W3C, and OASIS, this semantic
>> confusion isn't even a close call, and the semantics do matter.
>>
> This isn't what Roy has said regarding whether systems that disregard
> REST constraints may be labeled REST:
>
> "What needs to be done to make the REST architectural style clear on
> the notion that hypertext is a constraint? In other words, if the
> engine of application state (and hence the API) is not being driven by
> hypertext, then it cannot be RESTful and cannot be a REST API. Period.
> Is there some broken manual somewhere that needs to be fixed?"
>
> Substitute self-descriptive messaging, or any other constraint, for
> hypermedia there. Otherwise, we can call anything REST, even systems
> which have nothing to do with standardized data types with registered
> identifiers. How does it help anyone achieve the goals of REST on the
> Web, to not point out when constraints are being violated?
>
>> On top of that, specifications get it wrong. Maybe the RFCs are
>> wrong? A few years back, I saw someone point out an error in the
>> HTTP specification, almost six years after it was published. Maybe
>> Roy is wrong? Maybe the W3C is wrong with some of its TAG findings?
>> How do we know, except to challenge assumptions, and follow those
>> discussions to their conclusions? Instead, you seem determined to
>> shut off debate.
>>
> No, I share the same goals as you. The folks shutting off this debate
> are the ones who keep bringing it back to a matter of me, personally.
> Like for example, stating that it's only "my" interpretation of REST
> where "self-descriptive messaging means that the type is registered,"
> when that's a direct Roy quote. If Roy's wrong, then by all means,
> folks, enlighten me as to why, instead of insinuating that this is
> only "my" requirement.
>
>>> That is not "my" interpretation, and there is no room for
>>> "alternate" interpretations, where the Web is concerned.
>>>
>> Well, er, except, there is. That's the beauty of the chaos that is
>> the web. There are about two billion people [1] connected to it, and
>> all of them are entitled to their own interpretations. If they see a
>> better way, let's welcome them, and learn from them.
>>
> In general, I agree with you. However, Roy is an authority on what
> constitutes REST, and Roy says "self-descriptive messaging means that
> the type is registered." As with any hard science, I require
> falsifiability -- tell me why that statement is wrong, not that I'm
> wrong to insist on it. I don't see how it amounts to falsification,
> to attribute that point to me, then declare that it's OK to disregard
> it. Is there really some alternate interpretation of REST besides
> Roy's?
>
>> As someone who follows issues surrounding computer security closely,
>> the distinction between intranet/internet is a convenient short-hand,
>> but doesn't actually exist in practice. Many "closed" systems send
>> their traffic over the internet, sometimes via VPN, sometimes not,
>> sometimes over HTTP, and sometimes not.
>>
> Of course. But for any "open" system attempting Internet scale by
> following REST on the Web, those payoffs are achieved by playing by
> the rules (registering in IANA), not looking for loopholes and
> expecting REST's promises to hold true.
>
>>> I'm only interested in helping folks with REST development for the
>>> "common case of the Web",
>>>
>> OK. If you're only interested in that, then why don't you let the
>> rest of us who are interested in other scenarios actually talk about
>> those other scenarios, without barging in and telling us that what
>> we're doing is wrong for a different use-case that you're interested
>> in?
>>
> I've only "barged in" to make this point, where the context has been
> "open" systems targeted at the Web. I've consistently stated that
> this is not a big deal otherwise, IOW, I have been limiting my
> comments to the "common case of the Web" use-case I'm interested in.
>
> When the rest of you start talking about how standardization of data
> types has nothing to do with the style, on the Web, then *somebody*
> has to point out that such systems are really some other style that's
> distinctly different from REST, otherwise the thread would be
> off-topic.
>
>> I'm specifically not interested in your use case, so I've actually
>> come to conclude that your responses are mostly worth ignoring.
>> Repeating them over and over with respect to your scenario won't
>> change my mind, whereas addressing my scenarios might.
>>
> If you're talking about using nonstandardized types with unregistered
> identifiers without using the Web, you'll have no worries from me --
> the only thing I've ever said in such a case is, why are you bothering
> with REST if you're not interested in any of the benefits from the
> style, and have no intention to follow its constraints? We'd both
> have each other on ignore.
>
> For those folks who do give a damn about Internet scale, it's
> essential to point out the difference between tightly-coupled
> library-based APIs and REST's uniform interface, because optimization
> within the model leads to real-world benefits which *are* the goal in
> such scenarios.
>
>> Really? Advice implies opinion.
>>
> My advice for those looking for Internet scale in their distributed
> hypermedia systems for the Web, i.e. the "common case," is to follow
> REST. That's an opinion. It is not an opinion, but rather an
> authoritative, normative reference, to point out that Roy's definition
> of the term self-descriptive requires the identifier to be registered.
>
>> If the question on the table applies to a different use case, can you
>> please discuss from that alternate perspective, or simply hold your
>> peace?
>>
> Where have I done otherwise? If you look back far enough in this
> thread, what touched me off was the guy from Microsoft being led to
> believe that nonstandardized, unregistered types are congruous with
> the REST style, when his context is the "common case of the Web."
>
> -Eric
Eric Johnson wrote:
>
> Alas, at least for me, the signal vs. noise threshold has now been
> crossed.
>

Yet again, my point gets dismissed without any explanation for how it
is that REST is OK with nonstandardized types with unregistered
identifiers, when everything I've read (and quoted) says just the
opposite, especially where the Web is concerned. Can *anybody* explain
that without making their answer about me?

-Eric
Eric:

<snip>
> Yet again, my point gets dismissed without any explanation for how it...
</snip>

Your point has not been dismissed - it _cannot_ be dismissed. Your
_repetition_ of your assertions is a problem. In a single thread [1]
starting just over a month ago you have written thousands of words in
hundreds of messages; all essentially saying the same thing, with
relatively little movement from most of the participants in that
thread. The thread has been renamed more than once and yet the same
material is re-hashed. Your passion now looks more like obsession, and
it reflects badly on both you and your argument.

I am asking you, as a courtesy to me, to stop discussing this topic
here. I am asking you, as a friend, to take this line of inquiry
somewhere else (a blog, another forum, IRC, etc.) and to end what has
become your unusual domination of the traffic on this list.

I am not asking you to change your point of view, to concede defeat in
any way, or to state your agreement to anyone else's opinion or
interpretation. I am just asking you to leave this alone. And I am
doing it as nicely as I can.

[1] http://tech.groups.yahoo.com/group/rest-discuss/message/16194

mca
http://amundsen.com/blog/
http://mamund.com/foaf.rdf#me

On Fri, Sep 10, 2010 at 16:11, Eric J. Bowman <eric@...> wrote:
> Eric Johnson wrote:
>>
>> Alas, at least for me, the signal vs. noise threshold has now been
>> crossed.
>>
>
> Yet again, my point gets dismissed without any explanation for how it
> is that REST is OK with nonstandardized types with unregistered
> identifiers, when everything I've read (and quoted) says just the
> opposite, especially where the Web is concerned. Can *anybody* explain
> that without making their answer about me?
>
> -Eric
OK, Bob, OK, Mike. My frustration is that nobody can even succinctly
explain to me what the argument against my position even *is* anymore.
Or rather, what is the rationale behind ignoring self-descriptive
messaging, defined as standardized types with registered identifiers?

I've given an example of my own use of application/xbel+xml on the Web
for the intended purpose (the exchange of hierarchical collections of
annotated links) of the standardized type it *allegedly* points to.

My contention is that this is not self-descriptive messaging, because
Roy states that the constraint requires the identifier to be
registered. It may become self-descriptive, if it's registered. But
currently, it is a REST mismatch, because it is not registered and does
not point to anything.

The other side of this debate, then, would be that it *is*
self-descriptive despite not being registered, because? Seriously.
I'd like to know the rationale behind *not* calling this a REST
mismatch.

Signing off,
Eric
> "In order to ensure that the set of such values is developed in an
> orderly, well-specified, and public manner, MIME sets up a
> registration process which uses the Internet Assigned Numbers
> Authority (IANA) as a central registry for MIME's various areas of
> extensibility. The registration process for these areas is
> described in a companion document, RFC 2048."

As a central registry. Not as "the" registry. If their intention was
to make a "unique" registry they wouldn't use this wording; a "central"
registry clearly means there can be other, non-central, peripheral (or
specialized, or private) registries. And it has to be like that
because there's no way to impose such uniqueness on the net. That
would be like trying to impose Google as the unique search engine on
the net.

Those "private" registries - which don't even have to be formal enough
to be called a "registry" - exist as soon as more than one party agrees
on them. And as soon as they agree on it, it becomes a standard for
those agreeing parties. That's what it means to be a standard. A
standard is an agreement between interested parties. As soon as you
agree with it, you know you have to play by the rules, with the
certainty that other agreeing parties will do so as well.

There's no such thing as a universal or ubiquitous standard. If you
live in the US and you think driving on the right side of the road is
standard, beware if you ever go to the UK and drive by that standard.

> It doesn't really matter if "there can only be one" because we only
> use one, and only one registry is specified and encouraged by all
> parties; moreover they are very clear on the reason why there is
> only one.

We? Who are "we"? Why do "we" use only one? If "we" are supposed to
use only one registry, why the heck does the RFC foresee the use of
unregistered types? For the other "we"s?

Again, a "registry" doesn't have to be a formal organization; it can be
anything that some agreeing parties agree it to be. Like a standard
does not have to be universal.

None of this, of course, means that non-IANA formats should be
encouraged when the IANA ones do the job.

--
Melhores cumprimentos / Beir beannacht / Best regards
António Manuel dos Santos Mota
Contacts: http://card.ly/amsmota

Disclaimer: The opinions expressed herein are just my opinions and they
are not necessarily right.
On Friday, September 10, 2010, Eric J. Bowman <eric@...> wrote:
> António Mota wrote:
>>
>> Indulge me Eric, can you point me to an "official" document, like an
>> RFC, an IETF doc, a W3C doc, where it says that regarding media-type
>> registries, "there can be only one" like that old movie...
>>
>
> RFC 2616 does not preclude some other registry being defined as an
> extension of RFC 2616. No other RFC exists, which defines any such
> alternate registry for the Web, however. So it's just a reflection of
> reality to state that only one registry exists on the Web.

There is no need for an RFC to exist, just that "participants in the
communication", "consenting systems" or "parties exchanging" agree on
it.

>> Specifically, since the RFC clearly states that there *are* IANA
>> unregistered media types, where is it stated that other parties
>> can't take the initiative of designing and maintaining a registry of
>> the media types they designed?
>>
>
> It isn't. If you wish to define such a registry, get it in the RFC
> for HTTP, otherwise its use will always be "discouraged", on the
> Internet or not. Then maybe the AWWW document will evolve to include
> it, instead of saying you SHOULD use the IANA registry for the Web.

Discouraged, yes; not forbidden. Discouraged because it would be bad
practice to define something already defined, or very similar to
something already defined. Not forbidden because different parties
could eventually agree that the situation exists for their needs.

But again, an RFC is not necessary to define such a registry. The
mutual agreement between "participants in the communication",
"consenting systems" or "parties exchanging" is enough.

> Otherwise, I don't see how any alternate registry is viable for HTTP,
> Web or not, as it wouldn't be defined and nobody would be encouraged
> to use it, if they'd even be aware of its existence. But, that's the
> process, and the entire Internet is a result of the RFC process.

It would be defined by those interested parties, and probably parties
with similar interests would be encouraged if they saw they could have
benefits from it. The Internet is a result of many different things,
for which RFCs are the foundation. That's why I don't need an RFC to
define a search engine, for instance.
On Fri, Sep 10, 2010 at 1:39 PM, Eric J. Bowman <eric@...> wrote:
> OK, Bob, OK, Mike. My frustration is that nobody can even succinctly
> explain to me what the argument against my position even *is* anymore.
> Or rather, what is the rationale behind ignoring self-descriptive
> messaging, defined as standardized types with registered identifiers?
>
> I've given an example of my own use of application/xbel+xml on the Web
> for the intended purpose (the exchange of hierarchical collections of
> annotated links) of the standardized type it *allegedly* points to.
>
> My contention is that this is not self-descriptive messaging, because
> Roy states that the constraint requires the identifier to be
> registered. It may become self-descriptive, if it's registered. But
> currently, it is a REST mismatch, because it is not registered and
> does not point to anything.
>
> The other side of this debate, then, would be that it *is*
> self-descriptive despite not being registered, because? Seriously.
> I'd like to know the rationale behind *not* calling this a REST
> mismatch.

The problem becomes that basically if you have a system that walks,
quacks, swims, flies and looks like a duck, then the fact that it
doesn't use a data type registered with a specific entity (rather than
one simply available and documented equally well some place else) means
it is not a duck. It's a non-duck, an un-duck, the Anti-duck.

This is equivalent to saying that if I write a client or server that
talks over, say, port 80, and sends all these headers and follows the
other details of the HTTP spec, but I happen to use data/mystuff as a
data type, a data type not registered with IANA, then I'm not using
HTTP. I can't say "yea, you can talk to my service via HTTP". Because
of this clause in the HTTP spec:

"Media-type values are registered with the Internet Assigned Number
Authority (IANA [19]). The media type registration process is outlined
in RFC 1590 [17]. Use of non-registered media types is discouraged."

Therefore I'm using... I dunno, the NHTTP, "Not HTTP", protocol. The
HTTP police will darken the skies around my office were I to dare
suggest that I was using HTTP(r)(c)(tm)(reg.us.pat.off.) protocol for
this process. Or maybe they'll simply come over with big sticks to
"discourage" me. Or, I guess, if I'm doing all of this sacrilege
behind a firewall, "well, you can call it whatever you want".

That's what this all seems to boil down to. If you use a data type
that's not IANA registered then, well, that's the ball game. It's
over. You can't call it REST. It's not 1% or 99% REST. It's unREST.
May as well file it under POX RPC over HTTP (oh wait, it's not HTTP
either...). Same thing. All that other stuff regarding the
architecture and the style is nothing compared to using a registered
data type.

2 O'Reilly books on REST, almost 800 pages. And all they talked about
was how to use registered data types from IANA. Wait, that's not right
either. How can that be? How can they write all that text when
they're clearly missing the most important aspect of REST? Sure, they
mention using standard payloads, but it's not like it's the cornerstone
of their work. They talk about all sorts of other things. Stuff folks
seem to have real issues grasping and have lots of questions on.
Obviously, that's all just padding and filler for the book.

Seems to me that if you're using a data type, and it's well and
publicly documented, that's likely more than acceptable to anyone
wishing to leverage and access the service. Seems odd that you can
have an entire system designed, coded, and in operation, and the only
thing keeping it from being "REST" is a, perhaps pending, registration
at IANA. And if and when the data type IS registered, shazam, from
zero to hero, from "POX RPC over (not)HTTP" to "REST".

That's where the "shades of gray" come into play. Why many do not feel
REST is a black and white tag. Why it's not all or nothing.
Especially over something as minor as where the documentation is
published, in contrast to everything else that goes into designing and
structuring a capable REST system.

Make no mistake, Eric, I enjoy your posts, your insight, and your point
of view. Your posts have given me a new look at how things can be
done. But when I saw this thread take off, I knew to stay away,
because I knew what it was all about without looking at it.

So, my argument is that, despite whatever the "letter of the law" may
be regarding where a data type is published and its "RESTiness", the
SPIRIT of having it publicly available and accessible trumps something
as pedantic as the specific site where this information is available.
I argue that a data type can be "self descriptive" as long as it's
registered SOMEWHERE, and not NECESSARILY at IANA.

For example, I do not see someone from the Healthcare IT industry
dropping the megabytes of data specifications on to IANA some day, and
I don't see a system trafficking in those payloads as any less "REST"
than an identical system that happens to use an IANA registered data
type. In the Healthcare domain, those data types are routine and
ubiquitous. If you're in HIT, you know how to work with them (as well
as anybody, anyway -- they are non-trivial documents).

Best Regards,
Will Hartung
(willh@...)
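A side note on Will's hypothetical `data/mystuff` type: it would actually fail one hurdle before any registry lookup, because `data` is not among the top-level types RFC 2046 defines (new top-level names require standardization, unlike subtypes). A small illustrative check, with the type list taken from RFC 2046:

```python
# Top-level media types defined by RFC 2046. Subtypes below these go
# through the IANA registration process; new top-level names do not get
# added by private agreement.
RFC2046_TOP_LEVEL = {
    "text", "image", "audio", "video", "application", "multipart", "message",
}

def top_level_ok(media_type):
    """Check only that the type's top level is one RFC 2046 defines."""
    top, sep, sub = media_type.partition("/")
    return bool(sep) and bool(sub) and top.strip().lower() in RFC2046_TOP_LEVEL

print(top_level_ok("data/mystuff"))          # False: "data" is not a top-level type
print(top_level_ok("application/xbel+xml"))  # True: unregistered subtype, valid top level
```

So even a conservative parser distinguishes "unregistered subtype under a known top level" (Eric's XBEL example) from "outside the media type grammar entirely" (the `data/mystuff` strawman).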
I will try to summarize this whole controversy on the parallel thread
entitled "To use registered media-types or not?":
http://tech.groups.yahoo.com/group/rest-discuss/message/16503

I hope after that we can put this issue to rest (no pun) for a while
and talk about something else. Thank you.
Attempted summary of this issue:

1. Everybody agrees that, on HTTP, using media types registered with
IANA is the best thing to do.

2. We have some disagreement about situations where no registered media
type fits well. Would it be ok to say that in those situations, the
best thing to do is to experiment with candidate media types, aiming to
register with IANA as soon as practical?

3. We do appear to have some people who think registration does not
matter much, that the main point of self-descriptive messages is that
they are self-describing to the participants and registered in some way
that is acceptable to the participants. Would it be correct to say
that in that case, you lose some of the benefits of REST, including
reach?

Please help to improve this summary, but please do not rerun the whole
repetitive argument. It has all been said. Really. You do not need
the last word.
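One hedged reading of point (2) in code: while a candidate type is still unregistered, serve it only to clients that explicitly ask for it, and default to a registered generic type for everyone else. The candidate name `application/vnd.example.bookmarks+xml` below is invented for illustration; this is a sketch of one possible policy, not anything proposed in the thread.

```python
CANDIDATE = "application/vnd.example.bookmarks+xml"  # hypothetical candidate type
FALLBACK = "application/xml"                         # IANA-registered generic type

def choose_content_type(accept_header):
    """Prefer the candidate type only when the client explicitly accepts it."""
    # Strip q-values and other parameters; we only compare type names here.
    accepted = {part.split(";")[0].strip() for part in accept_header.split(",")}
    return CANDIDATE if CANDIDATE in accepted else FALLBACK

print(choose_content_type("application/vnd.example.bookmarks+xml"))
print(choose_content_type("*/*"))
```

The design choice this sketches: early adopters opt in by name, while ordinary Web clients and intermediaries keep seeing a type they already understand, until registration makes the candidate type self-descriptive for everyone.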
On Fri, Sep 10, 2010 at 5:26 PM, Bob Haugen <bob.haugen@...> wrote:
> 3. We do appear to have some people who think registration does not
> matter much, that the main point of self-descriptive messages is that
> they are self-describing to the participants and registered in some
> way that is acceptable to the participants.
>
> Would it be correct to say that in that case, you lose some of the
> benefits of REST, including reach?

Registration is important. I would simply posit that #3 covers #1 as
well, and focus moves to what / who is an acceptable registrar(s) for
the parties involved.

Regards,
Will Hartung
(willh@...)
Hello!

I would add this to point (3):

On Fri, 2010-09-10 at 19:26 -0500, Bob Haugen wrote:
> 3. We do appear to have some people who think registration does not
> matter much, that the main point of self-descriptive messages is that
> they are self-describing to the participants and registered in some
> way that is acceptable to the participants.

"Other people are saying that 'the participants' does not only include
the two end-points of the communication, but also all intermediaries,
such as caches, load-balancers, etc. Thus - they argue - officially
registered media types are strongly preferable, since the creator of a
custom media type does not necessarily have control over the
intermediary participants in the conversation and therefore cannot
impart on them any potentially required special knowledge about the
custom media type."

How's that?

Juergen

--
Juergen Brendel
RESTx - the fastest and easiest way to create RESTful web services
http://restx.org
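The intermediaries argument above can be made concrete with a toy example: a proxy can only apply type-aware handling (here, deciding whether a response body is worth compressing) to media types it recognizes, and an unregistered custom type is opaque to it. Everything below is illustrative, not from any real proxy.

```python
# Media types this toy intermediary knows to be textual and compressible.
# A real proxy would consult a much larger, configurable table, seeded
# from the registered types it was built to understand.
KNOWN_TEXTUAL = {
    "text/html", "text/plain", "application/xml", "application/atom+xml",
}

def should_compress(content_type):
    """Decide compressibility from Content-Type alone, ignoring parameters."""
    media_type = content_type.split(";")[0].strip().lower()
    return media_type in KNOWN_TEXTUAL

# A registered, widely-deployed type gets the optimization; an unknown
# custom type is passed through untouched, even though it is really XML:
print(should_compress("application/atom+xml; charset=utf-8"))  # True
print(should_compress("application/x-acme-bookmarks+xml"))     # False
```

This is the sense in which the type's creator "cannot impart special knowledge" on intermediaries: the proxy's table was populated without them, so their type falls through to the opaque default.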
Juergen Brendel wrote:
>
> I would add this to point (3):
>
> Bob Haugen wrote:
> >
> > 3. We do appear to have some people who think registration does not
> > matter much, that the main point of self-descriptive messages is
> > that they are self-describing to the participants and registered in
> > some way that is acceptable to the participants.
>
> "Other people are saying that 'the participants' does not only include
> the two end-points of the communication, but also all intermediaries,
> such as caches, load-balancers, etc. Thus - they argue - officially
> registered media types are strongly preferable, since the creator of
> a custom media type does not necessarily have control over the
> intermediary participants in the conversation and therefore cannot
> impart on them any potentially required special knowledge about the
> custom media type."
>
> How's that?
>

Yes, thank you. My XBEL example, Mark's answer to Blinksale and
Glenn's situation at Microsoft, all share as context the "common case
of the Web", where the solution to this problem can't be to define-down
"participant" to mean only those parties that have agreed not to give a
hang about the IANA registry, because that isn't the "common case"
we're targeting with our APIs.

Given that I expect my XBEL to be understood as such by anybody and
everybody the world over (Internet scale), how can it be considered
self-descriptive without an IANA entry? This is not a rhetorical
question; if there is an answer to it which means I'm wrong about
having a REST mismatch until that identifier is registered (and that
Mark's advice to Blinksale was flat-out wrong), I'd like to know.

OK, *now* I think this discussion can go on fine without me, so I'll
shut up for real.

-Eric
On Fri, Sep 10, 2010 at 11:08 PM, Eric J. Bowman <eric@...> wrote:
> Juergen Brendel wrote:
>>
>> I would add this to point (3):
>>
>> Bob Haugen wrote:
>> >
>> > 3. We do appear to have some people who think registration does not
>> > matter much, that the main point of self-descriptive messages is
>> > that they are self-describing to the participants and registered in
>> > some way that is acceptable to the participants.
>>
>> "Other people are saying that 'the participants' does not only
>> include the two end-points of the communication, but also all
>> intermediaries, such as caches, load-balancers, etc. Thus - they
>> argue - officially registered media types are strongly preferable,
>> since the creator of a custom media type does not necessarily have
>> control over the intermediary participants in the conversation and
>> therefore cannot impart on them any potentially required special
>> knowledge about the custom media type."
>>
>> How's that?
>>
>
> Yes, thank you. My XBEL example, Mark's answer to Blinksale and
> Glenn's situation at Microsoft, all share as context the "common case
> of the Web" where the solution to this problem can't be to define-down
> "participant" to mean only those parties that have agreed to not give
> a hang about the IANA registry, because that isn't the "common case"
> we're targeting with our APIs.
>
> Given that I expect my XBEL to be understood as such by anybody and
> everybody the world-over (Internet scale), how can it be considered
> self-descriptive without an IANA entry? This is not a rhetorical
> question, if there is an answer to it which means I'm wrong about
> having a REST mismatch until that identifier is registered (and that
> Mark's advice to Blinksale was flat-out wrong), I'd like to know.

application/opensearchdescription+xml - they created a media type that
made sense and had valuable data to represent and nearly every
interested party has just adopted it.

application/rss+xml - you know.

the reality seems to be that people will adopt a representation - and
therefore gain the benefits of the uniform interface - based on the
merits of the format and the value of the data represented in that
format.

i wonder what system properties are negatively affected by using one of
those two formats?

> OK, *now* I think this discussion can go on fine without me, so I'll
> shut up for real.

Sorry for feeding the beast...

--tim
Tim Williams wrote: > > Eric J. Bowman wrote: > > Juergen Brendel wrote: > >> > >> I would add this to point (3): > >> > >> Bob Haugen wrote: > >> > > >> > 3. We do appear to have some people who think registration does > >> > not matter much, that the main point of self-descriptive > >> > messages is that they are self-describing to the participants > >> > and registered in some way that is acceptable to the > >> > participants. > >> > >> > >> "Other people are saying that 'the participants' does not only > >> include the two end-points of the communication, but also all > >> intermediaries, such as caches, load-balancers, etc. Thus - they > >> argue - officially registered media types are strongly preferable, > >> since the creator of the a custom media type does not necessarily > >> have control over the intermediary participants in the > >> conversation and therefore cannot impart on them any potentially > >> required special knowledge about the custom media type." > >> > >> How's that? > >> > > > > Yes, thank you. My XBEL example, Mark's answer to Blinksale and > > Glenn's situation at Microsoft, all share as context the "common > > case of the Web" where the solution to this problem can't be to > > define-down "participant" to mean only those parties that have > > agreed to not give a hang about the IANA registry, because that > > isn't the "common case" we're targeting with our APIs. > > > > Given that I expect my XBEL to be understood as such by anybody and > > everybody the world-over (Internet scale), how can it be considered > > self-descriptive without an IANA entry? This is not a rhetorical > > question, if there is an answer to it which means I'm wrong about > > having a REST mismatch until that identifier is registered (and that > > Mark's advice to Blinksale was flat-out wrong), I'd like to know. 
> > application/opensearchdescription+xml - they created a media type that > made sense and had valuable data to represent and nearly every > interested party has just adopted it. > > application/rss+xml - you know. > > the reality seems to be that people will adopt a representation - and > therefore gain the benefits of the uniform interface - based on the > merits of the format and the value of the data represented in that > format. > > i wonder what system properties are negatively effected by using one > of those two formats? > > > OK, *now* I think this discussion can go on fine without me, so I'll > > shutup for real. > > Sorry for feeding the beast... > No, you bring up valid points that I'm content to let others discuss. Why *does* HTTP discourage this practice, and AWWW recommend against it? Why *does* Roy state that self-descriptive=registered? The only thing I'd point out is that the list Roy maintains for Apache doesn't include any of these unregistered types, and sees significant re-use in other contexts, especially in outdated form. And, that if these identifiers are in common use, they SHOULD be registered, otherwise depending on them is like 'h' meaning HTML in Gopher -- unspecified, changing the nature of the API from network-based to library-based. IOW, evolution down a defined path via a registry, vs. ad-hoc evolution which was already tried with Gopher and led to HTTP re-using MIME. But it's a damn good point -- if (and I'm not agreeing this is the case) application/rss+xml gains the same uniform interface benefits as application/atom+xml despite not technically being part of the uniform interface as defined by REST, what does it matter? Or, do the benefits of the uniform interface accrue to all ubiquitous identifiers whether they're registered or not? 
I appreciate you bringing up a line of inquiry which, instead of concerning me, does exactly what I asked -- leads us towards a falsification of REST, in this case the self-descriptive messaging constraint, i.e. it's hard science. :-) -Eric
On Fri, Sep 10, 2010 at 11:56 PM, Eric J. Bowman <eric@...> wrote: > Tim Williams wrote: >> >> Eric J. Bowman wrote: >> > Juergen Brendel wrote: >> >> >> >> I would add this to point (3): >> >> >> >> Bob Haugen wrote: >> >> > >> >> > 3. We do appear to have some people who think registration does >> >> > not matter much, that the main point of self-descriptive >> >> > messages is that they are self-describing to the participants >> >> > and registered in some way that is acceptable to the >> >> > participants. >> >> >> >> >> >> "Other people are saying that 'the participants' does not only >> >> include the two end-points of the communication, but also all >> >> intermediaries, such as caches, load-balancers, etc. Thus - they >> >> argue - officially registered media types are strongly preferable, >> >> since the creator of the a custom media type does not necessarily >> >> have control over the intermediary participants in the >> >> conversation and therefore cannot impart on them any potentially >> >> required special knowledge about the custom media type." >> >> >> >> How's that? >> >> >> > >> > Yes, thank you. My XBEL example, Mark's answer to Blinksale and >> > Glenn's situation at Microsoft, all share as context the "common >> > case of the Web" where the solution to this problem can't be to >> > define-down "participant" to mean only those parties that have >> > agreed to not give a hang about the IANA registry, because that >> > isn't the "common case" we're targeting with our APIs. >> > >> > Given that I expect my XBEL to be understood as such by anybody and >> > everybody the world-over (Internet scale), how can it be considered >> > self-descriptive without an IANA entry? This is not a rhetorical >> > question, if there is an answer to it which means I'm wrong about >> > having a REST mismatch until that identifier is registered (and that >> > Mark's advice to Blinksale was flat-out wrong), I'd like to know. 
>> >> application/opensearchdescription+xml - they created a media type that >> made sense and had valuable data to represent and nearly every >> interested party has just adopted it. >> >> application/rss+xml - you know. >> >> the reality seems to be that people will adopt a representation - and >> therefore gain the benefits of the uniform interface - based on the >> merits of the format and the value of the data represented in that >> format. >> >> i wonder what system properties are negatively effected by using one >> of those two formats? >> >> > OK, *now* I think this discussion can go on fine without me, so I'll >> > shutup for real. >> >> Sorry for feeding the beast... >> > > No, you bring up valid points that I'm content to let others discuss. > Why *does* HTTP discourage this practice, and AWWW recommend against > it? Why *does* Roy state that self-descriptive=registered? The only > thing I'd point out is that the list Roy maintains for Apache doesn't > include any of these unregistered types It was fielding's commit r571339 that added application/rss+xml to the http daemon... his commit message: "Add extensions for types that are reasonably unique and discoverable on the Web." http://svn.apache.org/viewvc?view=revision&revision=571339 Not that an apache commit log is normative in any sense:) --tim
On Fri, Sep 10, 2010 at 11:35 PM, Tim Williams <williamstw@...> wrote: > On Fri, Sep 10, 2010 at 11:08 PM, Eric J. Bowman <eric@...> wrote: >> Juergen Brendel wrote: >>> >>> I would add this to point (3): >>> >>> Bob Haugen wrote: >>> > >>> > 3. We do appear to have some people who think registration does not >>> > matter much, that the main point of self-descriptive messages is >>> > that they are self-describing to the participants and registered in >>> > some way that is acceptable to the participants. >>> >>> >>> "Other people are saying that 'the participants' does not only include >>> the two end-points of the communication, but also all intermediaries, >>> such as caches, load-balancers, etc. Thus - they argue - officially >>> registered media types are strongly preferable, since the creator of >>> the a custom media type does not necessarily have control over the >>> intermediary participants in the conversation and therefore cannot >>> impart on them any potentially required special knowledge about the >>> custom media type." >>> >>> How's that? >>> >> >> Yes, thank you. My XBEL example, Mark's answer to Blinksale and Glenn's >> situation at Microsoft, all share as context the "common case of the >> Web" where the solution to this problem can't be to define-down >> "participant" to mean only those parties that have agreed to not give a >> hang about the IANA registry, because that isn't the "common case" we're >> targeting with our APIs. >> >> Given that I expect my XBEL to be understood as such by anybody and >> everybody the world-over (Internet scale), how can it be considered >> self-descriptive without an IANA entry? This is not a rhetorical >> question, if there is an answer to it which means I'm wrong about >> having a REST mismatch until that identifier is registered (and that >> Mark's advice to Blinksale was flat-out wrong), I'd like to know. 
> > application/opensearchdescription+xml - they created a media type that > made sense and had valuable data to represent and nearly every > interested party has just adopted it. > > application/rss+xml - you know. > > the reality seems to be that people will adopt a representation - and > therefore gain the benefits of the uniform interface - based on the > merits of the format and the value of the data represented in that > format. to expand a bit, i think a reality that's missing from your interpretation so far is that an architecture exists independent from its style. and, the implementation(reality) exists independent from the architecture. there must be a loop wherein the implementation informs evolved versions of the architecture and the style. there will be inconsistent points in time and that's ok. --tim
> > Eric J. Bowman wrote: > > Tim Williams wrote: > >> > >> Eric J. Bowman wrote: > >> > Juergen Brendel wrote: > >> >> > >> >> I would add this to point (3): > >> >> > >> >> Bob Haugen wrote: > >> >> > > >> >> > 3. We do appear to have some people who think registration > >> >> > does not matter much, that the main point of self-descriptive > >> >> > messages is that they are self-describing to the participants > >> >> > and registered in some way that is acceptable to the > >> >> > participants. > >> >> > >> >> > >> >> "Other people are saying that 'the participants' does not only > >> >> include the two end-points of the communication, but also all > >> >> intermediaries, such as caches, load-balancers, etc. Thus - they > >> >> argue - officially registered media types are strongly > >> >> preferable, since the creator of the a custom media type does > >> >> not necessarily have control over the intermediary participants > >> >> in the conversation and therefore cannot impart on them any > >> >> potentially required special knowledge about the custom media > >> >> type." > >> >> > >> >> How's that? > >> >> > >> > > >> > Yes, thank you. My XBEL example, Mark's answer to Blinksale and > >> > Glenn's situation at Microsoft, all share as context the "common > >> > case of the Web" where the solution to this problem can't be to > >> > define-down "participant" to mean only those parties that have > >> > agreed to not give a hang about the IANA registry, because that > >> > isn't the "common case" we're targeting with our APIs. > >> > > >> > Given that I expect my XBEL to be understood as such by anybody > >> > and everybody the world-over (Internet scale), how can it be > >> > considered self-descriptive without an IANA entry? 
This is not > >> > a rhetorical question, if there is an answer to it which means > >> > I'm wrong about having a REST mismatch until that identifier is > >> > registered (and that Mark's advice to Blinksale was flat-out > >> > wrong), I'd like to know. > >> > >> application/opensearchdescription+xml - they created a media type > >> that made sense and had valuable data to represent and nearly every > >> interested party has just adopted it. > >> > >> application/rss+xml - you know. > >> > >> the reality seems to be that people will adopt a representation - > >> and therefore gain the benefits of the uniform interface - based > >> on the merits of the format and the value of the data represented > >> in that format. > >> > >> i wonder what system properties are negatively effected by using > >> one of those two formats? > >> > >> > OK, *now* I think this discussion can go on fine without me, so > >> > I'll shutup for real. > >> > >> Sorry for feeding the beast... > >> > > > > No, you bring up valid points that I'm content to let others > > discuss. Why *does* HTTP discourage this practice, and AWWW > > recommend against it? Why *does* Roy state that > > self-descriptive=registered? The only thing I'd point out is that > > the list Roy maintains for Apache doesn't include any of these > > unregistered types > > It was fielding's commit r571339 that added application/rss+xml to the > http daemon... his commit message: > > "Add extensions for types that are reasonably unique and > discoverable on the Web." > > http://svn.apache.org/viewvc?view=revision&revision=571339 > > Not that an apache commit log is normative in any sense:) > OK, I stand corrected. 
If application/rss+xml is in browsers, Apache, and any intermediary filtering application/*+xml based on Roy's list, and if it's a safe bet to assume that it's also generally understood by caches, proxies, accelerators and so forth, the contention is that the identifier is self-descriptive for the common case of the Web, even if it isn't in IANA. I'll agree that it's *something*, "visible" perhaps, but the only way it fits the normative definition of the self-descriptiveness constraint is if that identifier is ultimately approved by IANA. In the case of application/vnd.blinksale.person+xml or application/xbel+xml, no such registration effort exists, and neither is ubiquitous enough to be considered as _visible_ as application/rss+xml. The relevance is that the normative definition of self-descriptiveness represents the fundamental distinction between a library-based API and a network-based API. Using 'h' to mean HTML in Gopher may gain all of Gopher's uniform interface benefits, but it still isn't fundamentally self-descriptive no matter how visible it may be. If registration is fundamental to self-descriptiveness, how pragmatic an architectural choice is this requirement, in practice? > > to expand a bit, i think a reality that's missing from your > interpretation so far is that an architecture exists independent from > its style. and, the implementation(reality) exists independent from > the architecture. there must be a loop wherein the implementation > informs evolved versions of the architecture and the style. there > will be inconsistent points in time and that's ok. > Governing that inconsistency to allow non-chaotic evolution with a registry seems like a sound architectural choice to me. Deciding to ignore the IANA registry on the Web and banking on uptake similar to RSS or OpenSearch seems like a decision not to use REST, in favor of a library-based API. But, what does this say about REST, vs. 
what this says about the deficiencies of HTTP's decision to re-use MIME? When the bulk of the REST community is ignoring IANA registration on the Web despite the constraint, does the problem lie with the style, or with Web architecture? Are we falsifying self-descriptiveness, or is the community sending a loud-and-clear message that it's time to replace IANA with something more flexible? -Eric
> > When the bulk of the REST community is ignoring IANA registration on > the Web despite the constraint, does the problem lie with the style, > or with Web architecture? Are we falsifying self-descriptiveness, or > is the community sending a loud-and-clear message that it's time to > replace IANA with something more flexible? > Because if that's the holdup, then *now* is the time to advocate for expanding the scope-of-work for HTTPbis to include pointing to some other RFC process which defines a registry folks will actually _use_. Or, falsify the REST constraint which sensibly requires a registry... -Eric
I'd just like to get a clarification on an issue as I am not quite sure of the consensus: If I register my media type under the vnd or prs sub-tree with IANA, and provide a link to a specification, is that media type then considered self-descriptive? Thanks, Darrel
Eric J. Bowman wrote: > Tim Williams wrote: >> application/opensearchdescription+xml >> >> application/rss+xml - you know. >> >> i wonder what system properties are negatively effected by using one >> of those two formats? >> > > No, you bring up valid points that I'm content to let others discuss. Perhaps it would be beneficial to look into why these specific media types are not IANA registered, considering both of them had/have applications [1],[2]. Indeed it would be very interesting to know, considering the exposure and usage they have at web scale, why they still remain unregistered. To hazard a guess myself, if you first consider application/rss+xml [1]: it aims to group all versions of RSS under a common media type, however some of those versions are actually application/rdf+xml with an ontology, others are application/xml with a doctype and still others are simply XML which probably does warrant its own media type registration (specifically versions 2.0+). RSS has been a PITA for years and afaik every RDF parser has to incorporate a sniffing algorithm (on the first 512 bytes) to figure out which version it is; ironically, registered or not, it does break the self-descriptive messaging constraint (afaict) and certainly has had widespread deployment problems to this day. As for application/opensearchdescription+xml, well, simply "This type is pending IANA registration." Should registration actually happen, then I guess this would be a good example of making a new media type, proving its worth, and then getting registered, and it explains why there is scope for unregistered types to be used on the web (although discouraged); if it was a MUST to use IANA registered types then this kind of evolution in the registry simply wouldn't happen. Best, Nathan [1] http://www.rssboard.org/rss-mime-type-application.txt [2] http://tools.ietf.org/html/draft-ellermann-opensearch-01
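[Editor's note: Nathan's sniffing point can be made concrete with a small sketch. This is a hypothetical illustration of why "application/rss+xml" alone pins down no processing model -- a consumer must peek at the first bytes to learn which RSS it has. The function name and heuristics below are illustrative, not taken from any actual RDF parser.]

```python
# Hedged sketch: disambiguating the RSS variants Nathan describes by
# sniffing the first 512 bytes, since the media type identifier alone
# does not say whether the payload is RDF/XML, DTD-flavoured XML, or
# plain XML. Heuristics are simplified for illustration.

def sniff_rss_variant(first_bytes: bytes) -> str:
    """Guess the RSS flavour from the opening chunk of a response body."""
    head = first_bytes[:512].decode("utf-8", errors="replace")
    if "http://www.w3.org/1999/02/22-rdf-syntax-ns#" in head:
        return "rss-1.0 (RDF/XML)"      # RSS 0.90/1.0 are RDF documents
    if 'version="2.0"' in head and "<rss" in head:
        return "rss-2.0 (plain XML)"
    if "<rss" in head:
        return "rss-0.9x (plain XML)"
    return "unknown"

sample = b'<?xml version="1.0"?>\n<rss version="2.0"><channel></channel></rss>'
print(sniff_rss_variant(sample))  # rss-2.0 (plain XML)
```

The need for this introspection step is exactly Nathan's argument that the type breaks self-descriptive messaging even if registered.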
I think it would also be useful to add a bit to this thread about "what is the point of self-descriptive messaging and standard media types" in the first place. It is not about bragging that you are RESTful. Each of the REST constraints brings specific benefits, and failing to follow a constraint loses at least some of those benefits. REST is aimed at global scale and reach, while (probably) most software is designed for a particular set of users and use cases. It is a different mind-set. Self-descriptive messages and standard media types are potentially usable by anybody anywhere using any software and for purposes that were not envisioned by the original author. For example, mashups, and all of the extensions, add-ons and plugins for popular services like Flickr, Twitter and Amazon. (Not that those are always models of RESTful design...) (Please feel free to improve...)
Hello! Hm. This email turned out a bit longer than I had intended... This entire discussion reminds me of the way people see Richard Stallman's stance on free software: Many people consider him extreme (since he's not willing to compromise at all), but his extreme (purist?) points of view and sometimes hard-to-digest statements certainly help to keep the discussion going and to communicate important issues. Eric, I don't know you personally, but I applaud your insistence on your points. Sometimes people who insist on details to the point where others begin to roll their eyes are necessary to eventually help us see things more clearly. Nevertheless, engineering (in the real world) is always about making the right compromises. I'm sure we can all agree on that. While various specs and dissertations tell us (or imply) that IANA types are preferred and others are discouraged, the reality still comes in shades of gray. If someone doesn't use an IANA registered type, I won't tell them that their system is not RESTful if all other constraints are met. It may be only 95% RESTful, but for all practical purposes, that's "good enough"! It's RESTful. I'm happy they are willing to put up with all the other REST constraints (which takes some effort) and give it a good shot. Everything can always be improved and maybe using IANA types would be an improvement. In the meantime, I applaud them for the effort. But to make the right compromises as engineers, we need to be informed. Your insistence on IANA types certainly has helped me (probably many of us) to consider the matter more closely. Maybe what we could say is that there is a series of degrees by which an architecture can be more and more RESTful. For example, the RMM ( http://martinfowler.com/articles/richardsonMaturityModel.html ) doesn't mention media types at all! Yet, it discusses how an architecture can come closer to the REST ideal, by focusing on some of the other constraints. 
I would say, let's agree that "some level" of RESTfulness is better than none. We can describe the ideal system, but should realize that in the real world the ideal will rarely be reached. And to be really useful, we could try to outline which issues we would have to deal with for every constraint we choose NOT to implement 100%. Tomorrow I will give a talk about REST to some people and in preparing this talk, I just realized again that every time you don't design your system in a RESTful manner you end up having to hack around the way the Internet works for you. It (the Internet infrastructure, tools, systems, etc.) tries to be helpful, but can't if you don't have a RESTful system. If your system is RESTful, it all tends to fall in place so nicely, it's always amazing to see, almost beautiful in a way. So, if we are insisting on various REST constraints, it might be good to fight that battle with technical arguments, rather than quotes from specs. Help me to make the right compromises! For example, if intermediaries on the Internet don't understand my media type, what could be some of the technical consequences/risks? That only needs to be stated once, clearly, and then the discussion can be over and done with. Because once it is communicated we have the knowledge to make informed decisions. The same analysis could be made for every constraint. Then, as an engineer, I can make an informed decision to implement a RESTful system (yes, still 'RESTful'), which ignores a particular constraint with open eyes, for a particular, technical reason I might have. You always hear that discussions about REST can take on a somewhat scary religious intensity. Let's avoid that. If I don't adhere to a constraint I won't burn in hell. I might have to deal with some technical issues eventually, but that's something I may choose to deal with. Quotes won't convince me, but technical arguments may under some circumstances. 
Juergen -- Juergen Brendel RESTx - the fastest and easiest way to create RESTful web services http://restx.org
Bob Haugen wrote: > > I think it would also be useful to add a bit to this thread about > "what is the point of self-descriptive messaging and standard media > types" in the first place. It is not about bragging that you are > RESTful. Each of the REST constraints brings specific benefits, and > failing to follow a constraint loses at least some of those benefits. > > REST is aimed at global scale and reach, while (probably) most > software is designed for a particular set of users and use cases. It > is a different mind-set. > > Self-descriptive messages and standard media types are potentially > usable by anybody anywhere using any software and for purposes that > were not envisioned by the original author. For example, mashups, and > all of the extensions, add-ons and plugins for popular services like > Flickr, Twitter and Amazon. (Not that those are always models of > RESTful design...) > > (Please feel free to improve...) > Yes, thank you -- or consider Google's re-use of application/pdf: without even asking, they'll create variant resources for you that return HTML representations. That won't happen with an unregistered identifier pointing to an application-specific format, but Google does a whole lot of interesting stuff with various ubiquitous types. -Eric
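[Editor's note: Eric's Google/application-pdf example can be sketched in a few lines. This is a toy illustration -- not Google's actual pipeline -- of why ubiquitous registered identifiers matter to third parties: an intermediary can only add value for types it already recognizes, and application-specific unregistered types fall through as opaque bytes.]

```python
# Toy sketch (hypothetical handler table, illustrative names): a
# third-party service adds value -- HTML previews, thumbnails, feed
# indexing -- only for media types it recognizes. Anything else is
# passed through as opaque data.

KNOWN_TRANSFORMS = {
    "application/pdf": "render HTML preview",
    "image/jpeg": "generate thumbnail",
    "application/atom+xml": "index feed entries",
}

def intermediary_action(content_type: str) -> str:
    """Decide what a generic intermediary can do with a response."""
    base = content_type.split(";")[0].strip().lower()  # drop parameters
    return KNOWN_TRANSFORMS.get(base, "pass through untouched (opaque)")

print(intermediary_action("application/pdf"))
print(intermediary_action("application/vnd.blinksale.person+xml"))
```

The second call shows the cost of an unregistered, application-specific identifier: no serendipitous re-use by parties the author never envisioned.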
Darrel Miller wrote: > > I'd just like to get a clarification on an issue as I am not quite > sure of the consensus: > > If I register my media type under the vnd or prs sub-tree with IANA, > and provide a link to a specification, is that media type then > considered self-descriptive. > Maybe. That's just a minimum requirement. REST requires the identified type to be standardized, too. Which begs the question, "What is a standard?" Consider the syntax of application/xbel+xml -- that syntax is defined by an RFC process, so it screams "IANA-approved standards-tree identifier" to the world. Getting it approved would require either convincing IANA to recognize the Python Working Group as an approved standards body for that tree, or marshalling an RFC process around XBEL. So maybe I shouldn't set my sights so high, and consider something else, perhaps as an interim strategy -- application/prs.xbel+xml has a much lower bar for approval in a much shorter timeframe. Once approved, it would be self-descriptive, in that it's registered and points to an approved standard by *some* standards body, just not one recognized for standards-tree identifiers -- no big deal to REST since that's just an implementation detail. If I have no intention of ever registering an identifier, then I should use application/x-xbel+xml so as not to cause any confusion with the standards-tree syntax, or indicate that I'm not using IANA at all by adopting some other syntax, i.e. jabberwocky/xbel, which won't get confused with the RFC-governed namespace. -Eric
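[Editor's note: the registration-tree distinctions Eric walks through -- standards tree, vnd., prs., and the unregistered x- prefix -- follow the facet syntax of the media type registration RFCs. A minimal classifier sketch, with illustrative names of my own:]

```python
# Hedged sketch of media-type registration trees (per the facet syntax
# in the IANA registration procedure RFCs): the tree is read off the
# first dotted facet of the subtype. Not a full media-type parser.

def registration_tree(media_type: str) -> str:
    """Classify a media type identifier by its registration tree."""
    _type, _, subtype = media_type.partition("/")
    subtype = subtype.split(";")[0].strip()   # drop any parameters
    if subtype.startswith(("x-", "x.")):
        return "unregistered (x- prefix)"
    facet = subtype.split(".")[0]
    if facet == "vnd":
        return "vendor tree"
    if facet == "prs":
        return "personal tree"
    return "standards tree"

for mt in ("application/xbel+xml",             # would need an approved standard
           "application/prs.xbel+xml",         # lower bar, shorter timeframe
           "application/vnd.blinksale.person+xml",
           "application/x-xbel+xml"):
    print(mt, "->", registration_tree(mt))
```

This is why an identifier spelled `application/xbel+xml` "screams standards tree": its syntax claims the most demanding registration path, which is Eric's argument for falling back to prs. or x- in the interim.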
Nathan wrote: > > Perhaps it would be beneficial to look into why these specific media > types are not IANA registered, considering both of them had/have > applications [1],[2]. Indeed it would be very interesting to know, > considering the exposure and usage they have at web scale, why > they still remain unregistered. > > To hazard a guess myself, if you first consider application/rss+xml > [1] it aims to group all versions of RSS under a common media type, > however some of those versions are actually application/rdf+xml with > an ontology, others are application/xml with a doctype and still > others are simply XML which probably does warrant its own media type > registration (specifically versions 2.0+). RSS has been a PITA for > years and afaik every RDF parser has to incorporate a sniffing > algorithm (on the first 512 bytes) to figure out which version it is, > ironically, registered or not, it does break the self-descriptive > messaging constraint (afaict) and certainly has had widespread > deployment problems to this day. > Right -- the identifier doesn't begin to declare an explicit processing model, the content must still be introspected, so it won't be self-descriptive even if it is registered, but I'm not sure this is why it hasn't been approved -- it's an application to the standards tree which refers to a whole slew of specs from rssboard.org, which is not an IANA-sanctioned standards body where the standards tree is concerned. I'm sure there's a procedure for applying for recognition, for rssboard and Python WG, but I'm darned if I can find it. > > As for application/opensearchdescription+xml, well simply "This type > is pending IANA registration." 
and should registration actually > happen, then I guess this would be a good example of making a new > media type, proving its worth, and then getting registered and > explain why there is scope for unregistered types to be used on the > web (although discouraged), if it was a MUST to use IANA registered > types then this kind of evolution in the registry simply wouldn't > happen. > At least this uses the right process, i.e. there's an RFC, so a standards-tree identifier for opensearch does at least stand a chance at being registered. But, what happens if it's rejected? Will it ever be self-descriptive? Isn't that a risk to be aware of? The possibility of your identifier being rejected *has* to be part of your decision-making process when considering re-use vs. minting new. I've been trying to avoid stating my position as REST says you MUST use IANA-registered types, because that would preclude evolution of new types, which is exactly what the mechanism of a registry is intended to allow. I'm sure I've misstated that, so to borrow a phrase from Obama, "let me be clear:" REST says identifiers MUST be IANA-registered on the Web in order to be self-descriptive (Roy says self-descriptive=registered, and it's a fact that IANA is the only registry for the Web). This enforces the agreement to use a specified registry of defined identifiers, a key feature of any network-based API. I've been trying to oppose the notion that it is not necessary to even consider IANA-registering newly-minted identifiers, but still OK to call them self-descriptive. Pending approval, application/opensearchdescription+xml doesn't meet the constraint. After approval, it does. This may seem like a rigid, PITA position, but I see it as necessary to promote re-use. Is it *really* worth the tradeoff to define a new media type if, after all that trouble and effort, IANA rejects your identifier for one reason or another? 
Compared to using something that's already ubiquitous, even if it isn't the most-efficient choice? Is it *really* possible to define a new media type with its own processing model over morning coffee? Or is that evidence in and of itself that it's application-specific? Every useful, standardized type I can think of took _years_ to develop, and I don't see a problem with recognizing this reality by promoting the orderly, considered evolution of new media types as opposed to an ad-hoc free-for-all (which wouldn't require a registry to encourage). IOW, I think the IANA registry is *supposed* to discourage the willy-nilly minting of new identifiers. But I'm also willing to entertain the notion that it's too strict, to the point of stifling innovation. -Eric
Juergen Brendel wrote: > > Eric, I don't know you personally, but I applaud your insistence on > your points. Sometimes people who insist on details to the point > where others begin to roll their eyes are necessary to eventually > help us see things more clearly. > Just defending my work. I've posted my /date service example many times over the past few years: http://en.wiski.org/date?iso=2010-09-12 The most-prevalent feedback is that it would be more RESTful if, instead of HTML, I were to define an application-specific media type with an unregistered identifier. This is how it goes with all my HTML examples, so what good options am I left with? Digging in at the expense of people getting pissed off at me (right up to, and even a little past, the limits of my own sanity), seemed the best of a whole passel of bad alternatives. Should I stop giving advice on rest-discuss to avoid being mocked for suggesting not only using HTML, but that there are quantifiable advantages to doing so? Following the predominant advice being given would require me to write a ton of out-of-band documentation I otherwise wouldn't have to bother with, after massively increasing time-to-market for the service due to the development of new types. Really? For /date? It doesn't *do* anything, yet it would be more RESTful if only I'd make it 10x more complicated? So yeah, I thought this was an important point to raise. Whatever happened to the simplicity of being able to debug most REST systems using common tools like browsers, instead of requiring custom clients? I can debug /date in a browser, even though it isn't a "web site." > > Nevertheless, engineering (in the real world) is always about making > the right compromises. I'm sure we can all agree on that. > Absolutely! But the feedback on /date advises me to make the wrong compromise -- REST trades away efficiency in favor of scale; not re-using HTML would be trading away scale for efficiency. 
We first need to agree that each alternative involves such a tradeoff, before we can discuss the implications of the alternatives. My use of XBEL instead of HTML is sensible. HTML is perfectly capable of expressing a hierarchical collection of annotated links, but no identifier for HTML can instruct browsers to ask users if they'd like to import that collection as bookmarks. Until a registered identifier exists for XBEL, its use is limited to pointing a browser at a file known to be XBEL from an in-chrome configuration, which isn't the hypertext constraint at all, so Content-Type doesn't matter, which is why everyone uses application/xml for XBEL (ugh). So the deliberate choice to be NOT REST with my bookmarks (for a while, anyway) is a valid decision because there is currently no RESTful way to accomplish my goals, and therefore no benefit to my system from the re-use of _anything_ with a ubiquitous identifier for my bookmarks. Only a new self-descriptive identifier will do, which means I need to IANA-register *something* which points to XBEL, even if it's in prs. to start with. Otherwise I can only do what my XSLT code is doing now -- consuming it as XML -- and will never be able to instruct a browser to ask users to import it using hypertext controls. I may never reach that goal even with registration, but it's certainly the required place to start. > > While various specs and dissertations tell us (or imply) that IANA > types are preferred and others are discouraged, the reality still > comes in shades of gray. If someone doesn't use an IANA registered > type, I won't tell them that their system is not RESTful if all other > constraints are met. It may be only 95% RESTful, but for all > practical purposes, that's "good enough"! It's RESTful. I'm happy > they are willing to put up with all the other REST constraints (which > takes some effort) and give it a good shot. Everything can always be > improved and maybe using IANA types would be an improvement. 
In the > meantime, I applaud them for the effort. > I sort-of agree. If the identifier used looks like it's likely to be registered, and such registration is pending, then pointing out that it isn't self-descriptive *yet* is nit-picking. But if there's no intention to ever register the identifier and the context is the Web, it *is* important to identify that as a mismatch, if REST is to have any utility at all as a guide to the long-term development of the system. Also, what are the stated goals of the system, how important to those goals is the violated constraint, and is it really good enough or is this an assumption that can't be made outside of the style, where results aren't time-tested knowns, like they mostly are within the style? What I'm driving at, is that it's a combination of all the constraints of a uniform interface which yield optimum results on the Web. Leaving one or two out means you can't expect to get the optimum results on the Web, which are known to result from implementing *all* the constraints. In which case, what yardstick are you measuring with? I'm out there every day violating the self-descriptive messaging constraint with one minor part of my system. So that one mismatch isn't going to bother me at all, it'll work itself out eventually, at which time, if it's widely adopted, I will get some payoff for all that time spent at "95% REST" because "100% REST" wasn't possible without a new identifier, but will be, then. But, if my system were dedicated to the exchange of bookmarks between people and their various browsers, or with other people, IOW were built around XBEL, the system wouldn't be "95% REST," it'd be more like "5% REST," and that would remain the case for a considerable amount of time pending IANA approval, at which point that score starts to increase depending on uptake. Making the registration of an identifier what, 90% more important for such a system as compared to mine? ;-) Maybe that's by design, for our own good... 
> > Maybe what we could say is that there is a series of degrees by which > an architecture can be more and more RESTful. For example, the RMM > ( http://martinfowler.com/articles/richardsonMaturityModel.html ) > doesn't mention media types at all! Yet, it discusses how an > architecture can come closer to the REST ideal, by focusing on some of > the other constraints. > Re-reading Roy leads me to believe that there are degrees of RESTfulness between systems that implement *all* the constraints, and I believe we should focus on that, rather than defining how RESTful it is to fail one or more constraints. You can implement all the constraints and *still* fail to be very RESTful if nobody else implements your processing model. You can lead a horse to water, but you can't make it drink. With a registered identifier, it becomes possible for the existing Web-based bookmark-exchange services to make importing an XBEL collection of links into the browser as bookmarks a hypertext operation. It also becomes possible for Google to recognize collections of bookmarks for exactly what they are, and re-use that knowledge to their advantage, usually with reciprocal benefit to the producer. Or it could be ignored... The benefits of uptake accrue because the uniform interface is what makes serendipitous re-use and anarchic scalability possible to begin with. These benefits aren't guaranteed to accrue with application/rss+xml because it doesn't explicitly define a processing model, only narrows down the possibilities, even if it's approved, regardless of uptake -- it isn't self-descriptive if it requires introspection. It looks to me like RSS would've been better off choosing, long ago, to use the IETF RFC process... maybe this is why IANA is picky about recognizing standards bodies for standards-tree identifiers. > > I would say, let's agree that "some level" of RESTfulness is better > than none.
We can describe the ideal system, but should realize that > in the real world the ideal will rarely be reached. And to be really > useful, we could try to outline which issues we would have to deal > with for every constraint we chose NOT to implement 100%. > Actually, there can't be any "fully RESTful" system out there until some successor protocol to HTTP comes along. Chapter 6 explains some inherent REST mismatches in the existing architecture that we all have to accept for the time being. They're important to understand, in terms of using REST as a tool to guide long-term development of systems. I would say that a minimal level of RESTfulness would be a network-based API instead of a library-based API, IOW, a uniform interface is more important than meeting cache or layered-system constraints -- or even, in this day and age, the client-server constraint. REST prior to Chapter 5 explains all the concepts being re-used, then Chapter 5 explains that the key feature of REST, the uniform interface, is based on the principle of generality (i.e. re-use), then Chapter 6 discusses the benefits and consequences of the re-use of MIME by HTTP, as well as other decisions about re-use in practice. So it makes sense to me to focus on re-use of standardized what-have-yous as an inescapable, fundamental aspect of the style. Or, put another way: focus on the uniform interface constraints; they're harder to fix down the road if they're gotten wrong initially, unlike caching, which you'll likely be fiddling with all the time anyway. > > Tomorrow I will give a talk about REST to some people and in preparing > this talk, I just realized again that every time you don't design your > system in a RESTful manner you end up having to hack around the way > the Internet works for you. It (the Internet infrastructure, tools, > systems, etc.) tries to be helpful, but can't if you don't have a > RESTful system.
If your system is RESTful, it all tends to fall in > place so nicely, it's always amazing to see, almost beautiful in a > way. > Exactly. My self-interest as somebody who's decided to specialize in REST architecture is to be able to teach the style through positive reinforcement by pointing to a variety of big-corporation REST APIs and explaining why their design choices are *correct*. If just one of these corporations would actually give by-the-thesis REST a try for once, I believe they'd see an amazing and beautiful benefit to their bottom line, and actually consider the benefits of REST as an architectural style instead of just as a buzzword, and evangelize accordingly... I can dream, can't I? ;-) > > So, if we are insisting on various REST constraints, it might be good > to fight that battle with the technical arguments, rather than quotes > from specs. > That's what I thought I was doing by presenting arguments in Gopher, to break us out of the mindset of MIME types with IANA-syntax identifiers, in order to explain the difference between library-based and network-based APIs, by drilling down to the fundamental technical essence of any uniform interface distributed hypertext application protocol. The nut of the problem is the resource/representation dichotomy from which all else flows. Some mechanism is required to send an identifier external to the payload, expressing the sender's intended processing model, because this mechanism is what decouples representation from resource. The implementation of this mechanism is perhaps the key distinction between uniform interface styles and all other styles with a notion of resource vs. representation. When an identifier is encountered, how do we determine the shared understanding of what processing model it points to? If the answer is to look up its normative reference in a spec, then it's prima facie evidence we're dealing with a network-based API.
OTOH, if we have to make a case-by-case determination like we did by searching for application/rss+xml in a DB of Apache code-commits, it's prima facie evidence we're dealing with a library-based API. You can decouple resource from representation easily enough in practice, but *how* you go about it makes all the difference between a uniform interface network-based API (REST, Gopher or other style) and a library-based API which misses the mark completely. In Gopher (sorry to quote a spec, but it's legitimate to understand REST by comparing and contrasting known specs which instantiate the resource/representation dichotomy even if they don't call it that), if 'h'=HTML came between 'g'=GIF and 'i'=plaintext, then it would also be a self-descriptive identifier, one required distinction of any network-based API (extensible registry like HTTP, or baked-in like Gopher). Instead, in order to discover the shared understanding of 'h'=HTML, we need to go digging through the code libraries of daemons and browsers which implement the Gopher protocol. That *use* of Gopher constitutes a library-based API, whereas using Gopher's self-descriptive identifiers constitutes a uniform interface. The failure of Gopher to allow the evolution of new identifiers/types without versioning the spec is a strong argument in favor of REST/HTTP requiring a registry. Having a registry is a requirement of a RESTful uniform interface, not of network-based APIs in general. > > The same analysis could be made for every constraint. Then, as an > engineer, I can make an informed decision to implement a RESTful > system (yes, still 'RESTful'), which ignores a particular constraint > with open eyes, for a particular, technical reason I might have. > Or even within the same constraint. Another key distinction between a library-based API and a network-based API is the use of standardized methods. Gopher has no method other than the general retrieval method. HTTP has evolved to include...
the IANA HTTP Method Registry! This conversation has crossed over into discussion of nonstandardized methods. No, they aren't RESTful, and no, I don't wonder why they can't be, not on the Web. All the same arguments apply -- if I can look up your method name in a registry, your network-based API messaging is self-descriptive. If I need to go digging through some codebase to determine what the shared understanding of your method name might be, it's a library-based API which is fundamentally opposed to any uniform interface style. Using, of course, concepts and definitions of terms from Roy's thesis, but ad-libbing to avoid having to quote specs extensively to back up what shouldn't be a controversial position. Conversely, I believe that REST no more encourages willy-nilly creation of new data types with IANA-unregistered identifiers than it encourages request method = WINGIT. There's a fundamental difference between library-based and network-based APIs, only one of which may be remotely considered a "uniform" interface based on the principle of generality (Gopher section 4 makes no mention of said principle, but the spirit of what's being said is exactly the same). > > Help me to make the right compromises! For example, if intermediaries > on the Internet don't understand my media type, what could be some of > the technical consequences/risks? That only needs to be stated once, > clearly, and then the discussion can be over and done with. Because > once it is communicated we have the knowledge to make informed > decisions. > I wish the simple takeaway here could be to not look for loopholes like "discouraged" in the specs. The strongest technical argument I have ever been able to make as regards the configuration of any Internet protocol for any purpose is that you can't operate outside the RFCs and yet still _expect_ interoperability.
If you aren't doing the things which are required to achieve the expected results, you can't expect those results to be achieved. It's this lack of benefit that's more important than any discussion of the consequences, which by necessity spreads out into all sorts of different intermediary behaviors -- and we already went down that road in this discussion. If you're choosing to ignore a constraint, you ought to be able to articulate the perceived benefit to your system, like I did above with XBEL, instead of requiring a detailed list of all the things that could go wrong -- which nobody can possibly know, but the more experienced amongst us know are lurking everywhere (see how quickly http-wg and TAG discussions stipulate to not having any idea *what* intermediaries may be up to these days). > > You always hear that discussions about REST can take on a somewhat > scary religious intensity. Let's avoid that. If I don't adhere to a > constraint I won't burn in hell. I might have to deal with some > technical issues eventually, but that's something I may choose to deal > with. Quotes won't convince me, but technical arguments may under some > circumstances. > You can't operate outside the uniform interface and expect to have any yardstick by which to judge your system's performance. Any uniform interface network-based API is based on not going outside of what's been explicitly defined as compatible, unless there's compelling need to add something new. Behavior outside the uniform interface is undefined -- you can't reliably benchmark your caching decisions over time without a registered identifier, because you can't be certain your messages aren't being ignored by the majority of caches. You can't test your system against the Internet, you can only model it against what's known to work, and I promise you that everything that's known to work has been standardized.
Which doesn't preclude new standards from evolving, of course -- just sayin' that the proof that something *is* known to work is ubiquitous uptake, i.e. standardization, which is why standardization is so central to REST. Maybe the extent to which I'll go to make a point about standardization is scary, but I believe my insistence on it is technically sound, and has everything to do with following RFCs for 17 years, rather than being my attempt at shamanism. :-) -Eric
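Eric's Gopher contrast above can be sketched in code: the item types defined in RFC 1436 are a baked-in, non-extensible "registry" whose shared understanding lives in the spec, while de facto types like 'h' live only in implementations. The mapping below mirrors the types as the thread discusses them; treat it as an illustrative sketch, not an exhaustive table.

```python
# Gopher menu-line item types: where does the shared understanding live?
# '0', '1', 'g' are defined in RFC 1436; 'h' (HTML) and 'i' (inline
# text) are de facto extensions found only in implementations -- which
# is exactly the spec-vs-code-library distinction made above.
RFC1436_TYPES = {"0": "text file", "1": "menu", "g": "GIF image"}
DE_FACTO_TYPES = {"h": "HTML", "i": "inline text"}

def classify(menu_line: str) -> str:
    """Return where a client must look to understand an item type."""
    item_type = menu_line[0]
    if item_type in RFC1436_TYPES:
        return "spec"   # self-descriptive: look it up in RFC 1436
    if item_type in DE_FACTO_TYPES:
        return "code"   # library-based: dig through daemon/browser sources
    return "unknown"
```

A client encountering 'h' has no normative reference to consult, so interoperability depends on which code libraries happen to agree, which is the thread's definition of a library-based API.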
On Sun, Sep 12, 2010 at 10:58 PM, Eric J. Bowman <eric@...> wrote: > Darrel Miller wrote: >> >> I'd just like to get a clarification on an issue as I am not quite >> sure of the consensus: >> >> If I register my media type under the vnd or prs sub-tree with IANA, >> and provide a link to a specification, is that media type then >> considered self-descriptive. >> > > Maybe. That's just a minimum requirement. REST requires the identified > type to be standardized, too. It's not that clear. The dissertation says "standard methods and representations" but I understand this[1] as Roy clarifying that he meant specified not necessarily standardized. It seems far-reaching to me that an architectural style could go so far as to constrain the governance model of formats. The goal is shared understanding, which can be effectively achieved regardless of how the format was crafted (this is the same as the ubiquitous types line of reasoning too:). The original question is odd though.
"Media Type's" aren't self-descriptive - messages are, and media types are one component of self-descriptiveness: o) "interaction is stateless between requests" o) "standard methods and media types are used to indicate semantics and exchange information" o) "responses explicitly indicate cacheability" o) "the type is registered and the registry points to a specification and the specification explains how to process the data according to its intent" ** To answer the original question, the vnd/prs trees can lead to shared understanding and satisfy that part of self-descriptiveness. --tim ** not in the dissertation but clarified by Roy via mailing list[1] [1] -http://tech.groups.yahoo.com/group/rest-discuss/message/6594
On Mon, Sep 13, 2010 at 7:51 AM, Tim Williams <williamstw@...> wrote: > > The original question is odd though. "Media Type's" aren't > self-descriptive - messages are, and media types are one component of > self-descriptiveness: I agree, it was poor phrasing on my part. I intended to say something more like "would the use of such a media type be compatible with self-descriptive messages". Darrel
On Mon, Sep 13, 2010 at 4:51 AM, Tim Williams <williamstw@...> wrote: > > It's not that clear. The dissertation says "standard methods and > representations" but I understand this[1] as Roy clarifying that he > meant specified not necessarily standardized. It seems far-reaching > to me that an architectural style could go so far as to constrain the > governance model of formats. The goal is shared understanding, which > can be effectively achieved regardless of how the format was crafted > (this is the same as the ubiquitous types line of reasoning too:). > *snip* > > o) "the type is registered and the registry points to a specification > and the specification explains how to process the data according to > its intent" ** > > To answer the original question, the vnd/prs trees can lead to shared > understanding and satisfy that part of self-descriptiveness. > > [1] http://tech.groups.yahoo.com/group/rest-discuss/message/6594 That seems a much more "common sense" approach to the issue of registration. It needs to be registered, somewhere, and that registry needs to point to a specification. But there are a lot of registries, and a lot of standards bodies, used in all sorts of domains. IANA will register anything in the vendor/personal tree. And IANA doesn't maintain or assert any validity of these registrations. There's an example of an IANA-registered text type that apparently has no surviving specifications. Both links associated with the media type are dead. So, even if something is up on IANA, it can well be an incomplete registration, and IANA doesn't seem to have any vetting or maintenance process to prevent that from happening. Now, "the Web" is often referred to, and tied to IANA. If "the Web" means standards that Web Browser creators use, and standard HTTP servers use when targeting Web Browser clients, then perhaps it is fair to consider IETF, via IANA, authoritative for that domain.
But REST is not "the Web", HTTP servers can (and do) serve clients other than web browsers. And other domains and industries will be leveraging HTTP servers, and REST(-like?) architectures, and they will be negotiating and standardizing their traffic through organizations and working groups other than IANA and IETF. So, either the definition of REST needs to be clarified, or relaxed, on this point, or there needs to be some new term to describe these other systems. Regards, Will Hartung (willh@...)
Tim Williams wrote: > > Eric J. Bowman wrote: > > Darrel Miller wrote: > >> > >> I'd just like to get a clarification on an issue as I am not quite > >> sure of the consensus: > >> > >> If I register my media type under the vnd or prs sub-tree with > >> IANA, and provide a link to a specification, is that media type > >> then considered self-descriptive. > >> > > > > Maybe. That's just a minimum requirement. REST requires the > > identified type to be standardized, too. > > It's not that clear. The dissertation says "standard methods and > representations" but I understand this[1] as Roy clarifying that he > meant specified not necessarily standardized. > The wording of my answer was off, as well. Being self-descriptive, i.e. registered, is the minimum requirement for an identifier. REST's uniform interface benefits are tied to uptake, i.e. standardization, of the identified media type. Standardization doesn't imply "standard" to me; it implies that an open process is being followed, or is intended to be followed. What I get from Roy's statement is that the preferred process for developing a new media type is to register an identifier and start using it, then, if it becomes popular enough to justify the effort, get it accepted as a standard by some standards body or another. If a new media type doesn't generate enough interest to become a standard, the benefits of a uniform interface won't be realized by systems which depend on it. "We could try to standardize something like what I describe above, but it would require multiple independent implementations and a lot more free time than it probably deserves." http://tech.groups.yahoo.com/group/rest-discuss/message/15819 The problem with using 'MIC' as a request header is that it isn't in the IANA registry that's been defined for those, so it isn't self-descriptive.
The question then becomes, where Web architecture is concerned, how to get such a header registered -- does IANA require an approved standards body to sign off on 'MIC'? If so, then yeah, those standards bodies require multiple independent implementations first... On the Web, there is a defined set of rules for registering new media type identifiers, headers, link relations and request methods as part of the network-based uniform interface. The fact that identifiers only need to be registered to be self-descriptive doesn't mean that nonstandard methods, link relations or headers are allowed to be registered, so it doesn't mean that standardization isn't important to the overall definition of self-descriptive messaging. Where media type identifiers are concerned, on the Web, self-descriptive means IANA-registered. I don't care if you're using an unregistered identifier that's pending registration, provided you're aware of the consequences of that registration being rejected, and provided you're aware that such registration is a requirement of self-descriptiveness. > > It seems far-reaching to me that an architectural style could go so > far as to constrain the governance model of formats. The goal is > shared understanding, which can be effectively achieved regardless of > how the format was crafted (this is the same as the ubiquitous types > line of reasoning too:). > The benefit of any uniform interface network-based API comes from the shared understanding being documented for all to see, instead of hidden inside a code library. In meta-style terms, there's no requirement for a registry. REST requires a registry, not a governance model. The Web architecture is what constrains the governance model to IANA. 
Self-descriptive identifiers have nothing to do with a registry or IANA or MIME in Gopher, which is why I keep bringing up Gopher -- it isn't REST but it is a uniform interface network-based API if and only if the shared understanding of identifiers is contained within the spec, instead of code libraries. "That doesn't sound like a problem encountered by RESTful architectures. Reliable upload of multiple files can be performed using a single zip file, but the assumption being made here is that the client has a shared understanding of what the server is intending to do with those files. That's coupling." http://tech.groups.yahoo.com/group/rest-discuss/message/15797 The problem with two organizations sharing an understanding that's embedded in code libraries is that it leads to IDLs and contracts specifying fixed behavior. This couples systems together, instead of decoupling them based on external specifications. Any architectural style that's fundamentally a distributed hypermedia application protocol minimally requires the shared understanding of the identifier to be public, in order that the interface may be considered uniform. If you define-down "participant" to mean only partner corporations, it still isn't an argument for using a library-based API, because those participants will run into the same evolvability problems encountered in the early Web architecture when systems were coupled together by libwww -- requiring IDLs and interface-level contracts (as opposed to contracts between stakeholders, agreeing to use the same standards). The library-based API approach didn't work at the small scale of the Web in the early 90's between CERN and NCSA, so how can it be expected to work on today's Web between larger entities?
The REST style requires registration, and the Web architecture has evolved to include registries which specifically define what is self-descriptive and what isn't, including a proposal for a registry for authentication schemes (initially containing basic and digest). REST requires such registries; Web architecture defines IANA registries to meet that requirement. At least, that's how the Web has evolved in practice, for reasons I assume to be directly related to REST. > > The original question is odd though. "Media Type's" aren't > self-descriptive - messages are, and media types are one component of > self-descriptiveness: > > o) "interaction is stateless between requests" > o) "standard methods and media types are used to indicate semantics > and exchange information" > o) "responses explicitly indicate cacheability" > o) "the type is registered and the registry points to a specification > and the specification explains how to process the data according to > its intent" ** > That's right, self-descriptiveness isn't entirely about the identifier, but a registered identifier is the minimal requirement. Beyond that, nowhere does Roy state that methods, media types and link relations only need registration but not standardization. Where Web architecture is concerned, we'd need to look up the requirements for each of those IANA registries, to determine if it's even possible to register anything that isn't already standardized, and determine sanctioned standards bodies. It isn't forbidden to evolve new things which may be defined in the IANA registries; it's just that systems which rely on them won't accrue the benefits of REST until they're standardized. IOW, if you *do* create a new thing which is of general interest, then you MUST standardize it such that not only you, but anyone else, can use it as part of a network-based API.
Such standardization is not an a priori requirement, to allow for evolution, but it is ultimately unavoidable when instantiating REST systems for the common case of the Web. > > To answer the original question, the vnd/prs trees can lead to shared > understanding and satisfy that part of self-descriptiveness. > Close -- registration in any tree leads to a network-based shared understanding of the identifier, required for self-descriptiveness. Is the shared understanding between participants embedded within code libraries? Then they're coupled. If those participants are coded against an external specification of shared understanding, they're decoupled. Systems reliant upon identifiers and media types (or headers or auth schemes or link relations or methods) that aren't intended to ever be published as standards will never accrue the benefits of REST, because they'll always be library-based APIs by virtue of failing the self-descriptiveness constraint. -Eric
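The coupled-vs-decoupled distinction above can be made concrete: when the processing model hangs off a registered identifier, a recipient dispatches on the Content-Type alone; when the understanding is buried in code, the recipient must sniff the payload and guess. The handler names below are invented purely for illustration.

```python
# Dispatching on a self-descriptive identifier vs. payload sniffing.
# Handler names are illustrative only, not from any real system.

def handle_by_type(content_type: str, body: bytes) -> str:
    handlers = {
        "text/html": "render_html",
        "application/atom+xml": "process_feed",
    }
    if content_type in handlers:
        # Decoupled: the identifier alone names the processing model,
        # documented in an external spec rather than in our code.
        return handlers[content_type]
    if content_type == "application/xml":
        # Coupled fallback: the generic type forces introspection --
        # we guess at the sender's intent by peeking at the payload.
        if b"<rss" in body:
            return "guess_rss"
        return "opaque_xml"
    return "unknown"
```

The first branch is what the thread calls a network-based API; the sniffing branch is the library-based failure mode, where two parties interoperate only because their code happens to guess the same way.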
Will Hartung wrote: > > IANA will register anything in the vendor/personal tree. And IANA > doesn't maintain or assert any validity of these registrations. > There's an example of an IANA registered text type that apparently > have no surviving specifications. Both links associated with the media > type are dead. So, even if something is up on IANA, it can well be an > incomplete registration, and IANA doesn't seem to have any vetting or > maintenance process to prevent that from happening. > That's why I'm saying that registration of an identifier is only the minimal requirement for self-descriptiveness. > > Now, "the Web" is often referred to, and tied to IANA. If "the Web" > means standards that Web Browser creators use, and standard HTTP > servers use when targeting Web Browser clients, then perhaps it is > fair to consider IETF, via IANA, authoritative for that domain. > Yes, but the common case of the Web also includes a wide variety of intermediaries for widely varying purposes, as "participants." We do need to agree on an authoritative registry. Which can remain IANA, or we can propose some other registry or registries in addition to IANA, but on today's Web IANA is the only reality, and that definition of Web isn't remotely tied to browsers. Googlebot is not a browser, and indexes things like PDFs which can't be natively displayed in a browser, but it's still the Web if it's HTTP over the Internet. > > But REST is not "the Web", HTTP servers can (and do) serve clients > other than web browsers. And other domains and industries will be > leveraging HTTP servers, and REST(-like?) architectures, and they will > be negotiating and standardizing their traffic through organizations > and working groups other than IANA and IETF. 
> REST may not be limited to the Web, but it still requires a network-based API where all participants agree to a registration authority (or multiple registration authorities) which explicitly defines what is self-descriptive and what isn't, instead of embedding shared understanding within code libraries (or tying it to the protocol spec version), because that's the difference between having a uniform interface or not having a uniform interface, as defined by REST. > > So, either the definition of REST needs to be clarified, or relaxed, > on this point, or there needs to be some new term to describe these > other systems. > I would still need to see some sort of counter-example which shows my Gopher-based explanation of the difference between library-based APIs and network-based APIs is wrong, or some other justification as to why registries should be considered irrelevant to the style, before I can agree to relaxing REST's uniform interface constraint, i.e. falsify the reasoning behind self-descriptive messaging. IOW, I disagree that any new terminology is needed -- such systems are by definition library-based APIs, which are by definition not the same as uniform interface network-based APIs. You can't define-down "participant" to mean only those systems coupled together by a shared understanding which resides in code libraries, and still call it REST, Web or no Web. -Eric
Will Hartung wrote: > > But REST is not "the Web", HTTP servers can (and do) serve clients > other than web browsers. And other domains and industries will be > leveraging HTTP servers, and REST(-like?) architectures, and they will > be negotiating and standardizing their traffic through organizations > and working groups other than IANA and IETF. > If you don't want HTTP over the Internet to mean the Web, where "participant" means anybody and everybody whether you want it to or not, then by all means tunnel. If you're using some other registry, then don't re-use IANA syntax, if you're unwilling to tunnel. But don't assume you're not better off tunneling, if you're not using IANA, regardless of syntax. My point remains that, on the Web, application/foo+xml is _not_ OK unless you intend to IANA-register it via an IANA-sanctioned standards process, because that's the RFC-defined syntax of the IANA standards tree, you're not free to re-use that syntax ad-hoc, and you can't expect widespread interoperability if you do. Until it's approved for inclusion in the IANA standards tree, application/foo+xml is _not_ self-descriptive for the common case of the Web. -Eric
> > My point remains that, on the Web, application/foo+xml is _not_ OK > unless you intend to IANA-register it via an IANA-sanctioned standards > process, because that's the RFC-defined syntax of the IANA standards > tree, you're not free to re-use that syntax ad-hoc, and you can't > expect widespread interoperability if you do. Until it's approved for > inclusion in the IANA standards tree, application/foo+xml is _not_ > self-descriptive for the common case of the Web. > I'm open to any debate about why this _shouldn't_ be the case, but have yet to see any convincing argument that this _isn't_ the case on today's Web. Please let me know which option you're discussing. I'm done debating whether or not this _is_ the case, and will leave you alone to make your case. I'm very interested in discussing why this _should_ be changed, or debating the merits of the standards tree vs. the vnd/prs trees and such, based on accepting the premise. I won't *ignore* further debate as to how application/foo+xml can be considered self-descriptive for the common case of the Web without being IANA-registered in the here-and-now, but I will just lurk. -Eric
> > If you don't want HTTP over the Internet to mean the Web, where > "participant" means anybody and everybody whether you want it to or > not, then by all means tunnel. If you're using some other registry, > then don't re-use IANA syntax, if you're unwilling to tunnel. But > don't assume you're not better off tunneling, if you're not using > IANA, regardless of syntax. > The argument I've given is that this isn't just a REST requirement, it's fundamental to instantiating any style on the Web which is based around a separation of resource and representation by using Content-Type to declare a processing model, which strives to be a uniform interface network-based API by imposing a self-descriptiveness constraint. IOW, the meta-style is distributed hypermedia application protocol, which may be a uniform interface network-based API, or not, i.e. a library-based API. This architectural distinction, sans registry, is also central to the uniform interface of Gopher, which is a different style from REST. Gopher is a distributed hypermedia application protocol which may also be instantiated as either network-based or library-based, depending on whether messaging is self-descriptive or not. Aside from Gopher, and aside from REST, HTTP may be used as a protocol to instantiate some other style of distributed hypermedia application architecture. Such a style would also use self-descriptiveness as a constraint between being library-based or network-based, and thus also require the IANA registry for use on the Web, if that style were also intended to have a uniform interface. This is derived from Chapter 6.5, which describes the general architectural lessons learned from applying the REST style to Web architecture.
In general, any uniform interface instantiated using HTTP over the Internet, RESTful or not, will require the re-use of the IANA registry, or attempt to extend HTTP to include alternatives to the IANA registry, because self-descriptiveness is endemic to any uniform interface style for distributed hypermedia applications, which, if they're using HTTP over the Internet, means there's a registry which defines self-descriptive messaging. -Eric
On Tue, Sep 14, 2010 at 1:18 PM, Eric J. Bowman <eric@...> wrote:
> This is derived from Chapter 6.5, which describes the general
> architectural lessons learned from applying the REST style to Web
> architecture. In general, any uniform interface instantiated using
> HTTP over the Internet, RESTful or not, will require the re-use of the
> IANA registry, or attempt to extend HTTP to include alternatives to the
> IANA registry, because self-descriptiveness is endemic to any uniform
> interface style for distributed hypermedia applications, which, if
> they're using HTTP over the Internet, means there's a registry which
> defines self-descriptive messaging.

This is the key right here: "or attempt to extend HTTP to include alternatives to the IANA registry". Nobody is denying registration; it's a matter of the authoritative registry being used. In HTTP, using registered media types is a SHOULD, not a MUST, and not using IANA-registered media types is discouraged, not illegal.

RFC 2616 3.7 "Media Types": "Media-type values are registered with the Internet Assigned Number Authority (IANA [19]). The media type registration process is outlined in RFC 1590 [17]. Use of non-registered media types is discouraged."

I can't speak to the "intermediary" participants you talked about earlier, notably caches and proxies and other such "transparent" infrastructure bits. I understand how in some applications these pieces of infrastructure may well transform content for media types that they are aware of. For example, if one is delivering to mobile devices, there may be some transformation taking place. In that case, the infrastructure is not transparent; rather, it's active. However, from a practical standpoint, it's hard to imagine these devices NOT being transparent to media types they are not familiar with. Either they reject them outright (which seems to be more a policy decision than a technical decision), or they pass them through untouched.
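[A side note on the registration-tree distinction this debate keeps turning on. The following is an illustrative sketch, not anything from the thread: it classifies a media-type string by the subtype-prefix conventions of the IANA trees (vnd. for vendor, prs. for personal, x- for unregistered, per the RFCs of the era). The function name and tree labels are the sketch's own; the prefix only hints at intent, and actual registration status still requires an IANA lookup.]

```python
# Sketch: classify a media type string by the registration tree its
# subtype prefix implies. Purely syntactic -- a standards-tree-shaped
# name like application/foo+xml may still be unregistered in fact,
# which is exactly Eric's complaint.

def media_type_tree(media_type: str) -> str:
    """Return the registration tree implied by the subtype prefix."""
    try:
        _, subtype = media_type.lower().split("/", 1)
    except ValueError:
        raise ValueError("not a type/subtype pair: %r" % media_type)
    if subtype.startswith("vnd."):
        return "vendor"
    if subtype.startswith("prs."):
        return "personal"
    if subtype.startswith("x-") or subtype.startswith("x."):
        return "unregistered"
    return "standards"

print(media_type_tree("application/x-shockwave-flash"))  # unregistered
print(media_type_tree("application/pdf"))                # standards
print(media_type_tree("application/vnd.ms-excel"))       # vendor
```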
Worst case, perhaps, is that a caching proxy may decide to ignore a media type it is unfamiliar with and not cache the content, even though it has otherwise conforming HTTP headers giving proper caching instructions. I really can't speak to this, as I have not worked directly with caching proxies. But, again, that would seem more a local implementation's policy requirement than a technical limitation of the agent.

But it's hard to see the Web as we know it today working or evolving if there were a widespread limit on the data types allowed to be sent through the network, and middleware infrastructure were actively denying this media passage. If this were actively happening, on a wide scale, it's hard to imagine any new media types being created, even new types with full registration and standardization. That information would have to percolate through all of the configurations of all the systems denying the up-to-now-unknown data type so they could properly pass it through.

Let's consider a popular example: Flash. Flash is widely popular. It's been said that "You don't support the Web if you don't support Flash". Its media type is "application/x-shockwave-flash". There is no entry for this at IANA. There is no IETF RFC for Flash (that I could find). Yet, somehow, Google can index this media, my browser can load it, and wget can fetch it over HTTP. If I were interested in reading a Flash file, I could use a popular internet search engine that would inevitably send me to Adobe, where I can find a specification for the Flash file format.

Now, let's consider PDF. PDF IS registered with IANA, as "application/pdf", and it does have an associated IETF RFC. An RFC that is, I might add, remarkably short. It's short because the RFC is not a specification for PDF, unlike many other RFCs (such as HTTP's). Rather, there's a footnote with a URL to Adobe's specification, hosted by the same company hosting the Flash specification, it turns out.
If you go to the URL in the RFC, it redirects you to a different page that does not contain the PDF specification. Rather, it documents the changes Adobe has made to the official ISO PDF specification. The ISO specification is not available on the web; it apparently must be purchased. However, Adobe has an archive section linked from this page where you can find the PDF version from Adobe that the ISO standard is based on. You can also find the older version that the RFC is based upon.

So, through Adobe's goodwill, we have a freely available specification of PDF: both one that is equivalent to the current ISO standard, and the one referred to by the IETF RFC. But it's easy to see how Adobe could have chosen not to provide these, leaving the only official document of the PDF format to the ISO publication, which not only must be purchased, but is NOT the document specified by the RFC. Given all that, Google can index this media, my browser can display it, and wget can fetch it.

So, given those considerations, here's the conundrum. I have two very simple applications. Here's one of them:

Request:

    GET /doc.pdf HTTP/1.1

Response:

    HTTP/1.1 200 OK
    Date: Tue, 14 Sep 2010 11:22:33 GMT
    Server: Acme Server 1.0
    Last-Modified: Tue, 14 Sep 2010 11:22:33 GMT
    Content-Length: 1234
    Content-Type: application/pdf

Here's the other:

Request:

    GET /game.flv HTTP/1.1

Response:

    HTTP/1.1 200 OK
    Date: Tue, 14 Sep 2010 11:22:33 GMT
    Server: Acme Server 1.0
    Last-Modified: Tue, 14 Sep 2010 11:22:33 GMT
    Content-Length: 1234
    Content-Type: application/x-shockwave-flash

The first application is REST. The second application is not REST. This is what you are saying. This is what it boils down to. This is where the impasse occurs. I view these applications as identical. Apparently, you do not. I feel that the label REST should be able to be applied equally to these two applications. All here agree on the need for specification and documentation.
If I were to publish endpoints to a REST system, and a supporting document describing the endpoints and the media types involved, I feel that as long as those specifications are openly available, whether from IANA's servers, a published RFC, Adobe, or my own web server, then those media types are descriptive enough to meet the spirit and letter of what the REST style and architecture require. If someone wishes to use my system and the required media types, all of the information is available to them.

If a user trying to use my system encounters an error using a non-IANA-registered mime type because of some intervening middleware policy, I'd treat that similarly to a person behind a firewall that blocks port 25 (another popular, IETF-approved port and specification): as a policy issue that would need to be addressed locally. Yes, this can hinder use overall, but, from a practical standpoint, frankly, I don't see it as an issue.

So, that's my pitch. I completely appreciate the motivations and spirit from which your discussions come. I am simply going to have to disagree with you on the pedantry you apply to this specific point of the REST style. If my second application example cannot be called REST, then so be it. But I may call it that anyway.

Best Regards,

Will Hartung
(willh@...)
What is the difference between unbounded creativity and design by constraint? The former is characterized by looking at the problem, then defining the parameters by which it is to be solved. The latter is characterized by attempting to solve the problem within predefined parameters. I have a few metaphors...

Team-building workshops and engineering-school challenges which involve some goal (a standard variation on which is keeping an egg safe), to be accomplished within defined parameters (each team is given some paper-towel-roll tubes, popsicle sticks and a dab of glue). The problem isn't meant to be insoluble, so while the answer may be, "Yes, it would be easier with a roll of duct tape," you can't have one. REST on the Web says you can have duct tape, you just can't use it for at least a year...

This sort of situation is standard fodder in the movies, think "Apollo 13" or the last "Star Trek". In the former, you can't assume there's any more velcro than you've been given to work with. If you follow my logic, I'd have been more impressed with Captain Kirk's Kobayashi Maru cheat if he'd changed the specification of a starship to include a Klingon-shield-negating device and then ordered it activated. Discovering such an imperative design goal could then inform the evolution of the starship specification.

The final stage of USMC boot camp is the "crucible" challenge, which involves squad-based solving of several problems, one or more of which is deliberately insoluble. Insoluble problems are meant to be determined by first making an effort to solve the problem at hand within the given parameters. Here, you can't have duct tape, ever. The Marines won't let you cheat on the Kobayashi Maru. This is more rigid than REST.
http://memory-alpha.org/wiki/Kobayashi_Maru_scenario

The takeaway is that you don't always know that a problem is insoluble within the uniform interface without trying it first, because such failure is what informs the design of any extension to the uniform interface. If you're using the public Internet, then standardization means you're contributing your solution to an otherwise-insoluble problem (or just your idea for a better mousetrap) back to the public-network-based uniform interface, for the purposes of general shared understanding, which is the mechanism for promoting uptake.

REST isn't just encouraging re-use, it's encouraging re-sharing when you can't re-use. REST on the Web doesn't just encourage such behavior, it specifies it, which means there's a "right way to cheat" in the Kobayashi Maru situation (re-spec the starship instead of defining-down participants to not have shields). Cheating the right way makes a contribution to the public commons of the Web; cheating the wrong way just fractures it.

IANA registries are a curious thing, as they're a technically-required social contract. But without such registries, evolution would be ad-hoc -- a social-contract choice which technically precludes a public-network-based API. So if you do create something for the Web, think of what you're doing in terms of extending the uniform interface, and understand standardization in those terms -- share it back, and you may be rewarded with Internet scale, if it's a useful contribution.

Having a rationale, i.e. explaining why other solutions fail to solve your problem, matters a great deal to your success -- which is why it's helpful to prototype within the uniform interface, to determine what real requirements you have which can't otherwise be met, in order to guide your extension of the uniform interface.

-Eric
Will Hartung wrote:
>
> But it's hard to see the Web as we know it today working or evolving
> if there was a widespread limit on the data types allowed to be sent
> through the network, and middleware infrastructure was actively
> denying this media passage. If this were actively happening, on a
> wide scale, it's hard to imagine any new media types being created,
> even new types with full registration and standardization. That
> information would have to percolate through all of the configurations
> of all the systems denying the up-to-now-unknown data type so they
> could properly pass it through.
>

That isn't the problem. The problem with a library-based API is just what Roy says it is in his thesis: "Why is this important? Because it differentiates a system where network intermediaries can be effective agents from a system where they can be, at most, routers."

If the most any participant can ever do is route your payload, you're missing out on the benefits of a uniform interface, particularly anarchic scalability and serendipitous re-use, which come about when systems you've never even heard of are acting as effective agents -- which only occurs with ubiquitous identifiers (and even then, isn't guaranteed).

I've configured more than one Web cache in my day, and what I do is restrict it to a limited number of types which are ubiquitous enough to matter. I'm not alone in this behavior. Why *should* I cache application/foo+xml if it's undefined? What benefit is in it for me, as the owner of a cache, if it's an insignificant percentage of traffic? Why should I let any undefined payload take up cache resources, at the expense of something that's a million times more likely to be re-used?

-Eric
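[The cache-admission policy Eric describes can be sketched in a few lines. This is a hedged illustration, not any real proxy's configuration: the whitelist contents and function names are invented, standing in for the "limited number of types which are ubiquitous enough to matter" that a cache operator might configure.]

```python
# Sketch of a media-type admission policy for a cache: responses whose
# Content-Type is not in a configured whitelist are passed through
# (merely routed) but never stored. The set below is illustrative.

CACHEABLE_TYPES = {
    "text/html",
    "text/css",
    "application/javascript",
    "image/png",
    "image/jpeg",
}

def should_cache(headers: dict) -> bool:
    # Strip media-type parameters (e.g. "; charset=utf-8") before lookup.
    ctype = headers.get("Content-Type", "").split(";")[0].strip().lower()
    return ctype in CACHEABLE_TYPES

print(should_cache({"Content-Type": "text/html; charset=utf-8"}))  # True
print(should_cache({"Content-Type": "application/foo+xml"}))       # False
```

Under such a policy, an undefined application/foo+xml payload is never an "effective agent" candidate; the cache acts, at most, as a router for it.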
Will Hartung wrote:
>
> In HTTP, using registered media types is a SHOULD, not a MUST, and not
> using IANA registered media types is discouraged, not illegal.
>

On the Web, no other registry exists, so how can an unregistered identifier meet the self-descriptiveness requirement of being registered, unless some other registry is first defined? Or, you re-define self-descriptive to suit your whim... We're debating the REST constraint. HTTP has no self-descriptiveness constraint and neither does Web architecture. They define the IANA registry as the only means to be self-descriptive, as REST requires.

>
> Its media type is "application/x-shockwave-flash". There is no entry
> for this at IANA. There is no IETF RFC for Flash (that I could find).
>

Which is but one reason it isn't RESTful. I don't care if it's ubiquitous on the Web; so are stateful cookies, and that doesn't make them REST.

>
> Yet, somehow, Google can index this media, my browser can load it, and
> wget can fetch it over HTTP.
>

No, your browser uses the identifier to load an extension to read it, as with PDF, but this is a library-based API where Flash is concerned. I've never heard of anyone claiming Flash is RESTful before.

>
> If you go to the URL in the RFC, it redirects you to a different page
> that does not contain the PDF specification. Rather it documents the
> changes Adobe has made to the official ISO PDF specification. The ISO
> specification is not available on the web; it apparently must be
> purchased.
>

ISO is an IANA-sanctioned standards body, regardless of the fact that they charge a fee -- you still followed your nose from an identifier to a spec, without having to decipher the browser's PDF plugin in order to determine shared understanding. That's the difference between being network-based and library-based.

>
> Given all that, Google can index this media, my browser can display
> it, and wget can fetch it.
>

The key distinction being that, as a network-based API, this can be done RESTfully with PDF, but not with Flash, which is library-based.

>
> The first application is REST.
>

It might be; there are other constraints.

>
> The second application is not REST.
>

It can't be: self-descriptive = registered, and it isn't registered. That makes it a library-based API, not a network-based API. Refute the normative definition, don't just say I'm wrong for following it.

>
> All here agree on the need for specification and documentation.
>

No. In REST, participants must agree to a registry which contains the mapping from identifier to documentation. A decision not to use a registry (application/x-shockwave-flash) at all, is a decision not to use REST. The mapping of application/x-shockwave-flash to *any* document must be assumed, or deduced via introspection, or searched for, which is incongruous with declaring an identifier which self-descriptively points to a processing model via registry lookup with no ambiguity. It's a library-based API.

>
> If I were to publish endpoints to a REST system, and a supporting
> document describing the endpoints and the media types involved, I feel
> that as long as those specifications are openly available, whether from
> IANA's servers, a published RFC, Adobe, or my own web server, then
> those media types are descriptive enough to meet the spirit and letter
> of what the REST style and architecture require. If someone wishes to
> use my system and the required media types, all of the information is
> available to them.
>

That standpoint requires a non-normative definition of self-descriptive which doesn't require registration. If you want to seriously take that position, you need to falsify the constraint which requires registration. Because what matters to REST is that the mapping can be unambiguously determined from a registry entry. It is not the media type that needs to be self-descriptive, it's the identifier. XBEL has no identifier.
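[Eric's "mapping from identifier to documentation" can be modeled in miniature. This is a toy sketch, not anything from the thread: the REGISTRY dict stands in for IANA, and its entries are illustrative (application/pdf's registration does point, via its RFC, to Adobe's/ISO's specification; application/atom+xml is defined by RFC 4287). The point it illustrates is that a registry lookup either resolves unambiguously or fails, with no guessing step.]

```python
# Toy model of self-descriptiveness via registry lookup: an identifier
# either maps to a specification through the shared registry, or the
# processing model must be assumed, deduced, or searched for -- which
# is what Eric calls a library-based API.

REGISTRY = {
    # Entries are illustrative stand-ins for IANA registrations.
    "application/pdf": "RFC 3778 -> Adobe/ISO PDF specification",
    "application/atom+xml": "RFC 4287",
}

def spec_for(media_type: str) -> str:
    try:
        return REGISTRY[media_type]
    except KeyError:
        # No registry entry: not self-descriptive in this model.
        raise LookupError(
            "%s has no registry entry; its processing model must be "
            "guessed (library-based)" % media_type)

print(spec_for("application/pdf"))
```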
Creating an identifier for it is easy -- just use one. But, if it isn't registered, that isn't self-descriptive, by the normative definition of the term -- there is solid reasoning for this; if you don't like it, then refute that reasoning with a rational counter-argument, please. Library-based APIs have nothing to do with the "spirit and intent" of REST; they're just the opposite.

>
> So, that's my pitch. I completely appreciate the motivations and
> spirit from which your discussions come. I am simply going to
> have to disagree with you on the pedantry you apply to this
> specific point of the REST style.
>

You can't reject my position like that by redefining self-descriptive to mean whatever you need it to mean, thesis be damned -- that's just wrong, and it's unhelpful to anyone's understanding of REST not to point that out. Self-descriptive = registered; on the Web, registry = IANA. Refute the definitions, rather than ignoring them with an ad-hominem argument -- that's worse than being pedantic.

-Eric
If the only person allowed to exhibit confidence when discussing REST is Roy, and everybody else is being pedantic, then REST is unlearnable. OTOH, if REST is pragmatic, then anybody with a sound argument ought to be able to point out when a REST constraint is violated, and it ought to take a sound argument to refute their position. Otherwise REST is unteachable by anyone but Roy, because it can't be understood by anyone but Roy, in which case it's really pointless to use REST for any purpose other than buzzword, which seems to be the limit of some folks' desire to understand it. REST is what it is, not anything you want it to be. -Eric
Eric J. Bowman wrote:
> Will Hartung wrote:
>> In HTTP, using registered media types is a SHOULD, not a MUST, and not
>> using IANA registered media types is discouraged, not illegal.
>
> We're debating the REST constraint.

I think there may be a definitive answer to this.

http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm#sec_5_2_1_2

"The data format of a representation is known as a media type [48]."
...
"[48] N. Freed, J. Klensin, and J. Postel. Multipurpose Internet Mail Extensions (MIME) Part Four: Registration Procedures. Internet RFC 2048, Nov. 1996."

That's very, very clear, and from the REST dissertation, and it would cover all uses of REST both on the web and off (intranet). But, alas, that doesn't clear this up at all, because the long debate isn't about using media types, it's about using "unregistered media types" or not. Now, it appears that since we *must* get a boolean yes-or-no answer, and since the REST dissertation clearly states "the data format of a representation is known as a media type", there is only one possible thing to do: write an email to (probably) Ned Freed, the media type reviewer on the IETF types mailing list, and ask him the very pedantic question "if a media type is as yet unregistered, then is it an Internet Media Type or not?". Would somebody who cares enough like to do this?

FWIW, people seem to be using "media type" like it's a general term that simply means "the type of media", as in "chalkboard". It's not; "media type" is a specific thing, short for "Internet media type", which has only a single registry, the IANA one. You could of course make up something else to describe "the type of media" and make another registry, but this would not be "a media type" which could be used in any protocol that requires "media types", and could not be used to describe the data format of a representation in a RESTful system.

Best,

Nathan
To take this right back to the subject line: given that almost every protocol and bit of software out there does use Internet media types, and given:

- the IANA registry and the registration process have accounted for Vendor types, Personal or Vanity types, and Experimental types.
- "universal support and implementation of a media type is NOT a requirement for registration. However, if a media type is explicitly intended for limited use, this MUST be noted in its registration. The "Restrictions on Usage" field is provided for this purpose."
- all to-be-registered media types are first reviewed, in public, by a very experienced and seasoned media types reviewer.
- all protocols and web architecture documents "discourage the use of unregistered media types"

Then please, can anybody here give me a good reason why they'd ever want to use a type that isn't registered, being designed to be registered, or currently going through the registration & review process?

If nobody actually has a reason to answer the above, then I fear Eric may be working himself into an early grave for nothing, and hopefully everybody can move on.

FWIW, I personally can't think of any reason at all; and quite conversely would try to use registered, well-known media types wherever possible, given all arguments, good common sense and considerations for everything from web arch down to specific styles like REST.

Best,

Nathan
Eric,

Have you considered posting these non-question philosophical thought exercises to a blog instead? I'd suggest that'd be a better, discoverable, longer-term home for them.

Btw, U.S. Marines are created through 'recruit training' *not* 'boot camp' ;)

Semper Fi,
--tim

On Tue, Sep 14, 2010 at 6:08 PM, Eric J. Bowman <eric@...> wrote:
> What is the difference between unbounded creativity and design by
> constraint? The former is characterized by looking at the problem,
> then defining the parameters by which it is to be solved. The latter
> is characterized by attempting to solve the problem within predefined
> parameters. I have a few metaphors...
>
> Team-building workshops and engineering-school challenges which involve
> some goal (a standard variation on which is keeping an egg safe), to be
> accomplished within defined parameters (each team is given some
> paper-towel-roll tubes, popsicle sticks and a dab of glue). The problem
> isn't meant to be insoluble, so while the answer may be, "Yes, it would
> be easier with a roll of duct tape," you can't have one. REST on the
> Web says you can have duct tape, you just can't use it for at least a
> year...
>
> This sort of situation is standard fodder in the movies, think "Apollo
> 13" or the last "Star Trek". In the former, you can't assume there's
> any more velcro than you've been given to work with. If you follow my
> logic, I'd have been more impressed with Captain Kirk's Kobayashi Maru
> cheat if he'd changed the specification of a starship to include a
> Klingon-shield-negating device and then ordered it activated.
> Discovering such an imperative design goal could then inform the
> evolution of the starship specification.
>
> The final stage of USMC boot camp is the "crucible" challenge, which
> involves squad-based solving of several problems, one or more of which
> is deliberately insoluble.
> Insoluble problems are meant to be
> determined by first making an effort to solve the problem at hand
> within the given parameters. Here, you can't have duct tape, ever.
> The Marines won't let you cheat on the Kobayashi Maru. This is more
> rigid than REST.
>
> http://memory-alpha.org/wiki/Kobayashi_Maru_scenario
>
> The takeaway is that you don't always know that a problem is insoluble
> within the uniform interface without trying it first, because such
> failure is what informs the design of any extension to the uniform
> interface. If you're using the public Internet, then standardization
> means you're contributing your solution to an otherwise-insoluble
> problem (or just your idea for a better mousetrap) back to the
> public-network-based uniform interface, for the purposes of general
> shared understanding, which is the mechanism for promoting uptake.
>
> REST isn't just encouraging re-use, it's encouraging re-sharing when
> you can't re-use. REST on the Web doesn't just encourage such behavior,
> it specifies it, which means there's a "right way to cheat" in the
> Kobayashi Maru situation (re-spec the starship instead of defining-down
> participants to not have shields). Cheating the right way makes a
> contribution to the public commons of the Web, cheating the wrong way
> just fractures it.
>
> IANA registries are a curious thing, as they're a technically-required
> social contract. But without such registries, evolution would be
> ad-hoc -- a social-contract choice which technically precludes a
> public-network-based API. So if you do create something for the Web,
> think of what you're doing in terms of extending the uniform interface,
> and understand standardization in those terms -- share it back, and you
> may be rewarded with Internet scale, if it's a useful contribution.
>
> Having a rationale, i.e.
> explaining why other solutions fail to solve
> your problem, matters a great deal to your success -- which is why it's
> helpful to prototype within the uniform interface, to determine what
> real requirements you have which can't otherwise be met, in order to
> guide your extension of the uniform interface.
>
> -Eric
Nathan wrote:
>
> I think there may be a definitive answer to this.
>

I agree. Believe it or not, I also agree that my answer may not be correct. If I'm not correct, I'm the first person who wants to learn from it. Thank you for keeping this debate framed around the technical merits.

>
> Then please, can anybody here give me a good reason why they'd ever
> want to use a type that isn't registered, being designed to be
> registered, or currently going through the registration & review
> process?
>

Without resorting to arguments like pedantry, which may so easily be rebutted with accusations that your interest lies in using REST as a buzzword, not in understanding it as an architectural style? Please.

>
> If nobody actually has a reason to answer the above, then I fear Eric
> may be working himself into an early grave for nothing, and
> hopefully everybody can move on.
>

I don't actually consider it "for nothing." The last step in the understanding of any science is being able to turn around and teach it to those who follow. Can I truly understand REST without understanding the community's en-masse rejection of the IANA registry? Or is the inability to make this point proof that REST isn't of use beyond being a buzzword?

What I'd like to believe is that in REST we have agreed to basic definitions of terms, to avoid exactly the passing-each-other-at-30,000-feet problem we've experienced here, as discussed in relation to SemWeb, here:

http://seanmcgrath.blogspot.com/2010/09/semantic-web-is-not-data-format.html

Because I think I've touched on this being the "point which is the sticking point for many who are dubious about the brouhaha surrounding" REST, in my opinion. The reason I have such a hard time dropping the issue is that if REST can't be learned and can't be taught, and thus has no pragmatic value to Web development, I need to re-examine my priorities, i.e. my decision to specialize in REST, if it's just a buzzword.
The science of it is where the appeal lies for me; but I don't consider what can't be learned and can't be taught, due to a lack of normative definitions, to be hard science.

>
> FWIW, I personally can't think of any reason at all; and quite
> conversely would try to use registered, well-known media types
> wherever possible, given all arguments, good common sense and
> considerations for everything from web arch down to specific styles
> like REST.
>

I find the inability to reach consensus on this issue, even when the context is limited to the Web, just as troubling from the CompSci perspective as Sean finds the portrayal of SemWeb as a new thing with nothing to learn from the existing body of AI work -- to the point where there's actually no benefit to be gained from participating in any debates on the subject, since nobody can agree to any definition of any terms. We can't establish the semantics of self-descriptive, thus anything is self-descriptive, so what's the point of debating self-descriptiveness? Or any other constraint, if they're meaningless?

My last attempt on this issue, before deciding to limit anything I say about REST to my own weblog (whenever that may go live), is here:

http://tech.groups.yahoo.com/group/rest-discuss/message/16602

I would appreciate feedback on the metaphors. The consequences of my being right on this issue actually look like benefits to me, and I think it's more appropriate to compare Roy to Stallman, as the spirit of standardization in REST is much the same as the spirit of the GPL.

-Eric
Tim Williams wrote:
>
> Have you considered posting these non-question philosophical thought
> exercises to a blog instead? I'd suggest that'd be a better,
> discoverable, longer-term home for them.
>

Yes, I'm just about at the point of discontinuing participation on this list -- until recently I've always been more of a lurker anyway -- and withholding anything I have to say about REST until my weblog goes live. In the interim, based on whether anything I've said has gotten through to anyone besides Nathan, I'll be deciding whether or not to allow comments on said weblog when my topic is REST, if I don't come to the conclusion that REST is just pseudoscience and blog about *that*. I'm totally disturbed by my inability to make a point about something I think I really understand, but the lack of counter-argument convinces me that there's something wrong with REST more than it convinces me there's something wrong with me noggin.

-Eric
Eric,

Don't give up on REST or this group. Both drive all of us crazy from time to time. But what always brings me back to this group is that, despite all the drama and misunderstanding, the discussion is mostly focused on the most important interface/interaction architectural issues around:

- Reuse
- Generality
- Evolvability
- Standardization
- Registration
- Indirection
- Loose coupling
- Etc.

Compare this to SOA discussions or WS-* discussions (or any other "middleware" or integration discussion), which are either platitudes or irrelevant. I've tried to drive certain points home in this group, e.g. generality and application neutrality, with limited success. But it's been worthwhile trying.

-- Nick

Nick Gall
Phone: +1.781.608.5871
Twitter: ironick
AOL IM: Nicholas Gall
Yahoo IM: nick_gall_1117
MSN IM: (same as email)
Google Talk: (same as email)
Email: nick.gall AT-SIGN gmail DOT com
Weblog: http://ironick.typepad.com/ironick/

On Tue, Sep 14, 2010 at 9:31 PM, Eric J. Bowman <eric@...> wrote:
> Tim Williams wrote:
> >
> > Have you considered posting these non-question philosophical thought
> > exercises to a blog instead? I'd suggest that'd be a better,
> > discoverable, longer-term home for them.
> >
>
> Yes, I'm just about at the point of discontinuing participation on this
> list -- until recently I've always been more of a lurker anyway -- and
> withholding anything I have to say about REST until my weblog goes live.
> In the interim, based on whether anything I've said has gotten through
> to anyone besides Nathan, I'll be deciding whether or not to allow
> comments on said weblog when my topic is REST, if I don't come to the
> conclusion that REST is just pseudoscience and blog about *that*. I'm
> totally disturbed by my inability to make a point about something I
> think I really understand, but the lack of counter-argument convinces
> me that there's something wrong with REST more than it convinces me
> there's something wrong with me noggin.
> > -Eric > > > ------------------------------------ > > Yahoo! Groups Links > > > >
Nick Gall wrote: > > Don't give up on REST or this group... > Perhaps I'm a bit melodramatic sometimes, but the danger in defining REST down such that it's OK to use it as a buzzword is that it becomes pseudoscience *as practiced*. If that's the reality, then there will be fewer people over time who understand it as science, worsening the S/N ratio on groups like www-tag and http-wg -- where REST is currently practiced by introducing more IANA registries to draw a clear line between what's self-descriptive and what isn't. Seems like a valid topic to blog about... Calling me out as full of myself or pedantic or whatever is not backed up by my grand total of two posts on http-wg; one caught a typo, the other is an open issue. I have one post on www-tag this year reminding the TAG that they can't recommend against RFC 2616, and one thread this year questioning the propriety of specifying barth-sniff in a variety of specs, which led to those specs being changed to not define any sniffing algorithm (with only the lightest prodding on my part). Tautological? Guilty as charged. :-) While I do believe that I understand REST, I'm out of my depth on those lists, which is to say that I recognize that the participants in general have more knowledge and experience than I do in these matters. What I try to do here is explain to others the "ideal form" of the Web described by REST, to promote understanding of the style by discussing why the Web has evolved the way that it has, and is evolving the way that it is. I only participate on http-wg or www-tag if I feel something important is being overlooked, and wouldn't make a big deal here over something trivial, either. -Eric
Hi, an HTTP edge-case question, opinions welcome: For a 'successful' POST (to create, e.g. as in AtomPub) or DELETE that does not return an entity body - what should the response Content-Type be set to when an Accept header is sent? Put another way - if the client sets the Accept header, should the response Content-Type always be sent back? Or should the Accept header and the default be ignored when no content is returned? Bill
Don't see a reason to include a Content-Type if there is no entity-body in the response. Afaik, Accept is just a part of the negotiation process and has no predetermined outcome on responses as far as HTTP is concerned. Cheers, Mike On Wed, Sep 15, 2010 at 12:22 PM, Bill de hÓra <bill@...> wrote: > Hi > > http edge-case question, opinions welcome: > > For a 'successful' POST (to create, eg as in AtomPub) or DELETE that > does not return an entity body - what should the response Content-Type > be set to when an Accept-Header is sent? > > Put another way - if the client sets the Accept header is set, should > the response Content-Type always be sent back? Or should the Accept and > default be ignored when there is not content returned? > > Bill >
Bill de hÓra wrote:
> Hi
>
> http edge-case question, opinions welcome:
>
> For a 'successful' POST (to create, eg as in AtomPub) or DELETE that
> does not return an entity body - what should the response Content-Type
> be set to when an Accept-Header is sent?
>
> Put another way - if the client sets the Accept header is set, should
> the response Content-Type always be sent back? Or should the Accept and
> default be ignored when there is not content returned?
DELETE...
A successful response SHOULD be 200 (OK) if the response includes a
representation describing the status, 202 (Accepted) if the action
has not yet been enacted, or 204 (No Content) if the action has been
enacted but the response does not include a representation
204
The 204 response MUST NOT include a message-body, and thus is always
terminated by the first empty line after the header fields.
no message-body = no Content-Type header
also note:
The presence of a message-body in a request is signaled by the
inclusion of a Content-Length or Transfer-Encoding header field in
the request's header fields, even if the request method does not
define any use for a message-body. This allows the request message
framing algorithm to be independent of method semantics.
So you'll want to be avoiding a Content-Length on that 204 as well.
Regards,
Nathan
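To make the rule above concrete, here is a minimal, hypothetical sketch (not from any library; the helper name is illustrative): a function that builds the header lines for a 204 response, omitting Content-Type, Content-Length, and Transfer-Encoding since there is no message-body.

```python
# Hypothetical sketch of the 204 rule discussed above: no message-body
# means no Content-Type, and no Content-Length either.

def build_204_headers(extra_headers=None):
    """Return response lines for a 204 No Content response.

    A 204 MUST NOT include a message-body, so body-describing headers
    (Content-Type, Content-Length, Transfer-Encoding) are omitted even
    if a framework layer tried to add them.
    """
    headers = {"Date": "Wed, 15 Sep 2010 12:22:00 GMT"}
    if extra_headers:
        headers.update(extra_headers)
    # Defensively drop body-related headers that might have been added.
    for forbidden in ("Content-Type", "Content-Length", "Transfer-Encoding"):
        headers.pop(forbidden, None)
    status_line = "HTTP/1.1 204 No Content"
    return [status_line] + [f"{k}: {v}" for k, v in headers.items()]

if __name__ == "__main__":
    print(build_204_headers({"Content-Type": "application/atom+xml"}))
```

Even if the handler tries to set a Content-Type (say, because an Accept header was sent), the sketch strips it, matching the reading of the spec above.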
I'm trying to help my company develop standard idioms for identifying resources so that service implementation teams don't have to rehash the same arguments every time. One issue that seems to be giving us grief is how to map an inheritance hierarchy.
Suppose I am a zoo or veterinary clinic and I am modelling resources for animals. My object and XML representations involve a hierarchy of specializations:
Animal
|- Mammal
|  |- Dog
|  |- Cat
|- Reptile
|  |- Snake
|  |- Lizard
Anyway, suppose I store information about individual animals in my system and I want to present operations on them in a RESTful way.
My thinking is that I should give them a canonical URI like:
http://animalsRus.com/animals/123
I should **NOT** additionally create aliased URIs like:
http://animalsRus.com/dogs/123
for several reasons. I may have to create the animal before I know it is a dog, or I may have to correct a data entry error.
I am hesitant to put the inheritance chain in the URI like
http://animalsRus.com/animals/mammals/dogs/123
because refactoring of the class hierarchy shouldn't create tension between it and the URI structure. I view the class hierarchy as an implementation detail, but it does refer to concepts that are meaningful in the business domain.
I would generally search for any animal like this:
http://animalsRus.com/animals?owner=Joe
But I might allow searching just for dogs like so:
http://animalsRus.com/dogs?owner=Joe
Does this seem reasonable? What approaches have others taken to domain nouns that are organized into an inheritance tree?
bryan_w_taylor wrote: > I'm trying to help my company develop standard idioms for identifying resources so that service implementation teams don't have to rehash the same arguments every time. One issue that seems to be giving us grief is how to map an inheritance hierarchy. > > Suppose I am a zoo or vetrinary clinic and I am modelling resources for animals. My object and XML representations involve a hierarchy of specializations: > Animal > |- Mammal > |- Dog > |- Cat > |- Reptile > |- Snake > |- Lizard > > Anyway, suppose I store information about individual animals in my system and I want to present operations on them in a RESTful way. > > My thinking is that I should give them a canonical representation like: > http://animalsRus.com/animals/123 > > I should **NOT** additionally create aliased URIs like: > http://animalsRus.com/dogs/123 > for several reasons. I may have to create the animal before I know it is a dog, or I may have to correct a data entry error. > > I am hesitant to put the inheritance chain in the the URI like > http://animalsRus.com/animals/mammals/dogs/123 > because refactoring of the class hierarchy shouldn't create tension between it and the URI structure. I view the class hierarchy as an implementation detail, but it does refer to concepts that are meaningful in the business domain. > > I would generally search for any animial like this: > http://animalsRus.com/animals?owner=Joe > > But I might allow searching just for dogs like so: > http://animalsRus.com/dogs?owner=Joe > > Does this seem reasonable? What approaches have others taken to domain nouns that are organized into an inheritance tree? IMHO, the domain inheritance (schema) and the names (URIs) for each thing (Animal) are completely orthogonal. The canonical URIs you suggest (one for each animal) appears to make sense and I agree that the aliased URIs are unneeded. As for the searching side of things, all I'll add in to the equation is, why not the following? 
http://animalsRus.com/animals?owner=Joe&type=dog Quite sure somebody will reply with a full on REST answer soon, Regards, Nathan
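A rough sketch of the suggestion above, with hypothetical data and helper names: one canonical URI per animal regardless of its type, and the type expressed as a query filter on the single `/animals` collection rather than as a separate path segment.

```python
# Illustrative only: an in-memory collection standing in for the
# animal store, plus the two idioms discussed above.

ANIMALS = [
    {"id": "123", "type": "dog", "owner": "Joe"},
    {"id": "124", "type": "cat", "owner": "Joe"},
    {"id": "125", "type": "snake", "owner": "Ann"},
]

def search_animals(owner=None, animal_type=None):
    """Filter the collection the way /animals?owner=Joe&type=dog would."""
    results = ANIMALS
    if owner is not None:
        results = [a for a in results if a["owner"] == owner]
    if animal_type is not None:
        results = [a for a in results if a["type"] == animal_type]
    return results

def canonical_uri(animal):
    # Every animal gets exactly one URI, independent of where it sits
    # in the class hierarchy -- no /dogs/123 alias needed.
    return f"http://animalsRus.com/animals/{animal['id']}"
```

Because the type is just another filter, reclassifying an animal (the data-entry-error case above) changes a field value, not its URI.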
On Thu, Sep 16, 2010 at 5:41 PM, Nathan <nathan@...> wrote: > > Quite sure somebody will reply with a full on REST answer soon, > As long as the URIs are valid and the query URIs can be constructed by the hypermedia in use then it doesn't matter at all. The only benefit you will get from rules/convention like this is being able to establish a reusable library for routing on the server side a la Rails' routing DSL - the benefit of which is questionable, and invites bad, non-hypertext practices on the client side. Cheers, Mike
--- In rest-discuss@yahoogroups.com, Nathan <nathan@...> wrote: > As for the searching side of things, all I'll add in to the > equation is, why not the following? > http://animalsRus.com/animals?owner=Joe&type=dog That would work fine, and it could have the same meaning as: http://animalsRus.com/dogs?owner=Joe In fact, it would seem fine to have both if you wanted.
--- In rest-discuss@yahoogroups.com, Mike Kelly <mike@...> wrote: > As long as the URIs are valid and the query URIs can be constructed by > the hypermedia in use then it doesn't matter at all. Right - HATEOAS applies so that clients shouldn't need to know the idioms. I'm looking at it from the service implementation side. What idioms should I recommend to service creators so that their task is simplified and we can reduce their implementation complexity? > The only benefit you will get from rules/convention like this > is being able to establish a reusable library for routing on > the server side a la Rails' routing DSL - the benefit of which > is questionable, and invites bad, non-hypertext practices on > the client side. I don't understand why such tooling would make it any more or less likely to diverge from being truly hypertext driven. This is a common recurring pattern, and not having to have N services solve it N times from scratch seems very beneficial to me in a company setting where time to deliver and the total productivity of the entire software development effort are key.
Hi,
I work in academia and do my best to inform students as to the benefits of
RESTful designs. When I first came across REST it was explained as a style that
follows the following constraints [1]:
* Give everything an id
* Link things to each other
* Use standard methods
* Multiple representations
* Stateless communication
I read Roy's thesis over the summer and my summary of it is as follows:
"REST is a hybrid style derived from other network-based architectural styles.
The use of an architectural style applies the associated constraints on the
system. Each constraint induces certain properties e.g. simplicity and
scalability. Thus, a style applies (its) constraints, which induce certain
properties.
Fielding defines the properties of key interest when considering the target
architecture of network-based hypermedia (the Web); for example: scalability,
simplicity, visibility and independent evolvability. Fielding then evaluates
several common network-based architectural styles (e.g. client-server) for the
properties they would induce. Fielding then derives REST by applying the styles
that induce the properties he requires. To do this, Fielding firstly defines the
“null” style i.e. a style with no constraints at all. Fielding then adds certain
pre-defined styles, which induce the desired properties for the target
architecture of network-based hypermedia. This hybrid style is combined with
other constraints (most notably the uniform interface constraint) to form the
REST architectural style. REST = LCODC$SS + Uniform Interface"
Would this be an accurate summary of REST from Roy's thesis? Which way should I
explain REST to my students - I suspect both. Note that I will use [1] in any
event as I view it as an excellent presentation by Stefan.
Thanks,
Sean.
[1] Stefan
Tilkov, http://wiki.parleys.com/display/PARLEYS/Home#talk=31817742;slide=13;
On Fri, Sep 17, 2010 at 5:08 PM, bryan_w_taylor <bryan_w_taylor@...> wrote: > > > --- In rest-discuss@yahoogroups.com, Mike Kelly <mike@...> wrote: >> As long as the URIs are valid and the query URIs can be constructed by >> the hypermedia in use then it doesn't matter at all. > > Right - HATEOAS applies so that clients shouldn't need to know the idioms. > > I'm looking at it from the service implementation side. What idioms should I recommend to service creators so that their task is simplified and we can streamline their implementation complexity. Sure, in terms of URI patterns the key "idioms" are derived from the limitations of the hypertext in use. I.e. if your system is driven by HTML you should adopt the "?key=value&..." query part pattern it specifies ( http://www.w3.org/TR/html401/interact/forms.html#h-17.13.1 ). In contrast, a system revolving around hypertext types that use URI templates would be much more liberal. The mapping of URI to Resource should be trivial - if unnecessarily specifying URI patterns significantly reduces complexity, then the tools you have probably aren't fit for purpose. >> The only benefit you will get from rules/convention like this >> is being able to establish a reusable library for routing on >> the server side a la Rails' routing DSL - the benefit of which >> is questionable, and invites bad, non-hypertext practices on >> the client side. > > I don't understand why such tooling would make any more or less likely to diverge from being truly hypertext driven. If your clients become aware that you are using patterns and your URIs start to appear more transparent to them, there's a higher likelihood they will start generating URIs instead of just following the hyperlinks. It doesn't necessarily cause that behaviour - it just invites it and makes it more likely to happen. Removing that potential is probably a good thing in a distributed system. 
> > This is a common recurring pattern and not having to have N services solve it N times from scratch seems very beneficial to me in a company setting where time to deliver and the total productivity of the entire software development effort are key. > Sure. imo - URI patterns are not the right place to look for productivity gains, beyond enabling the handling of patterns enforced by the hypertext in your system. Cheers, Mike
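The contrast drawn above - HTML forms constrain you to `?key=value&...` queries, while URI-template-based hypertext lets the server advertise richer URI shapes for the client to expand - can be sketched with a toy expander. This handles only simple `{name}` variables, a small subset of RFC 6570; the template URI is illustrative.

```python
# Toy URI-template expander: the client never invents URI structure,
# it only fills in variables the hypertext told it about.

import urllib.parse

def expand(template, variables):
    """Expand simple {var} expressions in a URI template."""
    out = template
    for name, value in variables.items():
        out = out.replace("{" + name + "}",
                          urllib.parse.quote(str(value), safe=""))
    return out

# An HTML-form-style query shape the server might advertise:
form_style = "http://animalsRus.com/animals?owner={owner}&type={type}"
```

The point stands either way: whether the shape is a form query or a template, the client learns it from the hypermedia rather than from out-of-band URI-pattern conventions.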
Hello Sean. I work in academia too, but I teach Software Architecture fundamentals. So, I mention REST simply as an example in many of the class sessions, and I have one session devoted to explaining the constraints and to dispelling the idea of REST as an easy Web Service engine. But I do not start with either of the two explanations you just mentioned. I actually start with Chapter 4. My students already know what an architectural style is, and also many of the properties. Chapter 4 explains what properties and requirements the Web had. It helps a lot to understand why the constraints exist, and also helps to build up criteria for choosing REST for an app or not. BTW, chapter 2 (2.3 Architectural Properties of Key Interest) may be confused with the actual properties of the Web. Be careful with that. Section 2.3 lists the properties that are somehow affected by the styles in chapter 3, and they are properties that are generally important for networked applications. The Web is just one particular type of networked application: it is a large distributed hypermedia system. The particular requirements (and the ones REST in chapter 5 tries to fulfill) are in chapter 4. To me, the dissertation is an excellent real-life example of architecting, following a clear methodology to generate a style, and that methodology can be applied to other applications -- but that doesn't mean all my next applications should be REST. Roy gives an example in the conclusions: REST is optimized for large-grain hypermedia transfer, so if you have a computationally intensive system, REST may not be for you. In other words, I use REST as an analytical example. I try not to teach it as a set of rules and constraints. Cheers! William Martinez. --- In rest-discuss@yahoogroups.com, Sean Kennedy <seandkennedy@...> wrote: > > Hi, > I work in academia and do my best to inform students as to the benefits of > RESTful designs. 
When I first came across REST it was explained as a style that > follows the following constraints [1]: > > * Give everything an id > * Link things to each other > * Use standard methods > * Multiple representations > * Stateless communication > I read Roy's thesis over the summer and my summary of it is as follows: > > "REST is a hybrid style derived from other network-based architectural styles. > The use of an architectural style applies the associated constraints on the > system. Each constraint induces certain properties e.g. simplicity and > scalability. Thus, a style applies (it’s) constraints, which induce certain > properties. > > > Fielding defines the properties of key interest when considering the target > architecture of network-based hypermedia (the Web); for example: scalability, > simplicity, visibility and independent evolvability. Fielding then evaluates > several common network-based architectural styles (e.g. client-server) for the > properties they would induce. Fielding then derives REST by applying the styles > that induce the properties he requires. To do this, Fielding firstly defines the > “null” style i.e. a style with no constraints at all. Fielding then adds certain > pre-defined styles, which induce the desired properties for the target > architecture of network-based hypermedia. This hybrid style is combined with > other constraints (most notably the uniform interface constraint to form the > REST architectural style. REST = LCODC$SS + Uniform Interface" > > Would this be an accurate summary of REST from Roys' thesis? Which way should I > explain REST to my students - I suspect both. Note that I will use [1] in any > event as I view it as an excellent presentation by Stefan. > > Thanks, > Sean. > > [1] Stefan > Tilkov, http://wiki.parleys.com/display/PARLEYS/Home#talk=31817742;slide=13; >
<snip> In other words, I use REST as an analytical example. I try not to teach it as a set of rules and constrains. </snip> +1 mca http://amundsen.com/blog/ http://mamund.com/foaf.rdf#me On Sun, Sep 19, 2010 at 13:47, William Martinez Pomares <wmartinez@...> wrote: > Hello Sean. > I work on academia too. But I teach Software Architecture fundamentals. > > So, I mention REST simple as an example in many of the class sessions, and I have one just devoted to explain the constrains and actually to wipe the idea of REST as an easy Web Service engine. > > But, I do not start in any of the two explanations you just mentioned. I actually start with Chapter 4. My students already know what an architectural style is and also many of the properties. Chapter 4 explains what properties and requirements the Web had. It helps a lot to understand why the constrains, and also helps to build up criteria to choose REST for an app or not. > > BTW, chapter 2 ( 2.3 Architectural Properties of Key Interest) may be confused with the actual properties of the web. Be careful with that. Section 2.3 lists the properties that are somehow affected by the styles in chapter 3, and they are properties that are generally important for Networked applications. The web is just one particular type of networked application, it is a large distributed hypermedia system. The particular requirements (and the ones REST in chapter 5 tries to fulfill) are in chapter 4. > > To me, the dissertation is an excellent real life example of architecting, following a clear methodology to generate a style, and that can be applied to other applications, but that doesn't mean all my next applications should be REST. Roy gives an example in the conclusions, REST is optimized for large grain hypermedia transfer, if you have a computational intensive system, REST may not be for you. > > In other words, I use REST as an analytical example. I try not to teach it as a set of rules and constrains. > Cheers! > > William Martinez. 
> > --- In rest-discuss@yahoogroups.com, Sean Kennedy <seandkennedy@...> wrote: >> >> Hi, >> I work in academia and do my best to inform students as to the benefits of >> RESTful designs. When I first came across REST it was explained as a style that >> follows the following constraints [1]: >> >> * Give everything an id >> * Link things to each other >> * Use standard methods >> * Multiple representations >> * Stateless communication >> I read Roy's thesis over the summer and my summary of it is as follows: >> >> "REST is a hybrid style derived from other network-based architectural styles. >> The use of an architectural style applies the associated constraints on the >> system. Each constraint induces certain properties e.g. simplicity and >> scalability. Thus, a style applies (it’s) constraints, which induce certain >> properties. >> >> >> Fielding defines the properties of key interest when considering the target >> architecture of network-based hypermedia (the Web); for example: scalability, >> simplicity, visibility and independent evolvability. Fielding then evaluates >> several common network-based architectural styles (e.g. client-server) for the >> properties they would induce. Fielding then derives REST by applying the styles >> that induce the properties he requires. To do this, Fielding firstly defines the >> “null style i.e. a style with no constraints at all. Fielding then adds certain >> pre-defined styles, which induce the desired properties for the target >> architecture of network-based hypermedia. This hybrid style is combined with >> other constraints (most notably the uniform interface constraint to form the >> REST architectural style. REST = LCODC$SS + Uniform Interface" >> >> Would this be an accurate summary of REST from Roys' thesis? Which way should I >> explain REST to my students - I suspect both. Note that I will use [1] in any >> event as I view it as an excellent presentation by Stefan. >> >> Thanks, >> Sean. 
>> >> [1] Stefan >> Tilkov, http://wiki.parleys.com/display/PARLEYS/Home#talk=31817742;slide=13; >>
On Sep 19, 2010, at 10:47 AM, William Martinez Pomares wrote: > Hello Sean. > I work on academia too. But I teach Software Architecture fundamentals. > > So, I mention REST simple as an example in many of the class sessions, and I have one just devoted to explain the constrains and actually to wipe the idea of REST as an easy Web Service engine. > > But, I do not start in any of the two explanations you just mentioned. I actually start with Chapter 4. My students already know what an architectural style is and also many of the properties. Chapter 4 explains what properties and requirements the Web had. It helps a lot to understand why the constrains, and also helps to build up criteria to choose REST for an app or not. > > BTW, chapter 2 ( 2.3 Architectural Properties of Key Interest) may be confused with the actual properties of the web. Be careful with that. Section 2.3 lists the properties that are somehow affected by the styles in chapter 3, and they are properties that are generally important for Networked applications. The web is just one particular type of networked application, it is a large distributed hypermedia system. The particular requirements (and the ones REST in chapter 5 tries to fulfill) are in chapter 4. Yes, and you should note that I only included those properties that are later used in the explanations -- there are many more out there in the wild, just as there are more potential constraints (and styles). I only had a finite time to finish writing. Last year, I found (via a link from Mark Nottingham) an excellent description of the design process. Unfortunately, I can't find the book now (moving sucks). It is in the intro pages of the really big book by Charles & Ray Eames: http://www.amazon.com/Eames-Design-John-Neuhart/dp/0810908794 and we are fortunate that Amazon's "look inside" feature includes it (see pages 13-15). 
> To me, the dissertation is an excellent real life example of architecting, following a clear methodology to generate a style, and that can be applied to other applications, but that doesn't mean all my next applications should be REST. Roy gives an example in the conclusions, REST is optimized for large grain hypermedia transfer, if you have a computational intensive system, REST may not be for you. > > In other words, I use REST as an analytical example. I try not to teach it as a set of rules and constrains. > Cheers! > > William Martinez. Excellent summary. Cheers, ....Roy
Agree that the Eames interview is one pearl of wisdom after another. The pdf of the interview is available here: http://bit.ly/bQlQ1e . *Q: To whom does design address itself: to the greatest number (the masses)? to the specialists or the enlightened amateur? To a privileged social class? A: To the need.* -- Nick On Sun, Sep 19, 2010 at 4:26 PM, Roy T. Fielding <fielding@...> wrote: > On Sep 19, 2010, at 10:47 AM, William Martinez Pomares wrote: > > > Hello Sean. > > I work on academia too. But I teach Software Architecture fundamentals. > > > > So, I mention REST simple as an example in many of the class sessions, > and I have one just devoted to explain the constrains and actually to wipe > the idea of REST as an easy Web Service engine. > > > > But, I do not start in any of the two explanations you just mentioned. I > actually start with Chapter 4. My students already know what an > architectural style is and also many of the properties. Chapter 4 explains > what properties and requirements the Web had. It helps a lot to understand > why the constrains, and also helps to build up criteria to choose REST for > an app or not. > > > > BTW, chapter 2 ( 2.3 Architectural Properties of Key Interest) may be > confused with the actual properties of the web. Be careful with that. > Section 2.3 lists the properties that are somehow affected by the styles in > chapter 3, and they are properties that are generally important for > Networked applications. The web is just one particular type of networked > application, it is a large distributed hypermedia system. The particular > requirements (and the ones REST in chapter 5 tries to fulfill) are in > chapter 4. 
> > Yes, and you should note that I only included those properties that are > later > used in the explanations -- there are many more out there in the wild, just > as there are more potential constraints (and styles). I only had a finite > time to finish writing. > > Last year, I found (via a link from Mark Nottingham) an excellent > description > of the design process. Unfortunately, I can't find the book now (moving > sucks). > It is in the intro pages of the really big book by Charles & Ray Eames: > > http://www.amazon.com/Eames-Design-John-Neuhart/dp/0810908794 > > and we are fortunate that Amazon's "look inside" feature includes it > (see pages 13-15). > > > To me, the dissertation is an excellent real life example of > architecting, following a clear methodology to generate a style, and that > can be applied to other applications, but that doesn't mean all my next > applications should be REST. Roy gives an example in the conclusions, REST > is optimized for large grain hypermedia transfer, if you have a > computational intensive system, REST may not be for you. > > > > In other words, I use REST as an analytical example. I try not to teach > it as a set of rules and constrains. > > Cheers! > > > > William Martinez. > > Excellent summary. Cheers, > > ....Roy
On Sep 19, 2010, at 2:11 PM, Nick Gall wrote: > Agree that the Eames interview is one pearl of wisdom after another. The pdf of the interview is available here: http://bit.ly/bQlQ1e . > > Q: To whom does design address itself: to the > greatest number (the masses)? to the specialists > or the enlightened amateur? To a privileged > social class? > A: To the need. Yes, but it isn't complete without his diagram on page 13. ....Roy
"William Martinez Pomares" wrote: > > I work on academia too. But I teach Software Architecture > fundamentals. > > In other words, I use REST as an analytical example. I try not to > teach it as a set of rules and constraints. > This is a reasonable decision for your context, no arguments from me, in fact it's the same approach taken by the "Software Architecture: Foundations, Theory and Practice" textbook. But, I don't see how it's possible to enlighten anyone as to how to implement a REST system for the Web, except by teaching the constraints -- which is pragmatism, not dogma. I use REST as a guideline to inform design decisions -- one of the unwritten benefits of the style is simply having such a guideline, and I think that's worth teaching. The rest of this post is by way of example, to illustrate my point... ---------------------------------- I've posted a demo of my efforts to design an integrated wiki/weblog/ forum system. The design choice that leads me to REST, is the choice to go with a document-oriented distributed hypermedia solution. REST's uniform interface constrains me to choose processing models which best fit my needs from those which are well-known (or define a new one, and start down the road to standardizing it). I decided that a hierarchical collection of Atom Feed and Entry documents fits best. This means my smallest unit of data is the Atom Entry. But, a mainstay of my user interface will be the expansion of a displayed summary into the display of the entire post. This is a user goal, it doesn't matter if the request is synchronous or asynchronous, the request is for an Atom Entry document. Is that the most-efficient solution? No. It would be more efficient to define a subresource, like so: /11.atom?xptr=(//content) But, what media type do I use for that response? It's just a snippet of HTML, not an HTML document, so text/html and application/xhtml+xml are out, and it obviously isn't Atom any more. Leaving me with application/xml... 
which doesn't define any of the desired semantics of the payload. When that content is contained within an Atom Entry, it has well-known semantics like, this is the content of an entry with(out) a summary, which links in to the rest of the system using standard link relations, has various metadata like author, and <div> content defers to the XHTML processing model. This context is lost when the standalone content is assigned a URI, making that message fail the self-descriptive messaging and hypertext constraints (in the case of my example). More efficient, sure, but also less visible -- "The trade-off, though, is that a uniform interface degrades efficiency, since information is transferred in a standardized form rather than one which is specific to an application's needs." This choice doesn't have any impact on user-perceived performance. When the user goal is to expand a summary to reveal the full content, what matters to user-perceived performance is the latency until the request begins to render, not the latency until the request finishes rendering. Shaving a few bytes off that transfer size should make no more difference to user-perceived performance than the natural variation in content size from one entry to the next (i.e. none). Letting the uniform interface constraints inform my design decisions, means I have to weigh the benefits of serving standalone content against the cost of standardizing such a solution. In practice, though, it doesn't save enough bandwidth to outweigh the cost of the added complexity, let alone the effort involved in standardization. So whatever tiny benefits may be derived from going outside the bounds of the uniform interface, aren't worth the tradeoff, and don't justify pursuing standardization of some sort of 'subatomic' media type. Thus, I've chosen to remain within the uniform interface. 
This is exactly the sort of real-world design decision REST is meant to pragmatically inform, by allowing it to be analyzed against an idealized model. It's up to the developer to determine the benefits and consequences of developing within the uniform interface constraint. I've also defined the following service for my system: /11.atom?xptr=(//@thr:count) The response is an integer, so application/json is a best-fit choice. But there is no context and no link relations, so even though it may be self-descriptive, it fails the hypertext constraint. However, my purpose for doing this is to allow greater cache retention where the representation includes post count, without impacting the user experience -- no such benefits exist for the standalone content example. So the failure to apply the uniform interface constraint here yields a benefit to my system, but the whole thing is too insignificant to warrant a standardization effort -- leading to my decision not to care that this aspect of my system lies outside the uniform interface. Yet I still find it beneficial to understand that I have this REST mismatch in my system, even if I don't care. Just like it's beneficial to understand how all HTTP messaging fails to be self-descriptive, even though we don't care (because there's nothing for it, as yet). My point is, there is pragmatic value in teaching REST by explaining how to apply its constraints in practice. Not if, like you say, the goal is to use REST as an example of an architectural style for the purpose of teaching software architecture -- although it would help to point out that REST is useful as a development guideline rather than just as a label (unlike, say, 'client-server') -- but definitely, if the goal is the implementation of a REST system on the Web. -Eric
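Eric's two 'xptr' subresources can be made concrete with a small sketch. This uses Python's standard ElementTree; the entry markup, reply count, and URIs are invented for illustration, and the xptr queries are approximated with plain element/attribute lookups rather than a real XPointer implementation:

```python
import xml.etree.ElementTree as ET

ATOM = "http://www.w3.org/2005/Atom"
THR = "http://purl.org/syndication/thread/1.0"

# A hypothetical entry served at /11.atom (markup invented for illustration).
entry_xml = f"""<entry xmlns="{ATOM}" xmlns:thr="{THR}">
  <title>Example post</title>
  <link rel="replies" href="/11/comments.atom" thr:count="7"/>
  <content type="xhtml">
    <div xmlns="http://www.w3.org/1999/xhtml"><p>Full post body.</p></div>
  </content>
</entry>"""

entry = ET.fromstring(entry_xml)

# xptr=(//content): the standalone snippet -- the inner XHTML <div>,
# stripped of the Atom context that gave it its semantics.
content = entry.find(f"{{{ATOM}}}content")
snippet = ET.tostring(content[0], encoding="unicode")

# xptr=(//@thr:count): just the reply count, servable as application/json.
count = int(entry.find(f"{{{ATOM}}}link").get(f"{{{THR}}}count"))
```

Serving `snippet` on its own is exactly the case where no registered media type fits; serving `count` as JSON is the cache-retention case Eric decides to live with.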
Perhaps you would be better off mapping roles instead of treating your domain as schema? On Thu, Sep 16, 2010 at 6:29 PM, bryan_w_taylor <bryan_w_taylor@yahoo.com> wrote: > > > I'm trying to help my company develop standard idioms for identifying > resources so that service implementation teams don't have to rehash the same > arguments every time. One issue that seems to be giving us grief is how to > map an inheritance hierarchy. > > Suppose I am a zoo or veterinary clinic and I am modelling resources for > animals. My object and XML representations involve a hierarchy of > specializations: > Animal > |- Mammal > |- Dog > |- Cat > |- Reptile > |- Snake > |- Lizard > > Anyway, suppose I store information about individual animals in my system > and I want to present operations on them in a RESTful way. > > My thinking is that I should give them a canonical representation like: > http://animalsRus.com/animals/123 > > I should **NOT** additionally create aliased URIs like: > http://animalsRus.com/dogs/123 > for several reasons. I may have to create the animal before I know it is a > dog, or I may have to correct a data entry error. > > I am hesitant to put the inheritance chain in the URI like > http://animalsRus.com/animals/mammals/dogs/123 > because refactoring of the class hierarchy shouldn't create tension between > it and the URI structure. I view the class hierarchy as an implementation > detail, but it does refer to concepts that are meaningful in the business > domain. > > I would generally search for any animal like this: > http://animalsRus.com/animals?owner=Joe > > But I might allow searching just for dogs like so: > http://animalsRus.com/dogs?owner=Joe > > Does this seem reasonable? 
What approaches have others taken to domain > nouns that are organized into an inheritance tree? > > > -- Grammar and syntax errors have been included to make sure I have your attention
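Bryan's scheme -- one canonical URI per animal, with the type-specific collections implemented as filtered searches rather than aliased resources -- can be sketched as follows. This is a toy in-memory store; all identifiers and data are invented for the example:

```python
# Toy in-memory store: /animals/{id} is the one canonical URI per animal.
ANIMALS = {
    "123": {"species": "dog", "owner": "Joe"},
    "124": {"species": "cat", "owner": "Joe"},
    "125": {"species": "snake", "owner": "Sue"},
}

def canonical_uri(animal_id):
    # The only URI that identifies the animal itself; it survives both
    # reclassification (a dog/cat data-entry fix) and refactoring of the
    # class hierarchy, since neither appears in the URI.
    return f"/animals/{animal_id}"

def search(species=None, owner=None):
    # /dogs?owner=Joe is modelled as a query over the canonical
    # collection, not as an aliased /dogs/123 resource.
    return [canonical_uri(aid) for aid, a in sorted(ANIMALS.items())
            if (species is None or a["species"] == species)
            and (owner is None or a["owner"] == owner)]
```

The search results link back to the canonical URIs, so a client never depends on the `/dogs/...` namespace for identity.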
Agreed. Here's the diagram: http://bit.ly/cpLSuh . It's part of a nice Library of Congress site about the Eameses: http://www.loc.gov/exhibits/eames/eameshome.html . -- Nick Nick Gall Phone: +1.781.608.5871 Twitter: ironick AOL IM: Nicholas Gall Yahoo IM: nick_gall_1117 MSN IM: (same as email) Google Talk: (same as email) Email: nick.gall AT-SIGN gmail DOT com Weblog: http://ironick.typepad.com/ironick/ On Sun, Sep 19, 2010 at 5:27 PM, Roy T. Fielding <fielding@...> wrote: > On Sep 19, 2010, at 2:11 PM, Nick Gall wrote: > > > Agree that the Eames interview is one pearl of wisdom after another. The > pdf of the interview is available here: http://bit.ly/bQlQ1e . > > > > Q: To whom does design address itself: to the > > greatest number (the masses)? to the specialists > > or the enlightened amateur? To a privileged > > social class? > > A: To the need. > > Yes, but it isn't complete without his diagram on page 13. > > ....Roy > >
Totally agree, Eric. It depends on what you want to teach, and the level. Actually, we go through all the constraints to see why each is there, what is the benefit, and what is the trade-off. William Martinez. --- In rest-discuss@yahoogroups.com, "Eric J. Bowman" <eric@...> wrote: > [snip]
> > Actually, we go through all the constraints to see why each is there, > what is the benefit, and what is the trade-off. > Hopefully at a "meta" level, rather than in terms of HTTP or the Web. You're trying to teach REST at a scope which includes the development of protocols; I'm not. You can use the Web to illustrate various aspects of some constraint, whereas I can use it to delineate between what does and doesn't meet that constraint within an existing, deployed architecture. The fact that REST may be used for both purposes, rather than being restricted to use as a label for classification (client-server), is something I find interesting. If I were in academia, and had students trying to learn REST, I might ask them to describe the differences between these two documents in terms of REST: http://www.w3.org/TR/webarch/ http://www.w3.org/DesignIssues/Architecture.html I'm sure assignments like that are why my students would hate me... Both documents share a similar architectural vision for an HTTP-based Web of documents. But, in certain cases, one follows the assumptions of REST while the other challenges those assumptions. There's disagreement around both how to apply certain constraints, as well as around which constraints are even necessary, which can help illuminate what REST *is* -- a discussion which offers no pragmatic assistance to me as an implementer trying to figure out *how* to apply those constraints. -Eric (BTW, mca asked a while back about what term to use in lieu of calling the entity self-descriptive; the second link coins self-describing. Yes, in natural language these are synonyms; but technically, they have discrete definitions, neither of which is synonymous with self-documenting.)
Hello!
Some of you know that I'm working on an open source project called
'RESTx'. It's a specialized, small and stand-alone server to create new
RESTful resources and web services very quickly and easily. It
automatically produces a self-documented RESTful API for you while
it's at it.
You can find a complete, self-guided demo of the system here:
http://restxdemo.mulesoft.org/static/demo/start.html
I would like to invite you (the members of this group here) to be the
first to try out the demo and let me know what you think. If you have a
minute to spare, I would really love to get your feedback. On one hand
I'm feeling a bit nervous about what you might say, since many of you
are so knowledgeable about REST, but on the other hand I know that your
feedback would really help me to make RESTx better. Constructive
criticism is certainly welcome, and if you have something good to say
about it, I wouldn't mind hearing that either. :-)
Some background information:
* RESTx is not an application framework in the usual sense: It's
all about the individual services. Small, self-contained pieces
of code ('components') perform specific tasks (such as accessing
a DB, a custom API, a cloud service, implementing data
integration logic, etc.). You can provide new configurations for
a component in order to create a new resource (or web service),
which gets its own URI: Access that URI and the stored
configuration is applied to the component.
* You can create new resources by POSTing the parameter sets to
the component or by filling out a simple form in a web browser.
So, even non-developers can create their own specialized web
services.
* If you need custom components, you can easily write those in
either Java, Python or server-side JavaScript.
* It does support conneg, so you can see information from the
server as HTML (in a browser) or JSON (in a client application),
for example.
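As a rough sketch of how a client might exercise the two features above (creating a resource by POSTing a parameter set to a component, and conneg via the Accept header): the URIs, component name, and parameter layout here are all invented, since RESTx's actual API may differ, and the requests are only constructed, not sent.

```python
import json
from urllib.request import Request

# Hypothetical RESTx URIs and parameter names -- illustration only.
base = "http://restxdemo.mulesoft.org"

# Create a new resource by POSTing a parameter set to a component.
params = {"resource_creation_params": {"suggested_name": "my_search"},
          "params": {"query": "rest"}}
create = Request(base + "/code/SomeComponent",
                 data=json.dumps(params).encode("utf-8"),
                 headers={"Content-Type": "application/json"},
                 method="POST")

# Conneg: the same resource URI, asked for as HTML or as JSON.
as_html = Request(base + "/resource/my_search",
                  headers={"Accept": "text/html"})
as_json = Request(base + "/resource/my_search",
                  headers={"Accept": "application/json"})
```

The point of the sketch is that the stored configuration gets its own URI, and a browser and a JSON client then address that same URI, differing only in the Accept header.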
Known limitations:
* Links don't get special tags. Right now it somewhat naively
assumes that if it looks like a URI it should be a URI. I know I
need to come up with something better there.
* Currently, it doesn't support the proper HTTP caching tags and
headers yet. That will be added soon.
* I'm using generic content types, such as 'application/json'.
Anyway, so there it is. I really appreciate you taking the time to have
a quick look at it and I'm already grateful for your feedback.
Thank you very much!
Juergen
--
Juergen Brendel
RESTx - the fastest and easiest way to create RESTful web services
http://restx.org
This made me chuckle http://williamstw.blogspot.com/2010/09/rest-and-self-descriptiveness.html The thought occurs to me that perhaps media type discovery / self-description itself would be better if the type name was a URI, e.g. http://example.com/media-types/application/custom-type For backwards compatibility, the absence of a URI would indicate an implicit link to the IANA: thus application/sparql-query would implicitly link to http://www.iana.org/assignments/media-types/application/#sparql-query (although the IANA website would probably need a tweak to support this cleanly and redirect to the actual doc; so http://www.iana.org/assignments/media-types/application/sparql-query could be used) Regards, Alan Dean
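Alan's fallback rule is easy to state in code. Here is a hypothetical resolver; note the bare-type branch uses the 'tweaked' IANA URL layout he suggests, not what iana.org actually served at the time:

```python
IANA_BASE = "http://www.iana.org/assignments/media-types"

def media_type_uri(media_type):
    """Resolve a media type identifier to a dereferenceable URI, per
    Alan's proposal: a full URI names a self-registered type, while a
    bare type/subtype implicitly links into the IANA registry."""
    if media_type.startswith(("http://", "https://")):
        return media_type  # already a first-class, dereferenceable type
    main, _, sub = media_type.partition("/")
    return f"{IANA_BASE}/{main}/{sub}"
```

Under this rule every existing `Content-Type` value keeps its meaning, and only types named by a URI opt in to the distributed registry.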
Alan Dean wrote: > > The thought occurs to me that perhaps media type discovery / > self-description itself would be better if the type name was a URI, > e.g. http://example.com/media-types/application/custom-type > Self-descriptive messaging means all your headers and the data they contain may be easily understood by anybody, it isn't simply a matter of media type identifier -- the GET method is self-descriptive, the FOO method isn't, regardless of protocol. That being said, your thought has occurred to Tim Berners-Lee, also: " [A]s another example, the headers in an HTTP request which specify attributes of the object. These are defined within the scope of particular specifications. There is always pressure to extend these specifications in a flexible way. HTTP header names are generally extended arbitrarily by those doing experiments. The same can also be true of HTML elements and extension mechanisms have been proposed for both. If we look generically at the very wide space of all such metadata attribute names, we find something in which the dictionary would be so large that ad hoc arbitrary extension would be just as chaotic as central registration would be stifling. " http://www.w3.org/DesignIssues/Metadata.html It's that last bit there that's important. A central registry may indeed stifle innovation; however, ad-hoc extension defeats the goal of generic interoperability. You can't expect method=FOO to interoperate any better than if you use foo:// as your URI scheme or 'foo' as an identifier, unless they're registered, and no distributed registry mechanism has yet been defined for anything with any bearing on self- descriptiveness. More from Tim: " There is an open question as to what the process should be for formulating new URI schemes, but it is clear that to allow unfettered proliferation would be a serious mistake. 
In almost all other areas, proliferation of new designs is welcomed and the Web can be used as a distributed registry of them, but not for the case of URI schemes. " http://www.w3.org/DesignIssues/Architecture.html Clearly, there's some friction between Tim's view and Roy's view. The question is whether it's appropriate to use the Web as a distributed registry for media type identifiers. I don't believe I've offered any opinion on that (beyond Google not being a registry); I've only pointed out that there is no definition in any of the specs which supports anything other than the IANA registry for media type identifiers, therefore only IANA-registered identifiers (which point to a spec, not a 404) may be considered self-descriptive on the Web today. But, here's the problem with allowing ad-hoc extensibility: " The introduction of any other method apart from GET which has no side-effects and is simply a function of the URI is also incorrect, because the results of such an operation effectively form a separate address space, which violates the universality. " http://www.w3.org/DesignIssues/Architecture.html A distributed registry for HTTP methods would allow FOO to have the same semantics as GET, just as a distributed registry for identifiers would allow application/foo+xml to identify the same processing model as application/xhtml+xml -- which defeats the goal of generic interoperability by splitting the identifier namespace. While media types may have multiple identifiers, each processing model has a one-to-one mapping to an identifier in the IANA standards tree, which I see as the benefit of a central registry. > > For backwards compatibility, the absence of a URI would indicate an > implicit link to the IANA, i.e. 
thus application/sparql-query would > implicitly link to > http://www.iana.org/assignments/media-types/application/#sparql-query > (although the IANA website would probably need a tweak to support > this cleanly and redirect to the actual doc; so > http://www.iana.org/assignments/media-types/application/sparql-query > could be used) > This thought has also occurred to Tim: " In HTTP, the format of data is defined by a "MIME type". This formally refers to a central registry kept by IANA. However, architecturally this is an unnecessary central point of control, and there is no reason why the Web itself should not be used as a repository for new types. Indeed, a transition plan, in which unqualified MIME types are taken as relative URIs within a standard reference URI in an online MIME registry, would allow migration of MIME types to become first class objects. " http://www.w3.org/DesignIssues/Architecture.html OK, sure, nobody is saying the IANA registry is the best solution -- just that it's the only *existing* solution (no "transition plan" exists). So, how are you going to make this change? HTTP re-uses MIME. It's understood that this is sub-optimal for HTTP: " The problem with MIME syntax is that it assumes the transport is lossy, deliberately corrupting things like line breaks and content lengths. The syntax is therefore verbose and inefficient for any system not based on a lossy transport, which makes it inappropriate for HTTP. Since HTTP/1.1 has the capability to support deployment of incompatible protocols, retaining the MIME syntax won't be necessary for the next major version of HTTP, even though it will likely continue to use the many standardized protocol elements for representation metadata. " http://www.ics.uci.edu/~fielding/pubs/dissertation/evaluation.htm (Representation metadata includes media type identifier, btw.) So, are you suggesting that HTTP be changed/replaced, or that MIME be changed? 
Changing MIME to allow some other registry mechanism isn't just a change to HTTP, but a fundamental change to the Internet. My point remains that absent some successor to the IANA registry, there is no other way to meet the self-descriptive messaging constraint on the Web and *expect* widespread interoperability, not that IANA is the best-and-only alternative end-of-story -- just the current reality. What transition plan are you anti-IANA-registry folks proposing? Or do we just use Google as a registry without any transition plan or formal declaration anywhere in the specs allowing that as an option? -Eric
Eric, Bear in mind that my proposition is not the same as arbitrarily defining a new HTTP method or URI scheme, so I'm not sure that the arguments are the same. For one thing, a media type link could be followed by a human and the content perused. In any event, I'm not anti-IANA - I just think that there needs to be better support for media type extensibility to greater specificity. Feel free to blame application/xml and others for basically defining how to structure a glob of data but not what it means. Also, to be honest, I consider that saying that you can't be self-descriptive unless your specific subtype is registered with the IANA is nonsensical (I gave the example of RSS in an earlier thread, but there are plenty of other well-known unregistered types and subtypes). If the IANA were to offer some procedure by which I could easily register my application/vnd.something-specific+xml then I would be happier, but they don't, and the pragmatic reality of web dev is that absent an easy type / subtype registration process, the types simply won't get registered (which, in practice, has little or no bearing upon their utility or how well-known they actually are). I floated my idea simply because it's an obviously easy way to federate registration. I'm not sure how broad a change it implies to the existing standards stack (it may even be a very significant change for all I know), but that need not make it a bad idea (although I acknowledge it might warrant calling it impractical), and I don't see standards as requiring immutability anyway. There is work going on right now looking at how HTTP might be improved in the light of 1.1 having been around for over a decade, so it's not inconceivable that we could see an HTTP/1.2 at some point. Regards, Alan Dean On Thu, Sep 23, 2010 at 07:41, Eric J. 
Bowman <eric@...> wrote: > [snip]
On Thu, Sep 23, 2010 at 6:07 AM, Alan Dean <alan.dean@...> wrote: > > > This made me chuckle http://williamstw.blogspot.com/2010/09/rest-and-self-descriptiveness.html .... :) > The thought occurs to me that perhaps media type discovery / self-description itself would be better if the type name was a URI, e.g. http://example.com/media-types/application/custom-type Agreed: http://tech.groups.yahoo.com/group/rest-discuss/message/16360 Even if that were implemented, it wouldn't help reconcile differences of opinion on how standardisation does/should occur, i.e. 'messy' emergence vs. 'tidy' IANA ordination. Cheers, Mike
Alan Dean wrote: > This made me chuckle > http://williamstw.blogspot.com/2010/09/rest-and-self-descriptiveness.html > > The thought occurs to me that perhaps media type discovery / > self-description itself would be better if the type name was a URI, e.g. > http://example.com/media-types/application/custom-type > > For backwards compatibility, the absence of a URI would indicate an implicit > link to the IANA, i.e. thus application/sparql-query would implicitly link > to http://www.iana.org/assignments/media-types/application/#sparql-query > (although > the IANA website would probably need a tweak to support this cleanly and > redirect to the actual doc; so > http://www.iana.org/assignments/media-types/application/sparql-query could > be used) Okay, let's roll with it and say this came into effect on the 22nd September 2009, last year. Given that SPARQL has many 'needed' features missing, how many different variations of SPARQL do you think we'd have by now? Perhaps 5? 10? More? Maybe if we throw RDF into the equation too, care to hazard a guess at how many variations we'd have one year on? Each with their own version of 'named graphs', subject literals, and all kinds of other things. Maybe we should scope out of sem web territory and consider JSON: do you think there may be a couple of media types minted for a JSON representation of a 'User' yet? Any other objects that somebody somewhere would think it's a good idea to define as a media type and mint a new URI for? Maybe if we throw in XML and HTML too; in fact, every existing media type from every domain, and every expert who thinks the X media type they use often is lacking something, and all of those who think there's a better way to do Y, and all those people who simply don't understand Z yet so have invented FOO; perhaps we should also consider all of those vendors who'd like some lock-in, so they've moved to some custom media types and minted a few URIs, and of course are only supporting those media types now. 
If you think for a second that you may just create a new media type and mint a URI for it, then you can knock the number of media types you think there will be one year on up by (at least) one, then add on a couple for every person on this list you think might be wanting to create a new media type; quite sure you could get +100 out of that :) Might be fun to say this came into effect on the 22nd September 2005! Should be about the same but 5 years on. Wonder if any of those domains have dropped yet.. wonder which ones the major vendors are supporting.. think any little payments might have been made to tool providers to get support for X media type in there? Wonder how many of these media type authors have minted a new URI for each improved version of their media type. Wonder if anybody's supporting that media type you created yet, and wonder if 5 years on you're still supporting it. Actually, tbh, I'm mainly wondering just how many of those HTTP message bodies I can understand. There are probably considerably more factors that I haven't considered here, most notably what do you get when you dereference one of these URIs that identify a media type, oh, and what's the media type of the thing you do get when you dereference it - interesting for sure. Best, Nathan ps: of course I really don't mean to come over as rude, just thought it may be interesting to nudge people to consider what might happen by asking a series of questions :)
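As an aside for readers following the proposal: Alan's backwards-compatibility rule (a bare media type name implying a registry URI) is mechanical enough to sketch. The function below is illustrative only; the URL layout is the one proposed in Alan's message, not a documented IANA endpoint.

```python
def implied_media_type_uri(media_type: str) -> str:
    """Map a bare type name like 'application/sparql-query' to the
    registry URI it would imply under the proposed fallback rule
    (hypothetical URL layout from this thread)."""
    top_level, _, subtype = media_type.partition("/")
    if not subtype:
        raise ValueError("not a media type: %r" % media_type)
    return ("http://www.iana.org/assignments/media-types/"
            "%s/%s" % (top_level, subtype))
```

So application/sparql-query would imply http://www.iana.org/assignments/media-types/application/sparql-query, while a type name that is already a URI would bypass the fallback entirely.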
Alan Dean wrote: > > Bear in mind that my proposition is not the same as arbitrarily > defining a new HTTP method or URI scheme, so I'm not sure that the > arguments are the same. > It's all the same constraint. The argument is that you can't just call anything that gets used self-descriptive by virtue of its being used -- that may be self-descriptive in the natural-language sense, but it isn't self-descriptive in the normative, technical sense. There is no self-descriptive "enough" -- anything short of it is not self-descriptive at all. > > For one thing, a media type link could be followed by a human and the > content perused. > So can a link relation link. But how is that standardized? REST isn't meant to be a label that applies to any HTTP solution that works, or even that's scalable; it's meant to describe a subset of solutions that work by using defined standards. Changing how standards are defined is a reasonable debate; it's even reasonable to debate whether standardization is necessary in practice. It isn't reasonable to argue that nonstandardized solutions are acceptable in REST, because that would be some other architectural style, by definition. Unregistered identifiers aren't standardized, therefore they aren't the REST style, even in "spirit." > > In any event, I'm not anti-IANA - I just think that there needs to be > better support for media type extensibility to greater specificity. > I don't disagree with that at all. I can think of half a dozen reasons off the top of my head for why the IANA registry is broken and needs to be fixed. > > Feel free to blame application/xml and others for basically defining > how to structure a glob of data but not what it means. Also, to be > honest, I consider that saying that you can't be self-descriptive > unless your specific subtype is registered with the IANA is > nonsensical (I gave the example of RSS in an earlier thread, but > there are plenty of other well-known unregistered types and subtypes). 
> But that requires redefining the meaning of self-descriptive. RFC 3023 defines +xml, meaning such types *may* be registered; it doesn't mean any +xml type *is* registered, let alone standardized. It's nonsensical to me to treat this as a loophole around registration, or to state that such an unregistered type is self-descriptive -- self-descriptive means registered, technically speaking. Using +xml isn't some sort of automatic standard, or even registered. By stating that registration is unnecessary, the bar for self-descriptiveness is raised to ubiquity, i.e. tied to uptake, which isn't the point of the constraint. Self-descriptive has nothing to do with uptake, which is reasonable, because otherwise we have to determine at what point an identifier becomes ubiquitous enough to be deemed self-descriptive (which requires redefining self-descriptive, first). Without redefining the term, there is no way that any unregistered media type, no matter how ubiquitous, can possibly qualify -- that isn't what the term means. My cousin isn't my brother even if we're so close that he's my bro -- technically speaking. > > If the IANA were to offer some procedure by which I could easily > register my application/vnd.something-specific+xml then I would be > happier, but they don't... > Agreed, but this problem may be solved without changing the definition of self-descriptive such that we don't need to solve this problem. > > ...and the pragmatic reality of web dev is that absent an easy type / > subtype registration process, the types simply won't get registered > (which, in practice, has little or no bearing upon their utility or > how well-known they actually are). > Just like self-descriptive says nothing about utility or ubiquity. If the mechanism for self-descriptiveness is broken, then it needs fixing. Abandoning the constraint because it's too hard in practice isn't the solution. 
> > I floated my idea simply because it's an obviously easy way to > federate registration. I'm not sure how broad a change it implies to > the existing standards stack (it may even be a very significant > change for all I know) but that need not necessitate calling it a bad > idea (although I acknowledge it might warrant calling it impractical) > but I don't see standards as requiring immutability anyway. There is > work going on right now looking at how HTTP might be improved in the > light of 1.1 having been around for over a decade, so it's not > inconceivable that we could see an HTTP/1.2 at some point. > I'm not saying it's a bad idea; I'm pushing back against the notion that unregistered identifiers are self-descriptive because registries serve no purpose, in which case anything and everything is self-descriptive, so what's the point of the constraint? By definition, such a change would not be HTTP/1.2, because replacing the IANA registry requires a new major version number. HTTP/1.2, or even HTTPbis, could be changed to define some other registry in addition to IANA. I'm trying to figure out what you mean to change, HTTP or MIME? Either way may be a very good idea, I'm not passing judgment -- only saying that self-descriptive requires registration, so declaring the IANA registry broken and not using it doesn't meet the constraint. -Eric
Nathan, Thank you :-) excellent questions. My initial answer is: guess what you have now - exactly that situation, only it isn't surfaced as a media type issue. Instead, it is surfaced as opaque representations on the wire with no easy means to work out what the data means, coupled with (in certain cases) magic URIs. I agree with you that my proposition would imply a significant inflation (even hyper-inflation) of media types. However, from the babel of types I believe that there would arise a set of 'trusted' media type repositories. One may well be the W3C itself; others may be industry groups (such as OTA for travel) or open source groups and also, of course, commercial entities such as Google, Microsoft, Facebook, etc. Serendipity of reuse would entirely depend upon the rate of adoption of any one media type. This would be up to the (for lack of a better phrase) democratic choice of the 'market' for media types and services. I see RSS as an example of this but there are plenty of others such as sitemaps or OpenSearch. Regards, Alan Dean On Thu, Sep 23, 2010 at 10:51, Nathan <nathan@...> wrote: > Okay, let's roll with it and say this came into effect on the 22nd > September 2009, last year. [...]
Nathan wrote: > > There are probably considerably more factors that I haven't > considered here, most notably what do you get when you dereference > one of these URIs that identify a media type, oh, and what's the > media type of the thing you do get when you dereference it - > interesting for sure. > Also interesting are IETF's reasons for rejecting the RFC for the Link header, for attempting to specify an XML-based IANA registry... -Eric
Eric, I'm not familiar with that - do you have a link? Regards, Alan Dean On Thu, Sep 23, 2010 at 11:38, Eric J. Bowman <eric@...> wrote: > Nathan wrote: > > > > There are probably considerably more factors that I haven't > > considered here, most notably what do you get when you dereference > > one of these URIs that identify a media type, oh, and what's the > > media type of the thing you do get when you dereference it - > > interesting for sure. > > > > Also interesting are IETF's reasons for rejecting the RFC for the Link > header, for attempting to specify an XML-based IANA registry... > > -Eric >
Eric, The thought occurs to me that we are circling around the question of "what is [to be] standardised". You are, I believe, coming from the perspective that the only way to fulfil the standardisation necessary to comply with the constraint is to have a registry and that at the moment there is only one. My apologies if I have unintentionally misrepresented you. I am coming from the perspective that instead of standardising a registry, it would be better to standardise discovery and that could fulfil the constraint sufficiently. For the avoidance of doubt, I am not trying to restart the theological argument over the status quo. Regards, Alan Dean On Thu, Sep 23, 2010 at 11:25, Eric J. Bowman <eric@...> wrote: > It's all the same constraint. The argument is that you can't just call > anything that gets used self-descriptive by virtue of its being used [...]
Alan Dean wrote: > My initial answer is: guess what you have now - exactly that situation, only > it isn't surfaced as a media type issue. Instead, it is surfaced as opaque > representations on the wire with no easy means to work out what the data > means, coupled with (in certain cases) magic URIs. Before this spirals into a 100-post thread, can we get right back to roots and clearly define what the problem is with the current media types / IANA registry setup? Best, Nathan
Nathan, I have no wish to replay the 100-post thread either :-) For me, I would like to be able to send the following messages:

GET http://example.com/feed
Accept: application/atom+xml, application/rss+xml; q=0.5, application/xml; q=0.1

or

GET http://example.com/
Accept: application/sitemaps+xml

or

GET http://example.com/john-doe
Accept: application/vnd.contact+json, text/x-vcard; q=0.5

To me these are all acceptable HTTP and acceptable from a REST POV (as I understand his position, Eric would say that they break the self-descriptive constraint because they employ unregistered types). From my perspective, we would have a better "self-descriptive web" if there were lower barriers to making media types like application/rss+xml, application/sitemaps+xml and application/vnd.contact+json discoverable. If I want to find out what they mean right now I have no choice but to go to [insert preferred search engine]. I suspect that this is in no small part due to the high barrier to registration in place right now. Whilst this might well make sense for "fundamental" types like application/xml or application/atom+xml, it makes far less sense for domain-specific dialects of the fundamental types. Regards, Alan Dean On Thu, Sep 23, 2010 at 11:55, Nathan <nathan@...> wrote: > Before this spirals into a 100-post thread, can we get right back to roots > and clearly define what the problem is with the current media types / IANA > registry setup? [...]
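A server deciding between the types in Alan's first Accept header would rank them by q-value. A minimal sketch of that ranking, assuming only well-formed "type;q=..." entries and ignoring the wildcard and specificity rules real HTTP content negotiation also applies:

```python
def rank_accept(accept: str) -> list:
    """Rank the media ranges of an Accept header by q-value,
    highest preference first (q defaults to 1.0)."""
    ranked = []
    for part in accept.split(","):
        media_range, _, params = part.strip().partition(";")
        q = 1.0
        for param in params.split(";"):
            name, _, value = param.strip().partition("=")
            if name == "q" and value:
                q = float(value)
        ranked.append((media_range.strip(), q))
    return sorted(ranked, key=lambda pair: pair[1], reverse=True)
```

Fed Alan's first header, this prefers application/atom+xml, then application/rss+xml at 0.5, then application/xml at 0.1; the self-descriptiveness debate is about what those names mean, not how they are negotiated.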
Alan Dean wrote: > > > Also interesting are IETF's reasons for rejecting the RFC for the > > Link header, for attempting to specify an XML-based IANA registry... > > I'm not familiar with that - do you have a link? > "[T]he specification of a one-off registry XML format doesn't work well with IANA's toolchain for managing registries." http://lists.w3.org/Archives/Public/ietf-http-wg/2010JulSep/0385.html I disagree with the conclusion that the RFC needs changing, since I believe IANA needs a new toolchain for managing registries anyway... But, until we do something to fix the whole IANA registry nightmare, it makes more sense to change the RFC to match IANA custom and practice. -Eric
Alan Dean wrote: > > To me these are both acceptable HTTP and acceptable from a REST POV > (As I understand his position, Eric would say that they break the > self-descriptive constraint because they employ unregistered types). > Why would this be what *I* say, and not what REST/Roy says? There can be no doubt that self-descriptive = registered and points to a spec. > > application/vnd.contact+json > At least RFC 3023 defines +xml syntax -- +json is defined where, again? -Eric
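The +xml suffix Eric cites lets a generic processor recover the underlying syntax even for an unknown subtype. A sketch of that suffix handling; note that, as the thread points out, treating +json the same way was at this time convention rather than a registered rule:

```python
def structural_syntax(media_type: str):
    """Return the structured-syntax suffix ('xml', 'json', ...) of a
    media type, or None when the subtype carries no suffix."""
    subtype = media_type.split(";")[0].strip().split("/")[-1]
    if "+" in subtype:
        return subtype.rsplit("+", 1)[1]
    return None
```

A consumer that doesn't know application/vnd.contact+json could at least parse it as JSON, which is exactly the "load it into a DOM"-level understanding the thread argues is not self-description.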
Alan Dean wrote: > > > Also interesting are IETF's reasons for rejecting the RFC for the > > Link header, for attempting to specify an XML-based IANA registry... > > I'm not familiar with that - do you have a link? > Actually, I should have said "not yet approving" instead of "rejecting". -Eric (Apologies for duplicate messages -- blame Yahoo, not me!)
Jan, Do we not already have [unregistered] media type sprawl? What's worse is the exceedingly common usage of the utterly generic application/xml type, which gives zero information about how to handle the payload other than "you can load it into a DOM" and prevents you from discovering whether a URI supports some specific dialect that you care about; maybe OData for example. If the barrier isn't too high, why don't the likes of Google, Microsoft or Facebook feel that it is low enough to register the dialects they have created? BTW: For clarity: what I care about is a good solution to the dialect discovery problem here - I'm not wedded to any particular solution. Regards, Alan Dean On Thu, Sep 23, 2010 at 12:46, Jan Algermissen <algermissen1971@...> wrote: > I'd go with Mark's position (See "Problem three") here. [...]
Eric, I promise it was not intended to sound ad hominem. I simply referenced your name for clarity in the discussion. I am fully aware that you believe that Roy is in full agreement with your position on this; I am less certain that this is so. In any event, as I said, I am not trying to restart the debate about this aspect of things - we disagree and I understand why you disagree with me :-) You are correct that "+json" isn't defined anywhere. I was quickly throwing together indicative messages to answer Nathan. I think that you get my drift, regardless. Regards, Alan Dean On Thu, Sep 23, 2010 at 12:29, Eric J. Bowman <eric@...> wrote: > Why would this be what *I* say, and not what REST/Roy says? There can > be no doubt that self-descriptive = registered and points to a spec. [...]
On Sep 23, 2010, at 4:20 AM, Alan Dean wrote: > > From my perspective, we would have a better "self-descriptive web" if there were lower barriers to making media types like application/rss+xml, application/sitemaps+xml and application/vnd.contact+json discoverable. [...] > I'd go with Mark's position (See "Problem three") here. http://www.markbaker.ca/blog/2008/02/media-type-centralization-is-a-feature-not-a-bug/ I do not think that the entry barrier is too high (see the vendor and personal trees). *Not* having a central authority would mean media types would sprawl just like XML schemas (see "Problem one"). Jan
Eric J. Bowman wrote: > Alan Dean wrote: >> To me these are both acceptable HTTP and acceptable from a REST POV >> (As I understand his position, Eric would say that they break the >> self-descriptive constraint because they employ unregistered types). >> > > Why would this be what *I* say, and not what REST/Roy says? There can > be no doubt that self-descriptive = registered and points to a spec. > >> application/vnd.contact+json >> > > At least RFC 3023 defines +xml syntax -- +json is defined where, again? http://tools.ietf.org/html/draft-zyp-json-schema-02
Alan Dean wrote: > From my perspective, we would have a better "self-descriptive web" if there > were lower barriers to making media types > like application/rss+xml, application/sitemaps+xml > and application/vnd.contact+json discoverable. [...] Hi Alan, I understand where you are coming from, and certainly agree that searching for custom dialects is far from ideal. Personally I feel that custom dialects often point to something being wrong, either caused by misunderstanding or because the core media type the custom type is a dialect of has a feature missing, such as supporting schemas and extensibility natively and unambiguously within the media type. Further, I'd suggest that in most, if not all cases a better course of action would be to address the issue within the core media type, perhaps along with others who also feel the need for custom dialects, in order to arrive at a standardized approach that benefits all, rather than simply defining a case-specific dialect as a new media type. Similarly, I feel that before creating a new media type or dialect it's best to re-use wherever possible; if you get an 'almost right' fit then work with them to improve the media type for all. Failing that, where a new media type has been shown to be genuinely needed, working with the community and discussing the proposed media type with experts on the ietf-types list in order to create something that can be registered will probably lead to better results. 
There are many people who are happy to help create solutions for problems, and standardization bodies + would-be communities that will gladly assist in creating a standardized solution. From where I'm standing there are three clear paths to creating discoverable custom dialects:

1 - make provision for dialects/extensibility/schemas in the core media type

2 - petition IANA/IETF to allow a third level of media type identifier, whereby the third argument is under the control of a body/registry specified in the core media type specification. For instance application/json/contact, where application/json is under the domain of IANA and /contact is in the domain of a media type specific registry - it may even be worth pleading the case for a data/ main type.. data/json|xml/dialect

3 - attempt to register a vnd. specific custom media type

Listed in order of preference; if it turns out that (1) is not possible for xml/json/(similar?) then it may be worth suggesting something similar to (2) to solve the problem at internet scale. I still can't really see a case for (3) - imo it's better to create a v2 of a media type which caters for dialects/schemas/extensibility than it is to register an internet media type identifier for a single-case solution. Best, Nathan
On 9/23/2010 6:14 AM, Nathan wrote: > > Eric J. Bowman wrote: > > > At least RFC 3023 defines +xml syntax -- +json is defined where, again? > > http://tools.ietf.org/html/draft-zyp-json-schema-02 > I don't really claim to have "defined" +json syntax, I just followed the clear existing precedent set by +xml with the JSON Schema I-D. However, I would argue that the JSON Schema I-D's suggested (not normatively defined, but recommended) use of the "profile" media type parameter provides a smart combination of base media types with URL-based discoverability of mechanically understandable aspects of the media type definition. The profile parameter can reference a schema that defines how hyperlinks are expressed (and the relation name), and how the data can be modified and retain a correct structure. application/my.media.type+json;profile=http://my.media.com/my-schema The user agent might need to understand my.media.type+json to truly understand the "meaning" of the document, but it has a profile URL to dereference for more information, which can be used to automatically understand how to discover hyperlinks and structural constraints. In a layered user agent, this may be an adequate level of media type understanding to provide a reasonable level of interaction for the next layer up or the user. There are multiple levels of "understanding" a media type. 
You can understand the grammar (JSON or XML), understand how to interpret hyperlinks, how the data is constrained, how it should or could be visually represented, and so on. Each level of understanding can give user agents progressively more power to provide a helpful user interface to the user.

Thanks,
- --
Kris Zyp
SitePen
(503) 806-1841
http://sitepen.com
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.9 (MingW32)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/

iEYEARECAAYFAkybVRQACgkQ9VpNnHc4zAwgWACguDx1u3qWHG+2EpDl/ux6vGia
R50AoLZETv5DoEw3Nc+bH46P0kCjyAJA
=CWRz
-----END PGP SIGNATURE-----
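The profile-parameter approach Kris describes can be sketched from the consumer side. This is a rough, deliberately naive parse (no quoted-string or escape handling), using the made-up media type and schema URL from the message:

```python
def parse_content_type(value):
    """Naive Content-Type split into (media_type, params dict).
    Real-world parsing should handle quoted strings and escapes."""
    parts = [p.strip() for p in value.split(";")]
    params = {}
    for p in parts[1:]:
        name, _, val = p.partition("=")
        params[name.strip().lower()] = val.strip().strip('"')
    return parts[0].lower(), params

ct = "application/my.media.type+json;profile=http://my.media.com/my-schema"
media_type, params = parse_content_type(ct)
print(media_type)             # application/my.media.type+json
print(params.get("profile"))  # http://my.media.com/my-schema
```

A layered agent could then dereference the profile URL for schema information even if it does not know my.media.type+json itself.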
Kris Zyp wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA1 > > > > On 9/23/2010 6:14 AM, Nathan wrote: >> >> Eric J. Bowman wrote: >>> Alan Dean wrote: >>>> To me these are both acceptable HTTP and acceptable from a REST POV >>>> (As I understand his position, Eric would say that they break the >>>> self-descriptive constraint because they employ unregistered types). >>>> >>> Why would this be what *I* say, and not what REST/Roy says? There can >>> be no doubt that self-descriptive = registered and points to a spec. >>> >>>> application/vnd.contact+json >>>> >>> At least RFC 3023 defines +xml syntax -- +json is defined where, >> again? >> >> http://tools.ietf.org/html/draft-zyp-json-schema-02 >> > > I don't really claim to have "defined" +json syntax, I just followed > the clear existing precedence set by +xml with JSON Schema I-D. > > However, I would argue that the JSON Schema I-D's suggested (not > normatively defined, but recommended) use of the "profile" media type > parameter provides a smart combination of base media types with a > URL-based discoverability of mechanically understandable aspects of > the media type definition. The profile media type can reference a > schema that defines how hyperlinks are expressed (and the relation > name), how the data can be modified and retain a correct structure. > > application/my.media.type+json;profile=http://my.media.com/my-schema is the my.media.type really needed? why not simply: application/json;profile=http://my.media.com/my-schema or a Link "describedby" as suggested by the draft. Note that there may be some conflation with using describedby, it's also used often to point to an RDF document describing the resource. It may be worth thinking whether there is a case for a 'profile' or 'schema' link relation. > There are multiple levels of "understanding" a media type. 
> You can understand the grammar (JSON or XML), understand how to
> interpret hyperlinks, how the data is constrained, how it should or
> could be visually represented, and so on. Each level of understanding
> can give user agents progressively more power to provide a helpful
> user interface to the user.

It may be worth figuring out whether the common case for these dialects would need to be in the media type (so that it can be negotiated over) or whether it's additional response information that can help an application work with the entity. I'd suggest that it's probably the latter.

Best & thanks for your reply,
Nathan
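The Link-header alternative Nathan mentions can be sketched similarly. This is a very loose parse of an RFC 5988-style Link header with an illustrative schema URL - a sketch, not the draft's normative behavior:

```python
import re

def links_by_rel(link_header):
    """Loose RFC 5988-style parse: map each rel value to its target URI.
    Ignores multiple rels per link and commas inside quoted strings."""
    rels = {}
    for target, params in re.findall(r'<([^>]*)>([^,]*)', link_header):
        m = re.search(r'rel="?([^";]+)"?', params)
        if m:
            rels[m.group(1)] = target
    return rels

hdr = '<http://my.media.com/my-schema>; rel="describedby"'
print(links_by_rel(hdr)["describedby"])
# http://my.media.com/my-schema
```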
-----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 9/23/2010 7:48 AM, Nathan wrote: > Kris Zyp wrote: >> -----BEGIN PGP SIGNED MESSAGE----- >> Hash: SHA1 >> >> >> >> On 9/23/2010 6:14 AM, Nathan wrote: >>> >>> Eric J. Bowman wrote: >>>> Alan Dean wrote: >>>>> To me these are both acceptable HTTP and acceptable from a REST POV >>>>> (As I understand his position, Eric would say that they break the >>>>> self-descriptive constraint because they employ unregistered >>>>> types). >>>>> >>>> Why would this be what *I* say, and not what REST/Roy says? There >>>> can >>>> be no doubt that self-descriptive = registered and points to a spec. >>>> >>>>> application/vnd.contact+json >>>>> >>>> At least RFC 3023 defines +xml syntax -- +json is defined where, >>> again? >>> >>> http://tools.ietf.org/html/draft-zyp-json-schema-02 >>> >> >> I don't really claim to have "defined" +json syntax, I just followed >> the clear existing precedence set by +xml with JSON Schema I-D. >> >> However, I would argue that the JSON Schema I-D's suggested (not >> normatively defined, but recommended) use of the "profile" media type >> parameter provides a smart combination of base media types with a >> URL-based discoverability of mechanically understandable aspects of >> the media type definition. The profile media type can reference a >> schema that defines how hyperlinks are expressed (and the relation >> name), how the data can be modified and retain a correct structure. >> >> application/my.media.type+json;profile=http://my.media.com/my-schema > > is the my.media.type really needed? why not simply: > > application/json;profile=http://my.media.com/my-schema That's fine too, one could use a profile URL with either the generic media type or a more specific subtype. > > or a Link "describedby" as suggested by the draft. > > Note that there may be some conflation with using describedby, it's > also used often to point to an RDF document describing the resource. 
> It may be worth thinking whether there is a case for a 'profile' or
> 'schema' link relation.

I am not opposed to a new link relation. It just seemed safest to start by using an existing registered relation until it is demonstrated that a separate relation is really needed.

Thanks,
--
Kris Zyp
SitePen
(503) 806-1841
http://sitepen.com
Nathan wrote:
> >> application/vnd.contact+json
> >
> > At least RFC 3023 defines +xml syntax -- +json is defined where, again?
>
> http://tools.ietf.org/html/draft-zyp-json-schema-02

I still think the proper path is to change RFC 4627 to be extensible in the same way RFC 3023 is extensible for +xml, and that json-schema needs to refer to that new RFC, instead of defining schemas *and* media-type syntax extensibility. My opinion is, I'll believe it when I see it, and my advice remains: don't count on it being approved as written.

At the present time, it is not self-descriptive to use +json syntax; as evidence I point to the IANA registry, which contains no +json types in any tree. Surely if this syntax were allowed, it would be in the vnd. or even the prs. tree by now, since everyone's been doing it for a while now? I think the same criteria will be applied to json-schema, which is that RFC 4627 says nothing about extensibility.

Since nothing ending in +json is in the registry, or even pending registration -- as they would need to be based on an IETF draft whose approval can't be assumed -- such identifiers are the exact opposite of how *Roy* defines self-descriptive (making it authoritative):

"Self-descriptive means that the type is registered and the registry points to a specification and the specification explains how to process the data according to [sender] intent."

http://tech.groups.yahoo.com/group/rest-discuss/message/6594
http://tech.groups.yahoo.com/group/rest-discuss/message/6615

This isn't about Roy agreeing with me, it's about me agreeing with Roy. Using +json simply doesn't square with that definition, but it has nothing to do with me in any fashion, regardless of benign intent on the part of whoever suggests otherwise.

Even if we assume json-schema is approved as written, how do I look up the sender intent of application/vnd.foo+json?
Discussing alternatives or improvements to the IANA registry is a non sequitur if we can't agree on the fundamental reason behind such changes, which is that such identifiers are *not* self-descriptive *unless* they're registered.

What is it about json-schema that automatically defines a processing model such that *everything* ending in +json is automatically self-descriptive despite not being registered? That sounds to me like saying that *everything* ending in +xml is automatically self-descriptive by virtue of having schema languages -- that just *isn't* the definition of self-descriptive, which requires registration.

Self-descriptive identifiers clearly define the sender's intended processing model; that is their purpose. They have nothing to do at all with the semantics of the payload, i.e. its schema, only its processing model. Whether or not the payload has a schema is unrelated to self-descriptiveness -- that's what self-describing means. (Which is why, IMO, it's inappropriate for a schema language to define extensible identifier syntax -- whereas replacing RFC 4627 would allow *any* JSON schema language to be defined, and referred to by *any* spec being pointed to by a registered identifier ending in +json. I'd hate not to be able to use +xml because of my choice of RELAX NG over XSD, for instance. Does *anybody* get this?)

If and when something ending in +json does get registered, it and only it will be self-descriptive, by definition, unless and until some other identifier ending in +json gets approved, and so on and so forth. I don't currently see any such thing in the IANA registry, so I can't believe it meets Roy's definition of self-descriptive.

-Eric
On 9/23/2010 4:22 PM, Eric J. Bowman wrote:
> Nathan wrote:
> > >> application/vnd.contact+json
> > >
> > > At least RFC 3023 defines +xml syntax -- +json is defined where, again?
> >
> > http://tools.ietf.org/html/draft-zyp-json-schema-02
>
> I still think the proper path is to change RFC 4627 to be extensible in the same way RFC 3023 is extensible for +xml, and that json-schema needs to refer to that new RFC, instead of defining schemas *and* media-type syntax extensibility. My opinion is, I'll believe it when I see it, and my advice remains, don't count on it being approved as written.
>
> At the present time, it is not self-descriptive to use +json syntax, as evidence I point to the IANA registry, which contains no +json types in any tree. Surely if this syntax were allowed, it would be in the vnd. or even the prs. tree by now, since everyone's been doing it for a while now? I think the same criteria will be applied to json-schema, which is that RFC 4627 says nothing about extensibility.

I am not aware of any other I-Ds that are proposing a +json media type, so I assume that JSON Schema will likely be the first entry in the IANA registry with that extension if and when it reaches that point (really not even sure what needs to be done before it can be registered, but I am assuming the drafts need to reach a little more stable point of revision).

> Since nothing ending in +json is in the registry, or even pending registration -- as they would need to be based on an IETF draft whose approval can't be assumed -- such identifiers are the exact opposite of how *Roy* defines self-descriptive (making it authoritative):
>
> "Self-descriptive means that the type is registered and the registry points to a specification and the specification explains how to process the data according to [sender] intent."
> > http://tech.groups.yahoo.com/group/rest-discuss/message/6594 > http://tech.groups.yahoo.com/group/rest-discuss/message/6615 > > This isn't about Roy agreeing with me, it's about me agreeing with > Roy. Using +json simply doesn't square with that definition, but it has > nothing to do with me in any fashion, regardless of benign intent on the > part of whomever suggests otherwise. > > Even if we assume json-schema is approved as written, how do I look up > the sender intent of application/vnd.foo+json? Discussing alternatives > or improvements to the IANA registry is non-sequitir if we can't agree > on the fundamental reason behind such changes, which is that such > identifiers are *not* self-descriptive *unless* they're registered. > > What is it about json-schema that automatically defines a processing > model such that *everything* ending in +json is automatically self- > descriptive despite not being registered? That sounds to me like > saying that *everything* ending in +xml is automatically self- > descriptive by virtue of having schema languages -- that just *isn't* > the definition of self-descriptive, which requires registration. > The JSON Schema I-D doesn't attempt to make any normative claims on such processing model, it merely indicates that the schema *can* be the target of the schema reference (whether the reference be a media type parameter, relation, documented, or ESP-based), and suggests a couple ways that the schema can be referenced. The I-D quite candidly indicates that it can't and shouldn't enforce any normative mechanism on other JSON media types for if and how they reference a schema. Thanks, - -- Kris Zyp SitePen (503) 806-1841 http://sitepen.com -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.9 (MingW32) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ iEYEARECAAYFAkyb2xIACgkQ9VpNnHc4zAzFDQCdHLUfxeIjnEdg9NWhYQxAIK2e 9EUAnRseerCOjrHz67P1JFAWtbkBuz8g =jOvB -----END PGP SIGNATURE-----
http://www.ietf.org/mail-archive/web/ietf-types/current/threads.html#01055 Mark.
Kris Zyp wrote: > > I am not aware of any other I-D's that are proposing a +json media > type, so I assume that JSON Schema will likely be the first entry in > the IANA registry with that extension if and when it reaches that > point (really not even sure what needs to be done before it can > registered, but I am assuming the drafts need to reach a little more > stable point of revision). > Does this help? http://lists.w3.org/Archives/Public/www-tag/2003Jul/0062.html > > The JSON Schema I-D doesn't attempt to make any normative claims on > such processing model, it merely indicates that the schema *can* be > the target of the schema reference (whether the reference be a media > type parameter, relation, documented, or ESP-based), and suggests a > couple ways that the schema can be referenced. The I-D quite candidly > indicates that it can't and shouldn't enforce any normative mechanism > on other JSON media types for if and how they reference a schema. > OK, I gave a bad example, sorry. But, it doesn't change the problem I'm getting at: My feedback is that this is a separate concern from a schema language. There's currently an RFC 3023bis effort to address all the concerns which have arisen in practice around using +xml. Not having to update *any* spec for JSON schemas to address concerns which may arise around the practice of using +json is a desirable thing. There is no connection between a media type identifier and a schema. Nor should *any* be defined for JSON. I can create a schema for Atom, and extend it to make it specific to my domain. The processing model for Atom, extended or not, with a schema or not, is the same, and is identified by application/atom+xml. Schema and processing model are not related. Your I-D looks to me like two different standardization efforts which should not be considered to be related, not even orthogonally. -Eric
Alan Dean wrote:
> The thought occurs to me that perhaps media type discovery /
> self-description itself would be better if the type name was a URI,
> e.g. http://example.com/media-types/application/custom-type

Self-descriptive messaging means all your headers and the data they contain may be easily understood by anybody; it isn't simply a matter of media type identifier -- the GET method is self-descriptive, the FOO method isn't, regardless of protocol. That being said, your thought has occurred to Tim Berners-Lee, also:

"[A]s another example, the headers in an HTTP request which specify attributes of the object. These are defined within the scope of particular specifications. There is always pressure to extend these specifications in a flexible way. HTTP header names are generally extended arbitrarily by those doing experiments. The same can also be true of HTML elements and extension mechanisms have been proposed for both. If we look generically at the very wide space of all such metadata attribute names, we find something in which the dictionary would be so large that ad hoc arbitrary extension would be just as chaotic as central registration would be stifling."

http://www.w3.org/DesignIssues/Metadata.html

It's that last bit there that's important. A central registry may indeed stifle innovation; however, ad-hoc extension defeats the goal of generic interoperability. You can't expect method=FOO to interoperate any better than if you use foo:// as your URI scheme or 'foo' as an identifier, unless they're registered, and no distributed registry mechanism has yet been defined for anything with any bearing on self-descriptiveness. More from Tim:

"There is an open question as to what the process should be for formulating new URI schemes, but it is clear that to allow unfettered proliferation would be a serious mistake.
In almost all other areas, proliferation of new designs is welcomed and the Web can be used as a distributed registry of them, but not for the case of URI schemes. " http://www.w3.org/DesignIssues/Architecture.html Clearly, there's some friction between Tim's view and Roy's view. The question is whether it's appropriate to use the Web as a distributed registry for media type identifiers. I don't believe I've offered any opinion on that (beyond Google not being a registry); I've only pointed out that there is no definition in any of the specs which supports anything other than the IANA registry for media type identifiers, therefore only IANA-registered identifiers (which point to a spec, not a 404) may be considered self-descriptive on the Web today. But, here's the problem with allowing ad-hoc extensibility: " The introduction of any other method apart from GET which has no side-effects and is simply a function of the URI is also incorrect, because the results of such an operation effectively form a separate address space, which violates the universality. " http://www.w3.org/DesignIssues/Architecture.html A distributed registry for HTTP methods would allow FOO to have the same semantics as GET, just as a distributed registry for identifiers would allow application/foo+xml to identify the same processing model as application/xhtml+xml -- which defeats the goal of generic interoperability by splitting the identifier namespace. While media types may have multiple identifiers, each processing model has a one-to-one mapping to an identifier in the IANA standards tree, which I see as the benefit of a central registry. > > For backwards compatibility, the absence of a URI would indicate an > implicit link to the IANA, i.e. 
thus application/sparql-query would > implicitly link to > http://www.iana.org/assignments/media-types/application/#sparql-query > (although the IANA website would probably need a tweak to support > this cleanly and redirect to the actual doc; so > http://www.iana.org/assignments/media-types/application/sparql-query > could be used) > This thought has also occurred to Tim: " In HTTP, the format of data is defined by a "MIME type". This formally refers to a central registry kept by IANA. However, architecturally this is an unnecessary central point of control, and there is no reason why the Web itself should not be used as a repository for new types. Indeed, a transition plan, in which unqualified MIME types are taken as relative URIs within a standard reference URI in an online MIME registry, would allow migration of MIME types to become first class objects. " http://www.w3.org/DesignIssues/Architecture.html OK, sure, nobody is saying the IANA registry is the best solution -- just that it's the only *existing* solution (no "transition plan" exists). So, how are you going to make this change? HTTP re-uses MIME. It's understood that this is sub-optimal for HTTP: " The problem with MIME syntax is that it assumes the transport is lossy, deliberately corrupting things like line breaks and content lengths. The syntax is therefore verbose and inefficient for any system not based on a lossy transport, which makes it inappropriate for HTTP. Since HTTP/1.1 has the capability to support deployment of incompatible protocols, retaining the MIME syntax won't be necessary for the next major version of HTTP, even though it will likely continue to use the many standardized protocol elements for representation metadata. " http://www.ics.uci.edu/~fielding/pubs/dissertation/evaluation.htm (Representation metadata includes media type identifier, btw.) So, are you suggesting that HTTP be changed/replaced, or that MIME be changed? 
Changing MIME to allow some other registry mechanism isn't just a change to HTTP, but a fundamental change to the Internet. My point remains that absent some successor to the IANA registry, there is no other way to meet the self-descriptive messaging constraint on the Web and *expect* widespread interoperability, not that IANA is the best-and-only alternative end-of-story -- just the current reality. What transition plan are you anti-IANA-registry folks proposing? Or do we just use Google as a registry without any transition plan or formal declaration anywhere in the specs allowing that as an option? -Eric
On 9/23/2010 11:03 PM, Eric J. Bowman wrote:
> Kris Zyp wrote:
>> I am not aware of any other I-Ds that are proposing a +json media type, so I assume that JSON Schema will likely be the first entry in the IANA registry with that extension if and when it reaches that point (really not even sure what needs to be done before it can be registered, but I am assuming the drafts need to reach a little more stable point of revision).
>
> Does this help?
>
> http://lists.w3.org/Archives/Public/www-tag/2003Jul/0062.html

It seems to confirm what I had understood, that one provisionally registers a media type at the beginning of the standardization process, by sending a registration request to ietf-types. And this was indeed done [1]. I guess I had expected it to appear in at least some provisional registry for others to know about, but I don't know where that would be.

[1] http://www.ietf.org/mail-archive/web/ietf-types/current/msg00767.html

>> The JSON Schema I-D doesn't attempt to make any normative claims on such processing model, it merely indicates that the schema *can* be the target of the schema reference (whether the reference be a media type parameter, relation, documented, or ESP-based), and suggests a couple of ways that the schema can be referenced. The I-D quite candidly indicates that it can't and shouldn't enforce any normative mechanism on other JSON media types for if and how they reference a schema.
>
> OK, I gave a bad example, sorry. But, it doesn't change the problem I'm getting at:
>
> My feedback is that this is a separate concern from a schema language. There's currently an RFC 3023bis effort to address all the concerns which have arisen in practice around using +xml. Not having to update *any* spec for JSON schemas to address concerns which may arise around the practice of using +json is a desirable thing.
> There is no connection between a media type identifier and a schema. Nor should *any* be defined for JSON. I can create a schema for Atom, and extend it to make it specific to my domain. The processing model for Atom, extended or not, with a schema or not, is the same, and is identified by application/atom+xml. Schema and processing model are not related.

I guess maybe I misunderstood what you meant by processing model. If it is defined such that it is not related to the expected data structures, hyperlink mechanisms and available relations, then it is indeed orthogonal to a schema. That's fine with me, sorry for any confusion.

On 9/23/2010 5:10 PM, Mark Baker wrote:
> http://www.ietf.org/mail-archive/web/ietf-types/current/threads.html#01055

Awesome! Great to see this moving forward. Although in terms of submitting a media type, I think application/schema+json still wins by about a year and a half [1] :).

--
Kris Zyp
SitePen
(503) 806-1841
http://sitepen.com
Kris Zyp wrote:
>
> I guess maybe I misunderstood what you meant by processing model. If
> it is defined such it is not related to the expected data structures,
> hyperlink mechanisms and available relations, than it is indeed
> orthogonal to a schema. That's fine with me, sorry for any confusion.
>
What I mean is, you can't recommend doing this:
Content-Type: application/json;
profile=http://json.com/my-hyper-schema
Because when I look at the IANA registry, I see that application/json
maps to RFC 4627, which only lists one optional parameter, charset. I
see no definition of any profile parameter. The usage you are
recommending is not self-descriptive, because it is not supported by RFC
4627.
One possible processing model for XHTML documents is text/plain, another
is text/html, and another is application/xhtml+xml -- please see RFC
3236 as an example of how a media type registration is the proper place
to define such usage. RFC 3236 doesn't extend the definition of profile
to text/html, text/plain, or even application/xml -- this would be out-
of-scope.
If a standard comes along which registers application/foo+json, it is
up to that MIME registration to define the usage of a profile parameter.
Such a definition shouldn't have to worry about conflicting with some
other use of that syntax being forced on it by an unrelated spec, in
particular a schema spec which may have nothing whatever to do with the
schema (or BNF notation) being used to describe application/foo+json.
This usage is part of the processing model defined by the media type
identifier, it is not appropriate for a schema language to define such
usage for anything beyond, in this case, application/schema+json --
your media type registration for that identifier doesn't make sense, as
it doesn't define 'schema' or 'schema.items' and omits any mention of
the 'profile' parameter (again, see RFC 3236).
Don't get me wrong, I'm in favor of +json, I'm just giving the same
feedback that both schema+json and senml+json have gotten -- the right
way to do this is to first change RFC 4288, then RFC 4627:
http://www.ietf.org/mail-archive/web/ietf-types/current/msg01062.html
Otherwise, I don't see approval of either I-D until that issue is
settled or their associated identifier syntax is changed. But then
again, maybe it will be anyway, I don't know -- all I do know is that
no +json identifier has yet made it into the IANA registry, therefore no
+json identifier is currently self-descriptive; and that it is not self-
descriptive to use a profile parameter in conjunction with any media
type identifier unless that identifier defines a profile parameter.
-Eric
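Eric's point about registered parameters can be restated concretely: a strict consumer going by the IANA registrations alone would not recognize a profile parameter on application/json, while it would on application/xhtml+xml. The parameter table below is abridged and illustrative (RFC 4627 registers only charset for application/json; RFC 3236 registers charset and profile for application/xhtml+xml):

```python
# Abridged table of registered optional parameters, per the RFCs
# cited in the thread; a real consumer would consult the registrations.
REGISTERED_PARAMS = {
    "application/json": {"charset"},                   # RFC 4627
    "application/xhtml+xml": {"charset", "profile"},   # RFC 3236
}

def unrecognized_params(media_type, params):
    """Return the parameters not defined by the type's registration."""
    allowed = REGISTERED_PARAMS.get(media_type, set())
    return {name for name in params if name not in allowed}

print(unrecognized_params(
    "application/json",
    {"profile": "http://json.com/my-hyper-schema"}))
# {'profile'}
```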
Alan Dean wrote:
> You are, I believe, coming from the perspective that the only way to
> fulfil the standardisation necessary to comply with the constraint is
> to have a registry and that at the moment there is only one.

Standardization isn't a requirement for a self-descriptive media type identifier, only registration is required, and there's only one registry (which may very well be due for an overhaul).

> I am coming from the perspective that instead of standardising a
> registry, it would be better to standardise discovery and that could
> fulfil the constraint sufficiently.

I think it's reasonable to advocate for a distributed registry, but not the elimination of the requirement of a registry in favor of some discovery mechanism which fails to avoid identifier collisions (such ambiguity is the opposite of self-descriptive) or allows multiple identifiers to define the same processing model (which splits the Content-Type namespace to the detriment of interoperability).

If you can come up with a discovery mechanism which avoids those issues, you may very well find me in favor of it. Otherwise you're re-introducing problems which have already been solved by the decision to formalize a central registry (or may be easily avoided by a distributed registry).

-Eric
Nathan wrote: > > or a Link "describedby" as suggested by the draft. > > Note that there may be some conflation with using describedby, it's > also used often to point to an RDF document describing the resource. > It may be worth thinking whether there is a case for a 'profile' or > 'schema' link relation. > I agree that 'describedby' is already self-descriptive for a different purpose. Considering prior art, RFC 2731 defines rel='schema.DC' and rel='schema.AC', suggesting rel='schema.JS'; rel='profile' I could only find here: http://microformats.org/wiki/rel-profile To be self-descriptive, however, would require the choice to be registered in the IANA registry of link relations. > > It may be worth figuring out whether the common case for these > dialects would need to be in the media type (so that it can be > negotiated over) or whether it's additional response information that > can help an application work with the entity. I'd suggest that it's > probably the latter. > The profile parameter defined in RFC 3236 isn't re-used from RFC 3023; but I don't think that means such couldn't be the case for JSON. An RFC 4627bis effort to define +json could also define a profile parameter, re-using the "could be a namespace, schema, or a language specification" language of RFC 3236. -Eric
On the theme of self-descriptive messaging, consider this Roy post: http://lists.w3.org/Archives/Public/ietf-http-wg/2010JulSep/0390.html I appreciate those little insights into waka. -Eric
Kris Zyp wrote:
> From that thread, it sounded like everyone was in favor of making
> the updates. I wonder if that is being done by someone...

I doubt it. Out of all the re-use of +json in the wild, only two applications have been made, so this problem isn't understood. I would think that anyone who does get the ball rolling would face little to no resistance, by virtue of the overwhelming success of RFC 3023 (aside from the fragment mess), to the point that everyone just assumes +json is "just like +xml" even though it's both undefined and discouraged.

My take on what Ned was saying is that you'd be better off changing the proposed identifier to something else, i.e. application/jsonschema, which he'd have to approve despite his preference to tell you that you SHOULD change that to application/schema+json -- applied to your case. Unfortunately he can't do that, so unfortunately he can't endorse Cullen's and Bjoern's (and my) point about changing RFC 4627 first. But I'm no expert, only a long-time keenly-interested observer.

(Ned seems to imply that there exist registered JSON types which don't use the +json suffix, not that there's any way to search for them -- promoting re-use of same seems like another strong argument in favor of formalizing this usage of +suffix.)

This sort of problem is to be expected in any architecture with a central registry at Internet scale -- is the whole thing agile enough to keep up with the pace of change? It seems obvious to me that it's high time to formalize the well-understood meaning of +suffix around what the registry is allowing *anyway*, and to formalize the well-understood meaning of +json around what everyone is already doing. That way we'd see lots more registration of +json identifiers already being used in the wild (and in violation of both RFC 2048 and RFC 4288, let alone the self-descriptive messaging constraint).

-Eric
> Standardization isn't a requirement for a self-descriptive media type
> identifier, only registration is required, and there's only one
> registry (which may very well be due for an overhaul).

OK, I see where I'm confusing folks. A registered identifier is self-descriptive if it points to a spec; that spec doesn't need to be a standard. Such a registered identifier is still "standardized" in that it's followed the approved RFC process. I confuse myself sometimes...

Changing the pertinent RFCs to allow an alternative to the IANA registry, i.e. a discovery mechanism, could allow standardized unregistered identifiers (i.e. the x. subtree). Could such a solution be called self-descriptive becomes the question, if I understand correctly?

As an example, JavaScript identifiers are a mishmash. I would prefer to use application/javascript, as per RFC 4329, which considers text/javascript to be obsolete (also quite common in the wild is application/x-javascript). But, I send 'text/javascript; charset=utf-8' because using application/javascript crashes UTF-8 scripts in IE.

The variety of identifiers pointing to the same processing model is a problem, but I also believe it's a historic problem -- we know better now, and hopefully re-registering existing JSON types using +json is the last time we'll make this mistake. The problem tends to self-correct over time, a feature of a central registry (even if unwieldy). A discovery mechanism, it seems to me, would amplify this problem over time.

Anything which increasingly splits a processing model into multiple identifiers goes against the generic interoperability objective of self-descriptiveness -- each occurrence reduces the chances of any to become ubiquitous. Having application/javascript and only application/javascript refer to the latest ECMA standard would be best.
If a discovery mechanism accounts for this (and the collision problem), I would think the pragmatic results may falsify the constraint -- the result would be a different architectural style, but with no disadvantage (provided all other constraints/standardizations are followed).

Likewise with the collision problem. This is one thing that's inherently avoided with a central registry. Witness the collision of processing models identified by application/rss+xml, which is not standardized because it isn't registered (even if it points to a bunch of standards). Had those projects followed the standards for registering their identifiers, no such collision would have occurred in the first place, and we'd have two separate, registered identifiers.

If a discovery mechanism accounts for this (and the splitting problem), I would think the pragmatic results may falsify the constraint -- etc. as in the previous paragraph.

Whether it's appropriate to go ahead and call it REST, if such a mechanism were to evolve, would be Roy's call, as it's up to Roy to change his definition of self-descriptive to mean either registered *or* discoverable.

-Eric
> > Had those projects followed the standards for registering their > identifiers, no such collision would have occurred in the first > place, and we'd have two separate, registered identifiers. > Or rather, we *could* have two separate, registered identifiers. The collision is self-evident in the need to introspect the document to determine the processing model -- even if registered, application/ rss+xml would not be self-descriptive of the sender's intent. -Eric
On Sat, Sep 25, 2010 at 4:06 AM, Eric J. Bowman <eric@...> wrote: > Alan Dean wrote: >> >> You are, I believe, coming from the perspective that the only way to >> fulfil the standardisation necessary to comply with the constraint is >> to have a registry and that at the moment there is only one. >> > > Standardization isn't a requirement for a self-descriptive media type > identifier, only registration is required, and there's only one > registry (which may very well be due for an overhaul). Do you mean specifically that formal standardisation isn't a requirement? Cheers, Mike
Mike Kelly wrote: > > Eric J. Bowman wrote: > > Alan Dean wrote: > >> > >> You are, I believe, coming from the perspective that the only way > >> to fulfil the standardisation necessary to comply with the > >> constraint is to have a registry and that at the moment there is > >> only one. > >> > > > > Standardization isn't a requirement for a self-descriptive media > > type identifier, only registration is required, and there's only one > > registry (which may very well be due for an overhaul). > > Do you mean specifically that formal standardisation isn't a > requirement? > That would be the gist of Roy's post, here... http://tech.groups.yahoo.com/group/rest-discuss/message/6594 ...I'm just agreeing with Roy. -Eric
> > Whether it's appropriate to go ahead and call it REST, if such > mechanism were to evolve, would be Roy's call, as it's up to Roy to > change his definition of self-descriptive to mean either registered > *or* discoverable. > Or introduce an optional self-defining constraint. An architectural style based on a discovery mechanism rather than a registry (central or distributed) is a different style from REST, with a self-defining constraint instead of (or in addition to) self-descriptiveness. -Eric
On Sat, Sep 25, 2010 at 8:55 PM, Eric J. Bowman <eric@...> wrote: >> >> Whether it's appropriate to go ahead and call it REST, if such >> mechanism were to evolve, would be Roy's call, as it's up to Roy to >> change his definition of self-descriptive to mean either registered >> *or* discoverable. >> > > Or introduce an optional self-defining constraint. An architectural > style based on a discovery mechanism rather than a registry (central or > distributed) is a different style from REST, with a self-defining > constraint instead of (or in addition to) self-descriptiveness. Wouldn't a distributed registry produce a system that has a "self-defining" property? I don't see a reason for an additional constraint unless you make that property a requirement. In fact, from what I can tell, the only view point that requires an additional constraint is yours i.e. a "non-self-defining" constraint. Everyone else is happy that both approaches establish shared understanding and facilitate self-descriptiveness in their own way - and you aren't. Preferring one way for whatever reason is one thing, claiming the other is "not REST" is another thing entirely. Cheers, Mike
My apologies, this went to the wrong list. Best, Stefan
Mike Kelly wrote:
> > Or introduce an optional self-defining constraint. An architectural
> > style based on a discovery mechanism rather than a registry
> > (central or distributed) is a different style from REST, with a
> > self-defining constraint instead of (or in addition to)
> > self-descriptiveness.
>
> Wouldn't a distributed registry produce a system that has a
> "self-defining" property? I don't see a reason for an additional
> constraint unless you make that property a requirement.
>

No, REST requires a registry for self-descriptiveness, but does not require a central registry. A distributed registry would merely assign different syntax to different authorities, i.e. 'schema/foo' would be handled by a different registration authority from IANA and the existing syntax. No collisions and (properly managed) no splitting means sender intent is clearly and unambiguously declared, which is the point:

http://www.w3.org/2001/tag/doc/mime-respect

By self-defining, I mean something like 'Content-Type: http://example.org/foo' where the identifier becomes a first-class object. That approach comes with all the drawbacks Nathan and Mark mentioned, with none of the benefits of having a registry; the difference is one of being self-descriptive or NOT being self-descriptive, as explained:

http://tech.groups.yahoo.com/group/rest-discuss/message/16638
http://www.markbaker.ca/blog/2008/02/media-type-centralization-is-a-feature-not-a-bug/

Obviously, some other term is needed to define what folks are trying to overload self-descriptive to mean, because self-descriptive = registered
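The distinction drawn above -- a registered 'type/subtype' token versus a URI used as a first-class identifier -- can be sketched as a trivial classifier. The function name and labels are illustrative assumptions, not from any spec.

```python
# Sketch of the distinction drawn in the post above: a registered
# media type identifier is a MIME 'type/subtype' token (backed by the
# IANA registry), while a "self-defining" identifier would be a
# dereferenceable URI. Names and labels here are illustrative only.

def identifier_style(content_type: str) -> str:
    """Classify a Content-Type value by identifier style."""
    value = content_type.split(";")[0].strip()  # drop parameters
    if value.startswith(("http://", "https://")):
        return "self-defining"   # first-class object, not MIME syntax
    if "/" in value and " " not in value:
        return "registered"      # MIME 'type/subtype' token
    return "unknown"

print(identifier_style("application/atom+xml; charset=utf-8"))
print(identifier_style("http://example.org/foo"))
```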
*exactly* as Roy clarified, not discoverable by URI (ad-hoc identifiers are the opposite of self-descriptive as defined by the thesis, even without Roy's clarification, as I explained thoroughly with my Gopher examples). It's fundamental to the difference between a (REST) network-based API and a (NOT REST) library-based API, and as *proof* of my position I just need to point to the specs, which all state that we're dealing with MIME, and that MIME involves a registry, not first-class objects as identifiers.

>
> In fact, from what I can tell, the only view point that requires an
> additional constraint is yours i.e. a "non-self-defining" constraint.
>

Uhhh, no, Roy has quite clearly defined self-descriptive to mean *registered*, not searchable-for-in-Google. Saying something may be understood by searching Google for it is NOT the definition of self-descriptiveness. Exhibit A is application/rss+xml -- anyone can *discover* that it isn't self-descriptive of sender intent. If it were registered, it still wouldn't be self-descriptive of sender intent, as being registered is only the *minimum* requirement (and doesn't even _begin_ to be optional).

Try arguing against *any* of the examples I've given, please -- it doesn't do me any good to know that I'm always wrong without knowing about what, or how, I'm wrong.

>
> Everyone else is happy that both approaches establish shared
> understanding and facilitate self-descriptiveness in their own way -
> and you aren't.
>

Oh, stop it with that crap already, please, Mike. The definitions are quite clear. From REST:

"The data format of a representation is known as a media type [48]."

http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm#sec_5_2_1_2

From MIME:

"A registration process is needed, however, to ensure that the set of such values is developed in an orderly, well-specified, and public manner.
This document defines registration procedures which use the Internet Assigned Numbers Authority (IANA) as a central registry for such values."

http://www.ietf.org/rfc/rfc2048.txt

It is NOT self-descriptive to follow some process that has nothing to do with what's defined. REST says "standardization," not "screw the RFCs and do it your own way." I seriously don't know what else REST could _possibly_ mean by identifiers which are self-descriptive of sender intent, to process representations which are defined as MIME types. Nobody is free to define self-descriptive in their own way such that they can merely ignore RFC 2048 and still call it REST. Give me a break. REST is not free-form, it is standardized.

>
> Preferring one way for whatever reason is one thing, claiming the
> other is "not REST" is another thing entirely.
>

Not being self-descriptive is NOT REST. This isn't about what I prefer, it's about basic definitions of fundamental concepts. There is a fundamental architectural difference between having a registry (as REST quite clearly calls for by referring to MIME, as quite clearly confirmed by Roy for the common case of the Web), and making identifiers first-class objects with URIs (which MIME says nothing about, and REST says nothing about using anything besides MIME, and isn't at all what Roy meant by saying self-descriptive = registered). When you have a fundamentally different approach from REST, it's obviously some other architectural style -- unless REST is the only architectural style and everything on the Web is encompassed by it, which is poppycock.
Claiming that this fundamentally different approach, which goes against everything REST/Roy/AWWW/RFCs have to say on the matter, comes down to preference and that it's still RESTful either way, is nonsensical -- unless you can back it up against any of the strong arguments that have been presented here by myself or others, or explain why Roy didn't really mean registered in a registry when he said self-descriptive = registered, or why Roy didn't really mean MIME, or that MIME doesn't really say anything about a registry...

It's quite rational for me to point out that instead of defining self-descriptive to mean something it clearly was never intended to mean, perhaps we should define another term for that meaning, i.e. self-defining. That's how distributed software architecture is practiced: by defining what constraints achieve what effects. Using URIs instead of registered identifiers is a whole 'nother ball of wax, incongruous with REST as written or as instantiated by HTTP.

While it's fine to speculate about alternative approaches to a registry, don't lose sight of the fact that REST requires a registry for identifiers, that HTTP re-uses the MIME-defined IANA central registry for media type identifiers, and that this is what self-descriptive means. The REST style simply doesn't include using URIs as media type identifiers, any more than the specs allow such usage. Get over it.

-Eric
OK, I'm confused. I can't square this reference to RFC 2048:

"The data format of a representation is known as a media type [48]."

http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm#sec_5_2_1_2

With this statement:

"...retaining the MIME syntax won't be necessary for the next major version of HTTP..."

http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm#sec_6_3_4_5

Wouldn't this mean that such a next major version of HTTP would require REST's definition of a representation's data format to change, i.e. to "may be known as a media type or as _____"?

I also can't square the reference to RFC 2048 with this statement:

"The problem is that I can't say 'REST requires media types to be registered' because both Internet media types and the registry controlled by IANA are a specific architecture's instance of the style -- they could just as well be replaced by some other mechanism for metadata description."

http://tech.groups.yahoo.com/group/rest-discuss/message/6613

I understand that some other protocol may instantiate a different form of registry, but how does replacing Internet media types not go against the definition of data format in 5.2.1.2? Should I be reading 5.2.1.2 as "most commonly known as a media type," i.e. as an example and not as a definition? In which case, should 'media type' be read as 'data type' elsewhere in 5.2.1.2?

-Eric
Eric J. Bowman wrote: > OK, I'm confused. I can't square this reference to RFC 2048: > > "The data format of a representation is known as a media type [48]." > > http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm#sec_5_2_1_2 ^^ talking about Internet Media Types > With this statement: > > "...retaining the MIME syntax won't be necessary for the next major > version of HTTP..." > > http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm#sec_6_3_4_5 ^^ talking about the MIME-like structure of an HTTP Message > Wouldn't this mean that such a next major version of HTTP would require > REST's definition of a representation's data format to change, i.e. to > "may be known as a media type or as _____"? no it wouldn't require that change, Internet Media Types won't be going anywhere soon, pretty much all the deployed [everything that has anything to do with the web] depends on Internet Media Types. > I also can't square the reference to RFC 2048 with this statement: > > "The problem is that I can't say 'REST requires media types to be > registered' because both Internet media types and the registry > controlled by IANA are a specific architecture's instance of the style > -- they could just as well be replaced by some other mechanism for > metadata description." > > http://tech.groups.yahoo.com/group/rest-discuss/message/6613 you missed the start of the sentence ;) "This is one of those gray areas of increasing RESTfulness that will doubtless drive some people nuts." It is a grey area where both statements are correct, and it is driving you nuts! 
In reality, with the deployed stack (Internet/Web) then REST when applied does require Internet Media Types, however, if a completely new stack were to be created and deployed then any central registry of standardized types and any identifier system could be used - the cost of migrating to a new registry for (?what's the benefit) would be so extraordinarily high that it's very unlikely to happen though. > I understand that some other protocol may instantiate a different form > of registry, but how does replacing Internet media types not go against > the definition of data format in 5.2.1.2? Should I be reading 5.2.1.2 > as "most commonly known as a media type," i.e. as an example and not as > a definition? In which case, should 'media type' be read as 'data type' > elsewhere in 5.2.1.2? If the answer is "yes", then what good does it do? your protocol and the messages sent would be incompatible with the entire deployed stack of everything we have on the Internet/Web (clients/servers/agents/intermediaries/caches/apps), how is being incompatible with pretty much everything RESTful? A shared understanding between clients and servers which support many protocols already exists, Internet Media Types, this allows message bodies to move from one protocol to another, and indeed for the messages bodies / entities being transferred to be usable by apps on either side. Perhaps it's not up to the protocol to define a new (un)-shared understanding, but rather to adopt the existing shared understanding in order to be RESTful. Best, Nathan
I'm not coining any new terminology here; what I'm doing is referring to well-understood terminology which already applies to things folks want to overload the meaning of self-descriptive to define. By canceling out those definitions, the meaning of self-descriptive becomes more clear. (Tim's SemWeb notes are my source for self-describing, and self-documenting has been used plenty on this list by people other than me.)

There is a well-understood meaning of self-defining; one example of such use is the microformats example I linked to before:

http://microformats.org/wiki/rel-profile

By referring to the use of discoverable URIs as self-defining, we can see more easily what is meant by self-descriptive link relations:

http://www.w3.org/1999/xhtml/vocab

Looks like a registry to me. Or at the very least, having a lookup table for things defined by standards, vs. using URIs, looks like two distinctly different approaches, worthy of discrete terminology, to me.

The proposed IANA registry of link relations for the Link header is a different list with a different purpose, defined by different specs. The I-D for Link defines a URI-based extension mechanism for using *unregistered* link relations:

http://tools.ietf.org/html/draft-nottingham-http-link-header-10#section-4.2

Choosing a registered link relation name, vs. choosing an unregistered URI, are distinctly different approaches. It seems logical to me to apply the same discrete terminology used by microformats for URIs in link relations in content, to link relations in the Link header. It also seems logical to me to extend this definition to the use of URIs in Content-Type headers, and elsewhere as applicable.

Thus, using names from the IANA registry of link relations is self-descriptive, while using unregistered extension relation types is self-defining. First-class-object vs. lookup table represents a fundamental architectural choice.
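The two Link-header styles contrasted above can be sketched on the wire. This is a minimal illustration; the target paths and the extension-relation URI are made-up examples, not from any registry or spec.

```python
# Sketch of the two Link-header styles contrasted in the post above,
# per the draft-nottingham-http-link-header extension-relation
# mechanism: a registered relation name vs. an unregistered URI.

def link_header(target: str, rel: str) -> str:
    """Serialize one web link as an HTTP Link header value."""
    return f'<{target}>; rel="{rel}"'

# Self-descriptive: a short name from the link-relation registry.
registered = link_header("/chapter2", "next")

# Self-defining (in the terminology above): an extension relation
# type, identified by a URI the recipient would have to dereference.
extension = link_header("/meta", "http://example.org/rel/schema")

print(registered)  # </chapter2>; rel="next"
print(extension)
```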
A hybrid approach may be the best architecture for your system, but you must understand the mismatch exists, if you're using REST to analyze the system you're building against an idealized form that's optimized for the Web. -Eric
Nathan wrote: > > Eric J. Bowman wrote: > > OK, I'm confused. I can't square this reference to RFC 2048: > > > > "The data format of a representation is known as a media type [48]." > > > > http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm#sec_5_2_1_2 > > ^^ talking about Internet Media Types > > > With this statement: > > > > "...retaining the MIME syntax won't be necessary for the next major > > version of HTTP..." > > > > http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm#sec_6_3_4_5 > > ^^ talking about the MIME-like structure of an HTTP Message > Gotcha! Thanks. MIME syntax could be ditched, but application/atom+xml would still be a self-descriptive identifier by virtue of being in the IANA registry. > > > I also can't square the reference to RFC 2048 with this statement: > > > > "The problem is that I can't say 'REST requires media types to be > > registered' because both Internet media types and the registry > > controlled by IANA are a specific architecture's instance of the > > style > > -- they could just as well be replaced by some other mechanism for > > metadata description." > > > > http://tech.groups.yahoo.com/group/rest-discuss/message/6613 > > you missed the start of the sentence ;) "This is one of those gray > areas of increasing RESTfulness that will doubtless drive some people > nuts." > Oh, right! I forgot that REST-the-style could be implemented via carrier-pigeon network -- the color of the tube holding the payload indicates its spoken language (which is self-descriptive if it's specced somewhere, what color maps to what language), striping could indicate encryption, while the color of the paper indicates script vs. cursive... specced properly, perfectly RESTful! But, that does seem to indicate the wording of 5.2.1.2 is a bit off... 
> > > I understand that some other protocol may instantiate a different > > form of registry, but how does replacing Internet media types not > > go against the definition of data format in 5.2.1.2? Should I be > > reading 5.2.1.2 as "most commonly known as a media type," i.e. as > > an example and not as a definition? In which case, should 'media > > type' be read as 'data type' elsewhere in 5.2.1.2? > > If the answer is "yes", then what good does it do? your protocol and > the messages sent would be incompatible with the entire deployed > stack of everything we have on the Internet/Web > (clients/servers/agents/intermediaries/caches/apps), how is being > incompatible with pretty much everything RESTful? > +1, but it may help in understanding the distinction between the style and the implementation, to better decouple the definition of the style from the implementation. URI still holds true -- as long as the carrier pigeons know how to fly back and forth between routers (pigeon coops), the server (human) who stuffs the paper into the little tube, and the semantics of what that paper represents, may be identified using a URI. The paper could have little tear-off tabs with URLs on them, or a form to fill out, which could be sent by appropriate pigeon/tube. Hypertext constraint, and all... > > A shared understanding between clients and servers which support many > protocols already exists, Internet Media Types, this allows message > bodies to move from one protocol to another, and indeed for the > messages bodies / entities being transferred to be usable by apps on > either side. Perhaps it's not up to the protocol to define a new > (un)-shared understanding, but rather to adopt the existing shared > understanding in order to be RESTful. > On the Internet, certainly. ;-) But what about Carrier Pigeon Transfer Protocol over IPoAC? http://tools.ietf.org/html/rfc2549 Gray area, indeed! -Eric
Eric J. Bowman wrote: > Nathan wrote: >> Eric J. Bowman wrote: >>> OK, I'm confused. I can't square this reference to RFC 2048: >>> >>> "The data format of a representation is known as a media type [48]." >>> >>> http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm#sec_5_2_1_2 >> ^^ talking about Internet Media Types >> >>> With this statement: >>> >>> "...retaining the MIME syntax won't be necessary for the next major >>> version of HTTP..." >>> >>> http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm#sec_6_3_4_5 >> ^^ talking about the MIME-like structure of an HTTP Message >> > > Gotcha! Thanks. MIME syntax could be ditched, but application/atom+xml > would still be a self-descriptive identifier by virtue of being in the > IANA registry. most welcome. >>> I also can't square the reference to RFC 2048 with this statement: >>> >>> "The problem is that I can't say 'REST requires media types to be >>> registered' because both Internet media types and the registry >>> controlled by IANA are a specific architecture's instance of the >>> style >>> -- they could just as well be replaced by some other mechanism for >>> metadata description." >>> >>> http://tech.groups.yahoo.com/group/rest-discuss/message/6613 >> you missed the start of the sentence ;) "This is one of those gray >> areas of increasing RESTfulness that will doubtless drive some people >> nuts." >> > > Oh, right! I forgot that REST-the-style could be implemented via > carrier-pigeon network -- the color of the tube holding the payload > indicates its spoken language (which is self-descriptive if it's > specced somewhere, what color maps to what language), striping could > indicate encryption, while the color of the paper indicates script vs. > cursive... specced properly, perfectly RESTful! "that will doubtless drive some people nuts." - when the nearest use-case is an eleven year old april fools joke.. 
> But, that does seem to indicate the wording of 5.2.1.2 is a bit off... > >>> I understand that some other protocol may instantiate a different >>> form of registry, but how does replacing Internet media types not >>> go against the definition of data format in 5.2.1.2? Should I be >>> reading 5.2.1.2 as "most commonly known as a media type," i.e. as >>> an example and not as a definition? In which case, should 'media >>> type' be read as 'data type' elsewhere in 5.2.1.2? >> If the answer is "yes", then what good does it do? your protocol and >> the messages sent would be incompatible with the entire deployed >> stack of everything we have on the Internet/Web >> (clients/servers/agents/intermediaries/caches/apps), how is being >> incompatible with pretty much everything RESTful? >> > > +1, but it may help in understanding the distinction between the style > and the implementation, to better decouple the definition of the style > from the implementation. could be worth doing if it's practical in any way, maybe somebody needs to fork and make "REST Strict" packed full of a load of MUSTs. > URI still holds true -- as long as the carrier pigeons know how to fly > back and forth between routers (pigeon coops), the server (human) who > stuffs the paper into the little tube, and the semantics of what that > paper represents, may be identified using a URI. > > The paper could have little tear-off tabs with URLs on them, or a form > to fill out, which could be sent by appropriate pigeon/tube. Hypertext > constraint, and all... lost me (~ish) there's no way I can reply without having to bill somebody for the waste of time :p >> A shared understanding between clients and servers which support many >> protocols already exists, Internet Media Types, this allows message >> bodies to move from one protocol to another, and indeed for the >> messages bodies / entities being transferred to be usable by apps on >> either side. 
Perhaps it's not up to the protocol to define a new >> (un)-shared understanding, but rather to adopt the existing shared >> understanding in order to be RESTful. >> > > On the Internet, certainly. ;-) But what about Carrier Pigeon Transfer > Protocol over IPoAC? > > http://tools.ietf.org/html/rfc2549 > > Gray area, indeed! Well, don't know about you, but I use the Internet Protocol, for.. well pretty much everything, and certainly everything I'd apply REST to - so no gray(grey!) area for me I'm afraid. I could consider applying REST to non-internet and every day things, but I think the mrs would kill me if I started complaining about dinner not being self descriptive. Best, Nathan
Nathan wrote: > > Eric J. Bowman wrote: > > > >>> I understand that some other protocol may instantiate a different > >>> form of registry, but how does replacing Internet media types not > >>> go against the definition of data format in 5.2.1.2? Should I be > >>> reading 5.2.1.2 as "most commonly known as a media type," i.e. as > >>> an example and not as a definition? In which case, should 'media > >>> type' be read as 'data type' elsewhere in 5.2.1.2? > >> > >> If the answer is "yes", then what good does it do? your protocol > >> and the messages sent would be incompatible with the entire > >> deployed stack of everything we have on the Internet/Web > >> (clients/servers/agents/intermediaries/caches/apps), how is being > >> incompatible with pretty much everything RESTful? > >> > > > > +1, but it may help in understanding the distinction between the > > style and the implementation, to better decouple the definition of > > the style from the implementation. > > could be worth doing if it's practical in any way, maybe somebody > needs to fork and make "REST Strict" packed full of a load of MUSTs. > Actually, my thinking is more along the lines of clarification through light editing, i.e. RESTbis -- but please note that I read REST at least two dozen times over a dozen years before presuming to edit the thesis, including this earlier effort at editing 5.3.3: http://tech.groups.yahoo.com/group/rest-discuss/message/15146 The nature of REST as a dissertation means it will never get edited, but I don't think REST is the holiest-of-holies such that it couldn't stand a little clarification here and there, over time, in response to issues which arise demonstrating actual confusion surrounding its wording. This is meant as conversation fodder, not "REST is wrong." Here's all I'd do to 5.2.1.2: "The data format of a representation is known as a media type [48] on IP networks. 
A representation can be included in a message and processed by the recipient according to the control data of the message and the nature of the data format. Some data formats are intended for automated processing, some are intended to be rendered for viewing by a user, and a few are capable of both. Composite media types can be used to enclose multiple representations in a single message. The design of a data format can directly impact the user-perceived performance of a distributed hypermedia system. Any data that must be received before the recipient can begin rendering the representation adds to the latency of an interaction. A data format that places the most important rendering information up front, such that the initial information can be incrementally rendered while the rest of the information is being received, results in much better user-perceived performance than a data format that must be entirely received before rendering can begin." Again, I'm just winging it on the wording of that first sentence, not putting doctoral-thesis-level forethought into it, it's only a suggestion and not meant as criticism. But, I think this is more consistent with the wording of Table 5-1 -- media type is representation metadata, not the representation itself, which is a data format. I don't think a composite data format (i.e. Atom containing HTML) is what Roy meant, so I left "composite media types" alone. -Eric
> > Kris Zyp wrote: > > > > - From that thread, it sounded like everyone was in favor of making > > the updates. I wonder if that is being done by someone... > > > > I doubt it. > I stand corrected: http://www.ietf.org/id/draft-masinter-mime-web-info-00.txt It seems to me, like that effort could be extended to include the profile parameter, and the common usage of +json on the Web. -Eric
Nathan wrote:
> > Oh, right! I forgot that REST-the-style could be implemented via
> > carrier-pigeon network -- the color of the tube holding the payload
> > indicates its spoken language (which is self-descriptive if it's
> > specced somewhere, what color maps to what language), striping could
> > indicate encryption, while the color of the paper indicates script
> > vs. cursive... specced properly, perfectly RESTful!
>
> "that will doubtless drive some people nuts." - when the nearest
> use-case is an eleven year old april fools joke..
>

And even then, it isn't self-descriptive without IANA registration, if I take your point correctly (which is why I re-worded REST the way I did, btw):

>
> Well, don't know about you, but I use the Internet Protocol, for..
>

To extend my metaphor, since RFC 2549 is an IP network, I'd need to register the paper/plain media type (A4), including a charset parameter, and a profile parameter to allow OCR agents to negotiate format between typed/handwritten and print/cursive conformance levels; and get it approved, before CPTP over IPoAC messaging would be self-descriptive...

-Eric
On 9/26/2010 8:25 AM, Eric J. Bowman wrote:
> I'm not coining any new terminology here, what I'm doing is
> referring to well-understood terminology which already applies to
> things folks want to overload the meaning of self-descriptive to
> define. By canceling out those definitions, the meaning of
> self-descriptive becomes more clear. (Tim's SemWeb notes are my
> source for self-describing, and self-documenting has been used
> plenty on this list by people other than me.)
>
> There is a well-understood meaning of self-defining, one example of
> such use is the microformats example I linked to before:
>
> http://microformats.org/wiki/rel-profile
>

Do you suggest rel="profile" as preferable to rel="describedby" for referring to a schema for a document?

--
Kris Zyp
SitePen
(503) 806-1841
http://sitepen.com
Kris Zyp wrote: > > Do you suggest rel="profile" as preferable to rel="describedby" for > referring to a schema for a document? > I'd suggest standardizing rel='schema', where @type tells us what the schema language and its serialization (think RELAX NG compact vs. XML) are, and @href tells us what the schema itself is. Schema languages like yours or XSD both impart more semantics to the content than RELAX NG (which is strictly syntax), but not enough to warrant describedby -- which means that the document is self-described (in the SemWeb, RDF sense seen in Tim's architecture notes) by some other document conforming to some serialization of RDF, but not as a syntax validation model. -Eric
Kris Zyp wrote: > > Do you suggest rel="profile" as preferable to rel="describedby" for > referring to a schema for a document? > I don't suggest rel='profile' because the common use of the term 'profile' within document content, is to impart meaning to metadata attribute values; whereas a schema defines both parts of a name/value pair as metadata syntax, separate from document content or meaning. -Eric
I follow discussions here enough to know that versioning in RESTful APIs has been much discussed, but not enough to understand all the threads. Is there a write-up summarizing the issues?
Lucas Gonze wrote: > > I follow discussions here enough to know that versioning in RESTful > APIs has been much discussed, but not enough to understand all the > threads. > > Is there a write-up summarizing the issues? > Not that I know of. I won't summarize the issues, but I can list the positions: 1) You don't need versioning in REST, just like there was no versioning required for image/gif changing from 87a to 89a, etc. 2) Version the identifier, like application/x.foo-v2+xml 3) Use the profile parameter, like RFC 3236 4) If only client components care, leave it in the document, like DOCTYPE for text/html 5) Version the URI, like /v1/foo vs. /v2/foo No further comment. -Eric
On Wed, Sep 29, 2010 at 3:45 PM, Eric J. Bowman <eric@...> wrote: > Lucas Gonze wrote: >> >> I follow discussions here enough to know that versioning in RESTful >> APIs has been much discussed, but not enough to understand all the >> threads. >> >> Is there a write-up summarizing the issues? >> > > Not that I know of. I won't summarize the issues, but I can list the > positions: > > 1) You don't need versioning in REST, just like there was no versioning > required for image/gif changing from 87a to 89a, etc. > > 2) Version the identifier, like application/x.foo-v2+xml > > 3) Use the profile parameter, like RFC 3236 > > 4) If only client components care, leave it in the document, like > DOCTYPE for text/html > > 5) Version the URI, like /v1/foo vs. /v2/foo 6) I think it was Craig's suggestion to use a version http header (e.g. API-Version: 1.0) There's also an opinion out there that's a twist on option 1: 1a) You don't need versioning if you're designing your media types well. That's just for completeness, there's no best answer. --tim
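To make the listed positions concrete, here is a minimal sketch of position 2 (versioning the media type identifier and dispatching on it server-side). This is not from the thread; the handler functions and the v1/v2 type names are invented for illustration only.

```python
# Hedged sketch of position 2: version the media type identifier
# (application/x.foo-v2+xml) and dispatch on it. The handlers and type
# names here are hypothetical, invented for illustration.

HANDLERS = {
    "application/x.foo-v1+xml": lambda body: ("v1 rules", body),
    "application/x.foo-v2+xml": lambda body: ("v2 rules", body),
}

def dispatch(content_type, body):
    """Route a request body to the parser for its versioned media type."""
    # Parameters such as charset follow a ';' and don't affect dispatch.
    media_type = content_type.split(";")[0].strip().lower()
    handler = HANDLERS.get(media_type)
    if handler is None:
        return 415, None  # 415 Unsupported Media Type
    return 200, handler(body)
```

The operational appeal of this position is that a request tagged with the v1 identifier keeps working unchanged after v2 ships.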
Thanks for the help, Tim and Eric. I'd add one more point: moving away from URI construction to HATEOAS sometimes makes versioning moot. On Wed, Sep 29, 2010 at 12:53 PM, Tim Williams <williamstw@...> wrote: > On Wed, Sep 29, 2010 at 3:45 PM, Eric J. Bowman <eric@...> wrote: >> Lucas Gonze wrote: >>> >>> I follow discussions here enough to know that versioning in RESTful >>> APIs has been much discussed, but not enough to understand all the >>> threads. >>> >>> Is there a write-up summarizing the issues? >>> >> >> Not that I know of. I won't summarize the issues, but I can list the >> positions: >> >> 1) You don't need versioning in REST, just like there was no versioning >> required for image/gif changing from 87a to 89a, etc. >> >> 2) Version the identifier, like application/x.foo-v2+xml >> >> 3) Use the profile parameter, like RFC 3236 >> >> 4) If only client components care, leave it in the document, like >> DOCTYPE for text/html >> >> 5) Version the URI, like /v1/foo vs. /v2/foo > > 6) I think it was Craig's suggestion to use a version http header > (e.g. API-Version: 1.0) > > There's also an opinion out there that's a twist on option 1: > > 1a) You don't need versioning if you're designing your media types well. > > That's just for completeness, there's no best answer. > --tim >
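Lucas's point can be sketched in a few lines. The document shape and relation names below are invented for illustration; nothing here comes from any system discussed in the thread:

```python
# Hypothetical sketch: a client that discovers URIs via link relations
# supplied in each representation (HATEOAS) instead of constructing
# /v1/... paths, so the server can restructure or version its URI space
# without breaking the client.

def follow(doc, rel):
    """Return the target of the first link in `doc` with relation `rel`."""
    for link in doc.get("links", []):
        if link.get("rel") == rel:
            return link["href"]
    raise LookupError("no link with rel=%r" % rel)

# The entry point is the only URI the client ever hardcodes; every href
# below is opaque to the client and free to change between responses.
entry_point = {
    "links": [
        {"rel": "orders", "href": "/opaque/path/orders"},
        {"rel": "customers", "href": "/opaque/path/customers"},
    ]
}
```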
For me, it helps to think about versioning as a _technique_ and not a goal or system property to be attained. Usually when talking about "versioning" we are really trying to deal with the issue of modifiability at the system (arch) level. Fielding's dissertation does a good job of identifying "System Properties of Key Interest"[1] and one section deals with Modifiability[2] in general. Once I started thinking about versioning in this way, I was able to look to other techniques to improve the general modifiability of an implementation. And sometimes these other techniques (identified in 2.3.4) made the use of versioning superfluous. [1] http://www.ics.uci.edu/~fielding/pubs/dissertation/net_app_arch.htm#sec_2_3 [2] http://www.ics.uci.edu/~fielding/pubs/dissertation/net_app_arch.htm#sec_2_3_4 mca http://amundsen.com/blog/ http://twitter.com@mamund http://mamund.com/foaf.rdf#me #RESTFest 2010 http://rest-fest.googlecode.com On Wed, Sep 29, 2010 at 16:32, Lucas Gonze <lucas.gonze@...> wrote: > Thanks for the help, Tim and Eric. > > I'd add one more point: moving away from URI construction to HATEOAS > sometimes makes versioning moot. > > On Wed, Sep 29, 2010 at 12:53 PM, Tim Williams <williamstw@...> wrote: >> On Wed, Sep 29, 2010 at 3:45 PM, Eric J. Bowman <eric@...> wrote: >>> Lucas Gonze wrote: >>>> >>>> I follow discussions here enough to know that versioning in RESTful >>>> APIs has been much discussed, but not enough to understand all the >>>> threads. >>>> >>>> Is there a write-up summarizing the issues? >>>> >>> >>> Not that I know of. I won't summarize the issues, but I can list the >>> positions: >>> >>> 1) You don't need versioning in REST, just like there was no versioning >>> required for image/gif changing from 87a to 89a, etc. 
>>> >>> 2) Version the identifier, like application/x.foo-v2+xml >>> >>> 3) Use the profile parameter, like RFC 3236 >>> >>> 4) If only client components care, leave it in the document, like >>> DOCTYPE for text/html >>> >>> 5) Version the URI, like /v1/foo vs. /v2/foo >> >> 6) I think it was Craig's suggestion to use a version http header >> (e.g. API-Version: 1.0) >> >> There's also an opinion out there that's a twist on option 1: >> >> 1a) You don't need versioning if you're designing your media types well. >> >> That's just for completeness, there's no best answer. >> --tim >> > > > ------------------------------------ > > Yahoo! Groups Links > > > >
Totally agree with Mike. Side note: Reusability is an effect more than a need of modifiability; the actual part that helps is the loose coupling. Now, I would like to add that some of the techniques described seem just to identify versions; versioning requires a step further. Version identification is useful to avoid conflicts, to choose implementations, or to validate. At the end of the day, if you have identified a version, you will still need to write specific code to handle it. That causes redundancy most of the time, and as Zachman says, redundancy leads to chaos, to the system's entropy. When you don't need version identification simply because a new version will not break the system, you have just achieved modifiability. All in all, it depends on what kind of change creates a new version. Thanks, Mike, you just gave me an idea for a blog post on types of changes that require versioning, and how to solve them without version identification. William Martinez. --- In rest-discuss@yahoogroups.com, mike amundsen <mamund@...> wrote: > > For me, it helps to think about versioning as a _technique_ and not a > goal or system property to be attained. > > Usually when talking about "versioning" we are really trying to deal > with the issue of modifiability at the system (arch) level. Fielding's > dissertation does a good job of identifying "System Properties of Key > Interest"[1] and one section deals with Modifiability[2] in general. > > Once I started thinking about versioning in this way, I was able to > look to other techniques to improve the general modifiability of an > implementation. And sometimes these other techniques (identified in > 2.3.4) made the use of versioning superfluous. 
> > [1] http://www.ics.uci.edu/~fielding/pubs/dissertation/net_app_arch.htm#sec_2_3 > [2] http://www.ics.uci.edu/~fielding/pubs/dissertation/net_app_arch.htm#sec_2_3_4 > > mca > http://amundsen.com/blog/ > http://twitter.com@mamund > http://mamund.com/foaf.rdf#me > > > #RESTFest 2010 > http://rest-fest.googlecode.com > > >
William: Happy to be a catalyst<g>. I'm looking forward to another of your excellent blog posts, too. mca http://amundsen.com/blog/ http://twitter.com@mamund http://mamund.com/foaf.rdf#me #RESTFest 2010 http://rest-fest.googlecode.com On Thu, Sep 30, 2010 at 11:54, William Martinez Pomares <wmartinez@...> wrote: > Totally agree with Mike. > Side note: Reusability is an effect more than a need of modifiability; the actual part that helps is the loose coupling. > > Now, I would like to add that some of the techniques described seem just to identify versions; versioning requires a step further. > Version identification is useful to avoid conflicts, to choose implementations, or to validate. At the end of the day, if you have identified a version, you will still need to write specific code to handle it. That causes redundancy most of the time, and as Zachman says, redundancy leads to chaos, to the system's entropy. > > When you don't need version identification simply because a new version will not break the system, you have just achieved modifiability. > All in all, it depends on what kind of change creates a new version. > > Thanks, Mike, you just gave me an idea for a blog post on types of changes that require versioning, and how to solve them without version identification. > > William Martinez. > --- In rest-discuss@yahoogroups.com, mike amundsen <mamund@...> wrote: >> >> For me, it helps to think about versioning as a _technique_ and not a >> goal or system property to be attained. >> >> Usually when talking about "versioning" we are really trying to deal >> with the issue of modifiability at the system (arch) level. Fielding's >> dissertation does a good job of identifying "System Properties of Key >> Interest"[1] and one section deals with Modifiability[2] in general. >> >> Once I started thinking about versioning in this way, I was able to >> look to other techniques to improve the general modifiability of an >> implementation. 
And sometimes these other techniques (identified in >> 2.3.4) made the use of versioning superfluous. >> >> [1] http://www.ics.uci.edu/~fielding/pubs/dissertation/net_app_arch.htm#sec_2_3 >> [2] http://www.ics.uci.edu/~fielding/pubs/dissertation/net_app_arch.htm#sec_2_3_4 >> >> mca >> http://amundsen.com/blog/ >> http://twitter.com@mamund >> http://mamund.com/foaf.rdf#me >> >> >> #RESTFest 2010 >> http://rest-fest.googlecode.com
On 29 Sept. 2010 at 23:07, mike amundsen wrote: > For me, it helps to think about versioning as a _technique_ and not a > goal or system property to be attained. > > Usually when talking about "versioning" we are really trying to deal > with the issue of modifiability at the system (arch) level. Fielding's > dissertation does a good job of identifying "System Properties of Key > Interest"[1] and one section deals with Modifiability[2] in general. > > Once I started thinking about versioning in this way, I was able to > look to other techniques to improve the general modifiability of an > implementation. And sometimes these other techniques (identified in > 2.3.4) made the use of versioning superfluous. I concur. Defining a service versioning strategy is often perceived, by the companies I'm doing SOA consulting for, as the goal, as Mike points out, because it is identified as the only means to manage service evolution. This isn't a surprise, as people have been so heavily exposed to distributed technologies that induce strong coupling. Often, I then try to convey the idea that "what is central to your needs isn't to define a versioning strategy, but to define a strategy that will let you avoid using versioning whenever possible. You'll want to version only as a last resort." (Here, the goodness of REST and other practices, such as using dynamic data structures in the client and service implementations, kicks in.) Philippe Mougin
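One concrete reading of the "dynamic data structures in the clients and services" practice mentioned above is the tolerant-reader pattern. The sketch below is an illustration under that assumption; the field names and payloads are invented, not from the thread:

```python
# Hedged sketch of a tolerant reader: the client extracts only the
# fields it needs and silently ignores everything else, so the server
# can grow the representation without minting a new version.
# Field names are hypothetical, for illustration only.
import json

def read_order(payload):
    """Pick out the two fields this client cares about; ignore the rest."""
    doc = json.loads(payload)
    return {"id": doc.get("id"), "total": doc.get("total")}

v1 = '{"id": 7, "total": 9.5}'                                       # old payload
v2 = '{"id": 7, "total": 9.5, "currency": "EUR", "discount": 0.1}'   # grown payload
```

Because `read_order` never enumerates the whole document, the v2 server change above needs no version identification at all on this client.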
Alan Dean wrote: > > > Also interesting are IETF's reasons for rejecting the RFC for the > > Link header, for attempting to specify an XML-based IANA registry... > > I'm not familiar with that - do you have a link? > http://lists.w3.org/Archives/Public/ietf-http-wg/2010JulSep/0385.html -Eric
Sorry if there is already a nice list someplace, but I've been trying to make a list of solid blogs that deal primarily with REST. So far, these are my favorites: http://mamund.com/blog http://www.subbu.org http://roy.gbiv.com/untangled/tag/rest http://thisweekinrest.wordpress.com so, yeah...kind of a sparse list =D (also...is there an FAQ or etiquette guide anywhere? is this sort of post cool? ...and is it okay to ask for feedback on my own blog posts? (aka pimp my blog))
On Sep 30, 2010, at 11:46 PM, raypolk55 wrote: > Sorry if there is already a nice list someplace, but I've been trying to make a list of solid blogs that deal primarily with REST. So far, these are my favorites: > > http://mamund.com/blog > http://www.subbu.org > http://roy.gbiv.com/untangled/tag/rest > http://thisweekinrest.wordpress.com *the classic* http://www.markbaker.ca/blog Stefan's http://www.innoq.com/blog/st/ Mine :-) http://www.nordsc.com/blog And definitely more, but I have a bad connection here. Jan > > so, yeah...kind of a sparse list =D > > (also...is there an FAQ or etiquette guide anywhere? is this sort of post cool? ...and is it okay to ask for feedback on my own blog posts? (aka pimp my blog))
likewise.. -------- Original Message -------- Subject: Fwd: [apps-discuss] Fwd: WG Review: Web Security (websec) Resent-Date: Sat, 02 Oct 2010 01:38:30 +0000 Resent-From: ietf-http-wg@... Date: Sat, 2 Oct 2010 11:37:45 +1000 From: Mark Nottingham <mnot@...> To: HTTP Working Group <ietf-http-wg@...> References: <4CA22A4F.3080502@...> In case you haven't seen it... Begin forwarded message: > From: Peter Saint-Andre <stpeter@...> > Date: 29 September 2010 3:47:59 AM AEST > To: "apps-discuss@..." <apps-discuss@...> > Subject: [apps-discuss] Fwd: WG Review: Web Security (websec) > > FYI. > > -------- Original Message -------- > Subject: WG Review: Web Security (websec) > Date: Tue, 28 Sep 2010 10:15:06 -0700 (PDT) > From: IESG Secretary <iesg-secretary@...> > Reply-To: iesg@... > To: ietf-announce@... > CC: hasmat@... > > A new IETF working group has been proposed in the Applications Area. The > IESG has not made any determination as yet. The following draft charter > was submitted, and is provided for informational purposes only. Please > send your comments to the IESG mailing list (iesg@...) by Tuesday, > October 5, 2010. > > Web Security (websec) > --------------------------------------------- > Status: Proposed Working Group > Last updated: 2010-09-23 > > Chairs(s) > Tobias Gondrom <tobias.gondrom@...> > > Applications Area Directors: > Alexey Melnikov <alexey.melnikov@...> > Peter Saint-Andre <stpeter@...> > > Applications Area Advisor: > Peter Saint-Andre <stpeter@...> > > Security Area Advisor: > Sean Turner <turners@...> > > Mailing Lists: > General Discussion: hasmat@... > To Subscribe: <https://www.ietf.org/mailman/listinfo/hasmat> > Archive: <http://www.ietf.org/mail-archive/web/hasmat/> > [to be changed to websec@... if approved] > > Problem Statement > > Although modern Web applications are built on top of HTTP, they provide > rich functionality and have requirements beyond the original vision of > static web pages. 
HTTP, and the applications built on it, have evolved > organically. Over the past few years, we have seen a proliferation of > AJAX-based web applications (AJAX being shorthand for asynchronous > JavaScript and XML), as well as Rich Internet Applications (RIAs), based > on so-called Web 2.0 technologies. These applications bring both > luscious eye-candy and convenient functionality, e.g. social networking, > to their users, making them quite compelling. At the same time, we are > seeing an increase in attacks against these applications and their > underlying technologies. > > The list of attacks is long and includes Cross-Site-Request Forgery > (CSRF)-based attacks, content-sniffing, cross-site-scripting (XSS) > attacks, attacks against browsers supporting anti-XSS policies, > clickjacking attacks, malvertising attacks, as well as man-in-the-middle > (MITM) attacks against "secure" (e.g. Transport Layer Security > (TLS/SSL)-based) web sites along with distribution of the tools to carry > out such attacks (e.g. sslstrip). > > Objectives and Scope > > With the arrival of new attacks the introduction of new web security > indicators, security techniques, and policy communication mechanisms > have sprinkled throughout the various layers of the Web and HTTP. > > The goal of this working group is to compose an overall "problem > statement and requirements" document derived from surveying the > issues outlined in the above section ([1] provides a starting point). > The requirements guiding the work will be taken from the Web > application and Web security communities. The scope of this document > is HTTP applications security, but does not include HTTP authentication, > nor internals of transport security which are addressed by other working > groups (although it may make reference to transport security as an > available security "primitive"). See the "Out of Scope" section, below. 
> > Additionally, the WG will standardize a small number of selected > specifications that have proven to improve security of Internet > Web applications. Initial work will be the following topics: > > - Same origin policy, as discussed in draft-abarth-origin > (see also Appendices A and B, below) > > - HTTP Strict transport security, as discussed in > draft-hodges-strict-transport-sec > > - Media type sniffing, as discussed in draft-abarth-mime-sniff > > This working group will work closely with IETF Apps Area WGs (such as > HYBI, HTTPstate, and HTTPbis), as well as appropriate W3C working > group(s) (e.g. HTML, WebApps). > > Out of Scope > > As noted in the objectives and scope (above), this working group's > scope does not include working on HTTP Authentication nor underlying > transport (secure or not) topics. So, for example, these items are > out-of-scope for this WG: > > - Replacements for BASIC and DIGEST authentication > > - New transports (e.g. SCTP and the like) > > Deliverables > > 1. A document illustrating the security problems Web applications are > facing and listing design requirements. This document shall be > Informational. > > 2. A selected set of technical specifications documenting deployed > HTTP-based Web security solutions. These documents shall be Standards > Track. > > Goals and Milestones > > Oct 2010 Submit "HTTP Application Security Problem Statement and > Requirements" as initial WG item. > > Oct 2010 Submit "Media Type Sniffing" as initial WG item. > > Oct 2010 Submit "Web Origin Concept" as initial WG item. > > Oct 2010 Submit "Strict Transport Security" as initial WG item. > > Feb 2011 Submit "HTTP Application Security Problem Statement and > Requirements" to the IESG for consideration as an > Informational RFC. > > Mar 2011 Submit "Media Type Sniffing" to the IESG for consideration > as a Standards Track RFC. > > Mar 2011 Submit "Web Origin Concept" to the IESG for consideration as > a Standards Track RFC. 
> > Mar 2011 Submit "Strict Transport Security" to the IESG for > consideration as a Standards Track RFC. > > Apr 2011 Possible re-chartering > > References > > [1] Hodges and Steingruebl, "The Need for a Coherent Web Security Policy > Framework", W2SP position paper, 2010. > http://w2spconf.com/2010/papers/p11.pdf > > Appendices > > A. Relationship between origin work in IETF WebSec and W3C HTML WG > > draft-abarth-origin defines the nuts-and-bolts of working with > origins (computing them from URIs, comparing them to each other, etc). > HTML5 defines HTML-specific usage of origins. For example, when > making an HTTP request, HTML5 defines how to compute which origin > among all the origins rendering HTML is the one responsible for making > the request. draft-abarth-origin then takes that origin, serializes > it to a string, and shoves it in a header. > > B. Origin work may yield two specifications > > There also seems to be demand for a document that describes the > same-origin security model overall. However, it seems like that > document ought to be more informative rather than normative. The > working group may split draft-abarth-origin into separate informative > and standards track specifications, the former describing same-origin > security model, and the latter specifying the nuts-and-bolts of working > with origins (computing them from URLs, comparing them to each other, > etc). > _______________________________________________ > apps-discuss mailing list > apps-discuss@... > https://www.ietf.org/mailman/listinfo/apps-discuss -- Mark Nottingham http://www.mnot.net/
No, of course I didn't just send that, but we now have some idea of Yahoo's latency -- which only justifies my decision to re-send instead of waiting, despite the duplicate messages now trickling in. I think that's all of 'em, though... -Eric
On 9/24/2010 8:37 PM, Eric J. Bowman wrote: > Kris Zyp wrote: >> >> I guess maybe I misunderstood what you meant by processing model. If >> it is defined such it is not related to the expected data structures, >> hyperlink mechanisms and available relations, than it is indeed >> orthogonal to a schema. That's fine with me, sorry for any confusion. >> > > What I mean is, you can't recommend doing this: > > Content-Type: application/json; > profile=http://json.com/my-hyper-schema > > Because when I look at the IANA registry, I see that application/json > maps to RFC 4627, which only lists one optional parameter, charset. I > see no definition of any profile parameter. The usage you are > recommending is not self-descriptive, because it is not supported by RFC > 4627. > > One possible processing model for XHTML documents is text/plain, another > is text/html, and another is application/xhtml+xml -- please see RFC > 3236 as an example of how a media type registration is the proper place > to define such usage. RFC 3236 doesn't extend the definition of profile > to text/html, text/plain, or even application/xml -- this would be out- > of-scope. > > If a standard comes along which registers application/foo+json, it is > up to that MIME registration to define the usage of a profile parameter. > Such a definition shouldn't have to worry about conflicting with some > other use of that syntax being forced on it by an unrelated spec, in > particular a schema spec which may have nothing whatever to do with the > schema (or BNF notation) being used to describe application/foo+json. Yes, that makes sense, I'll update the text for the next draft to try to have more appropriate language. 
> > This usage is part of the processing model defined by the media type > identifier, it is not appropriate for a schema language to define such > usage for anything beyond, in this case, application/schema+json -- > your media type registration for that identifier doesn't make sense, as > it doesn't define 'schema' or 'schema.items' and omits any mention of > the 'profile' parameter (again, see RFC 3236). > > Don't get me wrong, I'm in favor of +json, I'm just giving the same > feedback that both schema+json and senml+json have gotten -- the right > way to do this is to first change RFC 4288, then RFC 4627: > > http://www.ietf.org/mail-archive/web/ietf-types/current/msg01062.html > > Otherwise, I don't see approval of either I-D until that issue is > settled or their associated identifier syntax is changed. But then > again, maybe it will be anyway, I don't know -- all I do know is that > no +json identifier has yet made it into the IANA registry, therefore no > +json identifier is currently self-descriptive; and that it is not self- > descriptive to use a profile parameter in conjunction with any media > type identifier unless that identifier defines a profile parameter. From that thread, it sounded like everyone was in favor of making the updates. I wonder if that is being done by someone... -- Kris Zyp SitePen (503) 806-1841 http://sitepen.com
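For readers following along: the disputed header is ordinary media-type parameter syntax, and the objection in the thread is about registration, not parsing. A small sketch using Python's stdlib MIME machinery makes the distinction visible (the profile URI is the one quoted in the exchange above):

```python
# The header parses fine mechanically -- the point made in the thread is
# that RFC 4627 registers only an optional 'charset' parameter for
# application/json, so a 'profile' parameter there is syntactically
# parseable but has no registered, self-descriptive meaning.
from email.message import EmailMessage

msg = EmailMessage()
msg["Content-Type"] = 'application/json; profile="http://json.com/my-hyper-schema"'

media_type = msg.get_content_type()  # the registered identifier
profile = msg.get_param("profile")   # parses, but is undefined for this type
```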
+1 and that's just this thread :-) I would love for Roy to be that person since the topic always leads back to his direction. Glenn On Tue, Sep 7, 2010 at 10:23 AM, Bob Haugen <bob.haugen@...> wrote: > > > Yahoo says there are 103 messages in this thread. The discussion is > circular and will never end. > > May I suggest starting a new thread with an appropriate title to focus > exclusively on the IANA registry issue, where each person who has a > different position states their position clearly and succinctly, and > thereafter we refer back to that thread as a FAQ? > > Best, I think, if somebody with moderator-type skills summarizes all of > the contradictory positions at the end of the thread, so we don't get > into a who-gets-the-last-word fight. > > I could start one, but I don't really have a position other than > wanting to shortcut permathreads. > >
While at REST FEST, Darrel Miller demonstrated a shop floor automation system that is designed with a RESTful style. The system (at least as shown) uses a proprietary smart client to talk back to the server. Assuming the system uses one or more custom media types, would the recommendation be that those types be registered with IANA? Darrel, I hope you don't mind being my example :-) Thanks Glenn
Glenn Block wrote: > > Assuming the system uses one or more custom media types would the > recommendation be that those types be registered with IANA? > It's more than a recommendation, it's an absolute requirement, as having a network-defined (IP defines MIME defines the IANA registry) identifier is the fundamental difference between a uniform interface (network-defined) and a plain old HTTP interface (library-defined). The dichotomy between resource and representation is realized by exposing the sender's processing intent in a header, allowing that intent to be decoupled from the data format. Just like Gopher and HTTP, not at all like FTP. The essence of the REST style is that this decoupling must be based on a network-defined value for that header. Gopher messaging over IP is not self-descriptive because it defines its own identifier syntax, instead of using the agreed-upon standard for self-descriptive IP network message tagging and bagging, the Internet Media Type (RFC 2048). (Yes, I realize this contradicts what I've said before, Nathan's feedback has made my position even more strict than it was. Don't forget that to some extent, REST is an ex-post-facto explanation of the decision to extend MIME beyond e-mail to begin with, by encapsulating the rationale and benefits of this decision within a constraint.) In order to meet REST's definition of what makes an interface uniform, the media type must be a media type, where IP is concerned. If the identifier in Content-Type isn't registered, then it isn't an Internet Media Type, by the definition of media type, therefore it's *just a string*, even if it's a URI, regardless of its syntax. Internet Media Types are self-descriptive *because* they're Internet Media Types, i.e. registered. Strings are not Internet Media Types, therefore they are not self-descriptive, therefore not standardized, therefore not uniform, therefore library-based instead of network- based, therefore not remotely the same style as REST. -Eric
> > Internet Media Types are self-descriptive *because* they're Internet > Media Types, i.e. registered. Strings are not Internet Media Types, > therefore they are not self-descriptive, therefore not standardized, > therefore not uniform, therefore library-based instead of network- > based, therefore not remotely the same style as REST. > We can discuss the theoretical importance of this for proprietary systems until the cows come home, of course, provided that we agree on the definition of "media type" vs. "data type + identifier string". But the ramifications of the constraint lead to this advice: "[W]orking with the community and discussing the proposed media type with experts on the ietf-types list in order to create something that can be registered will probably lead to better results. There are many people who are happy to help create solutions for problems, and standardization bodies + would-be communities that will gladly assist in creating a standardized solution." http://tech.groups.yahoo.com/group/rest-discuss/message/16653 There's nothing wrong with some outside review, even for proprietary solutions, of any proposed media type (there's no such thing as an unregistered media type). Even if the decision is made to continue sending a string in Content-Type in lieu of a media type, surely some insight from experts in media type design would be of use. -Eric
Would you consider this list to be an appropriate venue to solicit feedback about potential media-types, before asking on the IETF types list? Darrel On Sun, Oct 3, 2010 at 9:33 PM, Eric J. Bowman <eric@...>wrote: > > > > We can discuss the theoretical importance of this for proprietary > systems until the cows come home, of course, provided that we agree on > the definition of "media type" vs. "data type + identifier string". > But the ramifications of the constraint lead to this advice: > > "[W]orking with the community and discussing the proposed media type > with experts on the ietf-types list in order to create something that > can be registered will probably lead to better results. There are many > people who are happy to help create solutions for problems, and > standardization bodies + would-be communities that will gladly assist > in creating a standardized solution." > > http://tech.groups.yahoo.com/group/rest-discuss/message/16653 > > There's nothing wrong with some outside review, even for proprietary > solutions, of any proposed media type (there's no such thing as an > unregistered media type). Even if the decision is made to continue > sending a string in Content-Type in lieu of a media type, surely some > insight from experts in media type design would be of use. > > -Eric > >
Darrel Miller wrote: > > Would you consider this list to be an appropriate venue to solicit > feedback about potential media-types, before asking on the IETF types > list? > Absolutely! I think that would lead to much better conversations on both lists, resulting in better, more re-usable media types for everyone to choose from (instead of one media type per dialect as per Nathan), encompassing as many problem domains as are already using unregistered identifiers as solutions. -Eric
I agree that having some place where such discussions can take place would be valuable. I am not sure this is the right list or if there should be a list formed specifically for that purpose in order to make it more useful. I do think, regardless, that having a body of lore on media type design would be really valuable. Back to the question at hand, I find myself struggling with what I lose if the type is not registered. Saying it must be registered implies some big loss. For a proprietary system where I control who the clients are, or I use code on-demand to deploy the user-agent code, why do I care if the type is registered or not? Can you point out exactly what I lose in such a system if I don't register it? On Sun, Oct 3, 2010 at 7:27 PM, Eric J. Bowman <eric@...> wrote: > Darrel Miller wrote: > > > > Would you consider this list to be an appropriate venue to solicit > > feedback about potential media-types, before asking on the IETF types > > list? > > > > Absolutely! I think that would lead to much better conversations on > both lists, resulting in better, more re-usable media types for everyone > to choose from (instead of one media type per dialect as per Nathan), > encompassing as many problem domains as are already using unregistered > identifiers as solutions. > > -Eric >
Glenn Block wrote: > > I do think regardless having a body or some lore on media type design > would be really valuable. > I think there is one -- every media type (i.e. registered) links to a media type description. Each one (where one exists) describes a processing model that was approved by IANA at one time or another, as being exactly what is meant by "media type". The demo I posted uses a dozen different media types. Reading all those documents takes a day, but provides much enlightenment on approved media type design. > > Back to the question at hand, I find myself struggling with what I > lose if the type is not registered. Saying it must be registered > implies some big loss. For a proprietary system where I control who > the clients are, or I use code on-demand to deploy the user-agent > code, why do I care if the type is registered or not? > > Can you point out exactly what I lose in such a system if I don't > register it. > No, I can't, only the system designer is in a position to judge what's best for the system being designed. I can point out that you're asking the wrong question, though... the proper questions are what properties are you seeking to induce, and what constraints do you apply to achieve those properties? From REST Chapter 6.5: "Why is this important? Because it differentiates a system where network intermediaries can be effective agents from a system where they can be, at most, routers." If you honestly don't care about traversing network boundaries, or re-use by intermediaries if you do, then I can think of at least three REST constraints off the top of my head which are irrelevant to your needs -- IOW, why choose REST as an architectural style if it's fundamentally inappropriate to your design goals? Everyone keeps asking if it's OK to ignore the uniform interface in situations where the uniform interface is irrelevant to their needs... sure, just don't call the result REST, or obsess over it not being REST. 
The first three chapters of Roy's thesis lay out a terminology and a rationale for deriving networked application architectures (including but not limited to hypermedia applications). The next two chapters lay out the design concerns of the REST style, and the methodology used to derive it based on the terminology and rationale from Chapters 1-3. If your concerns are different, then following Roy's methodology won't lead you to REST, but to some other style -- all that matters is that it leads you to an architectural style that's appropriate for your system. You can name it whatever you want, because an architectural style is really just a named set of interdependent constraints -- which is why omitting any constraint results in a different style. Start with the null style (as described in Chapter 5). Apply the subset of architectural constraints which result in the properties you seek to induce (as described in Chapter 3). Ignore the subset of architectural constraints whose properties don't advance you to your goal. The result is an architectural style appropriate to serve as a design guideline when modeling and implementing your system. If that style doesn't require media types, so be it, but the REST style sure does -- most intermediaries only care about a limited subset of media types (meaning registered). Which is why self-descriptive messaging is essential to the REST style -- it allows traversing of network boundaries (not guaranteed without using media types, could be seen as tunneling) while enabling intermediaries to act as intelligent agents (participate in the communication beyond being routers). Saying you don't care about those properties, means you have no need for the full set of REST constraints, means you have some other set of constraints, which isn't the same set of constraints defined by the term REST. 
Which is not to say you shouldn't follow Roy's thesis, only that you can't *expect* that methodology to result in the same architectural style Roy defines for purposes you state aren't relevant to your system's needs, anyway... -Eric
Most OS package-management systems have GUIs these days, and the back-ends to these systems mostly use wget (mostly to FTP sites) as user agent. Which URIs wget fetches are determined by the text configuration files (patches, prerequisite packages, options etc.). If a URI is unavailable, an alternate is selected and processing continues -- the user selecting these application state transitions expressed as URIs is the installer process (i.e. ports/pkgsrc), no human involved except to select options and initiate the installer (via hypertext API of course). The media type of each patch tells us the encoding of its tarball, not that when extracted it's part of a numbered set of patches for some other tarball. This processing model may be standards-worthy -- as a multipart media type which allows each tarball to be a set of alternate links in order of preference (or via net heuristic), and establishes the order in which the patches are to be applied. Just a thought -- while a RESTful-m2m fork of pkgsrc would be fun, I hardly have the time. Instead of downloading a tree of stub files, the installer dereferences the package manifest of the requested package from some host, the representation is this new multipart media type, which defines the sender's intended processing model -- guiding the installer's use of wget as its agent, extracting and patching together the returned tarballs. -Eric
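The processing model sketched in that post (alternate links in order of preference per tarball, patches applied in a declared order) can be illustrated with a short sketch. The manifest layout, field names, and helper signatures below are invented for illustration; they are not part of pkgsrc or any registered multipart type.

```python
# Hypothetical sketch of the multipart-manifest processing model
# described above. The manifest layout, field names, and helper
# signatures are invented for illustration; they are not part of
# pkgsrc or any registered multipart type.

def plan_fetches(manifest, available):
    """Return (tarball_plan, ordered_patches) for an installer agent.

    `manifest` maps each part to its alternate URIs in order of
    preference; `available` is whatever reachability test the agent
    uses (e.g. a HEAD request issued by its wget-like user agent).
    """
    plan = {}
    for part, alternates in manifest["parts"].items():
        # Take the first reachable alternate, in sender-declared order.
        chosen = next((uri for uri in alternates if available(uri)), None)
        if chosen is None:
            raise LookupError("no reachable alternate for " + part)
        plan[part] = chosen
    # Patches must be applied strictly in the declared order.
    return plan, list(manifest["patch_order"])

manifest = {
    "parts": {
        "source": ["ftp://mirror1/foo.tar.gz", "ftp://mirror2/foo.tar.gz"],
        "patch-01": ["ftp://mirror1/patch-01"],
    },
    "patch_order": ["patch-01"],
}
# Pretend mirror1's tarball is down but its patch file is reachable.
plan, order = plan_fetches(
    manifest, lambda uri: "mirror2" in uri or "patch" in uri)
```

The point of the sketch is that the sender's intended processing model (fallback order, patch order) lives in the representation itself, so any agent that understands the hypothetical media type can drive the installation without out-of-band stub files.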
So what I am hearing you say is one should start with the types of properties they want a system to induce and apply the constraints that lead to the emergence of those properties. However, unless you are applying the full set of constraints that REST requires, don't call it RESTful. I have no issue with that. But (and this is the rub) on the other side of the discussion I am seeing folks arguing against the premise that registration with IANA is indeed required for conformance to the uniform interface. Which then leads back to the "Is it RESTful" discussion. On Sun, Oct 3, 2010 at 8:36 PM, Eric J. Bowman <eric@...> wrote: > Glenn Block wrote: > > > > I do think regardless having a body or some lore on media type design > > would be really valuable. > > > > I think there is one -- every media type (i.e. registered) links to a > media type description. Each one (where one exists) describes a > processing model that was approved by IANA at one time or another, as > being exactly what is meant by "media type". The demo I posted uses a > dozen different media types. Reading all those documents takes a day, > but provides much enlightenment on approved media type design. > > > > > Back to the question at hand, I find myself struggling with what I > > lose if the type is not registered. Saying it must be registered > > implies some big loss. For a proprietary system where I control who > > the clients are, or I use code on-demand to deploy the user-agent > > code, why do I care if the type is registered or not? > > > > Can you point out exactly what I lose in such a system if I don't > > register it. > > > > No, I can't, only the system designer is in a position to judge what's > best for the system being designed. I can point out that you're asking > the wrong question, though... the proper questions are what properties > are you seeking to induce, and what constraints do you apply to achieve > those properties? From REST Chapter 6.5: > > "Why is this important? 
Because it differentiates a system where > network intermediaries can be effective agents from a system where they > can be, at most, routers." > > If you honestly don't care about traversing network boundaries, or > re-use by intermediaries if you do, then I can think of at least three > REST constraints off the top of my head which are irrelevant to your > needs -- IOW, why choose REST as an architectural style if it's > fundamentally inappropriate to your design goals? > > Everyone keeps asking if it's OK to ignore the uniform interface in > situations where the uniform interface is irrelevant to their needs... > sure, just don't call the result REST, or obsess over it not being REST. > > The first three chapters of Roy's thesis lay out a terminology and a > rationale for deriving networked application architectures (including > but not limited to hypermedia applications). The next two chapters lay > out the design concerns of the REST style, and the methodology used to > derive it based on the terminology and rationale from Chapters 1-3. > > If your concerns are different, then following Roy's methodology won't > lead you to REST, but to some other style -- all that matters is that it > leads you to an architectural style that's appropriate for your system. > You can name it whatever you want, because an architectural style is > really just a named set of interdependent constraints -- which is why > omitting any constraint results in a different style. > > Start with the null style (as described in Chapter 5). Apply the subset > of architectural constraints which result in the properties you seek to > induce (as described in Chapter 3). Ignore the subset of architectural > constraints whose properties don't advance you to your goal. The result > is an architectural style appropriate to serve as a design guideline > when modeling and implementing your system. 
> > If that style doesn't require media types, so be it, but the REST style > sure does -- most intermediaries only care about a limited subset of > media types (meaning registered). Which is why self-descriptive > messaging is essential to the REST style -- it allows traversing of > network boundaries (not guaranteed without using media types, could be > seen as tunneling) while enabling intermediaries to act as intelligent > agents (participate in the communication beyond being routers). > > Saying you don't care about those properties, means you have no need > for the full set of REST constraints, means you have some other set of > constraints, which isn't the same set of constraints defined by the term > REST. Which is not to say you shouldn't follow Roy's thesis, only that > you can't *expect* that methodology to result in the same architectural > style Roy defines for purposes you state aren't relevant to your > system's needs, anyway... > > -Eric >
Glenn, On Oct 4, 2010, at 4:39 AM, Glenn Block wrote: > > > I agree that having some place where such discussions can take place would be valuable. I am not sure this is the right list or if there should be a list formed specifically for that purpose in order to make it more useful. I do think regardless having a body or some lore on media type design would be really valuable. > > Back to the question at hand, I find myself struggling with what I lose if the type is not registered. You might call it desirable, but it is certainly not required to register media types with IANA to have a working system. There are several non-registered types out there today that are being used successfully - and there are also a bunch of registered types that are not being used widely (or at all). What matters is that you are able to find the specification and, maybe more importantly, the community around it. Usually the spec alone is not sufficient to grasp the type, eh? Jan > Saying it must be registered implies some big loss. For a proprietary system where I control who the clients are, or I use code on-demand to deploy the user-agent code, why do I care if the type is registered or not? > > Can you point out exactly what I lose in such a system if I don't register it. > > On Sun, Oct 3, 2010 at 7:27 PM, Eric J. Bowman <eric@...> wrote: > Darrel Miller wrote: > > > > Would you consider this list to be an appropriate venue to solicit > > feedback about potential media-types, before asking on the IETF types > > list? > > > > Absolutely! I think that would lead to much better conversations on > both lists, resulting in better, more re-usable media types for everyone > to choose from (instead of one media type per dialect as per Nathan), > encompassing as many problem domains as are already using unregistered > identifiers as solutions. > > -Eric > > > >
Glenn Block wrote: > > But (and this is the rub) on the other side of the discussion I am > seeing folks arguing against the premise that registration with IANA > is indeed required for conformance to the uniform interface. > None of those arguments have explained why Roy didn't mean exactly what he said (why should he need to repeat himself): "Self-descriptive means that the type is registered and the registry points to a specification and the specification explains how to process the data according to [sender] intent." http://tech.groups.yahoo.com/group/rest-discuss/message/6594 http://tech.groups.yahoo.com/group/rest-discuss/message/6615 Which is the same conclusion to be drawn from the thesis, absent that statement, with or without the mime-respect note. The constraint in question is the embodiment of the decision to expand MIME from e-mail to become *the* tagging-and-bagging spec for IP network protocols. Chapter 6 extensively covers the topic of REST mismatches in HTTP, without mentioning the IANA registry... There is simply no argument to be made against this, which doesn't amount to advocating against standardization or against following the specs as they're written -- both of which are against the point of REST. The definition of self-descriptive is not "has Content-Type," it is "Content-Type contains a media type;" the definition of media type is not "any string sent in Content-Type," but "strings specified in the IANA registry" -- when we're talking about HTTP and Internet Protocol, respectively; application/foo+xml is not a media type, it's just a string. REST clearly defines media type as representation metadata as per RFC 2048, which clearly states "A registration process is needed, however, to ensure that the set of such values is developed in an orderly, well-specified, and public manner." 
If Roy didn't mean to constrain the value of Content-Type to a registered subset of publicly well-specified media types developed in an orderly manner, then why does he even mention media types let alone refer to RFC 2048, and why does RFC 2616 discourage the use of even the allowable x. syntax? And, why does this document: http://www.ietf.org/id/draft-masinter-mime-web-info-00.txt even exist as an RFC process with one stated goal being to edit AWWW to go from not mentioning media types, to explaining why they're critical? Because the TAG has decided to move *away* from REST? ;-) Seriously, how is anything I've been saying in conflict with that document (they even mention poor ol' Gopher)? That's the TAG doubling down on the IANA registry... Where are the voices of protest against that, which are so quick to condemn my statements as absurd? G'head, let the TAG have it, that's www-tag@..., if they're not following REST surely someone ought to let them know? Their goals are congruous with REST, so if their actions aren't RESTful how will they achieve those goals if nobody steps up and educates them as to their folly? Does the following passage not enumerate some of REST's key design goals? "We need a clear direction on how to make the web more reliable, not less. We need a realistic transition plan from the unreliable web to the more reliable one. Part of this is to encourage senders (web servers) to mean what they say, and encourage recipients (browsers) to give preference to what the senders are sending... [We should] encourage behavior which, on the one hand, continues to work with the already deployed infrastructure (of servers, browsers, and intermediaries), but which advice, if followed, also improves the operability, reliability and security of the web." By which the TAG clearly means _use media types_ instead of strings in your Content-Type headers... not at all coincidentally, just as REST prescribes! 
Something tells me not to expect Roy to post any criticism of the TAG's increasing focus on media types as the key to reliability, interoperability and security at Internet scale. Kind of exactly what I've been saying on this list for the past year now, to the extent of editing the thesis, without rebuke from Roy... This _still_ shouldn't be controversial at all, it's exactly what the specs say, which is exactly the standardization that is meant by "uniform" in REST. Nobody has provided any logical explanation as to how "uniform" could possibly mean "unstandardized random strings instead of media types" by *any* interpretation of REST or the standards it refers to, or why it's irrelevant to Web architecture for uniform to actually mean ad-hoc. What is the sender intent here? Is this a media type by virtue of being a standard linked to in a Content-Type header? Content-Type: http://www.w3.org/TR/xhtml1/ HTML rendering or XML parsing? That isn't a media type, it's just a string. Yeah, it's a URI that points to a standardized data type, but in terms of defining sender intent it's beyond useless, because that's what media types do, and a URI is not a media type, not even this one: Content-Type: http://www.ietf.org/rfc/rfc3023.txt That is *not* self-descriptive because URIs are *not* media types by virtue of being sent in Content-Type, and neither is application/foo+xml. The only things that are media types, are listed in the IANA registry, by definition. Anything else is _just a string_ and does not describe sender intent, including and especially application/rss+xml, the use of which is a _blatant_ violation of RFC 2048 and _still_ requires introspection to determine intent -- how is any identifier which fails to describe sender intent, self-descriptive of that intent? Practicing REST on the Web means you MUST use media types. 
If there's one hard-and-fast rule which always holds true, it's this one, and it's fundamental to the whole concept of REST's uniform interface. (I dream of the day when that last statement garners a single +1.) -Eric
Glenn Block wrote: > For a proprietary system where I control who the clients are, or I use > code on-demand to deploy the user-agent code, why do I care if the type is > registered or not? > > Can you point out exactly what I lose in such a system if I don't register > it. You don't lose anything that wasn't already lost by the system being un-RESTful, constraint number 1: "The first constraints added to our hybrid style are those of the client-server architectural style, described in Section 3.4.1. Separation of concerns is the principle behind the client-server constraints. ... the separation allows the components to evolve independently, thus supporting the Internet-scale requirement of multiple organizational domains." Maybe I completely misunderstand something but afaict that certainly doesn't describe "a proprietary system where I control who the clients are, or I use code on-demand to deploy the user-agent code" Best, Nathan
On Mon, Oct 4, 2010 at 9:20 AM, Eric J. Bowman <eric@...> wrote: > > Which is the same conclusion to be drawn from the thesis, absent that > statement, with or without the mime-respect note. The constraint in > question is the embodiment of the decision to expand MIME from e-mail > to become *the* tagging-and-bagging spec for IP network protocols. > Chapter 6 extensively covers the topic of REST mismatches in HTTP, > without mentioning the IANA registry... Why would that be a mismatch? You're missing the point; the argument has been that centralised registration is _not a requirement_ for REST. Shared understanding is the requirement, which can be achieved via other means than a central registry. Interestingly, HTTP doesn't actually *require* use of registered type identifiers, which we know because of the specific wording in 2616 where non-registered identifiers are merely "discouraged". If formal IANA registration was a fundamental requirement to establish shared understanding, presumably _that_ should have been raised as a mismatch? Oops. > REST clearly defines media type as representation metadata as per RFC > 2048, which clearly states "A registration process is needed, however, > to ensure that the set of such values is developed in an orderly, > well-specified, and public manner." That is anything but "hard science". It's completely subjective and it isn't backed up by any empirical evidence. It also happens to be being ignored (in the real world) by people building systems that - obviously by some act of the devil - actually work ok; with intermediaries (i.e. caches and proxies) leveraging the self-descriptiveness of messages too. Shocking. > If Roy didn't mean to constrain > the value of Content-Type to a registered subset of publicly well- > specified media types developed in an orderly manner, then why does he > even mention media types let alone refer to RFC 2048 I don't know why the style references an implementation detail of the web. 
That does seem a bit odd. Cheers, Mike
Mike Kelly wrote: > On Mon, Oct 4, 2010 at 9:20 AM, Eric J. Bowman <eric@...> wrote: >> Which is the same conclusion to be drawn from the thesis, absent that >> statement, with or without the mime-respect note. The constraint in >> question is the embodiment of the decision to expand MIME from e-mail >> to become *the* tagging-and-bagging spec for IP network protocols. >> Chapter 6 extensively covers the topic of REST mismatches in HTTP, >> without mentioning the IANA registry... > > Why would that be a mismatch? > > You're missing the point; the argument has been that centralised > registration is _not a requirement_ for REST. Shared understanding is > the requirement, which can be achieved via other means than a central > registry. Shared understanding is the requirement, so given a temporally varying set of clients, servers and intermediaries, and a constant transfer protocol, then the only way to achieve a shared understanding of anything, from messages through to the type label of the content being transferred, is to have it defined by the constant, namely the transfer protocol. That is to say, three servers and six clients all sharing knowledge that a specific content type label exists, and sharing an understanding of content typed with that label does not constitute shared understanding at an architectural level. 
To illustrate, simply include another set of clients and servers which also share knowledge that a specific content type label exists, and understand content typed with the label, but use the same content type label as our original set; this is a conflict, a name collision if you will, and proves that understanding is not shared at an architectural, or even transfer protocol, level. The only way to prevent this conflict at an architectural or transfer protocol level is to define a set of content type labels and a method for components to share which of the set they understand, for instance the Content-Type and Accept headers in the hypertext transfer protocol. In order to understand content typed with a specific label, a specification for that content type must be known, and given that some content type outwith the set could also share the same name, or that two content types within the set could share similar ambiguous names, then the only way to prevent conflict here is to store a reference to the content type specification along with the label for that content type, within the defined set. So, the only way to meet this requirement of shared understanding is to have a defined set of content type label + content type specification pairs at architectural or transfer protocol level. That set may consist of only one, or may be a fixed set, or may be a varying set. If it's a varying set then some means of adding and removing content type label + content specification pairs is needed - typically we refer to this as a registry with a registration process. Thus we need a registry at the architectural level, and at transfer protocol level. 
Bringing it back to the real world, *the shared understanding of content type labels already exists* at an architectural level, we call the labels "Internet Media Types", they have a registry at IANA, and a well-defined registration process, these aren't just transfer protocol specific, they are a shared understanding at the Internet Layer, which is two layers below any transfer protocol. This is why the transfer protocols you use on a daily basis, like SMTP, FTP and HTTP all use internet media types. In other words, to meet the shared understanding constraint and not use IANA registered media types you would not only have to go lower than the application layer (HTTP/DNS etc), but you'd have to go lower than the transport layer (tcp/udp etc), and even lower still, somewhere below the internet layer (IPv4, IPv6 etc) and start from scratch tbh, define a new internet-like-thing which isn't the internet and doesn't use any of the internet stack, components or protocols. Or am I missing the point too? Best, Nathan
Nathan wrote: > > Mike Kelly wrote: > > On Mon, Oct 4, 2010 at 9:20 AM, Eric J. Bowman > > <eric@...> wrote: > >> Which is the same conclusion to be drawn from the thesis, absent > >> that statement, with or without the mime-respect note. The > >> constraint in question is the embodiment of the decision to expand > >> MIME from e-mail to become *the* tagging-and-bagging spec for IP > >> network protocols. Chapter 6 extensively covers the topic of REST > >> mismatches in HTTP, without mentioning the IANA registry... > > > > Why would that be a mismatch? > > > > You're missing the point; the argument has been that centralised > > registration is _not a requirement_ for REST. Shared understanding > > is the requirement, which can be achieved via other means than a > > central registry. > > Shared understanding is the requirement... > This statement is accurate, but imprecise. REST requires that the shared understanding be network-based. If the shared understanding isn't network-based, it's library-based. REST defines the difference as the self-descriptive messaging constraint, defining uniform to mean standardized and network-based. Which is exactly what your post goes on to explain... > > The only way to prevent this conflict at an architectural or transfer > protocol level is to define a set of content type labels and a method > for components to share which of the set they understand, for > instance the Content-Type and Accept headers in the hypertext > transfer protocol. > +1 > > In order to understand content typed with a specific label, then a > specification for that content type must be known, and given that > some content type outwith the set could also share the same name, or > that two content types within the set could share similar ambiguous > names, then the only way to prevent conflict here is to store a > reference to the content type specification along with the label for > that content type, within the defined set. 
> Exactly like how domain name lookups work on IP by using the root name servers to define a distributed registry. You can use any domain name you want on your LAN, but the shared understanding won't be network- based, i.e. you can't extend your API over the Internet that way, because it isn't an Internet domain name unless it's duly registered. It isn't in-scope for HTTP to define any other mechanism for domain name lookups, HTTP refers to URI which specifies DNS for IP networks. It also isn't in-scope for HTTP to spec any other mechanism for understanding Content-Type headers aside from the IANA registry. Wrong layer entirely. Unregistered identifiers can't be forbidden, because that would prevent new media types from evolving. HTTP discourages persistent use of x identifiers, in favor of re-using media types or registering new ones. All HTTP is doing there, is defining a profile of the existing rules of the underlying network: http://tools.ietf.org/html/rfc1521#section-4 This is quite clear that unless you're using a string that's defined in the IANA registry, you must use x- (which has been obsoleted by x.) so as not to confuse anyone into thinking it's a standardized Internet Media Type. If HTTP doesn't override the network-based shared understanding of the Content-Type header, how do you justify doing this at an even higher level, i.e. in your HTTP API, and claiming it's network-based, particularly when REST defines media type as RFC 2048, further confirmed by Roy's clarification that self-descriptive = registered (at the minimum)? > > Bringing it back to the real world, *the shared understanding of > content type labels already exists* at an architectural level, we > call the labels "Internet Media Types", they have a registry at IANA, > and a well defined registration process, these aren't just transfer > protocol specific, they are a shared understanding at the Internet > Layer, which is two layers below any transfer protocol. 
> > This is why the transfer protocols you use on a daily basis, like > SMTP, FTP and HTTP all use internet media types. > Just like how they all re-use URI and DNS. Part of IP. > > In other words, to meet the shared understanding constraint and not > use IANA registered media types you would not only have to go lower > than (HTTP/DNS etc), but you'd have to lower than the transport layer > (tcp/udp etc), and even lower still, somewhere below the internet > layer (IPv4, IPv6 etc) and start from scratch tbh, define a new > internet-like-thing which isn't the internet and doesn't use any of > the internet stack, components or protocols. > > Or am I missing the point too? > I think your posts in this discussion since joining the group, have exponentially increased my understanding of the point I've been trying to make for a year now. My conclusions were correct, but my rationale was off. Thinking of the IANA registry being as critical as DNS to the operations of network protocols in general, further confirms REST's rationale in considering it critical to any uniform interface style for IP networking. -Eric
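The naming rule at issue in this exchange (per the registration trees of RFC 2048 and its successors: "x-"/"x." marks experimental, unregistered types; "vnd." and "prs." mark the vendor and personal trees) can be sketched as a toy classifier. This is a simplification for illustration, not a full Content-Type parser:

```python
# Toy classifier for the MIME naming rules discussed above
# (RFC 2048 and its successors define registration "trees"):
# subtypes beginning with "x-"/"x." are experimental and
# unregistered by definition; "vnd." and "prs." mark the vendor
# and personal trees; everything else merely has the *form* of a
# standards-tree name. Simplified for illustration; not a full
# Content-Type parser.

def classify(media_type):
    # Split "type/subtype;params" and keep just the subtype token.
    _, _, subtype = media_type.partition("/")
    subtype = subtype.split(";")[0].strip()
    if subtype.startswith(("x-", "x.")):
        return "experimental"
    if subtype.startswith("vnd."):
        return "vendor tree"
    if subtype.startswith("prs."):
        return "personal tree"
    return "standards tree"
```

Note the limitation Eric raises elsewhere in the thread: a string like application/rss+xml classifies as "standards tree" by form, yet only the IANA registry can say whether it actually names a registered type; the string alone proves nothing about shared understanding.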
I was hoping I could get some more clarity on some things.
What are some examples of serendipitous reuse that the Web offers
applications today?
One, I guess, is caching. It has been suggested that admins won't
cache unfamiliar data types.
But what are some other examples? I heard a mention of link caching,
or some such thing. What is that referring to? Is that premised on a
proxy perhaps prefetching links in an HTML payload much like some
browsers do today? Something of that nature?
Are search engine search bots an example?
My other question refers to the use of bundling domain specific
information in a generic media type.
For example, a campaign donation. In theory, in the US, candidates
need to make their campaign donations accessible to the public.
It's not a leap to suggest a campaign website publishing a service
that returns an Atom list of donations based on some query.
For example, GET /donations?query=county:Los%20Angeles to see all
donations for Los Angeles county.
And the result can have links to the actual donation documents.
On the one hand, these donation documents could be
application/x-campaign-donation+xml, with a specification posted on
the campaign website. But that's an unregistered media type.
<donation>
<name>Bob Eubanks</name>
<date>09/01/2010</date>
<amount>25.00</amount>
</donation>
On the other hand, it could be simply text/html:
<html>
<body>
<dl>
<dt>Name</dt><dd>Bob Eubanks</dd>
<dt>Date</dt><dd>09/01/2010</dd>
<dt>Amount</dt><dd>$25.00</dd>
</dl>
</body>
</html>
Here's my issue.
The application/x-campaign-donation+xml is not self descriptive, since
it is unregistered. Therefore it has no expectation of getting any
reuse. It may well not even be cached, even with appropriate caching
headers.
The HTML version is self descriptive, but it's only self descriptive
of HTML. It's not self descriptive of a campaign donation. There is no
way to identify this resource as a campaign donation. It can benefit
from some reuse, notably caching, potentially google, etc. But there
can be no expectation of reuse at the domain level. For example, if
someone wanted to track the rate of donations by county, they cannot
do that on the HTML payload, as they have no documentation of the
domain elements within the payload. This has no semantics outside of
HTML, because that's all it is identified with.
Much like the difference between application/xml and
application/x-campaign-donation+xml. Both are XML, but one has the
campaign donation semantics associated with it.
That's my conflict with all this: using generic containers, you
may only be able to get domain knowledge through introspection, yet
introspection is considered bad practice; that's one reason cited
why application/xml is not a proper media type to use.
I was hoping this conundrum could be discussed to learn how in
practice this conflict can be overcome.
Regards,
Will Hartung
(willh@...)
Will Hartung wrote:
> That's my conflict with all this. That using generic containers, you
> may only be able to get domain knowledge through introspection, yet
> introspection is considered a bad practice, that's one reason cited
> why application/xml is not a proper media type to use.

Introspection to determine sender intent is bad practice -- sender intent should be declared by media type. Introspecting the semantics of the payload is not. The media type needs to tell me what kind of processor to use in order to understand the semantics of the payload. These semantics can't be understood unless the payload can first be decoded, but they have no bearing on the sender's intended processing model.

There are about a bazillion different uses for HTML, from campaign donations, to banking, to airline reservations, to etc. etc. etc. and so on and so forth, all of which share the same processing model. Determining the semantics of the payload is a separate problem -- first you need to know how to decode the payload, and that sender intent is what a media type is self-descriptive of.

In many cases, the data type is XHTML 1.0, which may be sent as text/html or as application/xhtml+xml. If the rules are followed, the semantics of the payload are exactly the same either way, because they have nothing to do with the processing model being HTML vs. XML.

You can use RDFa to annotate any standard element with domain-specific vocabulary. Or microformats, or microdata, or linked RDF, or whatever else comes along -- using any of which has no bearing on media type.

-Eric
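Eric's microdata/RDFa point can be made concrete: the same text/html payload can carry domain vocabulary that any generic HTML parser extracts, with no new media type required. A minimal sketch using Python's stdlib `html.parser`; the `itemtype` URI and `itemprop` names are hypothetical, mirroring Will's donation fields.

```python
from html.parser import HTMLParser

# Sketch: extracting domain data from plain text/html annotated with
# microdata (itemprop attributes). The vocabulary is illustrative.

HTML = """
<dl itemscope itemtype="http://example.org/Donation">
  <dt>Name</dt><dd itemprop="name">Bob Eubanks</dd>
  <dt>Date</dt><dd itemprop="date">09/01/2010</dd>
  <dt>Amount</dt><dd itemprop="amount">$25.00</dd>
</dl>
"""

class DonationParser(HTMLParser):
    """Collect the text content of any element carrying an itemprop."""
    def __init__(self):
        super().__init__()
        self.fields = {}
        self._current = None

    def handle_starttag(self, tag, attrs):
        # Remember which property (if any) the current element declares.
        self._current = dict(attrs).get("itemprop")

    def handle_data(self, data):
        if self._current and data.strip():
            self.fields[self._current] = data.strip()

    def handle_endtag(self, tag):
        self._current = None

parser = DonationParser()
parser.feed(HTML)
print(parser.fields)
# {'name': 'Bob Eubanks', 'date': '09/01/2010', 'amount': '$25.00'}
```

The payload is still self-descriptive as text/html to every browser, cache and crawler; the annotations only add machine-readable domain semantics on top.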
> > In many cases, the data type is XHTML 1.0, which may be sent as > text/html or as application/xhtml+xml. > Or as text/plain, in which case the payload has no semantics -- it's intended to be read as a document, not processed as HTML or XML. -Eric
Will Hartung wrote:
> What are some examples of serendipitous reuse that the Web offers
> applications today?

This has already been discussed at length. The entire security architecture of the Internet is based on media types. Anything and everything called a "Web accelerator" is based on media types. DNS accelerators, like the one Google makes, are based on media types. The point is that you can't know; all you can do to allow intermediaries to participate in the communication as smart agents, rather than just as routers, is to use the media types common to the deployed network infrastructure.

> One, I guess, is Caching. It has been suggested that admins won't
> cache unfamiliar data types.

Or block them altogether as attempts at tunneling.

> But what are some other examples? I heard a mention of link caching,
> or some such thing. What is that referring to? Is that premised on a
> proxy perhaps prefetching links in an HTML payload much like some
> browsers do today? Something of that nature?

Yes. How can anything be prefetched if the agent doesn't know what a link is? How can the security considerations of the payload be known to anyone on the network unless it's been peer reviewed as part of the public approval process for standards-tree media types?

> Are search engine search bots an example?

Yes, they know what <a href> means when the media type is text/html or application/xhtml+xml. They don't know what those URIs in your JSON sent as application/json mean, other than that they're strings of text.

> On the one hand, these donation documents could be
> application/x-campaign-donation+xml, with a specification posted on
> the campaign website. But that's an unregistered media type.

No, it's simply not a media type. The definition of media type is reserved for those things approved for inclusion in the IANA registry. The proper prefix is 'x.' not 'x-'.
The '+xml' suffix is meaningless; only media types approved as RFC 3023-compliant XML media types give that suffix any meaning. You're sending application/$.

> <donation>
> <name>Bob Eubanks</name>
> <date>09/01/2010</date>
> <amount>25.00</amount>
> </donation>
>
> On the other hand, it could be simply text/html:
>
> <html>
> <body>
> <dl>
> <dt>Name</dt><dd>Bob Eubanks</dd>
> <dt>Date</dt><dd>09/01/2010</dd>
> <dt>Amount</dt><dd>$25.00</dd>
> </dt>
> </body>
> </html>

You're comparing data types, not media types. You need to understand the vast difference between the two. HTML tables (which are more correct for your data structure) have a thead-tfoot-tbody structure which allows for progressive rendering. HTML has been extensively peer-reviewed and improved in the public arena over many years, for both forward and backward compatibility, extensibility, and processing into a DOM such that it may have bindings for scripting and styling. The security considerations are a matter of public record; there are bindings such that the user agent's accessibility API may be scripted; there's a forms language which relates how protocol methods are to be used; the semantics of which URIs are to be embedded, which are to be selected by the user, and which are styles/scripts/namespace identifiers are clearly defined; and so on and so forth.

This is the network-based shared understanding represented by both HTML media types. I can't deduce anything of the sort from application/$. Why would you want to duplicate all that effort to recreate HTML for an application that's right up HTML's alley to begin with?

> The application/x-campaign-donation+xml is not self descriptive, since
> it is unregistered. Therefore it has no expectation of getting any
> reuse. It may well not even be cached, even with appropriate caching
> headers.
It's an unknown quantity with no public information about its peer-reviewed security considerations regarding its use on IP networks across multiple protocols. Why would I let it cross my boundary? What possible incentive do I have? The IANA registry, by contrast, allows me to make an *informed* decision as to whether or not to allow a media type through a firewall. Granted, that's a worst-case scenario, but then again it's often the case with Java/Javascript/Flash, so why would you expect anything unknown to fare any better?

> The HTML version is self descriptive, but it's only self descriptive
> of HTML. It's not self descriptive of a campaign donation.

By that logic, text/html is not self-descriptive of *anything*, and every use of HTML thus requires its own media type -- one for online banking, another for airline reservations, yet another for event ticketing, still another for registering a domain name, yet another to purchase crap from Amazon, then another to purchase crap from BestBuy because Amazon and BestBuy can't agree on semantics...

> There is no way to identify this resource as a campaign donation.

Of course there is. A hypertext API is self-documenting. An HTML system which allows you to track these things is encapsulated around whatever your backend system is. Does the natural language of the text tell you your list or table of things are campaign donations? Are the forms marked up such that you know you're entering someone's name followed by a dollar value? Then clearly, the representation is describing a campaign donation resource, even without RDFa, which you only need to make it machine-readable. The resource means whatever its representations say it means, not what its media type says; the media type only defines the sender's intended processing model.

> It can benefit from some reuse, notably caching, potentially google,

Why would Google bother indexing it, when it can't know what a link even is? Or an image? Or how to construct a URI from a form?
Or anything else anyone assumes Google magically just knows, when the reality is that Google's behavior is (mostly) based on media types?

> etc. But there can be no expectation of reuse at the domain level.
> For example, if someone wanted to track the rate of donations by
> county, they can not do that on the HTML payload, as they have no
> documentation of the domain elements within the payload.

Huh? Why can't you provide them a search interface which tells them exactly what they're searching for, with natural-language text explaining that interface, exactly like Google does? Or better yet, if you're re-using HTML, then why can't those folks just google for what they're after from your domain specifically? You're describing exactly the sort of routine, everyday HTML application the Web thrives on. REST doesn't require you to abandon all that and strike off on your own.

> Much like the difference between application/xml and
> application/x-campaign-donation+xml. Both are XML, but one has the
> campaign donation semantics associated with it.

No, one is XML; the other is application/$, and any assumption that it's XML requires introspection of the payload to confirm -- that's the opposite of being self-descriptive.

-Eric
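Eric's firewall argument reduces to a simple admission policy: an intermediary can only make an informed allow/block decision for declared types it can look up. A toy sketch; the allowlist below merely stands in for consulting the IANA registry plus local policy, and is not a real registry lookup.

```python
# Toy sketch of a boundary (firewall/cache) admission decision based on
# the declared media type. The set below is illustrative only; a real
# intermediary's policy would be driven by the IANA registry and the
# published security considerations of each registered type.

REGISTERED_AND_TRUSTED = {
    "text/html",
    "application/xhtml+xml",
    "application/atom+xml",
    "image/png",
}

def admit(content_type: str) -> bool:
    """Allow a message through only if its declared media type is a
    known quantity with reviewed security considerations."""
    # Strip parameters such as "; charset=utf-8" before the lookup.
    media_type = content_type.split(";")[0].strip().lower()
    return media_type in REGISTERED_AND_TRUSTED

print(admit("text/html; charset=utf-8"))             # True
print(admit("application/x-campaign-donation+xml"))  # False
```

An unregistered type fails the lookup by definition; there is nothing to consult, so the safe default is to block it.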
Will,

On Oct 6, 2010, at 3:52 AM, Will Hartung wrote:
> I was hoping I could get some more clarity on some things.
>
> What are some examples of serendipitous reuse that the Web offers
> applications today?

Serendipitous reuse happens any time an application is realized (when a user agent engages in communication with servers on behalf of some user goal). One might argue that services are built with some primary application in mind and that this application is then not reuse, but I think that is secondary. What is important is that REST enables clients to use what they are given by the servers in previously unanticipated ways. Hence we call it *re*-use and not some sort of *again*-use.

> One, I guess, is Caching. It has been suggested that admins won't
> cache unfamiliar data types.

I do not see how that relates to reuse? Can you explain?

Jan

> But what are some other examples? I heard a mention of link caching,
> or some such thing. What is that referring to? Is that premised on a
> proxy perhaps prefetching links in an HTML payload much like some
> browsers do today? Something of that nature?
>
> Are search engine search bots an example?
>
> My other question refers to the use of bundling domain specific
> information in a generic media type.
>
> For example, a campaign donation. In theory, in the US, candidates
> need to make their campaign donations accessible to the public.
>
> It's not a leap to suggest a campaign website publishing a service
> that returns an Atom list of donations based on some query.
>
> For example, GET /donations?query=county:Los%20Angeles to see all
> donations for Los Angeles county.
>
> And the result can have links to the actual donation documents.
>
> On the one hand, these donation documents could be
> application/x-campaign-donation+xml, with a specification posted on
> the campaign website. But that's an unregistered media type.
> <donation>
> <name>Bob Eubanks</name>
> <date>09/01/2010</date>
> <amount>25.00</amount>
> </donation>
>
> On the other hand, it could be simply text/html:
>
> <html>
> <body>
> <dl>
> <dt>Name</dt><dd>Bob Eubanks</dd>
> <dt>Date</dt><dd>09/01/2010</dd>
> <dt>Amount</dt><dd>$25.00</dd>
> </dt>
> </body>
> </html>
>
> Here's my issue.
>
> The application/x-campaign-donation+xml is not self descriptive, since
> it is unregistered. Therefore it has no expectation of getting any
> reuse. It may well not even be cached, even with appropriate caching
> headers.
>
> The HTML version is self descriptive, but it's only self descriptive
> of HTML. It's not self descriptive of a campaign donation. There is no
> way to identify this resource as a campaign donation. It can benefit
> from some reuse, notably caching, potentially google, etc. But there
> can be no expectation of reuse at the domain level. For example, if
> someone wanted to track the rate of donations by county, they can not
> do that on the HTML payload, as they have no documentation of the
> domain elements within the payload. This has no semantics outside of
> HTML, because that's all it is identified with.
>
> Much like the difference between application/xml and
> application/x-campaign-donation+xml. Both are XML, but one has the
> campaign donation semantics associated with it.
>
> That's my conflict with all this. That using generic containers, you
> may only be able to get domain knowledge through introspection, yet
> introspection is considered a bad practice, that's one reason cited
> why application/xml is not a proper media type to use.
>
> I was hoping this conundrum could be discussed to learn how in
> practice this conflict can be overcome.
>
> Regards,
>
> Will Hartung
> (willh@...)
Hi folks,

one of the big benefits of REST is the caching ability. Is it possible to use a reverse proxy cache AND https? This might be very useful in enterprise scenarios, but I don't have any experience with squid & co...

Or are there any other recommendations for securing the resource exchange?

Regards,
Jakob
This is possible. I would suggest checking the squid docs and the squid-users mailing list.

Subbu

On Oct 6, 2010, at 1:33 AM, jakobstrauch wrote:
> Hi folks,
>
> one of the big benefits of REST is the caching ability. Is it possible to use a reverse proxy cache AND https? This might be very useful in enterprise scenarios, but i dont have any experience with squid & co...
>
> Or are there any other recommendations to secure the resource exchange?
>
> Regards,
> Jakob
Or maybe Varnish as an alternative.

Cheers,
Erlend

On Wed, Oct 6, 2010 at 5:47 PM, Subbu Allamaraju <subbu@...> wrote:
> This is possible. I would suggest checking squid docs and the squid-users
> mailing list.
>
> Subbu
>
> On Oct 6, 2010, at 1:33 AM, jakobstrauch wrote:
> > Hi folks,
> >
> > one of the big benefits of REST is the caching ability. Is it possible to
> > use a reverse proxy cache AND https? This might be very useful in enterprise
> > scenarios, but i dont have any experience with squid & co...
> >
> > Or are there any other recommendations to secure the resource exchange?
> >
> > Regards,
> > Jakob
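To Jakob's question: the usual pattern is TLS termination at the reverse proxy, so the proxy sees plaintext HTTP and can cache per normal HTTP rules, then speaks plain (or re-encrypted) HTTP to the origin. A minimal nginx sketch; the hostnames, paths and cache sizing are illustrative only.

```nginx
# TLS terminates at the proxy, so responses can be cached normally.
proxy_cache_path /var/cache/nginx keys_zone=api_cache:10m max_size=1g;

server {
    listen 443 ssl;
    server_name api.example.org;              # illustrative
    ssl_certificate     /etc/ssl/api.crt;     # illustrative paths
    ssl_certificate_key /etc/ssl/api.key;

    location / {
        proxy_cache api_cache;
        proxy_pass http://backend:8080;       # plain HTTP to the origin,
                                              # or https:// if that hop
                                              # must also stay encrypted
    }
}
```

Squid can do the same via its https_port termination; Varnish does not terminate TLS itself and is typically deployed behind a separate TLS terminator.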
Weekend fun (oh, how I wish). Here is an example of someone spreading recommendations who clearly doesn't have a grasp on the subject matter. I wish this were a joke, but alas, it's not. http://www.readwriteweb.com/cloud/2010/10/another-10-mistakes-made-by-api-providers.php Cheers, - Steve -------------- Steve G. Bjorg http://mindtouch.com http://twitter.com/bjorg
> The entire security architecture of the Internet is based on media
> types.

To clarify: I'm referring to application-layer security, not just network-layer security. Application security is based on media type; network security may take media type into account, but in neither case is media type the only concern. 2010 saw many sysadmins block PDF outright, across all protocols, until Adobe and Microsoft released patches for their vulnerable software.

Just one tiny example I stumbled across in my bookmarks today, of this hard-and-fast rule of the Internet (use the media type which best describes your intended processing model for the data type you're using):

http://jibbering.com/blog/?p=514

Nothing to do with REST, just a plain fact -- work against the Web, and the resulting system is one which can't be hardened by anyone, not even Google (except by changing the media type to what it should've been in the first place). Of course, there's a guy in the comment thread who insists on continuing to use text/html for his JSON, because his entire system is based on vulnerable, undefined browser handling of JSON as JSON despite being served as text/html, where changing to application/json would present his users with a download dialogue -- NOT REST.

REST systems, by working with the Web, may be more easily hardened by avoiding all concerns derived from improper design:

http://ha.ckers.org/blog/20071014/web-application-scanning-depth-statistics/

There's a whole industry out there dedicated to "securing" things like cookie-based authentication, sessions (randomized session IDs instead of sequential) or AJAX-driven stateful cookies -- which just aren't relevant concerns in REST. The "crawling" concern is entirely based on figuring out what the links *really* are, since resources are defined as URI + cookies instead of per the identification-of-resources constraint.
But these crawlers are still based on <a href> and <link> -- not URIs in random markup -- plus the ability to decipher XHR code. This industry is based on known exploits targeted at standardized data types (or at HTTP un-RESTfully implemented). There is no way to predict what exploits may be possible with unknown data types, so unknown data types are considered security holes in Web systems, regardless of what REST has to say about the practice.

My advice for hardening any website is to first re-architect it as a REST system, even if that means accepting crummy browser implementations of HTTP authentication, or media/data types which are less than ideal. Such tradeoffs have far-reaching benefits which outweigh the problems they entail. The consequences of avoiding such problems only increase with the scope, scale and longevity of the system.

There's enough to worry about not only in *which* standard media types to use, but also in *how* they are used, without introducing unknowns into the system. The security profile of any REST system is a known problem, not a boundless one requiring consultant deployment of $30K dynamic security software. I'm not implying that REST is secure, only readily securable.

REST systems are invulnerable to SQL injection, because SQL isn't part of the API -- it's encapsulated behind a uniform interface, not exposed *as* the interface. This concept is typically ignored -- by systems which are then compromised by SQL injection.

IMO, Web security _starts_ with proper media type selection. You may address everything else, but by straying from the IANA registry you're still left with an unknown at the heart of your system, which is just begging to be exploited -- likely in a manner that's already been exposed and corrected for standardized media types, or which would have been brought to light in discussion on ietf-types as part of the registration process.

-Eric
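The JSON-as-text/html case mentioned above has a simple fix: declare the registered type and opt out of browser content sniffing. A minimal, framework-agnostic sketch; it only assembles the headers and body, and wiring it into a real server is left out.

```python
import json

# Sketch: serving JSON with its registered media type (RFC 4627's
# application/json) instead of text/html, so intermediaries and
# browsers handle the payload per its public definition.

def json_response(payload):
    body = json.dumps(payload)
    headers = {
        # Declare the registered type...
        "Content-Type": "application/json",
        # ...and opt out of content sniffing (honored by IE8+),
        # so the declared type is what actually gets processed.
        "X-Content-Type-Options": "nosniff",
    }
    return headers, body

headers, body = json_response({"donor": "Bob Eubanks", "amount": 25.00})
print(headers["Content-Type"])  # application/json
```

With the correct declared type, the payload is a known quantity at every boundary; served as text/html, its handling is undefined and unsecurable.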
This is a shot across the bow of Web Sockets Protocol (or, as I call it, Google Wave Protocol), followed by some RESTful alternatives. Roy, of course, has the money quote: "Generally speaking, REST is designed to avoid tying a server's connection-level resources to a single client using an opaque protocol that is indistinguishable from a denial of service attack. Go figure." http://tech.groups.yahoo.com/group/rest-discuss/message/15818 I don't think it's possible for any protocol to constrain its implementations to be RESTful. All I really require from any extension of the Web is that I *can* implement it RESTfully, if I so choose. Web Sockets precludes REST, which should be an architectural red flag where the Web is concerned. If you know where to look, the rationale behind the dissertation's development of an idealized model for the Web, is steeped in the fundamentals of the Internet. You can disagree with REST, but it's hard to dismiss the logic of 2.3 (which says nothing about improving application performance by stripping out protocol headers, particularly at the expense of caching, btw): " The performance of a network-based application is bound first by the application requirements, then by the chosen interaction style, followed by the realized architecture, and finally by the implementation of each component. In other words, software cannot avoid the basic cost of achieving the application needs; e.g., if the application requires that data be located on system A and processed on system B, then the software cannot avoid moving that data from A to B. Likewise, an architecture cannot be any more efficient than its interaction style allows; e.g., the cost of multiple interactions to move the data from A to B cannot be any less than that of a single interaction from A to B. Finally, regardless of the quality of an architecture, no interaction can take place faster than a component implementation can produce data and its recipient can consume data. ... 
An interesting observation about network-based applications is that the best application performance is obtained by not using the network. This essentially means that the most efficient architectural styles for a network-based application are those that can effectively minimize use of the network when it is possible to do so, through reuse of prior interactions (caching), reduction of the frequency of network interactions in relation to user actions (replicated data and disconnected operation), or by removing the need for some interactions by moving the processing of data closer to the source of the data (mobile code). " This issue goes beyond REST, to the architecture of the Web and of the Internet itself. Apparently HTTP is incapable of supporting modern Web systems which desire to use push. Apparently, push requires all aspects of good protocol design to be chucked out the window. Late binding? Useless -- who needs compression anyway? These are the assumptions seemingly underlying Web Sockets. But where's the rationale behind those assumptions? What architectural precepts are guiding the design, how does the protocol meet those precepts, and do the results solve the problems as rationalized? Why is HTTP being treated as obsolete? It appears to me, that Web Sockets is not only being made up as it goes along (heh, just like SOA), but represents an outright rejection of architecture itself (heh, also just like SOA). REST and Web architecture are based on an object model -- each object (resource) has properties and methods. In OOP, messaging between objects is part of the language; on the Web, this messaging is HTTP. In Web Sockets, payloads have no relation to objects -- no properties or methods are exposed. 
I realize that stripped-down packets of data are the goal, but *why* is that remotely a good idea when it goes against every peer-reviewed and ubiquitous protocol design to ever succeed on the Internet, willfully disregarding features that allowed the Web to thrive -- like caching, or filtering/negotiating on data type?

Unlike Web architecture, there is no way to restrict a browser from rendering a PDF, except by blocking Web Sockets communication outright. Unlike Web architecture, content is sent without indicating length, without chunking, and without even delimiting one message from another by adhering to a 1:1 request/response ratio. Unlike Web architecture, caching is impossible because the protocol is stateful. Unlike Web architecture, the user has no control (via browser settings) over what content should be handled in what way.

All of these features of the Web evolved through consensus and working code, guided by solid architectural rationale (even before REST), and were essential in the success of the Web -- apparently all this is completely irrelevant if we want to do push! Hogwash.

If Web Sockets were to be accepted as an RFC, Jon Postel would roll over in his grave. Jon thought it was important that any application protocol be a well-behaved citizen of the Net. His influence is why RFCs are written the way they're written, to this day, except for Web Sockets (which recently introduced three SHOULDs, but everything else is MUST/MUST NOT, resting on an assumption that all implementations will be fully compliant good Net citizens and that therefore graceful degradation isn't needed, presumably).

http://www.ics.uci.edu/~rohit/IEEE-L7-Jon-NNTP.html
http://tools.ietf.org/html/rfc2468

Dr. Postel's leadership is responsible for the Internet architecture being what it is. Aside from ICMP, every protocol he wrote or influenced, push or pull, shares the request/response idiom.
IRC, FTP, SMTP, NNTP, HTTP and every other client-server application protocol I can think of (except Gopher) sends a _response code_ after receiving a request. Web Sockets is off in its own little world of completely untried and untested architecture astronuttery which goes against the very nature of Internet messaging -- once a connection is established with a single request, multiple responses are sent until the connection is closed. This is not the tried-and-true architecture of the Internet, it's a greenfield experiment with no foundation in what's known to work. If you're going to propose an extension to the Web architecture that defies the Internet itself, I'm gonna need to see your rationale as to exactly what problem it is you're trying to solve, why it can't be solved in a Web-native or even Internet-native fashion, and what design constraints you expect will result in a protocol meeting those needs. Lacking that, I just can't be expected to approve of winging it on a blank sheet of paper and making up a spec as it moves along. Like SOA, Web Sockets is an example of the null architecture, i.e. no constraints. While there are smart folks involved, doing their best to make sure there are no obvious security holes in the protocol, I can't help but think that hackers will be having a field day with it -- any new, untried and untested pattern can't be considered to have the same security considerations as request/response messaging, meaning it's all just guesswork. You can secure against known attack vectors, but you can't secure against attack vectors you don't know you're creating, which you're likely doing by ignoring all that has come before. 
Despite the efforts of a minority, the WG doesn't seem to think it's all that big a deal that their protocol as currently written won't interoperate with the deployed infrastructure, or that it isn't really a problem to require that such infrastructure be updated to avoid deadlock conditions between existing load balancers and the servers they farm, when encountering Web Sockets. If that's the case, then why not add a push method to HTTP? I'll get to that... First, though, how to do RESTful push given the current reality. Is there some requirement that long polling results in a 200 response? Better to assign a sub-resource to handle long polling, and have it send a redirect to the updated resource. Instead of sending a new representation to every client polling, just a URI is sent, allowing all those clients to take advantage of caching of the main resource. Not an ideal solution, but an improvement on common practice. The problem is how to make one resource capable of both pull and push... http://tools.ietf.org/html/rfc2177 So why not define HTTP IDLE, if the solution is going to require all intermediaries be upgraded in order to work, anyway? IDLE would be almost exactly like GET, except that instead of a 304 the connection stays open. Caches could pool IDLE requests from multiple clients, reducing load on origin servers. The advantage of caching solves the problem of reducing the bandwidth required to service push requests, by several orders of magnitude at Internet scale as compared to using a protocol that's essentially an uncacheable, raw TCP connection based on the provably false assumption that network or user-perceived performance are somehow impaired by the overhead of HTTP headers (OK, they are a little, but it's a tradeoff worth making -- an un-protocol isn't the solution). 
Wouldn't it be better, in the commonly-cited use case of a stock ticker, if that exactly-the-same data could be shared instead of having to be delivered separately to every browser interested in the resource -- at the same time, no less? The Web Sockets solution, i.e. reducing protocol overhead by eliminating headers entirely, throws this baby out with the bathwater. Surely a better solution is warranted?

Unless Web Sockets is committed to being compatible with HTTP's Upgrade facility (instead of requiring an upgrade of the deployed infrastructure), just what problem is it solving that wouldn't be better, more easily and more securely addressed by extending HTTP rather than declaring it obsolete? Even if this problem is recognized and solved, is this protocol really an HTTP "upgrade" or rather a fundamentally opposed protocol, violating basic Web security by using HTTP to tunnel through any firewalls, even as a temporary stopgap until ws:// and wss:// are approved? Using Upgrade to launch HTTP 1.2, rHTTP or Waka makes sense; Web Sockets, not so much.

Surely *any* solution that's compatible with RESTful implementation is by default aligned with both Web and Internet architecture? I fail to understand why REST is a toxic concept to the browser vendors. It seems to me like it's in their best interests, unless of course you're Google and your goal is not to improve the Web, but to try to corrupt it into being a replacement for an OS for the purpose of taking market share away from Apple and Microsoft...

Without any technical basis for Web Sockets, I'm left to ponder the political considerations of those pushing hardest against using HTTP for Web messaging (as if it were obsolete). So I'm calling "ITAS" (see post title) on Web Sockets -- this isn't REST, Web or Internet architecture; in fact, it isn't architecture at all. As such, it ought to be killed in favor of an architecture-oriented solution.
Apologies to those working on it, I have no issues with y'all trying to make the best of a situation being foisted on us by the runaway HTML 5 project. But my opinion is that it's DOA, and given that, I'd just as soon it not see the light of day so I won't be forced to deal with it for the rest of my career even if I choose _not_ to implement it in my own projects. Kinda like Flash. -Eric
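The redirect-based long-polling suggested earlier in the post (a polling sub-resource that redirects clients to the updated main resource, so every poller shares one cache entry) can be reduced to a few lines. A hypothetical sketch; the URIs are illustrative and a version counter stands in for real change notification.

```python
# Sketch of the cache-friendly long-poll pattern described above: a
# polling sub-resource answers with a 303 pointing at the canonical,
# cacheable resource once it has changed, instead of pushing a full
# representation to every client separately.

def poll(latest_version: int, client_version: int):
    """Handle GET /donations/updates?since=<client_version> (names
    illustrative)."""
    if latest_version > client_version:
        # Changed: redirect to the canonical resource, whose single
        # cached representation all polling clients can then share.
        return 303, {"Location": "/donations"}
    # Unchanged: a real server would now hold the connection open
    # (long poll) rather than answering immediately.
    return 304, {}

print(poll(2, 1))  # (303, {'Location': '/donations'})
print(poll(1, 1))  # (304, {})
```

The design choice is that the poll response carries only a URI, never a representation; the representation stays on the main resource where intermediary caches can serve it.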
Eric J. Bowman wrote:
> Web Sockets Protocol

Architecture is fine IMHO, all we need to do is stick an HTTP server on the "client side". We've been using the pattern for years on the "server side" and it works wonders for RESTful async messaging w/ HTTP.

In fact, the architecture of the Web gets exponentially more interesting when you put an HTTP Server, Client and Cache on each machine - RESTful-p2p I guess.

Anyway, nice post, good points, web sockets is a bit of a bag-o-shite but it's better than long-poll HTTP, or polling - out of interest, have you looked at sending HTTP messages over WebSockets? If you could, then there would be nothing to stop you creating an HTTP Server in the browser and kicking the web into almost-async-p2p mode using HTTP and RESTful patterns whilst waiting on proper support, and giving the opportunity to explore all the many challenges faced coupling it to the presentation tier.

I'm rambling now!

Nathan
Hi Nathan,

A p2p REST style, and consequently a p2p Web, is definitely not rambling -- what it *is* is the focus of my ph.d. research :). Having a p2p network on the HTTP level would not only solve a lot of existing communication problems of the Web, but also increase the number of application-level functionalities exposed on the Web. Think of all the functionalities that are "trapped" on the client side, disconnected from the Web, although they originated from the Web by the means of navigating to a Web application.

I should shut up before someone publishes these ideas in a paper before I do. :)

+ If you already haven't, you should check out Justin Erenkrantz's dissertation on CREST - http://www.erenkrantz.com/CREST/ which is an "evolution" of REST founded on *very* similar ideas. Guess who Justin's advisor was... :)

Ivan

--- In rest-discuss@yahoogroups.com, Nathan <nathan@...> wrote:
> Eric J. Bowman wrote:
> > Web Sockets Protocol
>
> Architecture is fine IMHO, all we need do to is stick an HTTP server on
> the "client side".
>
> We've been using the pattern for years on the "server side" and it works
> wonders for RESTful async messaging w/ HTTP.
>
> In fact, the architecture of the Web get's exponentially more
> interesting when you put an HTTP Server, Client and Cache on each
> machine - RESTful-p2p I guess.
>
> Anyway, nice post, good points, web sockets is a bit of a bag-o-shite
> but it's better than long-poll HTTP, or polling - out of interest have
> looked at sending HTTP messages over WebSockets? if you could then there
> would be nothing to stop you creating an HTTP Server in the browser and
> kicking the web in to almost-async-p2p mode using HTTP and RESTful
> patterns whilst waiting on proper support, and giving the opportunity t
> explore all the many challenges faced coupling it to the presentation tier.
>
> I'm rambling now!
>
> Nathan
On Fri, Oct 15, 2010 at 12:37 PM, izuzak <izuzak@...> wrote: > A p2p REST style, and consequently a p2p Web, is definitely not rambling -- what it *is* is the focus of my ph.d. research :). Having a p2p network on the HTTP level would not only solve a lot of existing communication problems of the Web, but also increase the number application-level functionalities exposed on the Web. Think of all the functionalities that are "trapped" on the client side, disconnected from the Web, although they originated from the Web by the means of navigating to a Web application. > Is that the same as, or different than, the various attempts to put a server in your browser, like the deceased KnowNow or the apparently still-living Opera Unite?
Do WebHooks make for a p2p web? If so, I guess a (registered?!) media type and/or some link relations would be required to make it RESTful? Cheers, Mike On Fri, Oct 15, 2010 at 6:37 PM, izuzak <izuzak@...> wrote: > Hi Nathan, > > A p2p REST style, and consequently a p2p Web, is definitely not rambling -- what it *is* is the focus of my ph.d. research :). [...]
On Oct 15, 2010, at 9:01 PM, Mike Kelly wrote: > Do WebHooks make for a p2p web? > > If so; I guess a (registered?!) media type and/or some link relations > would be required to make it RESTful? Beware though that all these pubsubby[1] approaches make the system much more difficult to understand and much less easy to evolve. I'd personally go a very long way trying to get by with polling. Jan [1] Been there, done that :-) http://search.cpan.org/~alger/Apache-MONITOR-0.02/
Hey Mike, IMO, I wouldn't say they do. Webhooks are a way of doing callbacks between components that are accessible on the Web (have an HTTP URI), which are server components. However, the Web is still a client-server model where clients are not exposed on the Web. Using the PubSubHubbub protocol as an example, the entity subscribing to a hub must pass a URI that will be used by the hub to notify it of new posts. Entities on the Web having URIs are server components, so PSHB itself can't be used to push notifications to client components, but only to servers, which then must transfer the notification to the client (somehow). If the Web were p2p, a client could be a PSHB subscriber, as it would (be able to) have an (HTTP) URI. Does this make sense? Ivan --- In rest-discuss@yahoogroups.com, Mike Kelly <mike@...> wrote: > Do WebHooks make for a p2p web? [...]
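Ivan's point can be made concrete with a sketch: for a hub to verify and notify a subscriber, the subscriber must run something that behaves like a server at a routable HTTP URI. The sketch below (stdlib only; the hub's role is played locally, and the URIs, ports and payload are illustrative, not from any real hub) follows the PubSubHubbub pattern of echoing hub.challenge on verification and receiving notifications by POST:

```python
# Minimal sketch of a PubSubHubbub-style subscriber callback. The point
# above: the hub can only verify and notify a subscriber that is
# reachable at an HTTP URI, i.e. something acting as a server.
import threading
import urllib.request
import urllib.parse
from http.server import BaseHTTPRequestHandler, HTTPServer

received = []  # notifications pushed to us by the hub

class CallbackHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Subscription verification: echo hub.challenge back to the hub.
        params = urllib.parse.parse_qs(urllib.parse.urlparse(self.path).query)
        challenge = params.get('hub.challenge', [''])[0]
        self.send_response(200)
        self.end_headers()
        self.wfile.write(challenge.encode())

    def do_POST(self):
        # Content notification: the hub POSTs the new entries to us.
        length = int(self.headers.get('Content-Length', 0))
        received.append(self.rfile.read(length))
        self.send_response(204)
        self.end_headers()

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(('127.0.0.1', 0), CallbackHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
callback_uri = 'http://127.0.0.1:%d/callback' % server.server_port

# Play the hub's role locally: verify the subscription, then push.
challenge = urllib.request.urlopen(
    callback_uri + '?hub.mode=subscribe&hub.challenge=abc123').read()
urllib.request.urlopen(urllib.request.Request(
    callback_uri, data=b'<entry>new post</entry>', method='POST'))
server.shutdown()
```

A browser-hosted client has no way to bind such a listener at a public address, which is exactly the asymmetry being discussed.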
Would there be much of a distinction between clients and servers on a p2p web? What prevents "clients" having URIs now? Someone's already mentioned Opera Unite - wouldn't exposing webhooks out of the browser fit the bill? Cheers, Mike On Sat, Oct 16, 2010 at 3:18 PM, izuzak <izuzak@...> wrote: > Hey Mike, > > IMO, I wouldn't say they do. Webhooks are a way of doing callbacks between components that are accessible on the Web (have a HTTP URI), which are server components. [...]
Rick Cobb wrote: > > Well, one point about Mr. Postel -- he largely worked in an Internet > where all machines were reachable via the Internet Protocol, and > security was managed on a protocol-endpoint (port) basis. Most of > the protocols he worked on were end-to-end, and the connection could > be established in either direction. > It's also interesting to note that back in the day, bandwidth, CPU, RAM, HD etc. were precious; now they're commodities. The point being, if it were a good idea to create a universal application protocol essentially as raw TCP access, instead of protocols with "header overhead", things would have been done that way long ago. http://tools.ietf.org/html/rfc1958 http://tools.ietf.org/html/rfc3439 http://tools.ietf.org/html/rfc3724 It seems to me that the end-to-end principle has evolved since Dr. Postel's time, but still holds. Web Sockets ignores this principle. Is the Web such a failure that it's time to "raze the city and rebuild it" rather than repaving the streets? If the future of HTTP is binding to SCTP instead of TCP, does it make sense to couple Web Sockets to TCP? Isn't this exactly the "vertical coupling" described in RFC 3439? > > That Internet is long dead; NAT, HTTP, and RFC1918 killed it. The > Web established a network that has big well-named servers that > clients must bow in supplication to connect to -- and anonymous > clients that can't be reached without them establishing and holding a > connection of some sort. > I predict that Internet will come back to life in the form of IPv6, but for political rather than technical reasons. The 2010 Postel Award winner is Jianping Wu: http://en.wikipedia.org/wiki/Dr._Jianping_Wu China, due to the political need for censorship and control, requires that each client node have a routable address. Politically, I prefer NAT. Technologically, I prefer IPv6. I agree with you, though -- IPv4 begat RFC 1918, begat long-polling. > > There *are* legitimate applications for push. 
Not everything is > request/response: P2P and publish/subscribe are legitimate > communication patterns. That's not to say they're REST, but if "it's > the architecture, stupid", you do have to look at the application > communication pattern and find a way to deal with it. > I have to disagree. My view is that the application's goals must be realized within the prevailing architecture, and the communication pattern designed accordingly. RESTful pub/sub is possible using request/response HTTP. RESTful P2P? On the one hand, REST has that client-server constraint. OTOH, Roy has stated that Waka is a P2P protocol, in the Q&A at the end of this session: http://streaming.linux-magazin.de/events/apacheconfree/archive/rfielding/frames-java.htm I don't see why push needs to break the request/response model (Waka has the MONITOR method). Each message is still going over a network, so there needs to be some sort of response code indicating success/fail. All that's different is that the user-agent acts as server, and the origin server acts as client. Using rHTTP, this can be just as RESTful as pull. If a stock-ticker app is implemented using Web Sockets, how do I know I'm not missing anything due to dropped packets? Can I verify the integrity of the data received, even if I get all the packets? TCP is fine for this at the transmission layer, but not the application layer. These seem to me like problems inherent to breaking the request/response model, rather than problems specific to Web Sockets; thus, ITAS... " A specific case is that any network, however carefully designed, will be subject to failures of transmission at some statistically determined rate. The best way to cope with this is to accept it, and give responsibility for the integrity of communication to the end systems. " http://tools.ietf.org/html/rfc1958 Whereas with BitTorrent, request/response doesn't matter because the end result (a file of size=x and checksum=x) is known -- it's still end-to-end.
With Web push, no a priori knowledge of the parameters of the transfer exists unless presented as protocol headers (like rHTTP, unlike Web Sockets). > > HTTP, essentially the only important protocol in the context of the > current Internet, makes it very hard to do a good job on P2P or > pub/sub. (...) > Very hard, yes, but not impossible (except P2P, HTTP isn't a P2P protocol by any stretch). Which is why I object to the Web Sockets notion that HTTP must be replaced in order to do push, particularly if the alleged benefit is illogical, and the potential consequences severe. As you point out, the requirement of a hanging connection is a limitation imposed not by HTTP but by RFC 1918, so replacing HTTP isn't the answer (without thoroughly documenting rationale, first). http://tech.groups.yahoo.com/group/rest-discuss/message/8314 http://www.dehora.net/journal/2007/07/earned_value.html Just some interesting posts about working with the Web instead of against it. > > Roy's postings about the economics of scale of these communication > patterns are sensible (though Facebook seems to have been able to > monetize pub/sub pretty well), but people are going to need to > implement them. > (Not to jump all over your example, I was just looking for any excuse to bring up Fb...) I wouldn't hold Facebook up as an example; there's more to REST than scaling, which Fb doesn't actually do very well -- judging from their reputation for flaky service, and the fact that it's standard practice at Fb (and most other Web 2.0 sites) to disable features during peak usage. I don't even know that Fb is monetized, vs. being a VC funding pit... In fact, Facebook wins my inaugural ITAS Award -- to be granted intermittently based on (de)merit: http://blogs.wsj.com/digits/2010/09/24/what-caused-facebooks-worst-outage-in-four-years/ There's a reason HTTP has a 500 error, and why the purported benefit of not exposing errors to *some* users is a logical fallacy.
Is total system failure the automatic penalty for coding typos in Web Sockets, due to the lack of *any* response codes, let alone for error handling? From REST, 2.3.7: " Reliability, within the perspective of application architectures, can be viewed as the degree to which an architecture is susceptible to failure at the system level in the presence of partial failures within components, connectors, or data. Styles can improve reliability by avoiding single points of failure, enabling redundancy, allowing monitoring, or reducing the scope of failure to a recoverable action. " I don't even have to look at Facebook, the failure analysis is enough basis for me to wave my magic guru wand and declare NOT REST. RESTful systems don't DDoS themselves! Internet architecture allows for monitoring. Web Sockets doesn't, nor does it "reduce the scope of failure to recoverable actions" due to its cross-layer coupling (RFC 3439). > > Now, this isn't to defend websockets -- but to say that if you're > going to accept a non-addressable Internet, people will need to > invent things like it. > Well, sure. But the issue is what problem is Web Sockets trying to solve? There's no workaround to hanging connections, all that can be done about them is make them scale better -- which Web Sockets doesn't do. I think rHTTP is as fine a solution to this problem as is possible, short of IPv6 becoming ubiquitous and allowing pub/sub via server-stored IP addresses. > > What we did at KnowNow (remember Rohit Khare and Adam Rifkin?) is > build a tiny web server in Javascript. The implementation of > resource handlers were (roughly) Javascript functions; the dominant > media type was form/x-www-urlencoded. As we got better at writing > this server, it got more RESTful. But the connection itself was > always a tunnel; there was no alternative. 
Whether we implemented > that with long-poll or just a big GET with function callbacks, it was > certainly more RESTful than the websocket approach -- but it's not > like somebody could easily add an HTTP security system on those > tunnels. > Yes, actually I just came across a KnowNow reference as I was typing this response: http://lists.w3.org/Archives/Public/www-tag/2002Apr/0242.html That thread discusses the Web architecture as being one in which URIs are used to address resources. In Web Sockets, one URI starts sending multiple, unrelated messages -- each of which seems like a different resource to me, and should therefore be addressable via separate URIs. Nebulous transmissions aren't bookmarkable, or even distinguishable from one another. Can't HTTP security be added to rHTTP, or am I missing something? > > I'm perfectly willing to admit that systems that use P2P or > publish/subscribe communication patterns aren't REST, but it's not > like anybody out there is generally opening their networks to XMPP, > BEEP, AMQP.... Nor are they providing mechanisms (well, other than > email addresses, hi, Mr. Spam) for addressing real endpoints so you > don't have to hold request/response HTTP connections open in order to > implement them. > Well, I'm not willing to say that pub/sub can't be RESTful, just that I've yet to see it done that way (using redirection). RESTful P2P, I don't know... But my post wasn't limited to REST (nowhere else is appropriate for general Internet architecture discussion). The other protocols you mention all represent architectural styles which at least conform to the fundamentals of the Internet, rather than being in active denial of them, like Web Sockets. -Eric
Nathan wrote: > > In fact, the architecture of the Web get's exponentially more > interesting when you put an HTTP Server, Client and Cache on each > machine - RESTful-p2p I guess. > I don't see how that's P2P, or what P2P has to do with Web Sockets... > > Anyway, nice post, good points, web sockets is a bit of a bag-o-shite > but it's better than long-poll HTTP, or polling - out of interest > have looked at sending HTTP messages over WebSockets? > It still is long-polling, AFAIC. The WG has identified using Web Sockets to transfer HTTP frames as a security issue. Unfortunately, the solution being discussed is to hash the payload. Solves the problem, but represents a fundamental break with Internet architecture in that it requires developers to use a library to develop/debug their APIs (not seen as a problem except by a minority of participants). I hope I don't have to explain my problem with that, to this audience... I don't see what advantage a browser-based httpd has over rHTTP. -Eric
Mike Kelly wrote: > > Would there be much of a distinction between clients and servers on a > p2p web? What prevents "clients" having URIs now? > RFC 1918. What's your client IP address? Can I route to it? Will it be the same five minutes from now? In many cases, the answer is, "I don't know, no, and probably not." There are plenty of websites out there which echo a visitor's IP address; most of 'em get my dedicated IP address wrong because of how the NAT at my ISP is configured. -Eric
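Eric's RFC 1918 point can be checked mechanically. A minimal sketch using Python's stdlib ipaddress module, testing an address against the three private blocks RFC 1918 defines (10/8, 172.16/12, 192.168/16); addresses in these blocks aren't routable on the public Internet, so a NATted client can't hand out a usable URI:

```python
# Sketch: is an address in the RFC 1918 private ranges Eric refers to?
# Such addresses are not routable on the public Internet, so a client
# behind NAT can't offer other peers a reachable HTTP URI.
import ipaddress

RFC1918_BLOCKS = [
    ipaddress.ip_network('10.0.0.0/8'),
    ipaddress.ip_network('172.16.0.0/12'),
    ipaddress.ip_network('192.168.0.0/16'),
]

def is_rfc1918(addr: str) -> bool:
    ip = ipaddress.ip_address(addr)
    return any(ip in block for block in RFC1918_BLOCKS)

print(is_rfc1918('192.168.1.10'))  # True: typical home-NAT client
print(is_rfc1918('8.8.8.8'))       # False: publicly routable
```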
I've fiddled with hal's design a bit since then and - in the interests of
the uniform interface constraint, scalability and serendipitous reuse
- have registered it with IANA as application/vnd.hal+xml
http://www.iana.org/assignments/media-types/application/vnd.hal+xml
Any thoughts?
Cheers,
Mike
On Wed, Jun 9, 2010 at 9:13 AM, Mike Kelly <mike@...> wrote:
> Here's an example of something I'm calling "hal":
>
> GET /list
>
> ====
>
> <link rel="self" href="/list">
> <link rel="description" href="/list/description" />
> <link rel="search" href="/list/search/{search_term}" />
> <link rel="item" name="1" href="/items/some_item">
> <title>Some Item</title>
> <content>This is some item content</content>
> </link>
> <link rel="item" name="2" href="/foo/some_other_item">
> <title>Some Other Item</title>
> <content>This is content for some other item</content>
> </link>
> </link>
>
> Hal just defines a standard way to express hyperlinks in xml via a simple
> <link> element. The link element has the following attributes: @rel @href
> @name
> - Simple links can be written as solo/self-closing tags.
> - Links used to indicate embedded representations from other resources
> should be written with open and close tags, with the embedded representation
> contained within.
> - The root element must always be a link with an @rel of self and an
> appropriate @href value.
> - @name must be unique between all links in a document with the same @rel
> value, but is not unique within the entire document. i.e. a link element
> cannot be referred to by @name alone
> - @href value may contain a URI template
> Be interested to hear whether people think there's any value/legs in this,
> problems with it, etc.
> Cheers,
> Mike
> (see also:
> http://restafari.blogspot.com/2010/06/please-accept-applicationhalxml.html )
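For anyone wanting to kick the tyres, the example document above can be consumed with nothing more than a stock XML parser. A small sketch (the hal sample is reproduced inline, with attribute values quoted as XML requires):

```python
# Sketch: consuming the hal example above with the stdlib XML parser.
import xml.etree.ElementTree as ET

doc = """
<link rel="self" href="/list">
  <link rel="description" href="/list/description" />
  <link rel="search" href="/list/search/{search_term}" />
  <link rel="item" name="1" href="/items/some_item">
    <title>Some Item</title>
    <content>This is some item content</content>
  </link>
  <link rel="item" name="2" href="/foo/some_other_item">
    <title>Some Other Item</title>
    <content>This is content for some other item</content>
  </link>
</link>
"""

root = ET.fromstring(doc)
assert root.get('rel') == 'self'  # the root must be the self link

# Index links by (rel, name), per the uniqueness rule in the post:
# @name is only unique among links sharing an @rel value.
links = {(l.get('rel'), l.get('name')): l.get('href') for l in root}
print(links[('item', '1')])     # /items/some_item
print(links[('search', None)])  # /list/search/{search_term}
```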
Mike Kelly wrote: > > Do WebHooks make for a p2p web? > I don't see the relation between pub/sub and P2P. > > If so; I guess a (registered?!) media type and/or some link relations > would be required to make it RESTful? > "RESTful Webhooks" is a fine HTTP API, but not a REST API. To be RESTful would require a rewrite from scratch. Roy had the money quote on this, too, but I couldn't find it. Something about marketingspeak, IIRC. -Eric
Jan Algermissen wrote: > > Beware though that all these pubsubby[1] approaches make the system > much more difficult to understand and much less easy to evolve. > IOW, violates the principles of simplicity, reliability, visibility, reusability and scalability, in addition to evolvability. > > I'd personally go a very long way trying to get by with polling. > +1 Using redirection such that payloads are cacheable, when using long-polling, impacts scalability to a much lesser extent than sending 200 OK or using Web Sockets. Whether or not the simplicity tradeoff is appropriate for the system under development is a decision for the developer of the system. > > [1] Been there, done that :-) > http://search.cpan.org/~alger/Apache-MONITOR-0.02/ > Has the MONITOR method ever been documented? I've vaguely heard of it, but my inability to link to a definition is why I used IDLE as an example. -Eric
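Eric's redirection idea can be sketched as follows: the poll resource never carries the payload itself; it answers 303 with the URI of an immutable event resource, which shared caches can then serve to every other client polling for the same event. The resource names and payload below are illustrative only:

```python
# Sketch of cacheable long-polling via redirection: /poll answers with
# a 303 pointing at an immutable event URI, instead of a 200 carrying
# the payload. Only the tiny redirect is per-client; the payload at
# /events/1 is cacheable by intermediaries.
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

EVENTS = {1: b'price=42'}  # event log; entry 1 has already happened

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == '/poll':
            # A real server would block here until an event exists.
            self.send_response(303)
            self.send_header('Location', '/events/1')
            self.end_headers()
        elif self.path == '/events/1':
            body = EVENTS[1]
            self.send_response(200)
            # The event never changes, so shared caches may keep it.
            self.send_header('Cache-Control', 'public, max-age=31536000')
            self.send_header('Content-Length', str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass

server = HTTPServer(('127.0.0.1', 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
base = 'http://127.0.0.1:%d' % server.server_port

resp = urllib.request.urlopen(base + '/poll')  # urllib follows the 303
body = resp.read()
print(resp.url.endswith('/events/1'), body)
server.shutdown()
```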
Mike Kelly wrote: > > Any thoughts? > +1 to registering a media type. :-) -Eric
On Sat, Oct 16, 2010 at 6:47 PM, Eric J. Bowman <eric@...> wrote: > Mike Kelly wrote: >> >> Any thoughts? >> > > +1 to registering a media type. :-) > > -Eric > Yeah - it's amazing how much of a difference that's made. Cheers, Mike
Mike Kelly wrote: > > Yeah - it's amazing how much of a difference that's made. > That +1 was no Brownie Point. The IANA registry has much greater exposure to those looking for a compatible media type for their system, than does your weblog. Aside from that, the real benefits of REST only come with uptake, i.e. ubiquity, towards which you've taken the first step. At the very least, I have someone else's word (instead of only yours) to go on that it is indeed an XML format -- just by seeing that it's registered, without having to introspect the payload or google. -Eric
Well, one point about Mr. Postel -- he largely worked in an Internet where all machines were reachable via the Internet Protocol, and security was managed on a protocol-endpoint (port) basis. Most of the protocols he worked on were end-to-end, and the connection could be established in either direction. That Internet is long dead; NAT, HTTP, and RFC1918 killed it. The Web established a network that has big well-named servers that clients must bow in supplication to connect to -- and anonymous clients that can't be reached without them establishing and holding a connection of some sort. There *are* legitimate applications for push. Not everything is request/response: P2P and publish/subscribe are legitimate communication patterns. That's not to say they're REST, but if "it's the architecture, stupid", you do have to look at the application communication pattern and find a way to deal with it. HTTP, essentially the only important protocol in the context of the current Internet, makes it very hard to do a good job on P2P or pub/sub. Roy's postings about the economics of scale of these communication patterns are sensible (though Facebook seems to have been able to monetize pub/sub pretty well), but people are going to need to implement them. Now, this isn't to defend websockets -- but to say that if you're going to accept a non-addressable Internet, people will need to invent things like it. At KnowNow (thanks to Rohit Khare and Adam Rifkin), we built a tiny web server in Javascript. The implementations of resource handlers were (roughly) Javascript functions; the dominant media type was application/x-www-form-urlencoded. As we got better at writing this server, it got more RESTful. But the connection itself was always a tunnel; there was no alternative. Whether we implemented that with long-poll or just a big GET with function callbacks, it was certainly more RESTful than the websocket approach -- but it's not like somebody could easily add an HTTP security system on those tunnels.
I'm perfectly willing to admit that systems that use P2P or publish/subscribe communication patterns aren't REST, but it's not like anybody out there is generally opening their networks to XMPP, BEEP, AMQP.... Nor are they providing mechanisms (well, other than email addresses, hi, Mr. Spam) for addressing real endpoints so you don't have to hold request/response HTTP connections open in order to implement them. The 304 Idle idea is kind of cool, too; thanks for getting me thinking in that direction.
Mike Kelly wrote: > > Would there be much of a distinction between clients and servers on a > p2p web? > Even in a P2P protocol like BitTorrent, each discrete transfer still has a client and a server, just like how an intermediary cache is either a client or a server depending on what it's doing. I think the right question is whether there's still a distinction between user-agent and origin server. I think the answer to that is yes -- the user-agent makes a request from the origin server (tracker). When serving files, it's acting as an intermediary in the transactions between some other user-agents and the origin server (tracker). (Or perhaps there are multiple origin servers, i.e. the tracker and whatever systems are seeding the actual file. It would be interesting to use the approach in Roy's thesis to describe a BitTorrent architectural style. Which is exactly what the starting point should have been before embarking on Web Sockets -- even if you're not a fan of REST-the-style, there's a methodology there for the disciplined development of new Web protocols. It's possible to add other styles besides what's in Chapter 3, as appropriate, to introduce constraints which aren't in REST, based on the desirable properties of prior art.) Or at least this is the explanation I come up with, to square Roy's statement that Waka is P2P with the client-server constraint. There's still independent evolvability of components, and separation of user interface (selecting a torrent from a tracker) from data storage, which are the purposes of the constraint. If the purpose of the constraint is met, claiming P2P is a violation would be nitpicking semantics. In HTTP long-polling, the origin server is essentially a tracker, in that it knows a bunch of clients are after the same data. In a P2P protocol, this could work much like BitTorrent, where the tracker orchestrates the clients to distribute the response amongst themselves after seeding a few power-users.
But I don't think any amount of scripted tunneling can make HTTP work like that in existing browsers, Web Sockets or otherwise. It would be interesting to be proved wrong on that, however. -Eric
I am working on a comparison of the amount of coupling of various connector types. I have identified the list below. Does anyone have an additional idea?

(I am using 'connector' in a somewhat sloppy way here. That is why I also include things like "file based integration".)

Here is my list:

- File based integration (coordination of processes using the file system)
- Database based integration (coordination of processes using an RDBMS)
- Message Queues
- RPC (I am equating all forms of it: RMI, DCOM, CORBA, WS-*, URI tunneling. Makes sense?[1])
- PubSub
- HTTP Type I (HTTP with design-time WADL or similar and generic media types)
- HTTP Type II (HTTP with design-time WADL or similar and specific media types)
- REST

Anything else?

Jan

[1] Meaning: do they all have the same coupling effect? I guess one could consider RMI to couple more than WS-* because it also couples on the language used.
Jan Algermissen wrote: > > I am working on a comparison of the amount of coupling of various > connector types. I have identified the list below. > Can you elaborate? That looks more like a list of architectural styles, with the first two as subsets of the last one. Is PubSub = EBI? I don't know what you mean by "URI tunneling", is that like Web Sockets? -Eric
> [1] Meaning: do they all have the same coupling effect?

I see them varying substantially in terms of coupling. You pointed out one aspect where they can differ, and there are many others. For instance, JAX-WS induces less coupling than JAX-RPC because the former won't break when receiving a response with additional, new fields. Another example is the contrast between RPC systems found in static languages, where the interface of the remote procedure has to be hardcoded in the client system, and RPC systems found in dynamic languages that don't require such hardcoding. I would say that each RPC technology has its own flavor of coupling, and some induce much less coupling than others.
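The tolerant-reader point can be sketched outside of Java; here is a minimal Python analogy (field names invented, not actual JAX-WS/JAX-RPC code), contrasting a strict reader that breaks when the provider adds a field with a tolerant reader that ignores unknown fields:

```python
# Hypothetical sketch of reader coupling: a strict reader rejects
# messages containing unknown fields, a tolerant reader ignores them.

EXPECTED_FIELDS = {"id", "name"}

def strict_read(message: dict) -> dict:
    # Breaks as soon as the provider evolves its message format.
    unknown = set(message) - EXPECTED_FIELDS
    if unknown:
        raise ValueError(f"unknown fields: {unknown}")
    return {k: message[k] for k in EXPECTED_FIELDS}

def tolerant_read(message: dict) -> dict:
    # Survives the provider adding new fields: lower coupling.
    return {k: message[k] for k in EXPECTED_FIELDS if k in message}

# The provider evolves and adds a field the consumer never asked for:
evolved = {"id": 1, "name": "foo", "new_field": "added by provider"}
print(tolerant_read(evolved))  # the tolerant consumer keeps working
```

The same provider change that merely flows past the tolerant reader forces a redeploy of every strict consumer, which is one concrete way the "same" connector style can land at different points on a coupling scale.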
http://www.infoq.com/news/2010/10/WCF-REST

I agree :-)

Recently I delivered a European Virtual ALT.NET <http://europevan.blogspot.com/2010/10/glenn-block-on-wcf-evolving-for-web-e.html> talk on the work we're doing in WCF (which many of you have helped with) around HTTP / REST. InfoQ picked up the session and wrote a nice article.

We'll be out there soon, it's not smoke and mirrors.

Glenn
This is good to see. Frankly, I don't care if it's RESTful or not, or if folks use it RESTfully or not. I think the important thing here is to see Microsoft recognize that HTTP is an application protocol, not a transfer protocol, moving forward. Get in the right ballpark first, then worry about the ground rules. -Eric
Agreed. If you check the preso I made it clear this was about http and enabling REST.... On 10/21/10, Eric J. Bowman <eric@...> wrote: > This is good to see. Frankly, I don't care if it's RESTful or not, or > if folks use it RESTfully or not. I think the important thing here is > to see Microsoft recognize that HTTP is an application protocol, not > a transfer protocol, moving forward. Get in the right ballpark first, > then worry about the ground rules. > > -Eric > -- Sent from my mobile device
On Oct 22, 2010, at 12:31 AM, Glenn Block wrote: > Agreed. If you check the preso I made it clear this was about http and > enabling REST.... > > On 10/21/10, Eric J. Bowman <eric@...> wrote: >> This is good to see. Frankly, I don't care if it's RESTful or not, or >> if folks use it RESTfully or not. I think the important thing here is >> to see Microsoft recognize that HTTP is an application protocol, not >> a transfer protocol, "not a s/transfer/transport/ protocol"? Jan >> moving forward. Get in the right ballpark first, >> then worry about the ground rules. >> >> -Eric >> > > -- > Sent from my mobile device > > > ------------------------------------ > > Yahoo! Groups Links > > >
Jan Algermissen wrote: > > > Eric J. Bowman wrote: > >> This is good to see. Frankly, I don't care if it's RESTful or > >> not, or if folks use it RESTfully or not. I think the important > >> thing here is to see Microsoft recognize that HTTP is an > >> application protocol, not a transfer protocol, > > "not a s/transfer/transport/ protocol"? > D'oh!!! Did I type that? Thanks for catching it. -Eric
Glenn Block wrote:
>
> Agreed. If you check the preso I made it clear this was about http and
> enabling REST....
>

Right, that's exactly what I was calling out for praise, sorry if it came across as criticism. As to the presentation, there was a Q&A discussion about the platform imposing design criteria on the system under development, which you say you want to avoid. But, your example of /foo/bar making bar a child of foo, would do precisely that. I can think of more situations where bar isn't a child of foo, in the application sense, than situations where it is -- most of the time, I have nothing there to inherit. Unless you make that optional, but the presentation isn't clear in that regard, so this is just a note.

The other feedback I have is regarding Content-Location, not only for conneg but for situations like "author's preferred version". The demo shows that you can assign multiple representations to a resource. But this approach typically fails the identification of resources and self-descriptive messaging constraints. We've discussed this here before, and I realize that I'm in the minority on this, but it's still a SHOULD in HTTP.

What I'd like from a platform, is the ability to define first an Atom resource, then a JSON resource, *then* be able to define a resource which negotiates between them. The problem with Accept-based conneg, is that you don't always have control over the client -- i.e. you can't tell a browser to negotiate between Atom and JSON without Code on Demand. So there needs to be some way to link directly to the desired variant, without conneg. Among other reasons. Some of my rationale rug was pulled out from under me with HTTPbis -11 about a day after the last thread on this topic, but C-L with conneg is still a SHOULD, not a MAY, for very good reasons.

-Eric
How are you defining "coupling"? How are you quantifying coupling? Andrew --- In rest-discuss@yahoogroups.com, Jan Algermissen <algermissen1971@...> wrote: > > > I am working on a comparison of the amount of coupling of various connector types. I have identified the list below. > > Does anyone have an additional idea? > > (I am using 'connector' in a somewhat sloppy way here. That is why I also include things like "file based integration"). > > > Here is my list: > > > - File based integration (coordination of processes using the file system) > - Database based integration (coordination of processes using an RDBMS > - Message Queues > - RPC (I am equating all forms of it: RMI, DCOM, Corba, WS-*, URI tunneling. Makes sense?[1]) > - PubSub > - HTTP Type I ( HTTP with design time WADL or similar and generic media types) > - HTTP Type II ( HTTP with design time WADL or similar and specific media types) > - REST > > Anything else? > > > Jan > > > [1] Meaning: do they all have the same coupling effect? I guess one could consider RMI to couple more than WS-* because it also couples on the Language used. >
Eric,

The only place where conneg should be used is when, from the client perspective, the variants are indistinguishable from one another. If JSON and XML are both processable to the same output, as they usually are in those frameworks, then conneg makes sense.

I wouldn't recommend conneg on HTML and JSON on those resources, but JSON and XML as two serializations of the same infoset is very much within what conneg was designed for. The difference is mechanical in nature.

-----Original Message-----
From: rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com] On Behalf Of Eric J. Bowman
Sent: 22 October 2010 01:30
To: Glenn Block
Cc: rest-discuss@yahoogroups.com
Subject: [rest-discuss] Re: The Future of WCF is indeed RESTful

Glenn Block wrote:
>
> Agreed. If you check the preso I made it clear this was about http and
> enabling REST....
>

Right, that's exactly what I was calling out for praise, sorry if it came across as criticism. As to the presentation, there was a Q&A discussion about the platform imposing design criteria on the system under development, which you say you want to avoid. But, your example of /foo/bar making bar a child of foo, would do precisely that. I can think of more situations where bar isn't a child of foo, in the application sense, than situations where it is -- most of the time, I have nothing there to inherit. Unless you make that optional, but the presentation isn't clear in that regard, so this is just a note.

The other feedback I have is regarding Content-Location, not only for conneg but for situations like "author's preferred version". The demo shows that you can assign multiple representations to a resource. But this approach typically fails the identification of resources and self-descriptive messaging constraints. We've discussed this here before, and I realize that I'm in the minority on this, but it's still a SHOULD in HTTP.

What I'd like from a platform, is the ability to define first an Atom resource, then a JSON resource, *then* be able to define a resource which negotiates between them. The problem with Accept-based conneg, is that you don't always have control over the client -- i.e. you can't tell a browser to negotiate between Atom and JSON without Code on Demand. So there needs to be some way to link directly to the desired variant, without conneg. Among other reasons. Some of my rationale rug was pulled out from under me with HTTPbis -11 about a day after the last thread on this topic, but C-L with conneg is still a SHOULD, not a MAY, for very good reasons.

-Eric
Sebastien Lambla wrote: > > The only place where conneg should be used is when, from the client > perspective, the variants are indistinguishable from one another. If > json and xml are both processable to the same output, as they usually > are in those frameworks, then conneg makes sense. > > I wouldn't recommend conneg on html and json on those resources, but > json and xml as two serializations of the same infoset is very much > within what coneng was designed for. The difference is mechanical in > nature. > Of course. But it doesn't really matter what the purpose is, except for compression. RFC 2616 says: "A server SHOULD provide a Content-Location for the variant corresponding to the response entity; especially in the case where a resource has multiple entities associated with it, and those entities actually have separate locations by which they might be individually accessed, the server SHOULD provide a Content-Location for the particular variant which is returned." Variants are resources in their own right. Frameworks shouldn't preclude the option of sending Content-Location with conneg responses, as this imposes an interpretation on the system being developed. -Eric
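On the wire, the SHOULD Eric quotes from RFC 2616 might look like this (URIs and payload hypothetical): the client asks for the negotiated resource, and the response names the specific variant that was chosen, which is also directly addressable at its own URI:

```http
GET /A HTTP/1.1
Host: example.org
Accept: application/json

HTTP/1.1 200 OK
Content-Type: application/json
Content-Location: /A.json
Vary: Accept

{"example": true}
```

A client or cache can then refer to /A.json directly, bypassing negotiation, while /A remains the negotiated resource.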
On Sat, Oct 23, 2010 at 2:58 AM, Eric J. Bowman <eric@...> wrote: > Sebastien Lambla wrote: >> >> The only place where conneg should be used is when, from the client >> perspective, the variants are indistinguishable from one another. If >> json and xml are both processable to the same output, as they usually >> are in those frameworks, then conneg makes sense. >> >> I wouldn't recommend conneg on html and json on those resources, but >> json and xml as two serializations of the same infoset is very much >> within what coneng was designed for. The difference is mechanical in >> nature. >> > > Of course. But it doesn't really matter what the purpose is, except > for compression. RFC 2616 says: > > "A server SHOULD provide a Content-Location for the variant > corresponding to the response entity; especially in the case where a > resource has multiple entities associated with it, and those entities > actually have separate locations by which they might be individually > accessed, the server SHOULD provide a Content-Location for the > particular variant which is returned." > > Variants are resources in their own right. I have no idea how you draw that conclusion from the excerpt you've quoted: "especially in the case where a resource has multiple entities associated with it, _*AND*_ those entities actually have separate locations by which they might be individually accessed" That wording clearly acknowledges that the entities might /not/ have separate locations, so it doesn't seem to support any generalised statement such as "Variants are resources in their own right". Cheers, Mike
On Fri, Oct 22, 2010 at 1:30 AM, Eric J. Bowman <eric@...> wrote: > The problem with Accept-based conneg, > is that you don't always have control over the client -- i.e. you can't > tell a browser to negotiate between Atom and JSON without Code on > Demand. So there needs to be some way to link directly to the desired > variant, without conneg. .. or hyperlinks that can indicate circumstantial negotiation preferences? e.g. Browsers could over-ride their default headers for the following link: <a type="application/json" href="/foo"> thus addressing the need you've outlined without being forced to treat representations as resources or use code on demand. > Among other reasons. Please expand! :) Cheers, Mike
Mike Kelly wrote: > > I have no idea how you draw that conclusion from the excerpt you've > quoted: > > "especially in the case where a resource has multiple entities > associated with it, _*AND*_ those entities actually have separate > locations by which they might be individually accessed" > > That wording clearly acknowledges that the entities might /not/ have > separate locations, so it doesn't seem to support any generalised > statement such as "Variants are resources in their own right". > You're focusing on the "especially if" part. Leave that out, and you have a rather unambiguous statement regarding conneg (assuming variant implies conneg): "A server SHOULD provide a Content-Location for the variant corresponding to the response entity." "Especially if" hardly means the same as "if and only if." Currently, this isn't in HTTPbis, but it's all beside the point anyway. My point is, that if my design choice includes Content-Location when doing conneg (really quite common), a framework shouldn't preclude my doing that. -Eric
We've had this conversation before, so if anyone is interested they could always refer back to it. If you've come up with any reference in the meantime that @type means what you think it means, vs. what everyone else thinks it means, then by all means share. -Eric Mike Kelly wrote: > On Fri, Oct 22, 2010 at 1:30 AM, Eric J. Bowman > <eric@...> wrote: > > The problem with Accept-based conneg, > > is that you don't always have control over the client -- i.e. you > > can't tell a browser to negotiate between Atom and JSON without > > Code on Demand. So there needs to be some way to link directly to > > the desired variant, without conneg. > > .. or hyperlinks that can indicate circumstantial negotiation > preferences? > > e.g. Browsers could over-ride their default headers for the following > link: > > <a type="application/json" href="/foo"> > > thus addressing the need you've outlined without being forced to treat > representations as resources or use code on demand. > > > Among other reasons. > > Please expand! :) > > Cheers, > Mike
On Sat, Oct 23, 2010 at 10:29 AM, Eric J. Bowman <eric@...> wrote: > Mike Kelly wrote: >> >> I have no idea how you draw that conclusion from the excerpt you've >> quoted: >> >> "especially in the case where a resource has multiple entities >> associated with it, _*AND*_ those entities actually have separate >> locations by which they might be individually accessed" >> >> That wording clearly acknowledges that the entities might /not/ have >> separate locations, so it doesn't seem to support any generalised >> statement such as "Variants are resources in their own right". >> > > You're focusing on the "especially if" part. Leave that out, and you > have a rather unambiguous statement regarding conneg (assuming variant > implies conneg): > > "A server SHOULD provide a Content-Location for the variant > corresponding to the response entity." I'm not disputing that, I'm simply highlighting - for you - the fact that it is acknowledging varying entities may share the same location. > > "Especially if" hardly means the same as "if and only if." Currently, > this isn't in HTTPbis, but it's all beside the point anyway. My point > is, that if my design choice includes Content-Location when doing > conneg (really quite common), a framework shouldn't preclude my doing > that. Agreed, any HTTP oriented framework that doesn't give you access to response headers is flawed. Using Content-Location for variants with distinct locations is good practice - it increases visibility. However; treating any or all variants/representations as resources in their own right is a design decision, not a hard and fast rule. There's nothing in REST or 2616 that disambiguates this, so pulling out excerpts and then making absolute statements such as "Variants are resources in their own right." is wrong. Cheers, Mike
I know what @type means in html, I'm suggesting what it could mean. Let's pretend I called it @foobar or @accept if it bothers you that much. Cheers, Mike On Sat, Oct 23, 2010 at 11:05 AM, Eric J. Bowman <eric@bisonsystems.net> wrote: > We've had this conversation before, so if anyone is interested they > could always refer back to it. If you've come up with any reference in > the meantime that @type means what you think it means, vs. what > everyone else thinks it means, then by all means share. > > -Eric > > Mike Kelly wrote: > >> On Fri, Oct 22, 2010 at 1:30 AM, Eric J. Bowman >> <eric@...> wrote: >> > The problem with Accept-based conneg, >> > is that you don't always have control over the client -- i.e. you >> > can't tell a browser to negotiate between Atom and JSON without >> > Code on Demand. So there needs to be some way to link directly to >> > the desired variant, without conneg. >> >> .. or hyperlinks that can indicate circumstantial negotiation >> preferences? >> >> e.g. Browsers could over-ride their default headers for the following >> link: >> >> <a type="application/json" href="/foo"> >> >> thus addressing the need you've outlined without being forced to treat >> representations as resources or use code on demand. >> >> > Among other reasons. >> >> Please expand! :) >> >> Cheers, >> Mike >
On Oct 22, 2010, at 3:13 PM, wahbedahbe wrote:

> How are you defining "coupling"?

Significant change[1] in one component mandates change in the other component.

> How are you quantifying coupling?

Yep, good question. I am thinking about a 'value' derived from these questions: does a change of the provider lead us to

- stop the consumer
- reconfigure the consumer
- recompile the consumer
- 'refactor' the consumer
- port the consumer implementation (e.g. when going from RMI to DCOM)

Jan

[1] http://www.nordsc.com/blog/?p=644
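Those questions could be turned into a crude numeric 'value'; here is a hypothetical Python sketch (the weights and consequence names are invented for illustration, not taken from Jan's blog post):

```python
# Hypothetical scoring sketch: each consumer-side consequence of a
# provider change gets a weight, and the sum is a crude coupling value
# for a connector type. Weights are invented and ordinal only.

CONSEQUENCE_WEIGHTS = {
    "stop": 1,         # consumer must be stopped/restarted
    "reconfigure": 2,  # consumer must be reconfigured
    "recompile": 3,    # consumer must be recompiled
    "refactor": 4,     # consumer code must be refactored
    "port": 5,         # consumer must be ported to another technology
}

def coupling_score(consequences: set) -> int:
    """Sum the weights of every consequence a provider change triggers."""
    return sum(CONSEQUENCE_WEIGHTS[c] for c in consequences)

# e.g. a hardcoded RPC stub vs. a hypermedia-driven client:
rpc = coupling_score({"stop", "recompile", "refactor"})
rest = coupling_score({"stop"})
print(rpc, rest)  # 8 1
```

The absolute numbers mean little; what such a scheme would let you do is rank the connector types in the original list against each other for a given kind of provider change.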
Mike Kelly wrote: > > I know what @type means in html, I'm suggesting what it could mean. > > Let's pretend I called it @foobar or @accept if it bothers you that > much. > It doesn't matter what you call it, separation of concerns (or the layered system constraint, if you prefer) means that hypertext has no control over what headers the user-agent sends. You're suggesting some architecture that has nothing to do with the Web. -Eric
Mike Kelly wrote: > > > "A server SHOULD provide a Content-Location for the variant > > corresponding to the response entity." > > I'm not disputing that, I'm simply highlighting - for you - the fact > that it is acknowledging varying entities may share the same location. > Not without violating the identification of resources constraint, they can't. -Eric
Having the ability to support "children" is not forcing opinions. Also, the notion of "children" may be out of context. The particular scenario relates to how a request is handled based on the URI. For example, if I have a URI that is "Foo/Bar" or "Foo/Baz", I may want to have specific handling that all "children" of Foo have attached to them based on the URI namespace. It's not to say that Bar is an actual child of Foo in an entity / domain model case. Regardless, the intent was to say we want to at least enable them, which is something that our current implementation doesn't lend itself to.

As far as conneg, we want to support both transparent and explicit, and the notion of sub-resources which are variants. Out of the box today we are supporting transparent, but the design should allow whatever kind of conneg floats your boat, so to speak.

On Thu, Oct 21, 2010 at 5:30 PM, Eric J. Bowman <eric@...> wrote:

> Glenn Block wrote:
> >
> > Agreed. If you check the preso I made it clear this was about http and
> > enabling REST....
> >
>
> Right, that's exactly what I was calling out for praise, sorry if it
> came across as criticism. As to the presentation, there was a Q&A
> discussion about the platform imposing design criteria on the system
> under development, which you say you want to avoid. But, your example
> of /foo/bar making bar a child of foo, would do precisely that. I can
> think of more situations where bar isn't a child of foo, in the
> application sense, than situations where it is -- most of the time, I
> have nothing there to inherit. Unless you make that optional, but the
> presentation isn't clear in that regard, so this is just a note.
>
> The other feedback I have is regarding Content-Location, not only for
> conneg but for situations like "author's preferred version". The demo
> shows that you can assign multiple representations to a resource. But
> this approach typically fails the identification of resources and
> self-descriptive messaging constraints. We've discussed this here
> before, and I realize that I'm in the minority on this, but it's still
> a SHOULD in HTTP.
>
> What I'd like from a platform, is the ability to define first an Atom
> resource, then a JSON resource, *then* be able to define a resource
> which negotiates between them. The problem with Accept-based conneg,
> is that you don't always have control over the client -- i.e. you
> can't tell a browser to negotiate between Atom and JSON without Code
> on Demand. So there needs to be some way to link directly to the
> desired variant, without conneg. Among other reasons. Some of my
> rationale rug was pulled out from under me with HTTPbis -11 about a
> day after the last thread on this topic, but C-L with conneg is still
> a SHOULD, not a MAY, for very good reasons.
>
> -Eric
One BIG clarification on this article. We are not throwing out and completely re-architecting WCF. What we are doing is enhancing our support for HTTP to make it first class. On Thu, Oct 21, 2010 at 11:24 AM, Glenn Block <glenn.block@...> wrote: > http://www.infoq.com/news/2010/10/WCF-REST > > I agree :-) > > Recently I delivered a Eurpoean Virtual ALT.NET<http://europevan.blogspot.com/2010/10/glenn-block-on-wcf-evolving-for-web-e.html>talk on the work we're doing in WCF (which many of you have helped with) > around HTTP / REST. Infoq picked up the session and wrote a nice article. > > We'll be out there soon, it's not smoke and mirrors. > > Glenn > >
Glenn Block wrote: > > Having the ability to support "children" is not forcing opinions. > OK, I understand now, and I agree you're not imposing anything -- I just wanted to make sure. > > As far as Conneg we want to support both transparent and explicit and > the notion of sub-resources which are variants. Out of the box today > we are supporting transparent, but the design should allow whatever > kind of conneg floats you boat so to speak. > I understand that, please allow me to clarify my concern. Assume a negotiated resource /A with variants /A.a and /A.b. Let's say I've written code to generate /A.a and /A.b. When I code /A, do I wind up with two new code paths which are identical to, yet separate from, the /A.a and /A.b code paths? Or is a single code path an "aspect" of both /A and /A.a? This is a subtlety regarding conneg frameworks, but to me, an important one. -Eric
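The "single code path" alternative Eric is asking about can be pictured with a hypothetical Python sketch (URIs, media types, and renderer names invented): the explicit variant URIs and the negotiated resource dispatch to the same renderer functions, so coding /A adds no duplicate code paths:

```python
# Hypothetical dispatch sketch: /A.a and /A.b are explicit variant
# resources, /A is the negotiated resource. All three routes share the
# same two renderer functions, so no code path is duplicated.

def render_atom():
    return ("application/atom+xml", "<feed/>")

def render_json():
    return ("application/json", "{}")

VARIANTS = {"/A.a": render_atom, "/A.b": render_json}
BY_TYPE = {"application/atom+xml": render_atom,
           "application/json": render_json}

def handle(uri, accept="*/*"):
    if uri in VARIANTS:          # explicit variant: direct call
        return VARIANTS[uri]()
    if uri == "/A":              # negotiated resource: same renderers
        return BY_TYPE.get(accept, render_atom)()
    raise KeyError(uri)

# Negotiating JSON at /A and fetching /A.b run the identical code path:
assert handle("/A", "application/json") == handle("/A.b")
```

In the other design, the framework would generate a second, parallel pair of renderers when /A is defined, which is exactly the duplication the question is probing for.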
[full message to list] Jan: Your list below looks a bit like the text in the "Modifiability" section of Fielding's dissertation[1]. That might give you some ideas on what to focus on when determining "coupling" between elements in the arch. FWIW, I tend to think of (private) components as existing "behind" the (public) connectors. In my mind the most trouble (e.g. coupling) occurs when component aspects are exposed by the connectors (if that makes sense). [1] http://www.ics.uci.edu/~fielding/pubs/dissertation/net_app_arch.htm#sec_2_3_4 mca http://amundsen.com/blog/ http://twitter.com@mamund http://mamund.com/foaf.rdf#me #RESTFest 2010 http://rest-fest.googlecode.com On Sat, Oct 23, 2010 at 11:13, Jan Algermissen <algermissen1971@mac.com> wrote: > > On Oct 22, 2010, at 3:13 PM, wahbedahbe wrote: > >> How are you defining "coupling"? > > Significant change[1] in one component mandates change in other component. > >> How are you quantifying coupling? > > Yep, good question. I am thinking about a 'value' derived from the questions: > > Does the change of the provider lead to > > - stop the consumer > - reconfigure the consumer > - recompile the consumer > - 'refactor' the consumer > - port the consumer implementation (e.g. when going from RMI to DCOM) > > Jan > > > [1] http://www.nordsc.com/blog/?p=644 > > > >> >> Andrew >> >> --- In rest-discuss@yahoogroups.com, Jan Algermissen <algermissen1971@...> wrote: >>> >>> >>> I am working on a comparison of the amount of coupling of various connector types. I have identified the list below. >>> >>> Does anyone have an additional idea? >>> >>> (I am using 'connector' in a somewhat sloppy way here. That is why I also include things like "file based integration"). 
>>> >>> >>> Here is my list: >>> >>> >>> - File based integration (coordination of processes using the file system) >>> - Database based integration (coordination of processes using an RDBMS >>> - Message Queues >>> - RPC (I am equating all forms of it: RMI, DCOM, Corba, WS-*, URI tunneling. Makes sense?[1]) >>> - PubSub >>> - HTTP Type I ( HTTP with design time WADL or similar and generic media types) >>> - HTTP Type II ( HTTP with design time WADL or similar and specific media types) >>> - REST >>> >>> Anything else? >>> >>> >>> Jan >>> >>> >>> [1] Meaning: do they all have the same coupling effect? I guess one could consider RMI to couple more than WS-* because it also couples on the Language used. >>> >> >> >> >> >> ------------------------------------ >> >> Yahoo! Groups Links >> >> >> > > > > ------------------------------------ > > Yahoo! Groups Links > > > >
On Oct 24, 2010, at 2:30 PM, mike amundsen wrote:

> [full message to list]
> Jan:
>
> Your list below looks a bit like the text in the "Modifiability"
> section of Fielding's dissertation[1]. That might give you some ideas
> on what to focus on when determining "coupling" between elements in
> the arch.
>
> FWIW, I tend to think of (private) components as existing "behind" the
> (public) connectors. In my mind the most trouble (e.g. coupling)
> occurs when component aspects are exposed by the connectors (if that
> makes sense).

Very good way to put it. E.g. RPC connectors 'leak' the component API.

Jan

> [1] http://www.ics.uci.edu/~fielding/pubs/dissertation/net_app_arch.htm#sec_2_3_4
>
> mca
> http://amundsen.com/blog/
> http://twitter.com@mamund
> http://mamund.com/foaf.rdf#me
>
> #RESTFest 2010
> http://rest-fest.googlecode.com
Which do you prefer? On 10/23/10, Eric J. Bowman <eric@...> wrote: > Glenn Block wrote: >> >> Having the ability to support "children" is not forcing opinions. >> > > OK, I understand now, and I agree you're not imposing anything -- I > just wanted to make sure. > >> >> As far as Conneg we want to support both transparent and explicit and >> the notion of sub-resources which are variants. Out of the box today >> we are supporting transparent, but the design should allow whatever >> kind of conneg floats you boat so to speak. >> > > I understand that, please allow me to clarify my concern. Assume a > negotiated resource /A with variants /A.a and /A.b. Let's say I've > written code to generate /A.a and /A.b. When I code /A, do I wind up > with two new code paths which are identical to, yet separate from, the > /A.a and /A.b code paths? Or is a single code path an "aspect" of both > /A and /A.a? > > This is a subtlety regarding conneg frameworks, but to me, an important > one. > > -Eric > -- Sent from my mobile device
Glenn Block wrote: > > Which do you prefer? > Less bloat, more configuration. -Eric
Hi all, I gave a talk at the first Web Philosophy conference a week ago [1], and spent the last week putting the slides together in English this time with audio. I just made it available here: http://www.slideshare.net/bblfish/philosophy-and-the-social-web-5583083 The talk covers: - Issues in the Social Networking space, including privacy - REST and Web Architecture (including the importance of URLs) - The Semantic Web - The social web - the network effect - sense and reference from Frege and logic - biological metaphors and the web and a lot more It may be interesting to people here. I'd be interested in feedback. Henry [1] http://web-and-philosophy.org/ Social Web Architect http://bblfish.net/
I've been successfully using REST for a while now with both JAX-RS and Ruby on Rails, so I believe in the overall concept and use it in my architecture. It seems like everything now is just CRUD (i.e., Create/Read/Update/Delete) using the 4 HTTP request types. This is good, simple, and gets you a long way. But is everything just CRUD? Isn't that too limiting? What about situations where CRUD isn't sophisticated enough or just doesn't fit with what you're trying to do? I've heard that everything can be expressed in a RESTful URL, but I'm not buying that. Like I said, I do believe in and love REST, and that it covers a LOT of situations. But I don't think that REST applies to everything. What do other people think? Thanks. Tom
Tom wrote: > I've been successfully using REST for a while now with both JAX-RS and Ruby on Rails, so I believe in the overall concept and use it in my architecture. > > It seems like everything now is just CRUD (i.e., Create/Read/Update/Delete) using the 4 HTTP request types. This is good, simple, and gets you a long way. > > But is everything just CRUD? Isn't that too limiting? What about situations where CRUD isn't sophisticated enough or just doesn't fit with what you're trying to do? I've heard that everything can be expressed in a RESTful URL, but I'm not buying that. > > Like I said, I do believe in and love REST, and that it covers a LOT of situations. But I don't think that REST applies to everything. > > What do other people think? Personally I think that there isn't really a "REST", and see it more as a set of constraints and a style which you apply to a situation in order to assess how RESTful it is, and then use the constraints to make it more RESTful. Usually this involves making things that bit simpler and more crud-like, and personally I've always seen many benefits in doing this. Certainly, I've not found a need for anything beyond the core set of HTTP Verbs and a couple from older specs + webdav. Best, Nathan
First, the CRUD-dy REST pattern, while common, is something that is
"applied" to Web implementations; it's not something described in
Fielding's dissertation[1]. Since REST is an architectural style (not
a coding style, not an application model, etc.), saying "REST CRUD" is
a bit like saying "Gothic Object-Orientation" or some other humorous
phrase.
Second, CRUD is not addressed in the HTTP specifications[2]. True
there are methods that seem to map easily to the CRUD pattern (GET =
read, DELETE=well, you know). However, the spec does a good job of
detailing the way PUT is to be used to create new resources as well as
replace existing ones[3]. In fact, POST and PUT are differentiated
primarily by the way the URI is treated for that request:
POST: "...accept the entity enclosed in the request as a new
subordinate of the resource identified by the Request-URI..."
PUT: "...the enclosed entity be stored under the supplied Request-URI."
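The POST/PUT contrast quoted above can be sketched with a toy in-memory store (the class and method names are invented for illustration, not taken from any framework or spec):

```python
# Toy resource store illustrating the RFC 2616 POST vs PUT semantics
# quoted above. All names here are invented for illustration.
import itertools

class ResourceStore:
    def __init__(self):
        self.resources = {}            # URI -> entity
        self._ids = itertools.count(1)

    def post(self, collection_uri, entity):
        """POST: the entity becomes a NEW subordinate of the Request-URI;
        the server chooses the resulting URI."""
        uri = f"{collection_uri}/{next(self._ids)}"
        self.resources[uri] = entity
        return uri  # a real server would report this via a Location header

    def put(self, uri, entity):
        """PUT: the enclosed entity is stored under the SUPPLIED
        Request-URI, creating or replacing whatever is there."""
        created = uri not in self.resources
        self.resources[uri] = entity
        return created  # True -> 201 Created; False -> 200/204 in real HTTP

store = ResourceStore()
first = store.post("/players", {"name": "alice"})  # server picks the URI
store.put("/players/42", {"name": "bob"})          # client picks the URI
```

The observable difference is exactly the one the spec describes: who controls the URI under which the entity ends up stored.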
Third, Subbu Allamaraju's book "RESTful Web Services Cookbook"[4] does
a very good job (IMO) of describing the concept of the "Controller
Resource" that can be used to handle a wide range of non-CRUD-dy
activity ("How and When to Use Controller Resources"[5]). There are
several sections in his book devoted to resource operations that do
not fall into the CRUD model; too many to list here.
Fourth, an important aspect of the REST style is reliance on
hypermedia. The book "REST in Practice" by
Webber|Parastatidis|Robinson[6] does a very good job of showing how
this can be done and introduces the concept of "Application Domain
Protocols"[7] as a way to implement hypermedia. This has nothing to do
w/ CRUD and quite a bit to do w/ the REST style.
Finally, I think the CRUD-dy REST pattern arises when servers fall
into the habit of serializing read/write objects over the wire using
HTTP. A number of frameworks actually "promote" this habit since much
of the framework is devoted to making it easy to convert private
objects into public markup in XML, JSON, etc. I found the easiest way
to get past this CRUD-dy pattern is to bypass these framework
"conveniences" and focus instead on crafting hypermedia requests and
responses that support clean state transitions initiated by the
client. There are many examples of this kind of work in the references
I mentioned here; hopefully they give you some ideas on how to use the
REST style w/o being limited to CRUD implementations.
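The contrast between serializing private objects and crafting hypermedia responses can be sketched like this (the order resource, its states, and the link-relation names are all invented here, not taken from the books cited below):

```python
# Contrast: serializing a private object vs crafting a hypermedia
# representation whose links advertise the legal state transitions.
# The resource, states, and link-relation names are invented examples.

def serialize_order(order):
    # The CRUD-dy habit: expose the object's fields and nothing else.
    return {"id": order["id"], "status": order["status"]}

def hypermedia_order(order):
    # The hypermedia habit: include only the transitions valid right now,
    # so the client drives the application by choosing among links.
    doc = serialize_order(order)
    links = [{"rel": "self", "href": f"/orders/{order['id']}"}]
    if order["status"] == "open":
        links.append({"rel": "cancel", "href": f"/orders/{order['id']}/cancel"})
        links.append({"rel": "payment", "href": f"/orders/{order['id']}/payment"})
    doc["links"] = links
    return doc

open_doc = hypermedia_order({"id": 7, "status": "open"})
shipped_doc = hypermedia_order({"id": 8, "status": "shipped"})
```

An open order advertises cancel/payment transitions; a shipped one offers only itself. The client never needs out-of-band knowledge of which operations are currently legal.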
[1] http://www.ics.uci.edu/~fielding/pubs/dissertation/top.htm
[2] http://www.w3.org/Protocols/rfc2616/rfc2616.html
[3] http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html#sec9.6
[4] http://oreilly.com/catalog/9780596801694
[5] http://my.safaribooksonline.com/9780596809140/recipe-how-to-use-controllers
[6] http://oreilly.com/catalog/9780596805838
[7] http://my.safaribooksonline.com/9781449383312/hypermedia_protocols
mca
http://amundsen.com/blog/
http://twitter.com/mamund
http://mamund.com/foaf.rdf#me
#RESTFest 2010
http://rest-fest.googlecode.com
On Thu, Oct 28, 2010 at 08:10, Nathan <nathan@...> wrote:
> Tom wrote:
>> I've been successfully using REST for a while now with both JAX-RS and Ruby on Rails, so I believe in the overall concept and use it in my architecture.
>>
>> It seems like everything now is just CRUD (i.e., Create/Read/Update/Delete) using the 4 HTTP request types. This is good, simple, and gets you a long way.
>>
>> But is everything just CRUD? Isn't that too limiting? What about situations where CRUD isn't sophisticated enough or just doesn't fit with what you're trying to do? I've heard that everything can be expressed in a RESTful URL, but I'm not buying that.
>>
>> Like I said, I do believe in and love REST, and that it covers a LOT of situations. But I don't think that REST applies to everything.
>>
>> What do other people think?
>
> Personally I think that there isn't really a "REST", and see it more as
> a set of constraints and a style which you apply to a situation in order
> to assess how RESTful it is, and then use the constraints to make it
> more RESTful.
>
> Usually this involves making things that bit simpler and more crud-like,
> and personally I've always seen many benefits in doing this.
>
> Certainly, I've not found a need for anything beyond the core set of
> HTTP Verbs and a couple from older specs + webdav.
>
> Best,
>
> Nathan
My experience in trying to build REST apps using Rails indicates that much of what Rails provides out of the box simply has to be discarded as it actually gets in the way of a cleaner implementation. A case in point is the entire ActiveResource library. I dare say my own vagueness about what constitutes REST hasn't helped matters, but I do know Rails isn't there yet. I must also say that I still haven't successfully done a good REST Rails app yet. My earliest attempt was a project where much of what we were building was intended to run inside a walled garden, so the cost of building a bunch of stuff to replace what Rails offered was too high; we wound up accepting all the URI collusion, the lack of access to response headers etc. that come with Rails. In terms of solutions, for more recent projects I've abandoned ActiveResource in favour of fluent HTTP libraries on the client side with some measure of success. On the server side, we're working with resources that are exposed as Atom rather than the vanilla XML serialisation that Rails offers, but are some considerable way from HATEOAS. Rails serialisation isn't as convenient as we'd like it to be, and XML serialisation using the Builder lib is implemented in pure Ruby and can become a bottleneck when traffic grows. I'd be really interested in any experiences in making it easy to create more 'correct' REST apps using Rails. I know of Restfulie from the guys at Caelum. Is there anything else? I'm also interested in experiences around caching (anyone got squid cache channels working?) and also around SSO (I'm currently looking at OAuth2, but have no real world experience with it yet, and it isn't the nicest thing for desktop clients). Thanks, Sidu. http://c42.in On Thu, Oct 28, 2010 at 5:40 PM, Nathan <nathan@webr3.org> wrote: > > > Tom wrote: > > I've been successfully using REST for a while now with both JAX-RS and > Ruby on Rails, so I believe in the overall concept and use it in my > architecture.
> > > > It seems like everything now is just CRUD (i.e., > Create/Read/Update/Delete) using the 4 HTTP request types. This is good, > simple, and gets you a long way. > > > > But is everything just CRUD? Isn't that too limiting? What about > situations where CRUD isn't sophisticated enough or just doesn't fit with > what you're trying to do? I've heard that everything that can be expressed > in a RESTful URL, but I'm not buying that. > > > > Like I said, I do believe in and love REST, and that it covers a LOT of > situations. But I don't think that REST applies to everything. > > > > What do other people think? > > Personally I think that there isn't really a "REST", and see it more as > a set of constraints and a style which you apply to a situation in order > to assess how RESTful it is, and then use the constraints to make it > more RESTful. > > Usually this involves making things that bit simpler and more crud-like, > and personally I've always seen many benefits in doing this. > > Certainly, I've not found a need for anything beyond the core set of > HTTP Verbs and a couple from older specs + webdav. > > Best, > > Nathan > >
On Thu, Oct 28, 2010 at 12:52 AM, Tom <thomasamarrs@...> wrote: > I've been successfully using REST for a while now with both JAX-RS and Ruby on Rails, so I believe in the overall concept and use it in my architecture. > > It seems like everything now is just CRUD (i.e., Create/Read/Update/Delete) using the 4 HTTP request types. I think the short answer to this is that while the HTTP methods may resemble CRUD at first glance, there are some pretty significant differences that make the comparison pretty much useless in practice for all but the most trivial of applications (and even then ...). Mark.
Hi, I recently started blogging about REST, and I was hoping the REST mavens here would find it in their hearts to critique my first few entries: http://blogs.oracle.com/allthingsrest/ Thanks much! Sorry for the self promotion. =D
> I recently started blogging about REST, and I was hoping the REST mavens here would find it in their hearts to critique my first few entries:
>
> http://blogs.oracle.com/allthingsrest/
>
Looks great to me. Anything Python works as far as I'm concerned.. :-)
Small thing I'd say (which will probably trigger a mega-thread) is that you're making a bit of a jump from this description of the Hypermedia Constraint:
""
The application state is controlled and stored by the user agent and can be composed of representations from multiple servers. In addition to freeing the server from the scalability problems of storing state, this allows the user to directly manipulate the state (e.g., a Web browser's history), anticipate changes to that state (e.g., link maps and prefetching of representations), and jump from one application to another (e.g., bookmarks and URI-entry dialogs). The model application is therefore an engine that moves from one state to the next by examining and choosing from among the alternative state transitions in the current set of representations.
""
To this:
""
Use uniform state & state transitions -- Hypermedia As The Engine Of Application State (HATEOAS):
* Representations must uniformly capture the current state of an identified resource. That state is the current application state. If our RESTful web app is a state machine, the current representation is the current state.
* Representations may contain uniform links to subsequent application states. These links are the state transitions of our state machine.
""
To me, Roy seems to be getting at more of a gestalt of application state. You can't map the current sampled state of a single resource directly into the whole current application state. And Roy doesn't mention state machines. There is no one current representation making up the single current application state, there are potentially scores of them, present and past. So any links are only state transitions in the sense that the overall application state can be progressed as a whole by /fetching more stuff/, perhaps in parallel.
The point is that you get decoupling because, even though the hypermedia constrains and guides what is available at any time, it's up to the client (or, perhaps, the user driving that client if it's a browser) what it does to explore - perhaps in parallel - the hypermedia landscape before it, and in what order - perhaps back to previous places on that landscape. The current history, cache and rendered page (for a browser) are parts of the whole application state.
In other words, don't forget the back button! There's not always a link for that in the hypermedia you're focusing on...!
I'm sure others will chime in with their own interpretations... :-)
(oh - smaller tip: don't say 'HATEOAS', say 'Hypermedia Constraint'! =0)
Duncan
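Duncan's reading, where the application state is the accumulated set of representations plus per-tab history, and "back" is a transition that needs no link in the current page, can be sketched as a toy user-agent (all names here are invented):

```python
# Toy user-agent: application state = every representation fetched so
# far (the gestalt), plus a per-tab history. "Back" replays the
# workspace with no server round-trip and no link in the hypermedia.
# All names are invented for illustration.

class Workspace:
    def __init__(self, fetch):
        self.fetch = fetch    # callable: uri -> representation
        self.cache = {}       # every representation seen so far
        self.history = []     # per-tab back stack

    def follow(self, uri):
        """Follow a link: fetch (or reuse) a representation, extend history."""
        rep = self.cache.get(uri)
        if rep is None:
            rep = self.fetch(uri)
            self.cache[uri] = rep
        self.history.append(uri)
        return rep

    def back(self):
        """The back button: a state transition driven purely by the
        user-agent's own workspace, not by any link on the page."""
        self.history.pop()
        return self.cache[self.history[-1]]

server = {"/a": "page A", "/b": "page B"}  # stand-in for origin servers
ws = Workspace(server.__getitem__)
ws.follow("/a")
ws.follow("/b")
```

After the two follows, the whole application state is both cached representations plus the history, not just the page currently in view.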
On Oct 30, 2010, at 7:27 PM, Duncan wrote: > > To me, Roy seems to be getting at more of a gestalt of application state. You can't map the current sampled state of a single resource directly into the whole current application state. Yep. http://lists.w3.org/Archives/Public/www-tag/2010Oct/0100.html Jan > And Roy doesn't mention state machines. There is no one current representation making up the single current application state, there are potentially scores of them, present and past. So any links are only state transitions in the sense that the overall application state can be progressed as a whole by /fetching more stuff/, perhaps in parallel. > > The point is that you get decoupling because, even though the hypermedia constrains and guides what is available at any time, it's up to the client (or, perhaps, the user driving that client if it's a browser) what it does to explore - perhaps in parallel - the hypermedia landscape before it, and in what order - perhaps back to previous places on that landscape. The current history, cache and rendered page (for a browser) are parts of the whole application state. > > In other words, don't forget the back button! There's not always a link for that in the hypermedia you're focusing on...! > > I'm sure others will chime in with their own interpretations... :-) > > (oh - smaller tip: don't say 'HATEOAS', say 'Hypermedia Constraint'! =0) > > Duncan > > > > > ------------------------------------ > > Yahoo! Groups Links > > >
On Oct 30, 2010, at 7:27 PM, Duncan wrote: > (oh - smaller tip: don't say 'HATEOAS', say 'Hypermedia Constraint'! =0) +1, +1, +1, ... Jan
We just launched our new codeplex site <http://wcf.codeplex.com/> which includes the stuff I have been chattering about. I delivered a talk on it yesterday at the PDC. In the talk I had 3 goals with relation to REST 1. To clarify (a bit) what it is and what it isn't. 2. To illustrate several concepts. I spoke quite a bit about media types, conneg, and showed an illustration of building one with hypermedia. 3. To make it clear that we are not building a REST framework, but a framework for HTTP information about the site and the talk are below. site: wcf.codeplex.com pdc talk: Go to the player here: http://player.microsoftpdc.com/session we are under featured sessions "Building Connected Web Apis for the Highly Connected Web" Thanks again to everyone who has supported us to this point. Glenn
Let me add one more clarification to 3. I said we want to enable REST and not block it. But we don't want to choose the "One true REST approach" or force it. On Sat, Oct 30, 2010 at 4:07 PM, Glenn Block <glenn.block@...> wrote: > We just launched our new codeplex site <http://wcf.codeplex.com/> which > includes the stuff I have been chattering about. I delivered a talk on it > yesterday at the PDC. > > In the talk I had 3 goals with relation to REST > > 1. To clarify (a bit) what it is and what it isn't. > 2. To illustrate several concepts. I spoke quite a bit about media types, > conneg, and showed an illustration of building one with hypermedia. > 3. To make it clear that we are not building a REST framework, but a > framework for HTTP > > information about the site and the talk are below. > > site: wcf.codeplex.com > pdc talk: Go to the player here: http://player.microsoftpdc.com/session we > are under featured sessions "Building Connected Web Apis for the Highly > Connected Web" > > Thanks again to everyone who has supported us to this point. > > Glenn >
Jan, your knowledge of Roy's statements is just scary :p On Sat, Oct 30, 2010 at 12:03 PM, Jan Algermissen <algermissen1971@...>wrote: > > > > On Oct 30, 2010, at 7:27 PM, Duncan wrote: > > > (oh - smaller tip: don't say 'HATEOAS', say 'Hypermedia Constraint'! =0) > > +1, +1, +1, ... > > Jan > > >
Perhaps I should have been more clear in the post. I wasn't attempting to invent a new term for server states as Larry Masinter suggests in the linked email. I was attempting to make an analogy between the client's incarnation of the application and a state machine. In the linked email, Roy refers to the client's incarnation of the application as "hypermedia workspace". Between requests, the client's (or user agent if you prefer) hypermedia workspace is like a state machine's "current state". Then actions taken by the client/UserAgent via links are the state transitions. Am I understanding things poorly? Is that a bad analogy? ...or am I relating my understanding poorly? Perhaps I should revise the blog post to make it clear that the analogy to state machine is to be drawn to the user agent's hypermedia workspace? (not to a resource or to a representation) (sry I sent this twice Jan) --- In rest-discuss@yahoogroups.com, Jan Algermissen <algermissen1971@...> wrote: > > > On Oct 30, 2010, at 7:27 PM, Duncan wrote: > > > > > To me, Roy seems to be getting at more of a gestalt of application state. You can't map the current sampled state of a single resource directly into the whole current application state. > > Yep. > > http://lists.w3.org/Archives/Public/www-tag/2010Oct/0100.html > > > Jan > > > > And Roy doesn't mention state machines. There is no one current representation making up the single current application state, there are potentially scores of them, present and past. So any links are only state transitions in the sense that the overall application state can be progressed as a whole by /fetching more stuff/, perhaps in parallel. 
> > > > The point is that you get decoupling because, even though the hypermedia constrains and guides what is available at any time, it's up to the client (or, perhaps, the user driving that client if it's a browser) what it does to explore - perhaps in parallel - the hypermedia landscape before it, and in what order - perhaps back to previous places on that landscape. The current history, cache and rendered page (for a browser) are parts of the whole application state. > > > > In other words, don't forget the back button! There's not always a link for that in the hypermedia you're focusing on...! > > > > I'm sure others will chime in with their own interpretations... :-) > > > > (oh - smaller tip: don't say 'HATEOAS', say 'Hypermedia Constraint'! =0) > > > > Duncan > > > > > > > > > > ------------------------------------ > > > > Yahoo! Groups Links > > > > > > >
I'll revise; :) Better to acknowledge the existence of the 'HATEOAS' term and map it to the preferred 'Hypermedia Constraint' ? ...or better to ignore the existence of the term HATEOAS? --- In rest-discuss@yahoogroups.com, Jan Algermissen <algermissen1971@...> wrote: > > > On Oct 30, 2010, at 7:27 PM, Duncan wrote: > > > (oh - smaller tip: don't say 'HATEOAS', say 'Hypermedia Constraint'! =0) > > +1, +1, +1, ... > > Jan >
"Ray" wrote: > > I was attempting to make an analogy between the client's incarnation > of the application and a state machine. In the linked email, Roy > refers to the client's incarnation of the application as "hypermedia > workspace". > I'm not a big fan of that term. I think it's more common, and understandable, to refer to application state (client) vs. resource state (server). Those learning REST typically conflate the two, or assume that a REST application is the server-side system rather than the user interface. -Eric
"Ray" wrote: > > ...or better to ignore the existence of the term HATEOAS? > +1 -Eric
Making my way through the presentation; it's good. :D It has helped me better appreciate how the value of a resource graph increases exponentially relative to its size. True of social graphs, information graphs, resource graphs...ALL graphs. --- In rest-discuss@yahoogroups.com, Henry Story <henry.story@...> wrote: > > Hi all, > > I gave a talk at the first Web Philosophy conference a week ago [1], and spent the last week putting the slides together in English this time with audio. I just made it available here: > > http://www.slideshare.net/bblfish/philosophy-and-the-social-web-5583083 > > The talk covers: > - Issues in the Social Networking space, including privacy > - REST and Web Architecture (including the importance of URLs) > - The Semantic Web > - The social web > - the network effect > - sense and reference from Frege and logic > - biological metaphors and the web > and a lot more > > It may be interesting to people here. I'd be interested in feedback. > > Henry > > > [1] http://web-and-philosophy.org/ > > Social Web Architect > http://bblfish.net/ >
I have a resource /players where you can POST to create a new player /players/{id}, but I also want a way to report the supported types of players. I'm struggling with the correct, RESTful way to do that. Here's the ways I've thought of and they all seem to have problems:
1. /players/types. This seems the closest to other things I've seen out there (e.g. atom categories), but it looks weird to me because I expect /players/{id}, but "types" isn't really an id.
2. /players/{id}/types. This seems okay if you already have an id, but since players are created dynamically, the client won't know the id beforehand. And you need to know the type of player to create one. Additionally, the types supported by a specific player instance may mean something else (e.g. it could mean the types it's compatible with rather than the types you can create).
3. /player-types. This seems a little ugly to me, but maybe it's fine?
Is there a good way to do this?
So...the entity you POST to /players has a "type" field? ...and that field is limited to a discrete set of values?
If yes to both of the above, I think someone that posts here regularly ( =D ) would say to give your players entity a type and register that type with IANA. If a person who wanted to create players then had the serendipity to come across your API, they would know exactly where to find documentation on your new type.
The above is probably the most RESTful.
Other approaches include:
1) create a resource catalog that provides type information for all entities in your system
2) something similar to your suggestions - /players/types
3) this one is a little more involved....
so, in theory, the /players resource could respond to a GET request with some hypermedia containing links and/or representations of all known players. Ideally, you'd like to be able to filter those players. You could add query parameters that control:
a) pagination
b) search - the fabulous q= from OpenSearch
c) what we called projections - filtering the fields of the entities
d) ...or some query parameter that would return type information
However, all of these other approaches violate the uniform interface constraint to some degree. Users must somehow discover how to parse the resource catalog, how to add a trailing /types or how to add a types query parameter.
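For illustration only, the query-parameter options above might compose URIs like this (the parameter names page, q, and types are just the ones floated in this thread, not any standard):

```python
# Sketch of composing the filtered-collection URIs described above.
# Parameter names mirror the suggestions in the post; none are standard.
from urllib.parse import urlencode

def collection_uri(base, **params):
    """Build a collection URI with optional filter parameters,
    sorted by name so the result is deterministic."""
    if not params:
        return base
    return f"{base}?{urlencode(sorted(params.items()))}"

search = collection_uri("/players", q="wizard", page=2)  # paginated search
types_query = collection_uri("/players", types="true")   # ask for type info
```

As noted above, clients still have to learn these parameters out of band, which is the uniform-interface cost of this approach.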
--- In rest-discuss@yahoogroups.com, "skillzero" <skillzero@...> wrote:
>
> I have a resource /players where you can POST to create a new player /players/{id}, but I also want a way to report the supported types of players. I'm struggling with the correct, RESTful way to do that. Here's the ways I've thought of and they all seem to have problems:
>
> 1. /players/types. This seems the closest to other things I've seen out there (e.g. atom categories), but it looks weird to me because I expect /players/{id}, but "types" isn't really an id.
>
> 2. /players/{id}/types. This seems okay if you already have an id, but since players are created dynamically, the client won't know the id beforehand. And you need to know the type of player to create one. Additionally, the types supported by a specific player instance may mean something else (e.g. it could mean the types its compatible rather than types you can create).
>
> 3. /player-types. This seems a little ugly to me, but maybe it's fine?
>
> Is there a good way to do this?
>
"Ray" wrote:
>
> If yes to both of the above, I think someone that posts here
> regularly ( =D ) would say to give your players entity a type and
> register that type with IANA.
>
Actually, I'd recommend using (X)HTML, this sounds like a resource-
type issue, not a media-type issue.
"skillzero" <skillzero@...> wrote:
>
> And you need to know the type of player to create one.
>
The constraint you're looking for is the hypertext constraint. GET
some HTML from /players containing a form, with a selection control
listing the allowed types. POST that form to /players to create
/players/{id}. REST isn't about what your URLs look like, that's a
matter of preference, not constraint.
-Eric
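Eric's flow can be sketched end to end: the client discovers the allowed types from the form's selection control rather than from documentation. The markup and the type values below are invented stand-ins for what such a server might return:

```python
# Hypertext-driven discovery: parse the allowed player types out of the
# <select> control in the form served at /players. The form markup and
# the type values are invented examples.
from html.parser import HTMLParser

FORM = """
<form method="post" action="/players">
  <select name="type">
    <option value="audio">Audio player</option>
    <option value="video">Video player</option>
    <option value="slideshow">Slideshow player</option>
  </select>
  <input type="submit" value="Create player"/>
</form>
"""

class TypeOptions(HTMLParser):
    """Collects the value attribute of every <option> in the document."""
    def __init__(self):
        super().__init__()
        self.types = []

    def handle_starttag(self, tag, attrs):
        if tag == "option":
            self.types.append(dict(attrs)["value"])

parser = TypeOptions()
parser.feed(FORM)
allowed = parser.types  # learned from the hypertext, not the URI scheme
```

The client then POSTs the filled-in form back to /players; the URI layout never had to encode the type list at all.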
--- In rest-discuss@yahoogroups.com, "Eric J. Bowman" <eric@...> wrote: > > "Ray" wrote: > > > > If yes to both of the above, I think someone that posts here > > regularly ( =D ) would say to give your players entity a type and > > register that type with IANA. > > > > Actually, I'd recommend using (X)HTML, this sounds like a resource- > type issue, not a media-type issue. > > <snip/> > > -Eric > Sorry. I thought you were on board the "resource type & media type crunched into one field" approach.
"Ray" wrote:
>
> I was attempting to make an analogy between the client's incarnation
> of the application and a state machine. In the linked email, Roy
> refers to the client's incarnation of the application as "hypermedia
> workspace".
>
Which Duncan called a "hypermedia landscape." I suppose "application
state" can be interpreted as "the current page" (so to speak, usually
more than one representation makes up the current steady-state), whereas
both Roy and Duncan are trying to make a point about other tabs the
user has open on the same site, plus the history contained within each.
OK, I get it now, so I suppose I take back my objection...
>
> Between requests, the client's (or user agent if you prefer)
>
Actually, this isn't a preference -- there's a reason for REST having
defined a more-explicit term. The default meaning of "client" in REST
is "client connector" which could be on a cache -- caches know nothing
of application states or hypermedia landscapes/workspaces, only which
representations map to what resources.
When we're talking about client-side issues which are relevant to both
intermediaries and user-agents, we say "client"; another term is
required for the component assembling multiple representations into
application states and acting on user input. Hence, "user-agent" --
very descriptively precise of exactly what's being discussed, while
deliberately allowing "client" to have the same meaning it does in
"client-server," for multiple components. An origin-server component
can have a client connector.
>
> hypermedia workspace is like a state machine's "current state". Then
> actions taken by the client/UserAgent via links are the state
> transitions.
>
Well, links or forms. Submitting a hypertext form is a user request
for a state transition, the semantics of which are described by the
method (your introductory post mistakenly lists methods as part of the
representation constraint, they're really part of the self-descriptive
messaging constraint; also, self-descriptiveness isn't simply headers,
as headers are part of the representation). The payload of PUT, PATCH
or POST requests are (not necessarily full) representations of user-
agent application state being transferred to the origin server, for the
purpose of manipulating one or more resources.
If you're looking for a state machine, it's found on the server -- each
resource is a state machine. REST applications aren't -- think of
manipulating a form. Each field, as it's altered, changes the state of
the application without changing any underlying resource states.
Background saves or user-initiated actions serve as input to one or
more state machine resources, each of which has an output which begins
with a status code. Altering forms isn't a state-machine interaction;
submitting forms and clicking links are, where resources are concerned.
The current state of the resource doesn't change when it's dereferenced.
The state of the resource may be changed by transferring a
representation of the resource's desired state. What goes on in user-
agents isn't a state machine, because the output isn't a function of
the input -- the result of clicking on a link is to transition from one
application state to another; however, what the user sees depends on
type and configuration of user-agent (or authentication, or a host of
other factors), where the same user generating the same input doesn't
get the same result from one client, or even one session, to the next.
Once you've coded a REST resource, the same input always yields the
same output (idempotent method or not), with certain exceptions like
503 errors. I've always considered resources to be Mealy Machines, or
at least such simplicity is my goal when designing resources.
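The resource-as-Mealy-machine reading can be sketched with an invented state table, where the output (a status code) is a function of the current resource state and the input (the request method):

```python
# A resource sketched as a Mealy machine: output depends on both the
# current state and the input. The states, transitions, and status
# codes are invented for illustration.

TRANSITIONS = {
    # (state, method) -> (next_state, status)
    ("draft",     "PUT"):    ("draft",     200),  # replace the draft
    ("draft",     "POST"):   ("submitted", 201),  # submit for review
    ("submitted", "DELETE"): ("withdrawn", 204),  # withdraw submission
}

class Resource:
    def __init__(self, state="draft"):
        self.state = state

    def handle(self, method, payload=None):
        """Same input in the same state always yields the same output."""
        try:
            self.state, status = TRANSITIONS[(self.state, method)]
        except KeyError:
            return 405  # method not allowed in this state
        return status

r = Resource()
codes = [r.handle("PUT"), r.handle("POST"), r.handle("PUT")]
```

The third request fails because PUT is not a legal input in the "submitted" state, which is the kind of simple, predictable input/output mapping described above.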
=================
"Beware deviating from the path of The Architectural Constraints!"
Disagree. You're suggesting there's some ideal ("The") set of
constraints (style). This POV flirts with purism, suggesting REST as
Maslow's hammer. The only thing to beware of, is Roy's wrath if you
apply the term REST to some other set of constraints... There's a
limited REST mismatch in my system where I've ignored a constraint and
don't intend to fix it. It's an optimization, so I don't consider it
some other style -- just REST with a small, identified mismatch. All
HTTP systems have REST mismatches, anyway.
REST may also be extended; see ARRESTED and CREST. Roy's thesis should
really be taken as a methodology for developing distributed software
architectures, with a tutorial (chapter 5). REST isn't a religion,
just a named set of constraints describing idealized behavior. Design
by constraint *is* a religion, one I'm happy to preach. There is no
best set of constraints, only the set of constraints best applied to the
system being designed -- this is the path from which one must not
deviate.
Constraints are derived from the observable characteristics (pointed
arches, Internet scale) of existing systems. The natural scientist in
me loves it that constraints are like theories -- explanations of nature
(I consider the Internet a living, albeit non-sentient, phenomenon of
nature) derived through observation and experimentation (market research
of consumer preference qualifies, and results in the bell-bottom pantleg
constraint coming in and going out of style). The creative artist in me
loves it that constraints may be applied in myriad fashions evocative
of the same style.
Some gothic cathedrals merely *looked* like gothic cathedrals, but have
failed to stand the test of time; while others still stand, with only
intermittent (and mainly non-structural) restoration. Incorrect
curvature of the points of the arches, or flying buttresses insufficient
to the height of the walls, or foundations not dug deeply enough -- all
such deviation from these constraints of masonry led to the long-term
failure of some very pretty buildings, whereas other very pretty
buildings meeting these constraints remain in productive service.
Some romanesque cathedrals also still stand. That architectural style
imposed limits on the scalability of cathedral structures, so it was
*extended* by the gothic style. If the design goals of a cathedral
cannot be distinguished from those of the romanesque style, then the
choice of romanesque vs. gothic architecture is one of taste. If those
design goals exceed the scalability of romanesque architecture, the
choices include extending or innovating within some existing named set
of constraints, or devising a new set of constraints and bestowing a
name upon the resulting style.
-Eric
"Ray" wrote: > > Sorry. I thought you were on board the "resource type & media type > crunched into one field" approach. > When I talk about resource type, I'm talking about an abstraction which manifests itself in various ways in my system (hypertext, configuration files), but never in a header or any other field. I have over a dozen resource types on the system I'm developing, all of which negotiate between the same handful of ubiquitous media types. Media types identify generic processing models for families of forwards-backwards compatible data types. The application logic is in the interpretation of the payloads, after following the designated processing models. The application code has knowledge of the resource type, the user-agent requires none, i.e. this is not a concern at the protocol layer. If I were negotiating a dozen resource types over separate handfuls of custom media types, I'd have complexity and interoperability issues. I also have resource types (e.g. images, scripts) which are bound to a single media type. Currently there are 11 ubiquitous media types on my system, one opaque string masquerading as a media type, and application/ x-www-form-urlencoded. -Eric
One way to do this could be:
The /players/types is discovered from somewhere.
GET /players/types
One possible response could then be:
300 Multiple Choices
Content-Type: application/xhtml+xml
<ul>
<li><a href="/players/types/one" rel="http://rels.example.com/player-create"></a></li>
...
</ul>
This resource could then give an HTML form which contains the type
pre-filled, and the rest of the information ready to be added.
You could look at this as a player template.
The form's action is then POSTed back to some resource, e.g. /players,
which then accepts the application/x-www-form-urlencoded request.
The response could then be:
201 Created
Location: http://example.com/players/1
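The whole flow can be sketched as a runnable toy. The URIs and form
fields are the hypothetical ones from the example above, modeled
in-memory rather than over HTTP:

```python
# Discover the type list, fetch a pre-filled template, then POST the
# form back to /players and receive 201 Created with a Location header.

TYPES = {"/players/types/one": {"type": "one"}}
players = {}

def get(uri):
    if uri == "/players/types":
        # 300 Multiple Choices: a list of links to per-type templates
        return 300, list(TYPES)
    if uri in TYPES:
        # A "player template": the type pre-filled, name left to the client
        return 200, dict(TYPES[uri], name="")
    raise KeyError(uri)

def post(uri, form):
    # /players accepts the urlencoded form and mints a new player URI
    assert uri == "/players"
    new_uri = "/players/%d" % (len(players) + 1)
    players[new_uri] = form
    return 201, {"Location": "http://example.com" + new_uri}

status, links = get("/players/types")
status, template = get(links[0])
template["name"] = "Alice"
status, headers = post("/players", template)
assert status == 201 and headers["Location"].endswith("/players/1")
```

Note the client never constructs a player URI itself; it only follows
the links and forms it was given.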
--
Erlend
On Wed, Nov 3, 2010 at 10:44 PM, skillzero <skillzero@...> wrote:
>
>
> I have a resource /players where you can POST to create a new player
> /players/{id}, but I also want a way to report the supported types of
> players. I'm struggling with the correct, RESTful way to do that. Here's the
> ways I've thought of and they all seem to have problems:
>
> 1. /players/types. This seems the closest to other things I've seen out
> there (e.g. atom categories), but it looks weird to me because I expect
> /players/{id}, but "types" isn't really an id.
>
> 2. /players/{id}/types. This seems okay if you already have an id, but
> since players are created dynamically, the client won't know the id
> beforehand. And you need to know the type of player to create one.
> Additionally, the types supported by a specific player instance may mean
> something else (e.g. it could mean the types it's compatible with,
> rather than the types you can create).
>
> 3. /player-types. This seems a little ugly to me, but maybe it's fine?
>
> Is there a good way to do this?
>
>
>
+1

On Wed, Nov 3, 2010 at 6:10 PM, Eric J. Bowman <eric@...> wrote:
>
> "Ray" wrote:
> >
> > ...or better to ignore the existence of the term HATEOAS?
> >
>
> +1
>
> -Eric
Hello

--- In rest-discuss@yahoogroups.com, "Eric J. Bowman" <eric@...> wrote:
>
> REST isn't about what your URLs look like, that's a
> matter of preference, not constraint.

I totally agree with Eric on this. All the confusion comes precisely
from the URL templating approach, which is fine but which I do not
favor much.
Do not think about URIs. They can even be consecutive numbers, who
cares. Focus on what you are requesting: the resources. You have, it
seems, several resources there.
1. One that is a "container" of players. You may not need that one, as
the players may be grouped by a field or something. For now, you have
it.
2. The player itself.
3. A player's type list, which contains the player types. I guess
those are just values for a field. It may even be something more
complex, like a format (e.g. each player type may have a different
format).
So, as you have three resources, you will need three different URIs.
Those may be ugly or not, who cares; what is important is to have
access to all of those from a starting point.
Following the suggestion Eric provides will not only eliminate the type
list from being a resource, but also simplify the interaction, as you
won't need to request the type first in order to create the player.
Cheers.
"William Martinez Pomares" wrote: > > Following the suggestion Eric provides, will not only eliminate the > type list from being a resource, but also simplify the interaction, > as you won't need to request the type first in order to create the > player. > Thanks. The nature of the hypertext control is what establishes the relationship, i.e. whether a player can be created with multiple types, or what combinations of types aren't allowed. This is what's meant by a self-documenting API. -Eric
Erik Mogensen wrote:
>
> > When I talk about resource type, I'm talking about an abstraction
> > which manifests itself in various ways in my system (hypertext,
> > configuration files), but never in a header or any other field. I
> > have over a dozen resource types on the system I'm developing, all
> > of which negotiate between the same handful of ubiquitous media
> > types. Media types identify generic processing models for families
> > of forwards-backwards compatible data types.
>
> You have a dozen or so resource types (e.g. person, order or
> whatever; I don't know your domain) and they all typically have an
> XHTML representation, and perhaps a url encoded form representation
> (and some of the other 11 ubiquitous media types) of some sort.
>
> Is that a correct assumption?
>
Close. The negotiated representation may be application/xhtml+xml,
application/xml, text/html, text/xml, text/plain or
application/atom+xml (sometimes ; type=feed). My resource types are
related to the integration of wiki, weblog and forum; all of which are
represented as either Atom Entry or Atom Feed documents. Atom (etc.)
is encapsulated by HTML to provide a hypertext user interface. The
HTML user interface is negotiated based on client capability (read on),
and it's this user interface which communicates the difference in
resource types to the user (read on).
>
> If it is, then I assume that your resources work perfectly in Firefox
> or any other generic user agent, and can be cached and transmogrified
> using Google's mobile proxy and so on. Nice.
>
Almost. I'm using client-side XSLT, initiated via XML PI. Most
user-agents grok application/xhtml+xml, but IE needs application/xml
while Atom-only clients are redirected to application/atom+xml. Other
user-agents need a text/html representation, generated server-side;
text/xml and text/plain are really just there to demonstrate how
certain browsers fail to respect sender intent.
In a perfect world, I wouldn't need conneg, if all user-agents (bots,
browsers etc.) would only agree to support XSLT via XML PI using
application/xhtml+xml, because that media type best describes sender
intent (so does text/html for my server-side XSLT, but in my case
application/xml and text/xml do not because current browsers don't
respect that intent, and text/plain only does for user-agents which
respect my display-rather-than-render intent). IOW, my resources
*should* work perfectly as you say, without conneg. The fact that
conneg exists, is what allows interoperability of new approaches, i.e.
browser-resident XSLT, and compensates for the fact that in the real
world not every participant in the communication is well-behaved.
Minting new data/media types for every resource type means your
resources fail hard, instead of degrading gracefully. What conneg does
for me, is enable graceful degradation and forwards compatibility which
is only possible when resource type is decoupled from media type.
>
> > The application logic is in the interpretation of the payloads,
> > after following the designated processing models. The application
> > code has knowledge of the resource type, the user-agent requires
> > none, i.e. this is not a concern at the protocol layer.
>
> Here's where I'm not 100% clear on your terminology. Normally, I'd
> interpret "application logic" as the server component, but here I'm
> inclined to understand it as a rich client component, since it does
> most of the "interpretation of the payloads".
>
> If I'm right to interpret "application logic" as a client connector,
>
In REST, the application logic is what executes in client components.
Typically, application logic also resides in the server code generating
the code executing in client components, however, I've used client-side
XSLT to place it entirely within the browser. So my terminology was
confusing, as I was thinking in terms of my system.
What I meant was, the API is in the interpretation of payloads. This is the notion of a self-documenting API, which my HTML served as text/plain should never be, but which IE transforms anyway, by treating it as application/xml, which isn't my intent either; or what IE does, which ultimately amounts to treating it as text/html, which is a privilege escalation. What media type to send shouldn't be considered part of the application logic; it's system logic on the origin server. My application logic is expressed by the combination of HTML, XSLT, Javascript, CSS, images, Atom Entry/Feed/Category, and XBEL documents which together comprise the steady-states which users interact with; processed according to the media types and encodings I've assigned to them per-request on the basis of capabilities and realities. Resource type is key to the rendering and interpretation of the API, yet bears no relation to the processing rules used to decode that API into, say, a DOM. > > then that more specific client (e.g. yours) would recognize the > "person HTML" as a person resource type and (perhaps?) present that > differently... Or...? You mention that the application logic knows > about resource types, so I'm curious to understand how it recognizes > them. > The system I'm working on uses Atom Feeds to represent many resource types -- one is wiki page, another is weblog archive. The HTML interface of a wiki page is different than that of a weblog archive. Neither the application logic nor the user-agent cares about this distinction -- the interface is uniform -- they just need to call the proper transformation. So the distinction between resource types on my system, manifests itself in the @href of the xml-stylesheet PI. It's incidental, and not a thing to standardize as separate media types or present as custom link relations. 
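For illustration, the distinction Eric describes would live in the
@href of the standard xml-stylesheet processing instruction at the top
of each document; the stylesheet paths here are hypothetical, not his
actual layout:

```xml
<!-- A wiki-page resource (hypothetical path) -->
<?xml-stylesheet type="text/xsl" href="/transforms/wiki-page.xsl"?>

<!-- A weblog-archive resource (hypothetical path) -->
<?xml-stylesheet type="text/xsl" href="/transforms/weblog-archive.xsl"?>
```

Both documents remain plain Atom Feeds to any connector; only the
transformation they point at differs.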
The user sees either what looks and functions like a wiki page, or what looks and functions like a weblog archive (regardless of whether by user I mean human or machine). The users know what their goals are, the hypertext explains how to accomplish those goals; user-agents provide a mechanism which translates these goals into actions. This mechanism (hypertext over HTTP) simply doesn't need to care about resource types. This decoupling is what allows my representations to be negotiated between media types. If the mechanism needs to know about resource type, it's coupling client to server. Who cares about resource type, are resource owners and end users. "Wiki page" and "weblog archive" are abstract notions the resource owner needs to communicate to the end user. This understanding has no bearing on the semantics of the messaging between connectors, where everything is an Atom Feed or Entry, and roughly follows Atom Protocol. The application logic, client-side or server-side, makes sure that the right markup goes to the right place, without caring about resource type -- media types aren't contracts, just a shared understanding of common processing models. > > And lastly, just a word of praise :-) I greatly appreciate your > efforts in this list, and although I've been learning REST for 7 > years, I still think there's more to it: > > http://stackoverflow.com/questions/3543075/what-is-a-concise-way-of-understanding-restful-and-its-implications/3543326#3543326 > Or, a lot less to it. Thanks, btw, and sorry you're having difficulties with rest-discuss (happens). The most concise explanation of REST I can give, uses OOP terminology. Resources are objects with IDs. Each object may have one or more methods from a set, which remains uniform from one object to the next, and from one system to the next. If you need more methods, REST says try using more objects, first. Messaging between objects is HTTP. 
Properties vary depending on the nature of the request (method, selection headers, cache-control), organized by response code. For any response code, each object may have one or more associated data types from a standardized set, the processing of which is determined by a registered set of media types; both of which remain uniform from one object to the next, and from one system to the next. If you need more data/media types, REST says try less coupling, first. The API is described using standardized hypermedia data types and link relations. Hyperlinks are used within properties, to reference other objects for encapsulation, inheritance or extension. Familiar approaches to networked software development (DCOM, CORBA, SOA, many unRESTful HTTP APIs, etc.) attempt to distribute the object over the wire, ignoring the constraints imposed by the network (see REST 2.1, 2.3.1). REST deviates radically from these unproven approaches, by providing a uniform object interface which compensates for the reality of these network constraints by becoming part of the network itself. You aren't distributing your objects, you're attaching them as nodes on a network-based, distributed object messaging bus. Instead of the limited scale of ESB, think GSB -- Global Service Bus, or distributed ESB. You're extending your object *interface* across organizational boundaries, in a proven, standardized fashion described as a set of design constraints named REST (or CREST, or ARRESTED, or whatever set of constraints is appropriate to both the characteristics of the global Internet and the needs of your system). If your object interfaces are application-specific, you're distributing your objects instead of distributing your uniform object interfaces, even with HTTP + URI. -Eric
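The OOP analogy might be sketched like so; the class and the /wiki URIs
are illustrative assumptions of mine, not Eric's code:

```python
# Every resource is an object whose ID is a URI, with methods drawn
# from one uniform set shared by all objects on all systems.

class Resource:
    def __init__(self, uri, representation=None):
        self.uri = uri                      # the object's ID
        self.representation = representation

    # The method set is uniform across all objects.
    def GET(self):
        return self.representation

    def PUT(self, representation):
        self.representation = representation

# Instead of inventing a resource-specific "rename" method, REST says
# try another object first: expose the title as its own resource.
page = Resource("/wiki/Home", "<p>hello</p>")
title = Resource("/wiki/Home/title", "Home")  # hypothetical sub-resource
title.PUT("Front Page")
assert page.GET() == "<p>hello</p>"
assert title.GET() == "Front Page"
```

The point of the sketch is that the interface never grows per-object;
only the set of objects does.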
On Thu, Nov 4, 2010 at 6:13 AM, Eric J. Bowman <eric@...>wrote: > > > When I talk about resource type, I'm talking about an abstraction which > manifests itself in various ways in my system (hypertext, configuration > files), but never in a header or any other field. I have over a dozen > resource types on the system I'm developing, all of which negotiate > between the same handful of ubiquitous media types. Media types > identify generic processing models for families of forwards-backwards > compatible data types. > You have a dozen or so resource types (e.g. person, order or whatever; I don't know your domain) and they all typically have an XHTML representation, and perhaps a url encoded form representation (and some of the other 11 ubiquitous media types) of some sort. Is that a correct assumption? If it is, then I assume that your resources work perfectly in Firefox or any other generic user agent, and can be cached and transmogrified using Google's mobile proxy and so on. Nice. > The application logic is in the interpretation of the payloads, after > following the designated processing models. The application code has > knowledge of the resource type, the user-agent requires none, i.e. this > is not a concern at the protocol layer. > > Here's where I'm not 100% clear on your terminology. Normally, I'd interpret "application logic" as the server component, but here I'm inclined to understand it as a rich client component, since it does most of the "interpretation of the payloads". If I'm right to interpret "application logic" as a client connector, then that more specific client (e.g. yours) would recognize the "person HTML" as a person resource type and (perhaps?) present that differently... Or...? You mention that the application logic knows about resource types, so I'm curious to understand how it recognizes them. 
And lastly, just a word of praise :-) I greatly appreciate your efforts in this list, and although I've been learning REST for 7 years, I still think there's more to it: http://stackoverflow.com/questions/3543075/what-is-a-concise-way-of-understanding-restful-and-its-implications/3543326#3543326
As you all know, most Web 2.0 pages are quite dynamic in nature. After a skeleton webpage is loaded, AJAX kicks in to get customized info and they start injecting live data into the page asynchronously and dynamically. Hopefully, the AJAX calls are talking to RESTful WS. Question: In a RESTful system, who/what will be serving those initial base skeleton webpages (which may include HTML, JS scripts, locations to grab CSS, etc)? Is the same JAXB-RS server also serving these skeleton webpages? If not, would it be wise to use Apache to serve these webpages while the WS themselves are serviced by RESTful WS?
So, I have read Richardson and Ruby's "RESTful Web Services". The book
is great for the foundational rationale of REST. But, when it comes to
implementations, there was RESTlet in the Java world. I looked around
and there are these:
1) RESTlet
2) Apache CXF
3) Oracle/Sun Glassfish (Jersey)
So, here are the questions:
a) Are there others that are worth looking at?
b) If you have a brand new project, which one would you use and why?
My previous message has a typo. It should be JAX-RS (not JAXB-RS).
Sorry about that.
Thanks,
Art
Art:
Check out this wiki:
http://code.google.com/p/implementing-rest/wiki/ByLanguage
It lists some known frameworks/libraries that claim to support RESTful
development. It may not be complete, but it might point you in some
new directions.
mca
http://amundsen.com/blog/
http://twitter.com/mamund
http://mamund.com/foaf.rdf#me
#RESTFest 2010 http://rest-fest.googlecode.com

On Sun, Nov 7, 2010 at 18:44, photogspassion <artyyeo@...> wrote:
> My previous message has a typo, It should be JAX-RS (not JAXB-RS). Sorry about that.
> Thanks,
> Art
Hello photogspassion.
Although this is a this-or-that question, I think, as always happens to
me, I would need to pin down a couple of concepts.
1. A web service is business functionality that is exposed through a
uniform interface.
2. A RESTful WS is a web service that is implemented following the
REST constraints, aiming (I hope) to get the REST benefits (not all WS
apply).
That said, I feel your question assumes a RESTful WS is an artifact,
like a server or middleware, that uses HTTP (JAX-RS perhaps?). Or even
worse, that a web service may be some method called by HTTP means.
Well, no, it is not. It is something far more complex.
The skeleton you mention is actually a set of resources, not something
outside REST but part of the system. Apache is no more than the name
of a web server that is a server in the REST sense. If getting a web
page (which is a resource's representation with links to other
resources) is the first step for a WS, then it is part of that WS. In
other words, a WS is that sequence of steps, operations and flow that
uses all HTTP operations on servers and resources to achieve the goal
you are seeking. Under that abstraction, the question of whether the
resource is served by Apache or by a JAX-RS implementation makes no
sense. See?
Hope this clarifies a little.
Regards.
William Martinez Pomares

--- In rest-discuss@yahoogroups.com, "photogspassion" <artyyeo@...> wrote:
>
> As you all know, most Web 2.0 pages are quite dynamic in nature. After a skeleton webpage is loaded, AJAX kicks in to get customized info and they start injecting live data into the page asynchronously and dynamically. Hopefully, the AJAX calls are talking to RESTful WS.
>
> Question: In a RESTful system, who/what will be serving those initial base skeleton webpages (which may include HTML, JS scripts, locations to grab CSS, etc)? Is the same JAXB-RS server also serving these skeleton webpages?
> If not, would it be wise to use Apache to serve these webpages while
> the WS themselves are serviced by RESTful WS?
Hello again.
There are other names: Restfulie works for Ruby and Java, also RESTEasy
from JBoss, and of course Axis2 has a REST mode. Using one or the
other depends on what you need and what you are using. None is
perfect, by the way, but there is a lot of effort put into each one.
My advice: you need to try them first.
William Martinez Pomares.

--- In rest-discuss@yahoogroups.com, "photogspassion" <artyyeo@...> wrote:
>
> So, I have read Richardson and Ruby's "RESTful Web Services". The book is great for the foundational rationale of REST. But, when it comes to implementations, there was RESTlet in the Java world.
>
> I looked around and there are these:
> 1) RESTlet
> 2) Apache CXF
> 3) Oracle/Sun Glassfish (Jersey)
>
> So, here are the questions:
> a) Are there others that are worth looking at?
> b) If you have a brand new project, which one would you use and why?
"photogspassion" wrote: > > b) If you have a brand new project, which one would you use and why? > None! If you're heavily invested in a legacy Java system, then I'm sure such frameworks have their uses. If you're in a position *not* to use JAX-RS, then by all means, don't! My reaction to JAX-RS is, "What the hell are you folks even *talking* about?!?" Surely not REST. -Eric
On Mon, Nov 8, 2010 at 2:41 PM, Eric J. Bowman <eric@...> wrote: > "photogspassion" wrote: >> >> b) If you have a brand new project, which one would you use and why? >> > > None! If you're heavily invested in a legacy Java system, then I'm > sure such frameworks have their uses. If you're in a position *not* to > use JAX-RS, then by all means, don't! My reaction to JAX-RS is, "What > the hell are you folks even *talking* about?!?" Surely not REST. Re: JAX-RS, I'll reluctantly bite, why do you have that reaction? --tim
On Mon, Nov 8, 2010 at 7:41 PM, Eric J. Bowman <eric@...> wrote: > "photogspassion" wrote: >> >> b) If you have a brand new project, which one would you use and why? >> > > None! If you're heavily invested in a legacy Java system, then I'm > sure such frameworks have their uses. If you're in a position *not* to > use JAX-RS, then by all means, don't! My reaction to JAX-RS is, "What > the hell are you folks even *talking* about?!?" Surely not REST. > > -Eric > Hmm.. you could at least give him a decent HTTP lib to work from. Although I suppose you write that from scratch as well, right? For me, any 'framework' that provides you with a layered model for piping your server logic together is a win. Ruby: Rack Python: WSGI Javascript: jsgi If you pick one of the above specs to write against you can easily reuse components across applications (e.g. rack-cache http://rtomayko.github.com/rack-cache/). Cheers, Mike
Mike Kelly wrote: > > Hmm.. you could at least give him a decent HTTP lib to work from. > Although I suppose you write that from scratch as well, right? > An awful lot of what folks are trying to do with REST can be modeled as HTML applications on a shared hosting account. Most of the REST literature leaves these folks with the impression that REST is so complex that it can't be done without benefit of a framework, but this is not the case, and often leads to the simplest of concepts becoming complicated beyond reason (not to mention missing the point of REST). -Eric
Tim Williams wrote: > > > > > None! If you're heavily invested in a legacy Java system, then I'm > > sure such frameworks have their uses. If you're in a position > > *not* to use JAX-RS, then by all means, don't! My reaction to > > JAX-RS is, "What the hell are you folks even *talking* about?!?" > > Surely not REST. > > Re: JAX-RS, I'll reluctantly bite, why do you have that reaction? > See REST 2.3.3 and 2.3.4, then point me to an actual RESTful service built around JAX-RS which exhibits any of these characteristics. For example, configurability seems to be a matter of re-coding the system. I can't follow any JAX-RS discussions, as they use terminology that I'm unfamiliar with (injection, marshalling, controller, persistence, repository and more) in the REST context, while studiously avoiding any use of terms like "representation" or "hypertext constraint". Simply too much complexity for my tastes, I understand REST just fine without all the enterprisey lingo (or Java). -Eric
"photogspassion" wrote: > > What would you recommend? > I can't recommend any approach without knowing anything about your system. The last REST system I developed for a client doesn't use any server-side coding (i.e. PHP, Ruby) for servicing GET requests, only .htaccess, and runs on any run-of-the-mill shared-hosting account based on Apache < 2, with a little PHP to handle POST. I recommend KISS. -Eric
On Mon, Nov 8, 2010 at 3:45 PM, Eric J. Bowman <eric@...> wrote: > Tim Williams wrote: >> >> > >> > None! If you're heavily invested in a legacy Java system, then I'm >> > sure such frameworks have their uses. If you're in a position >> > *not* to use JAX-RS, then by all means, don't! My reaction to >> > JAX-RS is, "What the hell are you folks even *talking* about?!?" >> > Surely not REST. >> >> Re: JAX-RS, I'll reluctantly bite, why do you have that reaction? >> > > See REST 2.3.3 and 2.3.4, then point me to an actual RESTful service > built around JAX-RS which exhibits any of these characteristics. For > example, configurability seems to be a matter of re-coding the system. > I can't follow any JAX-RS discussions, as they use terminology that I'm > unfamiliar with (injection, marshalling, controller, persistence, > repository and more) in the REST context, while studiously avoiding any > use of terms like "representation" or "hypertext constraint". Simply > too much complexity for my tastes, I understand REST just fine without > all the enterprisey lingo (or Java). I don't know, it seems like you're comparing apples and oranges here. You can use JAX-RS as a tool to build an origin server that is both simple and modifiable - though, I think in the context of the dissertation, those properties are guaranteed for the *whole* distributed hypermedia system, not necessarily individual components within it. In any case, nothing will be perceived as simple if you don't understand the terminology. In this case, the terms you mention are fairly elementary computer science and design patterns. Regardless to JAX-RS, a tool, you may find a basic understanding of those terms useful. To apples and oranges, you may compare the apache daemon architecture itself with JAX-RS - I doubt it'd be any more comfortable. Take a look at the daemon's Streams[1], for example, compared to JAX-RS. --tim [1] - http://journal.paul.querna.org/articles/2010/10/06/transforming-streams/
Tim Williams wrote: > > In any case, nothing will be perceived as simple if you don't > understand the terminology. In this case, the terms you mention are > fairly elementary computer science and design patterns. Regardless to > JAX-RS, a tool, you may find a basic understanding of those terms > useful. > I understand the terminology just fine in the general sense. Understanding REST doesn't require understanding those terms in general, or in how they're applied to REST, and I don't see how such knowledge doesn't just get in the way of anyone trying to learn REST. Setting my colorful approach to posting aside, the advice I gave the OP shouldn't be controversial: instead of starting by choosing the right JAX-RS framework, question whether you need JAX-RS or a framework at all. -Eric
Tim Williams wrote: > > > > > See REST 2.3.3 and 2.3.4, then point me to an actual RESTful service > > built around JAX-RS which exhibits any of these characteristics. > > For example, configurability seems to be a matter of re-coding the > > system. I can't follow any JAX-RS discussions, as they use > > terminology that I'm unfamiliar with (injection, marshalling, > > controller, persistence, repository and more) in the REST context, > > while studiously avoiding any use of terms like "representation" or > > "hypertext constraint". Simply too much complexity for my tastes, > > I understand REST just fine without all the enterprisey lingo (or > > Java). > > > > I don't know, it seems like you're comparing apples and oranges here. > You can use JAX-RS as a tool to build an origin server that is both > simple and modifiable - though, I think in the context of the > dissertation, those properties are guaranteed for the *whole* > distributed hypermedia system, not necessarily individual components > within it. > OK, let's run with that. Most -- not all, but most -- frameworks are locked into the IDL approach (if not IDLs themselves). Resource A allows PUT, therefore the instructions on how to PUT to A must somehow be a product of A. But the hypertext constraint allows resource B to describe how to PUT to A, without requiring A to have anything to do with that description, or requiring B to have anything to do with A beyond containing A's URI as its target. Resource B can easily be changed to PUT to C, merely by reconfiguring the target URI. Or, the interface to A may be reconfigured by changing B (assuming A supports the new configuration, of course). Frameworks (with apologies to those I don't mean to lump into my generalizations) typically insist that A's interface must be changed by re-coding A. 
Frameworks are great if you're following this IDL pattern, but
typically throw a royal fit when you try getting them to follow the
hypertext constraint -- the free-form design approach the hypertext
constraint allows is a very difficult thing to model generically. This
is what makes the system, as a whole, not very configurable when using
frameworks. I ought to be able to reconfigure a system just by
changing some hyperlinks, not re-thinking the marshalling code or
changing an IDL (or anything else associated with the
distributed-object paradigm, as opposed to REST's distributed object
*interface* paradigm).
My thinking on this results from my investigation into creating an
in-house REST framework -- and deciding against it, in favor of
creating a configuration framework for REST systems. REST 3.2.1:
"One aspect of PF styles that is rarely mentioned is that there is an
implied 'invisible hand' that arranges the configuration of filters in
order to establish the overall application. A network of filters is
typically arranged just prior to each activation, allowing the
application to specify the configuration of filter components based on
the task at hand and the nature of the data streams (configurability).
This controller function is considered a separate operational phase of
the system, and hence a separate architecture, even though one cannot
exist without the other."
The desirable property I'm after is the inherent configurability of
pipe-and-filter architectures. The purpose of a framework is to allow
resources to be stitched together into a coherent whole. Frameworks
following the IDL paradigm miss the mark on configurability, by being
inflexible once the system is stitched together. RESTful IDL-based
systems are possible, I'm just of the opinion that the results are
rigid.
I see the appeal of the IDL approach to framework development (even where IDLs aren't used), but I think this necessarily results in a very limited view of REST unsuitable for my style of development. These are my opinions, YMMV. -Eric
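Eric's point above, that resource B can describe how to PUT to A without A knowing anything about the description, can be sketched as a hypermedia control. This is an illustrative fragment, not from the thread: the URI is invented, and since HTML forms only support GET and POST, method="put" here stands in for a media type whose controls can express PUT.

```html
<!-- Resource B: a hypermedia control describing how to update A.
     A knows nothing about this description; retargeting the update
     to resource C means editing only the action URI below. -->
<form action="http://example.org/a" method="put"
      enctype="application/x-www-form-urlencoded">
  <input type="text" name="title"/>
  <input type="submit" value="Update"/>
</form>
```

Reconfiguring the system is then a matter of changing hyperlinks in B; neither A nor C is ever re-coded.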
Eric. "Eric J. Bowman" wrote: > If you're in a position *not* to > use JAX-RS, then by all means, don't! My reaction to JAX-RS is, "What > the hell are you folks even *talking* about?!?" Surely not REST. I didn't want to write it so crudely, but I guess I do share some of your position against JAX-RS. You see, there is an evaluation of frameworks and even a talk I proposed (not accepted yet) on REST frameworks. Obviously, to compare we need a set of criteria, like abstraction level, domain matching, paradigm matching, etc. JAX-RS in particular seems like a thin wrapper over servlets, nothing more than a set of tools to translate HTTP requests into the Java realm (it has a couple of nice things, granted), but not of much help to build truly RESTful systems. And as you mention, there is a problem with the domain matching, as you have to learn concepts that are not in the REST domain. Anyway, no one will learn to swim without getting wet, so my advice is to try first. And not all frameworks are JAX-RS. William
Tim Williams wrote: > > In any case, nothing will be perceived as simple if you don't > understand the terminology. In this case, the terms you mention are > fairly elementary computer science and design patterns. Regardless of > JAX-RS, a tool, you may find a basic understanding of those terms > useful. > To clarify, what does marshalling have to do with REST? I don't see how the term applies, because in the general sense it's about object serialization and transfer. But, I don't distribute my objects, only their interfaces; thus any discussion of marshalling in REST will be seen by me as a non sequitur. -Eric
"William Martinez Pomares" wrote: > > > "What the hell are you folks even *talking* about?!?" > > > I didn't want to write it so crude... > I gotta be free to be me. :-) Thanks for the support. -Eric
On Mon, Nov 8, 2010 at 6:32 PM, Eric J. Bowman <eric@...> wrote: > Tim Williams wrote: >> >> In any case, nothing will be perceived as simple if you don't >> understand the terminology. In this case, the terms you mention are >> fairly elementary computer science and design patterns. Regardless of >> JAX-RS, a tool, you may find a basic understanding of those terms >> useful. >> > > To clarify, what does marshalling have to do with REST? I don't see > how the term applies, because in the general sense it's about object > serialization and transfer. But, I don't distribute my objects, only > their interfaces; thus any discussion of marshalling in REST will be > seen by me as a non sequitur. I think your confusion stems from your conflating terminology of the style with terminology of the implementation. Marshalling is implementation terminology. Manipulation of resources through representations is terminology of the style. --tim
"William Martinez Pomares" wrote: > > You see, there is an evaluation of frameworks and even a talk I > proposed (not accepted yet) on REST frameworks. Obviously, to compare > we need a set of criteria, like abstraction level, domain matching, > paradigm matching, etc. > Yup. Actually, I'd be a lot happier if, as a class, these were called "HTTP Frameworks" because otherwise, there's an implication that REST is achieved by using the framework. Without implying anything about any framework, RESTlet does indeed directly map REST terminology into configuration. But in reality, a RESTful outcome is the least likely for beginners, regardless of framework. You just can't abstract away having to learn REST; once you have learned, you'll find most frameworks working against you unless their assumptions are congruous with your own. I'd better shut up now, lest I piss off a bunch of people I have a great deal of respect for, who are involved in such work. Instead of comparison criteria, I'd suggest devising two or three REST systems (doesn't have to be anything complicated) as reference, basing your comparison on the implementation experience of each reference across each target framework. I'd also suggest the subjective results be organized in terms of REST 2.3.4 and 2.3.6, with objective results in terms of 2.3.1.1. I'd really be interested in such results, particularly an everything- else-being-equal performance analysis. The test methodology could be duplicated, and anyone else could run the tests on the same frameworks in the future, or newly-introduced frameworks, or as QA if developing a framework. It'd be more work for me to implement the reference systems in my not-a-REST-framework, but I'm sure I'd gain valuable insight by comparing against the subjective experiences of others implementing the same thing on different frameworks. -Eric
Tim Williams wrote: > > > To clarify, what does marshalling have to do with REST? I don't see > > how the term applies, because in the general sense it's about object > > serialization and transfer. But, I don't distribute my objects, > > only their interfaces; thus any discussion of marshalling in REST > > will be seen by me as a non sequitur. > > I think your confusion stems from your conflating terminology of the > style with terminology of the implementation. Marshalling is > implementation terminology. Manipulation of resources through > representations is terminology of the style. > You're conflating my WTF with confusion. I understand just fine, I just don't see the relevance to REST; IOW, I'm not the one who's confused. If your implementation serializes objects for transfer over HTTP, which is exactly what the term "marshalling" refers to in the JAX-RS context, then you're making the same mistake as WS-*/SOAP. That implementation terminology describes a REST anti-pattern, as opposed to the hypertext constraint. (Granted, it's possible to prove me wrong here, but such an explanation won't be reflective of typical systems, based on object serialization and transfer instead of the hypertext constraint.) -Eric
On Mon, Nov 8, 2010 at 4:38 PM, Eric J. Bowman <eric@...> wrote: > If your implementation serializes objects for transfer over HTTP, which > is exactly what the term "marshalling" refers to in the JAX-RS context, > then you're making the same mistake as WS-*/SOAP. That implementation > terminology describes a REST anti-pattern, as opposed to the hypertext > constraint. (Granted, it's possible to prove me wrong here, but such > an explanation won't be reflective of typical systems, based on object > serialization and transfer instead of the hypertext constraint.) > Marshalling is the conversion of the byte stream into the internal structures used (in this case) by the Java program. Not everyone wants to simply work with "arrays of bytes" and would rather work with higher-level data structures. To be effective with the outside world, one needs to be able to convert to and from these data structures and arrays of bytes. Whether they're converting their data structure to HTML, XHTML, ATOM, or some purpose-built representation, and what those representations contain, is not germane to the discussion. One of the benefits of the framework is to make the marshalling process transparent to the overall application. Marshalling is neither an empowering REST process, nor is it a REST killer. It's simply a data conversion process. Regards, Will Hartung (willh@...)
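Will's definition of marshalling as plain data conversion can be made concrete with a short sketch (in Python rather than Java, purely for brevity; the function names are invented). It shows marshalling as nothing more than a two-way conversion between wire bytes and an in-memory structure, with no REST implications either way:

```python
# Illustrative sketch: marshalling as plain data conversion between
# wire bytes and an in-memory structure. Nothing REST-specific here.
import json

def unmarshal(body: bytes) -> dict:
    """Wire bytes -> internal data structure."""
    return json.loads(body.decode("utf-8"))

def marshal(data: dict) -> bytes:
    """Internal data structure -> wire bytes."""
    return json.dumps(data).encode("utf-8")

incoming = b'{"title": "REST in Practice", "pages": 448}'
book = unmarshal(incoming)   # work with a structure, not raw bytes
book["pages"] = 450
outgoing = marshal(book)
```

Whether `outgoing` is a serialized object or proper hypertext is decided by what the structure contains, not by the conversion itself, which is Will's point.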
> > (Granted, it's possible to prove me wrong here, but such an > explanation won't be reflective of typical systems, based on object > serialization and transfer instead of the hypertext constraint.) > Meaning, the user goal may actually be to view a serialized object, but this would seem to be an atypical edge case. -Eric
On Mon, Nov 8, 2010 at 7:38 PM, Eric J. Bowman <eric@...> wrote: > Tim Williams wrote: >> >> > To clarify, what does marshalling have to do with REST? I don't see >> > how the term applies, because in the general sense it's about object >> > serialization and transfer. But, I don't distribute my objects, >> > only their interfaces; thus any discussion of marshalling in REST >> > will be seen by me as a non sequitur. >> >> I think your confusion stems from your conflating terminology of the >> style with terminology of the implementation. Marshalling is >> implementation terminology. Manipulation of resources through >> representations is terminology of the style. >> > > You're conflating my WTF with confusion. I understand just fine, I > just don't see the relevance to REST; IOW, I'm not the one who's > confused. Actually, I'm asserting your 'WTF' is a result of your confusion, but ok, this is tiring. > If your implementation serializes objects for transfer over HTTP, which > is exactly what the term "marshalling" refers to in the JAX-RS context, > then you're making the same mistake as WS-*/SOAP. Java is object-oriented. So any Java web server, by definition, will serialize objects for transfer over HTTP. Not good, not bad, not a mistake - it just is. You're more likely, imprecisely, referring to folks that use a JavaBean to XML serialization instead of, for example, ROME for Atom representations - but I'm trying to guess what part confuses you. > That implementation > terminology describes a REST anti-pattern, as opposed to the hypertext > constraint. (Granted, it's possible to prove me wrong here, but such > an explanation won't be reflective of typical systems, based on object > serialization and transfer instead of the hypertext constraint.) It doesn't describe an anti-pattern - it simply describes, factually, how an Object-Oriented language (like Java) does these things. 
You seem pretty opinionated on Java frameworks and yet don't come across as a Java programmer? --tim
Tim Williams wrote: > > Java is object-oriented. So any Java web server, by definition, will > serialize objects for transfer over HTTP. Not good, not bad, not a > mistake - it just is. > The REST paradigm is that your representation is an interface to the object, not the object itself. In REST, implementation details like OOP are hidden behind the interface, not exposed to the world. > > You're more likely, imprecisely, referring to folks that use a > JavaBean to XML serialization instead of, for example, ROME for Atom > representations - but I'm trying to guess what part confuses you. > I'm not confused. Stating repeatedly that I *am* confused is an ad hominem argument. If you want to enlighten me, that isn't how to go about it. I can't imagine what an object that serializes into an HTML form would look like. Yeah, you can serialize objects into Atom, butt-ugly Atom; but that isn't the REST paradigm where the representation is an interface to the object, not the object itself. > > It doesn't describe an anti-pattern - it simply describes, factually, > how an Object-Oriented language (like Java) does these things. You > seem pretty opinionated on Java frameworks but yet don't come across > as a Java programmer? > No, I don't know Java. But that's irrelevant to the point I'm making, which is that JAX-RS contains terminology which has everything to do with Java and nothing to do with REST. Personally, Java cost my project years, and was a mistake to pursue -- I am not an enterprise. -Eric
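Eric's distinction, a representation as an interface to the object rather than the object itself, can be illustrated with a hypothetical sketch (Python for brevity; all names and URIs are invented). The first function mirrors internal state onto the wire; the second serves state plus the hypermedia controls that drive the application:

```python
# Hypothetical sketch: exposing the object itself versus exposing an
# interface to it. All names and URIs are invented for illustration.

order = {"id": 42, "status": "pending", "total": 9.99}

def serialize_object(o):
    # The anti-pattern Eric describes: the wire format simply mirrors
    # the object's internal fields.
    return f'<order><id>{o["id"]}</id><status>{o["status"]}</status></order>'

def represent(o):
    # The REST paradigm: state plus hypermedia controls, derived from
    # business rules on the server (only pending orders can be cancelled).
    links = ""
    if o["status"] == "pending":
        links = f'<link rel="cancel" href="/orders/{o["id"]}/cancel"/>'
    return f'<order><status>{o["status"]}</status>{links}</order>'
```

The second output tells the client what it can do next; the first only tells it what the server's object looks like.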
Will Hartung wrote: > > Whether they're converting their data structure to HTML, XHTML, ATOM, > or some purpose built representation, and what those representations > contain, is not germane to the discussion. > I don't understand. The content of the representation is what I look at to determine whether the hypertext constraint is applied. None of the examples I've seen of marshalling in JAX-RS look like hypertext which drives application state; they look like serialized objects. If I'm looking at bad examples, please direct my attention to good examples. -Eric
On Mon, Nov 8, 2010 at 5:33 PM, Eric J. Bowman <eric@...> wrote: > Will Hartung wrote: >> >> Whether they're converting their data structure to HTML, XHTML, ATOM, >> or some purpose built representation, and what those representations >> contain, is not germane to the discussion. >> > > I don't understand. The content of the representation is what I look > at to determine whether the hypertext constraint is applied. None of > the examples I've seen of marshalling in JAX-RS look like hypertext > which drives application state, they look like serialized objects. If > I'm looking at bad examples, please direct my attention to good > examples. And...you blame those representations on the MARSHALLING? Talk about killing the messenger. JAX-RS is a mechanism of mapping HTTP requests, handling internal method dispatch and routing, URL decoding and data marshalling. The fact that people don't use this in a RESTful way, or that the examples are not RESTful, doesn't mean the framework isn't a good choice for implementing a RESTful system in the Java environment. Seems to me, not being a heavy user of it, that it manages much of the boilerplate processing that many HTTP transactions must perform, and manages it better than the raw Servlet spec. Perhaps it could have been better named than JAX-RS, but it certainly enables a more RESTful style of development than JAX-WS. Regards, Will Hartung (willh@...)
Will Hartung wrote: > > > I don't understand. The content of the representation is what I > > look at to determine whether the hypertext constraint is applied. > > None of the examples I've seen of marshalling in JAX-RS look like > > hypertext which drives application state, they look like serialized > > objects. If I'm looking at bad examples, please direct my > > attention to good examples. > > And...you blame those representations on the MARSHALLING? Talk about > killing the messenger. > Whenever I come across REST articles presented with JAX-RS examples, the examples are serialized objects and the discussion is marshalling, to the extent that I believe that in the common vernacular, marshalling refers to a REST anti-pattern in JAX-RS. If it's supposed to mean something else, or isn't tied to the meaning I perceive from those examples, I've seen no evidence but would appreciate links to any. > > JAX-RS is a mechanism of mapping HTTP requests, handling internal > method dispatch and routing, URL decoding and data marshalling. The > fact that people don't use this in a RESTful way, or that the examples > are not RESTful doesn't mean that framework isn't a good choice for > implementing a RESTful system in the Java environment. > I believe I've said the same thing, twice. The issue is, if you don't have good reason (i.e. legacy Java system), the additional terminology obscures "what is REST" and leads to confusion -- if I'm confused about what marshalling means in JAX-RS, which I may be, it results from the confusing way the term is used in practice to refer to something foreign to REST (while claiming to be REST by virtue of using JAX-RS). Better to avoid that confusion if possible. -Eric
> > Whenever I come across REST articles presented with JAX-RS examples, > the examples are serialized objects and the discussion is marshalling, > to the extent that I believe that in the common vernacular, > marshalling refers to a REST anti-pattern in JAX-RS. If it's > supposed to mean something else, or isn't tied to the meaning I > perceive from those examples, I've seen no evidence but would > appreciate links to any. > The corollary is, of course, the widespread misconception that REST limits you to four HTTP methods which map to CRUD. Except in that case, there are plenty of examples available which refute the notion. I've never mocked anyone for holding that belief -- it's understandable given how pervasive it is -- only politely set them on the right path. -Eric
Will Hartung wrote: > > Perhaps it could have been better named than JAX-RS, but it's > certainly enables a more RESTful style of development than JAX-WS. > Oh, absolutely. Plus, annotations are interesting. So are URI templates. In fact, there's lots of good stuff there. But for some reason, AFAICT, the result isn't a proliferation of RESTful services based on it. I think the reason for this is the terminology, like "marshalling", is being applied to REST exactly the same way it was applied (in the Java world) to WS-*/SOAP, rather than in the generic sense. Just my opinion. -Eric
On Mon, Nov 8, 2010 at 8:27 PM, Eric J. Bowman <eric@...> wrote: > Tim Williams wrote: >> >> Java is object-oriented. So any Java web server, by definition, will >> serialize objects for transfer over HTTP. Not good, not bad, not a >> mistake - it just is. >> > > The REST paradigm is that your representation is an interface to the > object, not the object itself. In REST, implementation details like > OOP are hidden behind the interface, not exposed to the world. No one's suggested otherwise, Eric. You've got a beef - clearly. I'm trying to help you understand that your beef really isn't with "marshalling" - a factual necessity - it seems to be with developers' lazy, out-of-the-box JAXB JavaBean-to-XML object serialization. >> You're more likely, imprecisely, referring to folks that use a >> JavaBean to XML serialization instead of, for example, ROME for Atom >> representations - but I'm trying to guess what part confuses you. >> > > I'm not confused. Stating repeatedly that I *am* confused is an ad > hominem argument. If you want to enlighten me, that isn't how to go > about it. I can't imagine what an object would look like, which > serializes into an HTML form. Yeah, you can serialize objects into > Atom, butt-ugly Atom; but that isn't the REST paradigm where the > representation is an interface to the object, not the object itself. I won't again say you're confused :) I'll instead say you've latched onto the term "marshalling" and projected upon it way too much meaning. It's also poor inductive reasoning. You've observed folks using JAX-RS to do simple object serialization (a la JAXB) and you've apparently concluded that therefore all folks using JAX-RS must do simple object serialization. The truth is, JAX-RS, as with most so-called REST frameworks, is a tool that *can* support RESTful services but can also lead to bad implementations. That's not necessarily the fault of the framework. 
For example, most Jersey code that's around me is using ROME to produce Atom but it's pretty close to this: http://weblogs.java.net/blog/2008/02/05/integrating-jersey-and-abdera >> It doesn't describe an anti-pattern - it simply describes, factually, >> how an Object-Oriented language (like Java) does these things. You >> seem pretty opinionated on Java frameworks but yet don't come across >> as a Java programmer? >> > > No, I don't know Java. But that's irrelevant to the point I'm making, > which is that JAX-RS contains terminology which has everything to do > with Java and nothing to do with REST. Personally, Java cost my > project years, and was a mistake to pursue -- I am not an enterprise. I'd suggest that your lack of experience with Java and JAX-RS is limiting your ability to provide a useful critique on the subject. I don't see much "marshalling", but I see: Architecture -> Implementation; Uniform Interface -> @GET, @POST, etc.; Resources manipulated through representations -> @ProducesMime, @ConsumesMime; URI -> @Path; Hypermedia -> @Ref, @Produces, etc. You get the idea. It's not perfect, but the tools are there to support an implementation that lives within the REST constraints. Maybe I'm comfortable enough with Java not to notice Java-ish terminology, but when I read the guide: http://jersey.java.net/nonav/documentation/latest/user-guide.html#d0e2522 ... I think they've done a pretty good job at sticking to the resources, representations, URI, etc. speak. --tim
Will, I'm playing devil's advocate here... Yes, JAX-RS helps in what you say, but as you say it seems more like an improved servlet. Not bad, maybe not what REST needs either. The mere fact of annotating a method is suspicious, and as you can see from the post, it may be confused with RPC very easily. William Martinez. --- In rest-discuss@yahoogroups.com, Will Hartung <willh@...> wrote: > > On Mon, Nov 8, 2010 at 5:33 PM, Eric J. Bowman <eric@...> wrote: > > Will Hartung wrote: > >> > >> Whether they're converting their data structure to HTML, XHTML, ATOM, > >> or some purpose built representation, and what those representations > >> contain, is not germane to the discussion. > >> > > > > I don't understand. The content of the representation is what I look > > at to determine whether the hypertext constraint is applied. None of > > the examples I've seen of marshalling in JAX-RS look like hypertext > > which drives application state, they look like serialized objects. If > > I'm looking at bad examples, please direct my attention to good > > examples. > > And...you blame those representations on the MARSHALLING? Talk about > killing the messenger. > > JAX-RS is a mechanism of mapping HTTP requests, handling internal > method dispatch and routing, URL decoding and data marshalling. The > fact that people don't use this in a RESTful way, or that the examples > are not RESTful doesn't mean that framework isn't a good choice for > implementing a RESTful system in the Java environment. Seems to me, > not being a heavy user of it, that it manages much of the boiler plate > processing that many HTTP transactions must perform, and manages it > better than the raw Servlet spec. > > Perhaps it could have been better named than JAX-RS, but it's > certainly enables a more RESTful style of development than JAX-WS. > > Regards, > > Will Hartung > (willh@...) >
Me again. Marshalling is not to blame, actually. Point granted. When you marshal, you create a structure in Java, an object. That structure can be a direct domain object or a metadata structure. 1. Using a domain object is great, as you work directly with the domain, but that may force the domain outwards and you may have some impedance mismatch. 2. Using metadata (that is, an object that knows how to work with HTML, for instance) has the benefit of still using objects, but managing the original data (the HTML in this case). The con is the poor insertion into the domain. Still, not all incoming representations should be mapped into domain objects, some may be kept in metadata, and the resource should not necessarily be an object. Again, not all is the framework's fault, and the framework alone will not do the thinking for you. William Martinez. --- In rest-discuss@yahoogroups.com, Will Hartung <willh@...> wrote: > > On Mon, Nov 8, 2010 at 4:38 PM, Eric J. Bowman <eric@...> wrote: > > > If your implementation serializes objects for transfer over HTTP, which > > is exactly what the term "marshalling" refers to in the JAX-RS context, > > then you're making the same mistake as WS-*/SOAP. That implementation > > terminology describes a REST anti-pattern, as opposed to the hypertext > > constraint. (Granted, it's possible to prove me wrong here, but such > > an explanation won't be reflective of typical systems, based on object > > serialization and transfer instead of the hypertext constraint.) > > > > Marshalling is the conversion of the byte stream into the internal > structures used (in this case) by the Java program. Not everyone wants to > simply work with "arrays of bytes" and would rather work with higher level > data structures. To be effective with the outside world, one needs to be > able to convert to and from these data structures and arrays of bytes. 
> > Whether they're converting their data structure to HTML, XHTML, ATOM, or > some purpose built representation, and what those representations contain, > is not germane to the discussion. One of the benefits of the framework is to > make the marshalling process transparent to the overall application. > > Marshalling is neither an empowering REST process, nor is it a REST killer. > It's simply a data conversion process. > > Regards, > > Will Hartung > (willh@...) >
Tim Williams wrote: > > No one's suggested otherwise Eric. You've got a beef - clearly. I'm > trying to help you understand that your beef really isn't with > "marshalling" - a factual necessity - it seems to be with developer's > lazy out of the box JAXB JavaBean-to-XML object serialization. > Me just being me tends to fool people into thinking I have beeves when I don't. I've been very careful to discuss confusion surrounding terminology, rather than staking out a "JAX-RS considered harmful" position. I appreciate your helping me to work through this, but I also think there are lessons to be learned (given that this *is* rest-discuss) by analyzing where, why and how folks wind up somewhere other than REST when using frameworks crafted specifically to support REST. Case in point: so many folks are apparently misappropriating the terminology that I've been misled by reading them, a la REST=CRUD, into believing (for example) that marshalling is what people do when they don't understand the hypertext constraint. Using the same term which in WS-* meant "serialize to SOAP" may mean that it's preferable to avoid the term in framework documentation, to avoid such confusion. > > I won't again say you're confused:) I'll instead say you've latched > on to a term "marshalling" and projected upon it way too much meaning. > Well, that, or I adopted the meaning that's most prevalent in the material I've read -- which means it's others who have projected too much, or an incorrect, meaning onto the term in their own confusion. > > It's also poor inductive reasoning. You've observed that folks using > JAX-RS to do simple object serialization (a la JAXB) and you've > apparently concluded that therefore all folks using JAX-RS must do > simple object serialization. 
> If I thought that, I would've staked out "JAX-RS considered harmful" as my position, instead of the position that the terminology is leading folks astray (for whatever reason), and is best avoided by those trying to learn. > > The truth is, JAX-RS as with most so-called REST Frameworks are tools > that *can* support RESTful services but can also lead to bad > implementations. That's not necessarily the fault of the framework. > No, you're right -- it could result from the documentation and articles about the frameworks. Four to five years ago, there was no such thing as a REST framework, now they're myriad, but we're still in the first generation. Moving forward, there's some value to examining the reasons why the appearance of these frameworks hasn't really led to more examples of honest-to-gosh REST in the wild. I kinda thought they would... > > For example, most Jersey code that's around me is using ROME to > produce Atom but it's pretty close to this: > > http://weblogs.java.net/blog/2008/02/05/integrating-jersey-and-abdera > > >> It doesn't describe an anti-pattern - its simply factually > >> describes how an Object-Oriented language (like Java) does these > >> things. You seem pretty opinionated on Java frameworks but yet > >> don't come across as a Java programmer? > >> > > > > No, I don't know Java. But that's irrelevant to the point I'm > > making, which is that JAX-RS contains terminology which has > > everything to do with Java and nothing to do with REST. > > Personally, Java cost my project years, and was a mistake to > > pursue -- I am not an enterprise. > > I'd suggest that your lack of experience with Java and JAX-RS is > limiting your ability to provide a useful critique on the subject. > Perhaps. OTOH, the OP didn't give us much in the way of details about his project or experience, so I decided not to make any assumptions about that, or why he thinks a Java-based framework is the way to go. 
Given that he may indeed be new to both REST and Java, I feel my advice was spot-on -- if, given my knowledge and experience, I can't relate discussions of JAX-RS to REST, what hope is there for someone new to the field to succeed in bringing a system to market that way? > > Architecture - > Implementation > Uniform Interface - > @GET, @POST, etc. > Resources manipulated - > @ProducesMime, @ConsumesMime > through representations > URI -> @Path > Hypermedia -> @Ref, @Produces, etc. > Well, yeah, on this level it makes sense. But when the discussion turns to implementation, there's a whole world of terminology that simply isn't encountered in REST, outside of Java -- terms I'm familiar with outside of REST, applied in a way that doesn't make sense to me within REST. As I've said, the terminology seems to bring the WS-*/SOAP paradigm along with it, which is why I challenged the notion of using Java, JAX-RS or frameworks in general, as a starting point. Frameworks are HTTP libraries mistaken for REST libraries, as if REST's constraints may be abstracted away as easily as generating cache headers (which may eventually be the case, but we're nowhere near that today). > > You get the idea. It's not perfect, but the tools are there to > support an implementation that lives within the REST constraints. > Maybe I'm comfortable enough with Java not to notice Java-ish > terminology but when read the guide: > > http://jersey.java.net/nonav/documentation/latest/user-guide.html#d0e2522 > > ... I think they've done a pretty good job at sticking to the > resources, representations, URI, etc. speak. > OK, thanks for the links, I'll go read them now, with an eye towards figuring out where the disconnect lies -- such that when it comes time to document my own solution, I don't fall into the same traps and see my own work horrifyingly used to recast SOAP. :-0 -Eric
After reading REST in Practice <http://restinpractice.com> , I was
puzzled by the authors' recommendation of POST over PUT for resource
updates. I thought through the implications of some of the alternatives
and wrote up my thoughts as a blog post
<http://alexscordellis.blogspot.com/2010/11/restful-architecture-what-should-we-put.html>. This triggered a very deep debate in the comments,
revealing a number of subtleties that I had missed. I feel that the
debate deserves a bigger forum than the comments of my blog, so I'm
moving it here.
If you want the full history of the debate, read my original post
<http://alexscordellis.blogspot.com/2010/11/restful-architecture-what-should-we-put.html>, including the comments, and William Vambenepe's
response <http://stage.vambenepe.com/archives/1665> , but here's a quick
summary:
* Services serve representations including both business data and
hypermedia controls. Clients have no business sending the hypermedia
controls to the service, since they are determined by resource state and
business rules hosted by the server.
* Therefore we have to either PUT a partial representation (data, but
no links), or use a different HTTP method, for the client to make a
change to the resource.
* However, there are many complications with using partial PUT, which
are detailed in William Martinez's comment
<http://alexscordellis.blogspot.com/2010/11/restful-architecture-what-should-we-put.html?showComment=1288962611437#c255996206325049719> and
William Vambenepe's post <http://stage.vambenepe.com/archives/1665> .
* These complications may force us to use POST for changes which the
client sees as full resource replacement, and which therefore still feel
more like they should be a PUT.
* An alternative, variations of which were suggested by Garry Shutler
<http://alexscordellis.blogspot.com/2010/11/restful-architecture-what-should-we-put.html?showComment=1288870660489#c5473614401681968877> and
Duncan Cragg
<http://alexscordellis.blogspot.com/2010/11/restful-architecture-what-should-we-put.html?showComment=1289244765256#c299846722503776739>, is to
separate the resource into two parts. One would be the resource that the
client sends to the service, and the other would be generated by the
service and include hypermedia controls and service-generated state
information.
This has been a great debate. I've learned a lot and it's forced me to
think things through very carefully; thank you to all participants.
However I feel it has so far been rather theoretical. I'd be very
excited to hear from people who have tackled this problem in building
real systems. What did you do? How did that work out? Did you hit any of
the complications of partial PUT described by the two Williams? If you
used one of the alternatives, did you feel that had any drawbacks?
Discuss!
alex.scordellis wrote:
> After reading REST in Practice <http://restinpractice.com>, I was
> puzzled by the authors' recommendation of POST over PUT for resource
> updates. [...]

Fifth option, DELETE then POST
Ah, this debate again... it's a bit like debating the right way to eat soup with a fork. The best approach is to take a big step back and come at the problem from a slightly different angle (or utensil? ;-).

A summary of my thoughts:

- There is no "right", generic way to use PUT to update a resource that works across all resources and media types, except specifying verbatim how the resource should be represented in full, e.g. PUT the full .jpg image you expect to be served by a subsequent GET of the resource.

- If you are including enough data in a PUT for the rest of the resource's state to be derived by the server, then you are OK. Alternatively, you can have the server change the values the client suggests. This is my take on what AtomPub does. It is necessary for things like identifiers and URIs (in general, hypermedia controls) that the server needs control over.

- Omitting information that the server is going to fill in is quite a different thing from a "partial update" -- don't confuse the two. To me, a "partial update" is when the full resource state cannot be derived from the information you are providing, and the previous state of the resource needs to be factored in as well.

- This leaves you with the problem of how the client knows what information can be omitted (or is just a place-holder for the server's values). This is either (a) specified by the media type definition or (b) specified (or rather constrained) at run-time by a hypermedia control. I believe that AtomPub is an example of (a), but I've never seen an example of (b) in the wild -- I don't see why it is not possible, though.

In general, though, hypermedia controls for PUT (and DELETE) really need further exploration. I'd like to see more examples of media types that describe how PUT/DELETE should be used as well. I don't buy the "you have the URI, so just invoke PUT or DELETE on it" generic approach -- you end up with problems like the one you are tackling here.
At the end of the day, though, unless you think the idempotent property of PUT and DELETE is going to realize practical benefits for your system, you might just be better off using POST.

Andrew

--- In rest-discuss@yahoogroups.com, "alex.scordellis" <alex.scordellis@...> wrote:
> [...]
Partial updates should be done using PATCH [1].
Complete updates should be done using PUT.
When those methods are not practical, POST can be used instead.
Subbu's "RESTful Web Services Cookbook" has a very good chapter ("11.
Miscellaneous Writes") [2] that includes more than one section
covering strategies for partial updates, too.
[1] http://tools.ietf.org/html/rfc5789
[2] http://bit.ly/bRCwGj
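This rule of thumb can be sketched against a toy in-memory store. The `Store` class and URIs below are purely illustrative, not from any real framework:

```python
# Toy in-memory store illustrating the PATCH/PUT/POST rule of thumb.
# The Store class and URIs are illustrative only, not from any real API.

class Store:
    def __init__(self):
        self.resources = {}

    def put(self, uri, representation):
        """Complete update: replace the resource state wholesale (idempotent)."""
        self.resources[uri] = dict(representation)

    def patch(self, uri, changes):
        """Partial update (RFC 5789 method): apply only the listed changes."""
        self.resources[uri].update(changes)

    def post(self, uri, payload):
        """Fallback: the server decides what the payload means -- here, append."""
        self.resources.setdefault(uri, {"log": []})["log"].append(payload)

store = Store()
store.put("/orders/1", {"status": "open", "qty": 2})
store.patch("/orders/1", {"status": "shipped"})  # qty survives the PATCH
store.put("/orders/1", {"status": "shipped"})    # qty is gone: PUT replaces
store.post("/orders/1/notes", "left at door")    # POST semantics belong to the server
```

Note how the second PUT silently drops `qty`: a complete update replaces everything, which is exactly why a partial change wants PATCH instead.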
mca
http://amundsen.com/blog/
http://twitter.com@mamund
http://mamund.com/foaf.rdf#me
#RESTFest 2010
http://rest-fest.googlecode.com
On Tue, Nov 9, 2010 at 09:14, Nathan <nathan@...> wrote:
> alex.scordellis wrote:
>>
>> [...]
>
> Fifth option, DELETE then POST
On 09.11.2010 15:14, Nathan wrote:
> alex.scordellis wrote:
>> [...]
>
> Fifth option, DELETE then POST

PATCH?

Best regards, Julian
Hi all,

I'm designing a web-based architecture, and to help future adopters/maintainers I'd like to follow REST principles where possible. Hey, let them read the REST guidance rather than something custom-written by me... I'm OK with REST as recommended in the REST Cookbook and RESTful Web Services, but have struggled to find a REST-compliant strategy for one area.

In my distributed environment I have a host application that provides a list of services. Client applications are able to determine which service is of interest, and then the client wants to receive updates related to that service. I've come across two strategies:

1. Polling. In pure REST I believe my client could repeatedly poll for new updates via a GET, or use a HEAD request to see if it already holds the most recent update. But, since the updates could arrive at anywhere from 5 per second to 5 per hour, a high polling overhead would be present.

2. Peer-to-peer REST. Another alternative, expressed in the 'Re: It's the architecture, stupid.' thread last month, recommended multiple REST servers. So my client would 'register' for updates by PUTting a URL to a service listener resource at the server. The server would then POST updates to that URL as they arise. This overcomes the high polling overhead of the first option, though it does require a more capable client (one that can host a REST server -- which a browser can't).

Am I missing any alternative RESTful approaches to this problem?
On Tue, Nov 9, 2010 at 7:40 AM, wahbedahbe <andrew.wahbe@...> wrote:
> A summary of my thoughts:
> [...]
> - this leaves you with the problem of how the client knows what information
> can be omitted (or is just a place-holder for the server's values). [...]

This sounds right to me. The issue, I guess, stems from some idea that the resource being PUT is a LITERAL, physical copy, rather than a logical one. In many systems, the representations are the same: WYPIWYG (What You Put Is What You Get). But I can't see how that's some cast-in-stone requirement. I see no reason why a system can't take what is given, "ignore" the extraneous stuff (specifically links and whatever), and mine the "interesting" stuff.

In this case, you don't have WYPIWYG; you'll get back "something different", not literally identical, but semantically identical, which is all you're really looking for in the long run anyway.

> In general though, hypermedia controls for PUT (and DELETE) really need
> further exploration. [...] At the end of the day though, unless you think
> the idempotent property of PUT and DELETE is going to realize practical
> benefits for your system, you might just be better off using POST.

This is a valid point. The key takeaway is that perhaps the "Uniform Interface" is not quite as uniform as we may like. Or perhaps the interface itself is uniform, but the semantics surrounding the use of the interface are not. This is something that will shake out long term through practice, and eventually evolve into a "best practice" (whatever it ends up being), so that folks can have reasonable expectations of behavior in the long term.

Regards,

Will Hartung
(willh@...)
Will Hartung wrote:
> wahbedahbe wrote:
>> In general though, hypermedia controls for PUT (and DELETE) really
>> need further exploration. [...]
>
> This is a valid point. The key take away is that perhaps the "Uniform
> Interface" is not quite as uniform as we may like. Or, perhaps the
> interface itself is uniform, the semantics surrounding the use of the
> interface is not.

Yup, I've said this myself on many occasions. Some folks believe that <link rel='edit'/> is a hypertext control. Others, myself included, believe that hypertext controls are forms, not link relations.

-Eric
Ian Mayo wrote:
> Am I missing any alternative RESTful approaches to this problem?

Have you reviewed ARRESTED or CREST? One view is that REST needs to be extended to solve this problem.

-Eric
Again, HTTP semantics are not at all an easy mapping to CRUD, are they?

The hypermedia control you mention is an interesting example. Let's take creation. PUT for creation has been somewhat banned, as it implies the client forces the URI onto the server. Anyhow, that is how it works: the server should own its namespace, so issuing a PUT with a URI defined by the client makes no sense, right? Actually, it makes no sense for the client to issue a PUT as the starting point of a REST interaction. That means we can assume the client performs a GET first, and that returns a document with all the links we need, including a PUT option. If so, what is the problem with that document providing the PUT and the actual URI that PUT has to use? That URI comes from the server, doesn't it?

Fun discussions.

William Martinez Pomares.

--- In rest-discuss@yahoogroups.com, "wahbedahbe" <andrew.wahbe@...> wrote:
> [...]
On 10.11.2010 05:50, William Martinez Pomares wrote:
> Again, HTTP semantics are not at all an easy mapping to CRUD, are they?
>
> The hypermedia control you mention is an interesting example.
> Let's say Creation.
> PUT for creation has been somehow banned, as it implies the client
> forces the URI into the server. Anyhow, that is how it works. The Server
> should own its name space, so issuing a PUT with a URI defined by the
> client makes no sense, right?

Wrong. It depends on the application.

Best regards, Julian
On 09.11.2010 16:40, wahbedahbe wrote:
> A summary of my thoughts:
> - there is no "right", generic way to use PUT to update a resource that
> works across all resources and media types except specifying verbatim
> how the resource should be represented in full. [...]

Right. The important part is that the state of the resource after the PUT depends only on the payload, not the previous state.

Best regards, Julian
On Wed, Nov 10, 2010 at 9:08 AM, Julian Reschke <julian.reschke@...> wrote:
> Right. The important part is that the state of the resource after the
> PUT depends only on the payload, not the previous state.

Why is that important?

Cheers,
Mike
Unless you've got a resource that maps to "The universe", or perhaps to some similar all-encompassing entity related to a particular theological stance, then all PUTs are partial PUTs, because all resources can be considered part of another resource.

Conversely, sometimes one can conveniently model a given part of a given resource as itself being a resource. When this is the case (and I grant, it isn't always), PUTting to the URI of that resource neatly solves the partial PUT issue. Again, it isn't always applicable, and it isn't always convenient when it is applicable, but it is another approach that can be used sometimes.
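A minimal sketch of this "promote the part to its own resource" idea, assuming a hypothetical order resource whose status field is also exposed at its own URI:

```python
# Sketch: expose the part (here, an order's status) at its own URI, so
# changing it is a complete PUT of that sub-resource rather than a
# partial PUT of the parent. All URIs and names are hypothetical.

resources = {
    "/orders/1": {"qty": 2, "status": "open"},
    "/orders/1/status": "open",   # the part, promoted to a resource
}

def put(uri, representation):
    resources[uri] = representation
    if uri == "/orders/1/status":
        # keep the parent's derived view consistent with the sub-resource
        resources["/orders/1"]["status"] = representation

put("/orders/1/status", "shipped")  # a full replace of the sub-resource
```

The PUT is still a complete replacement -- it just targets a smaller resource, so the partial-update problem on the parent never arises.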
Julian Reschke wrote:
> Right. The important part is that the state of the resource after the
> PUT depends only on the payload, not the previous state.

! I hadn't fully appreciated that previously -- thanks, Julian.
Mike Kelly wrote:
> On Wed, Nov 10, 2010 at 9:08 AM, Julian Reschke <julian.reschke@...> wrote:
>> Right. The important part is that the state of the resource after the
>> PUT depends only on the payload, not the previous state.
>
> Why is that important?

It's the difference between saying "this is the state of the resource" and "apply this to the resource to create a new state". That is:

  Sn = Mn (state = message)

versus: given the time t, a previous state Sn-1, and a message Mn, we process Mn, Sn-1, t with a set of rules in order to conclude Sn (the state of our resource).

So PUT and DELETE are the first case; PATCH and POST are the second. Perhaps more simply: PUT replaces the previous state, with no consideration for it.
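The two cases can be written out as a minimal sketch; the function names and the list-based state below are illustrative only:

```python
# Sketch of the two update models: Sn = Mn vs Sn = f(Mn, Sn-1).

def put(state, message):
    # Sn = Mn: the next state is exactly the message; the previous
    # state is not consulted at all.
    return message

def post(state, message):
    # Sn = f(Mn, Sn-1): the next state is computed from the message
    # AND the previous state.
    return state + [message]

s = put([], ["a"])
s = put(s, ["a"])   # repeating the PUT leaves the state unchanged (idempotent)
s = post(s, "b")
s = post(s, "b")    # repeating the POST keeps changing the state
```

Repeating the first operation is harmless; repeating the second is not -- which is exactly the property a client relies on when it retries a timed-out request.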
--- In rest-discuss@yahoogroups.com, "Eric J. Bowman" <eric@...> wrote:
> Have you reviewed ARRESTED or CREST? One view is that REST needs to be
> extended to solve this problem.

Eric, initially it proved a challenge to search for either of these terms, since they're also plain English words... Eventually, however, I did come across Khare and Taylor's ARRESTED paper [1], which introduces Asynchronous REST (A+REST) and closely matches what I'm looking for. The concept of A+REST is quite widely discussed; I'll investigate any standards that follow it.

Thanks for providing the stepping-stone,
Ian Mayo

[1] http://www.ics.uci.edu/~rohit/ARRESTED-ICSE.pdf
ian.mayo wrote:
> Eric, initially it proved a challenge to search for either of these terms

CREST: http://www.erenkrantz.com/CREST/
On Wed, Nov 10, 2010 at 7:37 AM, Nathan <nathan@...> wrote:
> It's the difference between saying "this is the state of the resource"
> and "apply this to the resource to create a new state" [...]
>
> So, PUT and DELETE are the first case, PATCH and POST are the second.
>
> Perhaps more easily said, PUT replaces the previous state, with no
> consideration for it.

Right, or Sn could be a function of the message: Sn = f(Mn). If the state is a function of the message that also depends on the previous state, e.g. f(Mn) = Mn + Sn-1, then the operation is not idempotent, as idempotency generally requires that if Mi = Mj then Si = Sj.

So if the state is a function of the previous state, you can't repeat the operation an arbitrary number of times (e.g. retries on response timeout) and know that each invocation leaves the resource in the same state. Or, if requests from multiple parties are interleaved, the resource is left in a state that no party actually asked for.

Sure, with conditional requests (like in http://www.w3.org/1999/04/Editing/) you can manage this. But this can be overkill in some scenarios and slow things down (repeating requests when you don't need to). If you don't use conditional requests but the operations are idempotent, you lose all parties' edits except one -- the party whose request was last processed -- which is sometimes good enough and all you need. But if you don't use conditional requests and the operations are not idempotent, then the resource can be left in a state that does not match any party's request. This is sometimes OK too -- the point is that you get to pick the semantics appropriate for your application.

This choice is important because the stronger guarantees tend to imply more cost in terms of number of messages or general complexity. Using conditional PUTs often means repeated GET+PUT retries until your PUT succeeds (2k requests, where k > 0). Using an idempotent operation like PUT often means you must first GET the resource, apply your changes, and then PUT the new state (2 requests). Finally, you can usually apply a non-idempotent operation like POST at any time (without GETting and editing first), as long as the client has been given the instructions on how to do this (e.g. a form) at some previous point in time (1 request -- I'm not counting the transmission of the form as a request, as I didn't count the transmission of the link you are GETting/PUTting in the previous cases). So, as you can see, you are trading off simplicity and a lower number of messages for stronger guarantees.

Regards,

Andrew Wahbe
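The conditional GET+PUT retry loop can be sketched with a version counter standing in for an HTTP ETag / If-Match header; all class and function names below are hypothetical:

```python
# Sketch of a conditional PUT loop: GET the state and its version, apply
# an edit, then PUT only if the version still matches (like If-Match).
# Retried until the precondition holds. All names are illustrative.

class Resource:
    def __init__(self, state):
        self.state = state
        self.version = 0          # stand-in for an ETag

    def get(self):
        return self.state, self.version

    def put_if_match(self, new_state, expected_version):
        if expected_version != self.version:
            return False          # like a 412 Precondition Failed
        self.state = new_state
        self.version += 1
        return True               # like a 200 OK

def update(resource, edit):
    """GET, apply the edit, then PUT conditionally; retry until it sticks."""
    while True:
        state, version = resource.get()
        if resource.put_if_match(edit(state), version):
            return

doc = Resource({"qty": 1})
update(doc, lambda s: {**s, "qty": s["qty"] + 1})
```

Each round trip of the loop is the 2k-requests cost mentioned above: one GET plus one PUT per attempt, repeated whenever another party's write invalidates the version.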
Hello Julian.

Interesting point. First of all, what is wrong?

1. The client forcing the URI namespace?
2. The server owning the URI namespace?
3. The creation-wise PUT banning?

Assuming it is option 2, do you mean the actual URI-space owner depends on the application? Can you elaborate, with an example where the client needs to define the URI for a resource?

Regards,
William Martinez Pomares

--- In rest-discuss@yahoogroups.com, Julian Reschke <julian.reschke@...> wrote:
> > PUT for creation has been somehow banned, as it implies the client
> > forces the URI into the server. [...]
>
> Wrong. It depends on the application.
Hello Mike.

Nathan's and Andrew's answers are great. They clearly show why it is important to have one operation that is resource-state independent. That does not eliminate the need for another operation that is actually state dependent, like POST.

Julian points out that PUT is a REPLACE, and that is important to understand. Actually, a replace is one form of update that is too aggressive. The discussion about partial updates comes from the actual need for partial updates, and PUT is not suitable by definition, no matter how much we play with representations and assumptions about the server understanding partial data.

William Martinez Pomares

--- In rest-discuss@yahoogroups.com, Mike Kelly <mike@...> wrote:
> > Right. The important part is that the state of the resource after the
> > PUT does only depend on the payload, not the previous state.
>
> Why is that important?
>
> Cheers,
> Mike
Hi. Julian Reschke <julian.reschke@...> wrote: > Right. The important part is that the state of the resource after the > PUT does only depend on the payload, not the previous state. > > Best regards, Julian > Totally correct. I would even go further and say PUT has an implicit DELETE of the previous resource, if one exists. In all cases, PUT is a creational operation. A replace. Updates are much more often about partial changes than total replacement. Seen that way, PUT covers only a very special case, and it is better to look for other update options than to force PUT to do something it is not designed to do. William Martinez Pomares.
Hi Mike!
Sorry, I'm posting in this discussion although I prefer to avoid HTTP-related ones. Not my level of detail.
Still, this is interesting.
I dislike PATCH a little bit. There are several reasons; here are a few:
1. An important part of PATCH is the payload, as it defines the changes. And that payload is not standardized.
2. Because the payload matters so much, it makes the resource update far more representation-oriented, like patching a source code file. It also makes the resource expose its attributes, defeating data hiding.
3. Although PATCH may be generic, something like adjusting the alpha channel of an image is not as intuitive: the PATCH payload would have to request the change to the alpha channel. Is that metadata?
4. Not all resources are just a bunch of fields; some are more complex, and the update should be done by the server based on certain conditions or requests.
So, PATCH may work, but I still feel it is not the full solution. In the end, as you say, we may need to fall back on POST.
What do you think?
William Martinez Pomares.
--- In rest-discuss@yahoogroups.com, mike amundsen <mamund@...> wrote:
>
> Partial updates should be done using PATCH [1]
> Complete updates should be done using PUT.
> When those methods are not practical, POST can be used instead.
>
> Subbu's "RESTful Web Services Cookbook" has a very good chapter ("11.
> Miscellaneous Writes") [2] that includes more than one section
> covering strategies for partial updates, too.
>
> [1] http://tools.ietf.org/html/rfc5789
> [2] http://bit.ly/bRCwGj
>
> mca
> http://amundsen.com/blog/
> http://twitter.com@mamund
> http://mamund.com/foaf.rdf#me
>
>
> #RESTFest 2010
> http://rest-fest.googlecode.com
>
On Wed, Nov 10, 2010 at 3:08 PM, Andrew Wahbe <andrew.wahbe@...> wrote: > On Wed, Nov 10, 2010 at 7:37 AM, Nathan <nathan@...> wrote: >> >> Mike Kelly wrote: >>> >>> On Wed, Nov 10, 2010 at 9:08 AM, Julian Reschke <julian.reschke@...> >>> wrote: >>>> >>>> Right. The important part is that the state of the resource after the >>>> PUT does only depend on the payload, not the previous state. >>> >>> Why is that important? >> >> It's the difference between saying "this is the state of the resource" and >> "apply this to the resource to create a new state".. >> >> or, Sn=Mn (state = message) vs -> given the time t, a previous state Sn-1, >> and a message Mn we process Mn,Sn-1,t with a set of Rules in order to >> conclude Sn (the state of our resource) >> >> So, PUT and DELETE are the first case, PATCH and POST are the second. >> >> Perhaps more easily said, PUT replaces the previous state, with no >> consideration for it. > > Right, or Sn could be a function of the message: Sn = f(Mn) > If the state is a function of the message that depends on the previous > state, e.g. f(Mn) = Mn+Sn-1 then the operation is not idempotent as > this generally requires that if Mi = Mj then Si=Sj. > So if the state is a function of the previous state, you can't repeat the > operation an arbitrary number of times (e.g. retries on response timeout) > and know that each invocation leaves the resource in the same state. A partial PUT could leave just a specific portion of the resource in the same state. Yes, the resultant overall state of the resource would depend on previous state, but isn't the partial PUT request idempotent in its intent? I think I understand PUT as per its specification; the question was aimed at understanding why it needs to be defined that way in the first place. Put another way: what sort of mechanisms rely on partial PUT being prevented? Cheers, Mike
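Nathan's two cases can be sketched in a few lines of Python (a toy model, not code from the thread): replaying a PUT-style replace, where Sn = Mn, always converges, while replaying a state-dependent update, where Sn = f(Mn, Sn-1), does not.

```python
# Toy model: resource state is just a dict; names here are illustrative.

def put(state, message):
    # Sn = Mn: the new state is the message; the previous state is ignored.
    return dict(message)

def increment_update(state, message):
    # Sn = f(Mn, Sn-1): the new state depends on the previous one.
    return {"count": state.get("count", 0) + message["delta"]}

s = {"count": 0}

# Repeating the same PUT leaves the resource in the same state (idempotent).
s1 = put(s, {"count": 5})
s2 = put(s1, {"count": 5})
assert s1 == s2 == {"count": 5}

# Repeating the state-dependent update does not: a retry changes the result.
t1 = increment_update(s, {"delta": 5})
t2 = increment_update(t1, {"delta": 5})
assert t1 == {"count": 5} and t2 == {"count": 10}
```

This is why a blind retry after a response timeout is safe for the first operation but not the second.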
On Wed, Nov 10, 2010 at 4:08 AM, Julian Reschke <julian.reschke@...> wrote: > Right. The important part is that the state of the resource after the > PUT does only depend on the payload, not the previous state. The server is free to use the state represented in the PUT request *as well* as the current state if it wants to. For example, the resource may include a counter that tracks how many PUTs it received. The important part from a REST perspective is that the meaning of the PUT message is independent of the state of the resource. Mark.
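Mark's counter example can be made concrete with a small sketch (the class and field names are invented for illustration): the server consults current state, yet the meaning of the message, "replace the representation with this", never depends on it.

```python
# Toy resource: PUT replaces the representation, but the server also keeps
# a counter of PUTs received, i.e. it uses current state as well.

class Resource:
    def __init__(self):
        self.representation = None
        self.puts_received = 0  # extra server-side state

    def put(self, body):
        # The message always means "replace the representation with body";
        # the counter is state the server chooses to maintain alongside.
        self.representation = body
        self.puts_received += 1

r = Resource()
r.put({"name": "example"})
r.put({"name": "example"})  # same message, same meaning
assert r.representation == {"name": "example"}
assert r.puts_received == 2  # the full state still reflects prior state
```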
On 10 Nov 2010, at 05:35 PM, Mike Kelly <mike@...> wrote: I think I understand PUT as per its specification; the question was aimed at understanding why it needs to be defined that way in the first place. Message self-descriptiveness is the issue here. If the PUT applies to only a part of the resource, addressing that part depends on assumptions about the state of the resource (e.g. its internal structure). Hence the meaning depends on the resource state. What if you try to change some surname 'field' but the resource stopped having a 'surname' field long before your PUT? Jan Put another way: what sort of mechanisms rely on partial PUT being prevented? Cheers, Mike ------------------------------------ Yahoo! Groups Links
On Wed, Nov 10, 2010 at 5:06 PM, algermissen1971 <algermissen1971@...> wrote: > > > On 10 Nov 2010, at 05:35 PM, Mike Kelly <mike@...> wrote: > > > > I think I understand PUT as per its specification, the question was > aimed to understand why it needs to be defined that way in the first > place. > > > > Message self descriptiveness is the issue here. If the PUT applies to a part > of the resource only, addressing that part depends on assumptions about the > state of the resource (e.g. internal structure). Hence the meaning depends > on the resource state. > What if you try to change some surname 'field' but the resource stopped > having a 'surname field' long before your PUT? Return a 400?! How would that be any different from a complete PUT that contains the same incorrect surname assumption? Cheers, Mike
Interesting. If I understand it correctly this is a similar sort of caveat to that of the "safe" property for GET. i.e. a PUT can use the current state while remaining "idempotent" just as a GET can change the current state while remaining "safe". Makes sense, but is a new one to me. Thanks! Andrew On Wed, Nov 10, 2010 at 12:04 PM, Mark Baker <distobj@...> wrote: > On Wed, Nov 10, 2010 at 4:08 AM, Julian Reschke <julian.reschke@...> > wrote: > > Right. The important part is that the state of the resource after the > > PUT does only depend on the payload, not the previous state. > > The server is free to use the state represented in the PUT request *as > well* as the current state if it wants to. For example, the resource > may include a counter that tracks how many PUTs it received. The > important part from a REST perspective is that the meaning of the PUT > message is independent of the state of the resource. > > Mark. > -- Andrew Wahbe
On 10 Nov 2010, at 06:12 PM, Mike Kelly <mike@...> wrote: On Wed, Nov 10, 2010 at 5:06 PM, algermissen1971 <algermissen1971@maccom> wrote: > > > On 10 Nov 2010, at 05:35 PM, Mike Kelly <mike@...> wrote: > > > > I think I understand PUT as per its specification, the question was > aimed to understand why it needs to be defined that way in the first > place. > > > > Message self descriptiveness is the issue here. If the PUT applies to a part > of the resource only, addressing that part depends on assumptions about the > state of the resource (e.g. internal structure). Hence the meaning depends > on the resource state. > What if you try to change some surname 'field' but the resource stopped > having a 'surname field' long before your PUT? Return a 400?! How would that be any different from a complete PUT that contains the same incorrect surname assumption? There is no incorrect assumption: the meaning of the message is the same, regardless of resource state. That is what this is all about. Jan Cheers, Mike
William:
<snip>
> So, PATCH may work, but I feel still it is not the full solution. At the end, as you say, we may need to go back and use POST.
> What do you think?
</snip>
From my POV, there are two things to keep in mind here:
- The effect the PATCH RFC has on how we view|use PUT
- The effect the PATCH RFC has on how we view|use POST
PATCH AND PUT
One of the things that sets PATCH apart from PUT is that the RFC[1]
describes the PATCH payload as "a set of instructions..."
<quote>
The difference between the PUT and PATCH requests is reflected in the
way the server processes the enclosed entity to modify the resource
identified by the Request-URI. In a PUT request, the enclosed entity
is considered to be a modified version of the resource stored on the
origin server, and the client is requesting that the stored version
be replaced. With PATCH, however, the enclosed entity contains a set
of instructions describing how a resource currently residing on the
origin server should be modified to produce a new version. The PATCH
method affects the resource identified by the Request-URI, and it
also MAY have side effects on other resources; i.e., new resources
may be created, or existing ones modified, by the application of a
PATCH.
</quote>
There are no details on what that set of instructions looks like; that
detail is left open for implementors to work out. It is also
interesting to note that the PATCH RFC makes allowances for the
possibility that the results of a PATCH request MAY be the creation of
a new resource.
PATCH AND POST
In the past, I used the basic approach described in PATCH (a set of
instructions delineated by a media type) but used the existing POST
method to complete the task. Usually that meant I minted a URI for
handling "change instructions" (e.g. /my-customers/1;patch or
/my-customers/1/patch/, or /patches/my-customer/1, etc.).
Now, with this new method, I don't need to mint a new URI to handle
"change instructions." I only need to tell clients that a new method
(PATCH) is valid for an existing resource (e.g. /my-customers/1) and
that any execution of PATCH against a resource needs to use the proper
media-type (e.g. application/vnd.amundsen.patch, etc.).
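As a hedged illustration of the shift mca describes (the "op"/"field" instruction vocabulary below is invented; RFC 5789 deliberately leaves the payload format to implementors), the old overloaded-POST style and the new PATCH style differ only in method and target URI:

```python
import json

# Hypothetical change-instructions document, serialized with the vendor
# media type mca mentions (application/vnd.amundsen.patch).
change_instructions = [
    {"op": "set", "field": "status", "value": "gold"},
    {"op": "remove", "field": "discount-code"},
]
body = json.dumps(change_instructions)

# Old style: overload POST against a specially minted "patch URI".
old_request = (
    "POST /my-customers/1;patch HTTP/1.1\r\n"
    "Content-Type: application/vnd.amundsen.patch\r\n\r\n" + body
)

# New style: PATCH the existing resource URI directly; no extra URI minted.
new_request = (
    "PATCH /my-customers/1 HTTP/1.1\r\n"
    "Content-Type: application/vnd.amundsen.patch\r\n\r\n" + body
)

assert new_request.startswith("PATCH /my-customers/1 ")
```

The payload is identical in both cases; only the method and URI change, which is exactly the visibility gain described below.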
To me this is a major improvement in both the visibility and accuracy
of my HTTP interactions. No more overloading POST using a special
"patch URI." My documentation can be clearer and my client can learn
the details of a target media type and apply that to multiple
resources safely.
That's my viewpoint, anyway.
[1] http://tools.ietf.org/html/rfc5789
mca
http://amundsen.com/blog/
http://twitter.com@mamund
http://mamund.com/foaf.rdf#me
#RESTFest 2010
http://rest-fest.googlecode.com
On Wed, Nov 10, 2010 at 11:32, William Martinez Pomares
<wmartinez@...> wrote:
> Hi Mike!
> Sorry, I'm posting in this discussion although I prefer to avoid HTTP related ones. Not my level of detail.
>
> Still, this is interesting.
> I dislike a little bit PATCH. There are several reasons, these are just a couple:
>
> 1. One of important part of PATCH is the payload, as it defines the changes. And that payload is not standardized.
>
> 2. As Payload is important, it makes the resource update much more representational oriented. That is, like patching a source code file. Also, it makes the resource expose its attributes, avoiding data hiding
>
> 3. Although it may be generic, adjusting the alpha channel of an image may not be as intuitive. In this case, the PATCH Payload should request the change of the alpha channel. Metadata?
>
> 4. Not all resources are just a bunch of fields, some can be more complex and the update should be done by the server based on certain conditions or requests.
>
> So, PATCH may work, but I feel still it is not the full solution. At the end, as you say, we may need to go back and use POST.
> What do you think?
>
> William Martinez Pomares.
> --- In rest-discuss@yahoogroups.com, mike amundsen <mamund@...> wrote:
>>
>> Partial updates should be done using PATCH [1]
>> Complete updates should be done using PUT.
>> When those methods are not practical, POST can be used instead.
>>
>> Subbu's "RESTful Web Services Cookbook" has a very good chapter ("11.
>> Miscellaneous Writes") [2] that includes more than one section
>> covering strategies for partial updates, too.
>>
>> [1] http://tools.ietf.org/html/rfc5789
>> [2] http://bit.ly/bRCwGj
>>
>> mca
>> http://amundsen.com/blog/
>> http://twitter.com@mamund
>> http://mamund.com/foaf.rdf#me
>>
>>
>> #RESTFest 2010
>> http://rest-fest.googlecode.com
>>
On Wed, Nov 10, 2010 at 11:35 AM, Mike Kelly <mike@...> wrote: > On Wed, Nov 10, 2010 at 3:08 PM, Andrew Wahbe <andrew.wahbe@...> > wrote: > > On Wed, Nov 10, 2010 at 7:37 AM, Nathan <nathan@...> wrote: > >> > >> Mike Kelly wrote: > >>> > >>> On Wed, Nov 10, 2010 at 9:08 AM, Julian Reschke <julian.reschke@... > > > >>> wrote: > >>>> > >>>> Right. The important part is that the state of the resource after the > >>>> PUT does only depend on the payload, not the previous state. > >>> > >>> Why is that important? > >> > >> It's the difference between saying "this is the state of the resource" > and > >> "apply this to the resource to create a new state".. > >> > >> or, Sn=Mn (state = message) vs -> given the time t, a previous state > Sn-1, > >> and a message Mn we process Mn,Sn-1,t with a set of Rules in order to > >> conclude Sn (the state of our resource) > >> > >> So, PUT and DELETE are the first case, PATCH and POST are the second. > >> > >> Perhaps more easily said, PUT replaces the previous state, with no > >> consideration for it. > > > > Right, or Sn could be a function of the message: Sn = f(Mn) > > If the state is a function of the message that depends on the previous > > state, e.g. f(Mn) = Mn+Sn-1 then the operation is not idempotent as > > this generally requires that if Mi = Mj then Si=Sj. > > So if the state is a function of the previous state, you can't repeat the > > operation an arbitrary number of times (e.g. retries on response timeout) > > and know that each invocation leaves the resource in the same state. > > A partial PUT could leave just a specific portion of the resource in > the same state. Yes, the resultant overall state of the resource would > depend on previous state but isn't the partial PUT request idempotent > in its intent? > > I think I understand PUT as per it's specification, the question was > aimed to understand why it needs to be defined that way in the first > place. 
> > Put another way; What sort of mechanisms rely on partial PUT being > prevented? > > Cheers, > Mike > OK, I understand your question a bit better now. To me a key issue is that you seem to be addressing a specific sub-resource (the part of the resource updated by the PUT) but this addressing is not done in the URI. I'd have to assume you are using a portion of the body (implicitly or explicitly) to address the piece of the resource that gets updated. If you take the example to its extreme, you could just execute all PUT operations on the "/" resource and identify the sub-path in the body. I see a few negatives (in addition to Jan's points):
- you are hurting visibility by moving addressing out of the URI
- you can't use etags and conditional requests to control updates to the specific portions you are updating, only to the "parent" resource
- while things seem fine from the perspective of repeating a single operation multiple times, you don't have the same properties for non-conditional interleaved PUTs from multiple writers. Here, the state of the resource is not the state specified by the last successful PUT (as is usually the case). That is quite a significant difference in some apps, I would think.
Regards, Andrew
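Andrew's point about etags can be sketched with a toy conditional-PUT resource (the class is hypothetical; the If-Match/412 behaviour follows HTTP's conditional-request model): a stale writer's PUT is refused rather than silently clobbering a concurrent update.

```python
# Toy resource supporting conditional PUT via an entity tag.

class Resource:
    def __init__(self):
        self.representation = None
        self.etag = "0"

    def put(self, body, if_match=None):
        # If-Match: only apply the PUT against the etag the client last saw.
        if if_match is not None and if_match != self.etag:
            return 412  # Precondition Failed: someone else updated first
        self.representation = body
        self.etag = str(int(self.etag) + 1)
        return 200

r = Resource()
r.put({"v": 1})                              # etag becomes "1"
assert r.put({"v": 2}, if_match="1") == 200  # writer A succeeds
assert r.put({"v": 3}, if_match="1") == 412  # writer B's stale PUT refused
assert r.representation == {"v": 2}
```

With partial PUTs addressed in the body rather than the URI, there is no per-portion etag to condition on, which is the gap Andrew identifies.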
Interesting.
Jan also agrees.
I do too. There is a need, and there are ways people work around it.
Still, I'm not totally convinced.
Granted, PUT and PATCH are totally different things, no discussion there. The distinction shouldn't even need to be explained in the PATCH description, since PUT is a creational operation. Still, I guess it is there due to the widespread use of PUT as an update operation.
At times, PATCH looks to me like POST but with an update intention. Probably the POST description, by enumerating its three uses, narrows the POST semantics a little, although it is kept open.
I mean, the payload in POST is not necessarily a (sub)resource representation. It is a payload sent to a resource, which will act accordingly. PATCH does the same but denotes an intention of changing the resource in a predictable way.
Not sure if my feeling is wrong or not, but PATCH somehow transfers responsibility to the client for the particular modifications to a resource, while POST keeps that in the server. With PATCH, the client should be aware of and careful with what it is requesting; with POST, the server is the one in control (or should be). With PATCH, if not used carefully, the client may need to know much more than desirable about the resource structure.
About visibility, and about the update intention: I guess it is good to make it clear on the wire that we are trying to update something. But I still think we are trying to force CRUD into HTTP, and the update may be better performed silently by the server based on posted data, rather than directly commanded by the client.
I know, blurred line.
William Martinez Pomares
--- In rest-discuss@yahoogroups.com, mike amundsen <mamund@...> wrote:
>
> William:
>
> <snip>
> > So, PATCH may work, but I feel still it is not the full solution. At the end, as you say, we may need to go back and use POST.
> > What do you think?
> </snip>
>
> From my POV, there are two things to keep in mind here:
> - The affect the PATCH RFC has on how we view|use PUT
> - The affect the PATCH RFC has on how we view|use POST
>
> PATCH AND PUT
> One of the things that sets PATCH apart from PUT is that the RFC[1]
> describes the PATCH payload as "a set of instructions..."
> <quote>
> The difference between the PUT and PATCH requests is reflected in the
> way the server processes the enclosed entity to modify the resource
> identified by the Request-URI. In a PUT request, the enclosed entity
> is considered to be a modified version of the resource stored on the
> origin server, and the client is requesting that the stored version
> be replaced. With PATCH, however, the enclosed entity contains a set
> of instructions describing how a resource currently residing on the
> origin server should be modified to produce a new version. The PATCH
> method affects the resource identified by the Request-URI, and it
> also MAY have side effects on other resources; i.e., new resources
> may be created, or existing ones modified, by the application of a
> PATCH.
> </quote>
>
> There are no details on what that set of instructions looks like; that
> detail is left open for implementors to work out. It is also
> interesting to note that the PATCH RFC makes allowances for the
> possibility that the results of a PATCH request MAY be the creation of
> a new resource.
>
> PATCH AND POST
> In the past, I used the basic approach described in PATCH ( a set of
> instructions delineated by a media type) but used the existing POST
> method to complete the task. Usually that meant I minted a URI for
> handling "change instructions" (e.g. /my-customers/1;patch or
> /my-customers/1/patch/, or /patches/my-customer/1, etc.).
>
> Now, with this new method, I don't need to mint a new URI to handle
> "change instructions." I only need to tell clients that a new method
> (PATCH) is valid for an existing resource (e.g. /my-customers/1) and
> that any execution of PATCH against a resource needs to use the proper
> media-type (e.g. application/vnd.amundsen.patch, etc.).
>
> To me this is a major improvement in both the visibility and accuracy
> of my HTTP interactions. No more overloading POST using a special
> "patch URI." My documentation can be clearer and my client can learn
> the details of a target media type and apply that to multiple
> resources safely.
>
> That's my viewpoint, anyway.
>
> [1] http://tools.ietf.org/html/rfc5789
>
> mca
> http://amundsen.com/blog/
> http://twitter.com@mamund
> http://mamund.com/foaf.rdf#me
>
>
> #RESTFest 2010
> http://rest-fest.googlecode.com
>
>
>
>
> On Wed, Nov 10, 2010 at 11:32, William Martinez Pomares
> <wmartinez@...> wrote:
> > Hi Mike!
> > Sorry, I'm posting in this discussion although I prefer to avoid HTTP related ones. Not my level of detail.
> >
> > Still, this is interesting.
> > I dislike a little bit PATCH. There are several reasons, these are just a couple:
> >
> > 1. One of important part of PATCH is the payload, as it defines the changes. And that payload is not standardized.
> >
> > 2. As Payload is important, it makes the resource update much more representational oriented. That is, like patching a source code file. Also, it makes the resource expose its attributes, avoiding data hiding
> >
> > 3. Although it may be generic, adjusting the alpha channel of an image may not be as intuitive. In this case, the PATCH Payload should request the change of the alpha channel. Metadata?
> >
> > 4. Not all resources are just a bunch of fields, some can be more complex and the update should be done by the server based on certain conditions or requests.
> >
> > So, PATCH may work, but I feel still it is not the full solution. At the end, as you say, we may need to go back and use POST.
> > What do you think?
> >
> > William Martinez Pomares.
> > --- In rest-discuss@yahoogroups.com, mike amundsen <mamund@> wrote:
> >>
> >> Partial updates should be done using PATCH [1]
> >> Complete updates should be done using PUT.
> >> When those methods are not practical, POST can be used instead.
> >>
> >> Subbu's "RESTful Web Services Cookbook" has a very good chapter ("11.
> >> Miscellaneous Writes") [2] that includes more than one section
> >> covering strategies for partial updates, too.
> >>
> >> [1] http://tools.ietf.org/html/rfc5789
> >> [2] http://bit.ly/bRCwGj
> >>
> >> mca
> >> http://amundsen.com/blog/
> >> http://twitter.com@mamund
> >> http://mamund.com/foaf.rdf#me
> >>
> >>
> >> #RESTFest 2010
> >> http://rest-fest.googlecode.com
> >>
<snip>
At some moment, PATCH looks to me like POST but with an update
intention. Probably the POST description, by enumerating the 3 uses,
reduces a little bit the POST semantic, although it is kept open.
I mean, the payload in POST is not necessarily a (sub)resource
representation. It is a payload sent to a resource, who will act
accordingly. PATCH does the same but denoting an intention of changing
the resource in a predictable way.
</snip>
Well, from my POV, PATCH changes the semantics of the "write" by
describing a representation of "change instructions" rather than a
representation of the resource (as in PUT). This is, IMO, the key
value of PATCH. It's not about "partial"; it's not about "create" or
"update." Instead it's about "change instructions."
I think it's also important to keep in mind the change instructions
might be sent to a URI that represents a composite resource
(server-side mashup) or a resource that represents a list of other
resources; not just a "single item" URI.
This is decidedly not CRUD.
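A possible illustration of change instructions aimed at a list resource (the instruction vocabulary here is invented, not from RFC 5789): a single PATCH document sent to the collection URI can modify several members at once, which is what makes this decidedly not row-at-a-time CRUD.

```python
# Toy collection resource, keyed by member id.
customers = {
    "1": {"status": "silver"},
    "2": {"status": "silver"},
    "3": {"status": "gold"},
}

# Hypothetical change instructions PATCHed to the collection URI,
# e.g. /my-customers/: "set status to gold wherever status is silver".
instructions = [
    {"op": "set", "where": {"status": "silver"},
     "field": "status", "value": "gold"},
]

def apply_patch(collection, instructions):
    # Apply each instruction to every member matching its "where" clause.
    for ins in instructions:
        for member in collection.values():
            if all(member.get(k) == v for k, v in ins["where"].items()):
                member[ins["field"]] = ins["value"]

apply_patch(customers, instructions)
assert all(c["status"] == "gold" for c in customers.values())
```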
<snip>
Not sure if my feeling is wrong or not, but PATCH transfers, somehow,
responsibility to client about the particular modifications to a
resource, while POST keeps that in the server. With PATCH, client
should be aware and careful with what it is requesting, with POST the
server is the one under control (or should be). With patch, if not
used carefully, we may need the client to know much more that
desirable about the resource structure.
</snip>
I understand your POV here. The assumption is that "sending the change
instructions..." means the client has some added level of power over
the server's acceptance of the document. I don't read that meaning
into the RFC and, myself, do not write this added power into my
implementations of PATCH.
Right now my implementations scan the document for "well-formedness"
and "validity" (yes, I am using XML right now) and, finally, do a
concurrency check (has someone else updated before this request?).
Once all that is done, the "change instructions" are reviewed for
internal consistency (are these change instructions logically sound?,
etc.). The first two checks are well within the client's knowledge
space (e.g. the client can know whether they will pass the test).
However, the last two steps (concurrency and logical soundness) are
outside the client's knowledge and are the responsibility of the
server. This is really the same as accepting a POST or PUT
representation; the server is responsible for concurrency checks and
for logical soundness.
So, at least in my implementations of PATCH so far, I am not granting
the client any additional power over the use of POST or PUT
representations.
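The four checks mca lists could be ordered as in this sketch (the function shape and the status codes are plausible choices of mine, not mandated by the RFC):

```python
# Sketch of a server-side PATCH pipeline: the first two checks are within
# the client's knowledge space; the last two belong to the server.

def handle_patch(doc_is_well_formed, doc_is_valid,
                 request_etag, current_etag, instructions_are_sound):
    # 1-2: well-formedness and validity of the patch document
    if not doc_is_well_formed or not doc_is_valid:
        return 400  # Bad Request
    # 3: concurrency check: has someone else updated before this request?
    if request_etag != current_etag:
        return 412  # Precondition Failed
    # 4: logical soundness of the change instructions is the server's call
    if not instructions_are_sound:
        return 422  # Unprocessable Entity
    return 200

assert handle_patch(True, True, "v5", "v5", True) == 200
assert handle_patch(True, True, "v4", "v5", True) == 412
assert handle_patch(False, True, "v5", "v5", True) == 400
```

Note this grants the client no extra power: the same concurrency and soundness checks would apply to a POST or PUT representation.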
mca
http://amundsen.com/blog/
http://twitter.com@mamund
http://mamund.com/foaf.rdf#me
#RESTFest 2010
http://rest-fest.googlecode.com
On Wed, Nov 10, 2010 at 13:51, William Martinez Pomares
<wmartinez@...> wrote:
> Interesting.
> Jan also agrees.
> I do too. There is a need and ways people overcome that need.
>
> Still, I'm not totally convinced.
>
> Granted, PUT and PATCH are totally different things, no discussion there. The distinction shouldn't even be there, explained in the PATCH description, as PUT is a creational operation. Still, I guess it is there due to the extended use of PUT as an Update operation.
>
> At some moment, PATCH looks to me like POST but with an update intention. Probably the POST description, by enumerating the 3 uses, reduces a little bit the POST semantic, although it is kept open.
> I mean, the payload in POST is not necessarily a (sub)resource representation. It is a payload sent to a resource, who will act accordingly. PATCH does the same but denoting an intention of changing the resource in a predictable way.
>
> Not sure if my feeling is wrong or not, but PATCH transfers, somehow, responsibility to client about the particular modifications to a resource, while POST keeps that in the server. With PATCH, client should be aware and careful with what it is requesting, with POST the server is the one under control (or should be). With patch, if not used carefully, we may need the client to know much more that desirable about the resource structure.
>
> About visibility, and about the update intention, I guess it is good to have clear in the line we are trying to update something. But, I still think we are trying to force CRUD into HTTP, and the update may be somehow better if performed silently by the server based on posted data, rather the directly commanded by the client.
>
> I know, blurred line.
>
> William Martinez Pomares
>
>
> --- In rest-discuss@yahoogroups.com, mike amundsen <mamund@...> wrote:
>>
>> William:
>>
>> <snip>
>> > So, PATCH may work, but I feel still it is not the full solution. At the end, as you say, we may need to go back and use POST.
>> > What do you think?
>> </snip>
>>
>> From my POV, there are two things to keep in mind here:
>> - The affect the PATCH RFC has on how we view|use PUT
>> - The affect the PATCH RFC has on how we view|use POST
>>
>> PATCH AND PUT
>> One of the things that sets PATCH apart from PUT is that the RFC[1]
>> describes the PATCH payload as "a set of instructions..."
>> <quote>
>> The difference between the PUT and PATCH requests is reflected in the
>> way the server processes the enclosed entity to modify the resource
>> identified by the Request-URI. In a PUT request, the enclosed entity
>> is considered to be a modified version of the resource stored on the
>> origin server, and the client is requesting that the stored version
>> be replaced. With PATCH, however, the enclosed entity contains a set
>> of instructions describing how a resource currently residing on the
>> origin server should be modified to produce a new version. The PATCH
>> method affects the resource identified by the Request-URI, and it
>> also MAY have side effects on other resources; i.e., new resources
>> may be created, or existing ones modified, by the application of a
>> PATCH.
>> </quote>
>>
>> There are no details on what that set of instructions looks like; that
>> detail is left open for implementors to work out. It is also
>> interesting to note that the PATCH RFC makes allowances for the
>> possibility that the results of a PATCH request MAY be the creation of
>> a new resource.
>>
>> PATCH AND POST
>> In the past, I used the basic approach described in PATCH ( a set of
>> instructions delineated by a media type) but used the existing POST
>> method to complete the task. Usually that meant I minted a URI for
>> handling "change instructions" (e.g. /my-customers/1;patch or
>> /my-customers/1/patch/, or /patches/my-customer/1, etc.).
>>
>> Now, with this new method, I don't need to mint a new URI to handle
>> "change instructions." I only need to tell clients that a new method
>> (PATCH) is valid for an existing resource (e.g. /my-customers/1) and
>> that any execution of PATCH against a resource needs to use the proper
>> media-type (e.g. application/vnd.amundsen.patch, etc.).
>>
>> To me this is a major improvement in both the visibility and accuracy
>> of my HTTP interactions. No more overloading POST using a special
>> "patch URI." My documentation can be clearer and my client can learn
>> the details of a target media type and apply that to multiple
>> resources safely.
>>
>> That's my viewpoint, anyway.
>>
>> [1] http://tools.ietf.org/html/rfc5789
>>
>> mca
>> http://amundsen.com/blog/
>> http://twitter.com@mamund
>> http://mamund.com/foaf.rdf#me
>>
>>
>> #RESTFest 2010
>> http://rest-fest.googlecode.com
>>
>>
>>
>>
>> On Wed, Nov 10, 2010 at 11:32, William Martinez Pomares
>> <wmartinez@...> wrote:
>> > Hi Mike!
>> > Sorry, I'm posting in this discussion although I prefer to avoid HTTP related ones. Not my level of detail.
>> >
>> > Still, this is interesting.
>> > I dislike a little bit PATCH. There are several reasons, these are just a couple:
>> >
>> > 1. One of important part of PATCH is the payload, as it defines the changes. And that payload is not standardized.
>> >
>> > 2. As Payload is important, it makes the resource update much more representational oriented. That is, like patching a source code file. Also, it makes the resource expose its attributes, avoiding data hiding
>> >
>> > 3. Although it may be generic, adjusting the alpha channel of an image may not be as intuitive. In this case, the PATCH Payload should request the change of the alpha channel. Metadata?
>> >
>> > 4. Not all resources are just a bunch of fields, some can be more complex and the update should be done by the server based on certain conditions or requests.
>> >
>> > So, PATCH may work, but I feel still it is not the full solution. At the end, as you say, we may need to go back and use POST.
>> > What do you think?
>> >
>> > William Martinez Pomares.
>> > --- In rest-discuss@yahoogroups.com, mike amundsen <mamund@> wrote:
>> >>
>> >> Partial updates should be done using PATCH [1]
>> >> Complete updates should be done using PUT.
>> >> When those methods are not practical, POST can be used instead.
>> >>
>> >> Subbu's "RESTful Web Services Cookbook" has a very good chapter ("11.
>> >> Miscellaneous Writes") [2] that includes more than one section
>> >> covering strategies for partial updates, too.
>> >>
>> >> [1] http://tools.ietf.org/html/rfc5789
>> >> [2] http://bit.ly/bRCwGj
>> >>
>> >> mca
>> >> http://amundsen.com/blog/
>> >> http://twitter.com/mamund
>> >> http://mamund.com/foaf.rdf#me
>> >>
>> >>
>> >> #RESTFest 2010
>> >> http://rest-fest.googlecode.com
>> >>
>> >
>> >
>> >
>> >
>> > ------------------------------------
>> >
>> > Yahoo! Groups Links
>> >
>> >
>> >
>> >
>>
>
>
>
>
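William's point 1 above — that RFC 5789 standardizes the PATCH method but not the patch payload itself — is easiest to see with a concrete patch format. Below is a minimal sketch of one possible merge-style patch applier for a dict-shaped resource: absent members are untouched, an explicit null deletes a member, nested objects recurse. The format and the function name are illustrative assumptions, not anything defined by RFC 5789.

```python
def apply_merge_patch(resource, patch):
    """Apply a merge-style patch document (illustrative format):
    keys absent from the patch are left alone, a value of None
    deletes a member, and nested dicts are merged recursively."""
    result = dict(resource)
    for key, value in patch.items():
        if value is None:
            result.pop(key, None)
        elif isinstance(value, dict) and isinstance(result.get(key), dict):
            result[key] = apply_merge_patch(result[key], value)
        else:
            result[key] = value
    return result

# Partial update: only 'qty' and 'ship.zip' travel over the wire.
order = {"status": "open", "qty": 3, "ship": {"city": "Oslo", "zip": "0150"}}
patched = apply_merge_patch(order, {"qty": 5, "ship": {"zip": "0151"}})
```

The point being: two servers advertising "PATCH supported" could still disagree completely on what a payload like this means, which is exactly the standardization gap William describes.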
Mark Baker wrote:
> On Wed, Nov 10, 2010 at 4:08 AM, Julian Reschke <julian.reschke@...> wrote:
>> Right. The important part is that the state of the resource after the
>> PUT does only depend on the payload, not the previous state.
>
> The server is free to use the state represented in the PUT request *as
> well* as the current state if it wants to. For example, the resource
> may include a counter that tracks how many PUTs it received. The
> important part from a REST perspective is that the meaning of the PUT
> message is independent of the state of the resource.

Hmm, is that not independent of the state of the resource though? Surely
that's just another resource which tracks the state of "this" one?

Similarly when it comes to defining additional resources: for instance, one
might PUT to http:// which the server also exposes on https://, or one
might PUT to /latest whereupon the server also defines an additional
archived version at another URI.

Best,

Nathan
On Wed, Nov 10, 2010 at 6:34 PM, Andrew Wahbe <andrew.wahbe@...> wrote:
>
> On Wed, Nov 10, 2010 at 11:35 AM, Mike Kelly <mike@...> wrote:
>>
>> On Wed, Nov 10, 2010 at 3:08 PM, Andrew Wahbe <andrew.wahbe@gmail.com> wrote:
>> > On Wed, Nov 10, 2010 at 7:37 AM, Nathan <nathan@webr3.org> wrote:
>> >>
>> >> Mike Kelly wrote:
>> >>>
>> >>> On Wed, Nov 10, 2010 at 9:08 AM, Julian Reschke <julian.reschke@...> wrote:
>> >>>>
>> >>>> Right. The important part is that the state of the resource after the
>> >>>> PUT does only depend on the payload, not the previous state.
>> >>>
>> >>> Why is that important?
>> >>
>> >> It's the difference between saying "this is the state of the resource" and
>> >> "apply this to the resource to create a new state"..
>> >>
>> >> or, Sn=Mn (state = message) vs -> given the time t, a previous state Sn-1,
>> >> and a message Mn we process Mn,Sn-1,t with a set of Rules in order to
>> >> conclude Sn (the state of our resource)
>> >>
>> >> So, PUT and DELETE are the first case, PATCH and POST are the second.
>> >>
>> >> Perhaps more easily said, PUT replaces the previous state, with no
>> >> consideration for it.
>> >
>> > Right, or Sn could be a function of the message: Sn = f(Mn)
>> > If the state is a function of the message that depends on the previous
>> > state, e.g. f(Mn) = Mn+Sn-1, then the operation is not idempotent, as
>> > this generally requires that if Mi = Mj then Si = Sj.
>> > So if the state is a function of the previous state, you can't repeat the
>> > operation an arbitrary number of times (e.g. retries on response timeout)
>> > and know that each invocation leaves the resource in the same state.
>>
>> A partial PUT could leave just a specific portion of the resource in
>> the same state. Yes, the resultant overall state of the resource would
>> depend on previous state, but isn't the partial PUT request idempotent
>> in its intent?
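Andrew's point — that Sn = f(Mn) is safely repeatable while a state-dependent f(Mn, Sn-1) generally is not — can be shown in a few lines. These two state functions are hypothetical illustrations, not any particular server's behaviour:

```python
def put_replace(state, message):
    # PUT-style: the new state depends only on the message (Sn = f(Mn))
    return message

def post_append(state, message):
    # POST/PATCH-style: the new state depends on the prior state
    # (Sn = f(Mn, Sn-1)), so identical messages yield different states
    return state + [message]

# Retrying the PUT-style message leaves the resource unchanged...
s1 = put_replace([], ["a"])
s2 = put_replace(s1, ["a"])

# ...while retrying the state-dependent message does not.
t1 = post_append([], "a")
t2 = post_append(t1, "a")
```

This is exactly why a client can blindly re-send a timed-out PUT but must not blindly re-send a timed-out POST.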
>>
>> I think I understand PUT as per its specification; the question was
>> aimed at understanding why it needs to be defined that way in the first
>> place.
>>
>> Put another way: what sort of mechanisms rely on partial PUT being
>> prevented?
>>
>> Cheers,
>> Mike
>
> Ok, I understand your question a bit better now.
> To me a key issue is that you seem to be addressing a specific sub-resource
> (the part of the resource updated by the PUT) but this addressing is not
> done in the URI. I'd have to assume you are using a portion of the body
> (implicitly or explicitly) to address the piece of the resource that gets
> updated.
> If you take the example to its extreme you could just execute all PUT
> operations on the "/" resource and identify the sub-path in the body.
> I see a few negatives (in addition to Jan's points):
> - you are hurting visibility by moving addressing out of the URI

This stuff about self-descriptiveness and visibility is fine, but not
really convincing unless there are some practical examples of how the
greater visibility of complete PUT can actually be used for layering,
e.g. an example of a cache that uses the body of a successful PUT request
to respond to subsequent GET requests.

Cheers,
Mike
On Nov 10, 2010, at 8:30 PM, Mike Kelly wrote:
>
> This stuff about self-descriptiveness and visibility is fine, but not
> really convincing unless there's some practical examples of how the
> greater visibility of complete PUT can actually be used for layering.

Caches can invalidate upon successful responses to PATCH and PUT, not so
for POST. (POST's visibility is zero.)

> e.g. an example of a cache that uses the body of a successful PUT
> request to respond to subsequent GET requests.

No, that is not allowed by the definition of PUT. Caches can invalidate
for the request URI though.

(You can use a Content-Location header to enable the PUT response to be
cacheable, though. IIRC)

Jan

> Cheers,
> Mike
Hi Julian

> Right. The important part is that the state of the resource after the
> PUT does only depend on the payload, not the previous state.
>
> Best regards, Julian

Pedantically: except when you use If-Match?

What is the state of clarifying PUT semantics in Bis?

Duncan
On Wed, Nov 10, 2010 at 1:44 PM, Duncan <rest-discuss@...> wrote:
>
> Pedantically: Except when you use If-Match?
>
> What is the state of clarifying PUT semantics in Bis?

Well, currently the semantics are specifically punted on in the Bis. Here:

http://svn.tools.ietf.org/svn/wg/httpbis/draft-ietf-httpbis/04/p2-semantics.html#PUT

it says: "HTTP/1.1 does not define how a PUT method affects the state of
an origin server."

Which makes sense to me.

Regards,

Will Hartung
(willh@...)
On Wed, Nov 10, 2010 at 9:21 PM, Jan Algermissen <algermissen1971@...> wrote:
>
> On Nov 10, 2010, at 8:30 PM, Mike Kelly wrote:
>
>> This stuff about self-descriptiveness and visibility is fine, but not
>> really convincing unless there's some practical examples of how the
>> greater visibility of complete PUT can actually be used for layering.
>
> Caches can invalidate upon successful responses to PATCH and PUT, not so for POST. (POST's visibility is zero).

That's actually not true:

http://tools.ietf.org/html/draft-ietf-httpbis-p6-cache-12#section-2.5

Anyway, the same invalidation behavior would occur for successful
partial PUT requests, so how is this relevant to the partial vs.
complete PUT discussion we're having?

>> e.g. an example of a cache that uses the body of a successful PUT
>> request to respond to subsequent GET requests.
>
> NO, that is not allowed by the definition of PUT. Caches can invalidate for the request URI though.
>
> (You can use a Content-Location header to enable the PUT response to be cacheable, though. IIRC)

I was proposing that as a potential example of how greater visibility
of a 'complete PUT' _request_ body might be used for layering, which
would support your initial point about self-descriptiveness. The fact
that it is not allowed by definition supports my position that preventing
partial PUT actually achieves nothing, since the extra visibility of
enforcing complete PUT across the web is not useful in practice.

Cheers,
Mike
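The invalidation rule Mike points at — a cache invalidates the request URI on any non-error response to an unsafe method, POST included — is simple enough to sketch. This toy cache is an illustration of that rule only; the class and method names are invented, and a real cache does far more (freshness, validators, Vary, etc.):

```python
class TinyCache:
    """Toy illustration of cache invalidation on unsafe methods.
    Not a real HTTP cache -- names and structure are invented."""

    def __init__(self):
        self.store = {}  # uri -> cached representation

    def handle(self, method, uri, response_status, body=None):
        if method == "GET":
            if response_status == 200:
                self.store[uri] = body       # cache the representation
        elif response_status < 400:
            # Successful PUT, PATCH, POST, DELETE... all invalidate the
            # URI -- the cache never stores the *request* body of a PUT.
            self.store.pop(uri, None)
        return self.store.get(uri)

cache = TinyCache()
cache.handle("GET", "/x", 200, "v1")   # representation cached
cache.handle("POST", "/x", 200)        # invalidated -- POST too
```

Note the cache treats complete and partial PUT identically here, which is Mike's point: the invalidation machinery gains nothing from the PUT body being complete.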
On 11/10/2010 07:34 PM, Andrew Wahbe wrote:
>
>
>
>
> On Wed, Nov 10, 2010 at 11:35 AM, Mike Kelly <mike@...> wrote:
>
> On Wed, Nov 10, 2010 at 3:08 PM, Andrew Wahbe
> <andrew.wahbe@...> wrote:
> > On Wed, Nov 10, 2010 at 7:37 AM, Nathan <nathan@...> wrote:
> >>
> >> Mike Kelly wrote:
> >>>
> >>> On Wed, Nov 10, 2010 at 9:08 AM, Julian Reschke
> <julian.reschke@...>
> >>> wrote:
> >>>>
> >>>> Right. The important part is that the state of the resource
> after the
> >>>> PUT does only depend on the payload, not the previous state.
> >>>
> >>> Why is that important?
> >>
> >> It's the difference between saying "this is the state of the
> resource" and
> >> "apply this to the resource to create a new state"..
> >>
> >> or, Sn=Mn (state = message) vs -> given the time t, a previous
> state Sn-1,
> >> and a message Mn we process Mn,Sn-1,t with a set of Rules in
> order to
> >> conclude Sn (the state of our resource)
> >>
> >> So, PUT and DELETE are the first case, PATCH and POST are the
> second.
> >>
> >> Perhaps more easily said, PUT replaces the previous state, with no
> >> consideration for it.
> >
> > Right, or Sn could be a function of the message: Sn = f(Mn)
> > If the state is a function of the message that depends on the
> previous
> > state, e.g. f(Mn) = Mn+Sn-1 then the operation is not idempotent as
> > this generally requires that if Mi = Mj then Si=Sj.
> > So if the state is a function of the previous state, you can't
> repeat the
> > operation an arbitrary number of times (e.g. retries on response
> timeout)
> > and know that each invocation leaves the resource in the same state.
>
> A partial PUT could leave just a specific portion of the resource in
> the same state. Yes, the resultant overall state of the resource would
> depend on previous state but isn't the partial PUT request idempotent
> in its intent?
>
> I think I understand PUT as per its specification, the question was
> aimed to understand why it needs to be defined that way in the first
> place.
>
> Put another way; What sort of mechanisms rely on partial PUT being
> prevented?
>
> Cheers,
> Mike
>
>
> Ok, I understand your question a bit better now.
> To me a key issue is that you seem to be addressing a specific
> sub-resource (the part of the resource updated by the PUT) but this
> addressing is not done in the URI. I'd have to assume you are using a
> portion of the body (implicitly or explicitly) to address the piece of
> the resource that gets updated.
I would think not a portion of the body but the entire body: the
contract that this server takes in supporting partial update is that it
offers the client the opportunity to only transfer that portion of the
resource-state it wants to change (or very pragmatically only the
portion it cares about)
- or even "knows" about in case of security filtering
- or is 'capable' of updating, in case it chooses to use some
format/media-type that is not capable of transferring all subtleties of the
resource's members
So really: I don't see a hidden 'address', only the agreement that the
representation in the body can be 'sparse' (after all: the message-body
for any method just holds a representation, never the resource itself :) )
> If you take the example to its extreme you could just execute all PUT
> operations on the "/" resource and identify the sub-path in the body.
>
very extreme, and nobody is suggesting that, right?
> I see a few negatives (in addition to Jan's points):
> - you are hurting visibility by moving addressing out of the URI
don't agree: there is no sub-resource IMHO, just a sparse representation
of the resource state
> - you can't use etags and conditional requests to control updates these
> specific portions you are updating, just to the "parent" resource
on the contrary: in cases where this is important, the usage of etags
and conditionals would allow one to guarantee that the partial PUT is only
applied if the resource hasn't been changed yet.
so in fact the conditional part in this case completes the sparse-ness
of the passed sparse-representation
> - while things seem fine from the perspective of repeating a single
> operation multiple times, you don't have the same properties for
> non-conditional interleaved PUTs from multiple writers. Here, the state
as mentioned above: etags and conditionals can make it do what you want
here, I think. In the end you get some semantics about the parts not in
the put-representation saying: "IF you're still in the same state I just
GOT, then I'm sure that changing only the fields I'm passing you will
bring the resource into the expected state"
and for apps that don't need these checks the parts not mentioned in the
put-body can be considered as "don't care"
(update-counters, update-timestamps would be typical examples here, but
on the correct level of abstraction system-data, meta-data all just is
data, no?)
All in all: the server still has the right to interpret, partially
ignore, additionally decorate whatever you've PUT to update the
'resource state' (it might even have an effect on resources not
addressed in this put)
> of the resource is not the state specified by the last successful PUT
> (as is usually the case). That is quite a significant difference in some
> apps I would think.
>
I agree. However, you correctly mention apps, not user-agents (or
intermediaries): those should not make any assumptions about the result of
GET just because they have just completed a successful PUT, right?
Looking at it from that angle I don't think the partial put violates the
HTTP contracts, and also keeps playing nice with the REST principles.
Still, the length of this debate surely shows there is a fair amount of
'uncommon' about this, and we might argue that we're violating some
"principle of least surprise". Reversing the argument, it would only
show we're maybe getting into some habits that are just not generic
enough to cover all the subtleties of all the needs of all the apps out
there. Time to widen our horizon?
Being uncommon, every app encountering and solving this need for partial
updates should document its contracts, behaviour and chosen approach well.
Apart from that advice, I would be so liberal as to leave it up to the
specific app designers to decide what is the 'least surprise' among the
various solutions one can take. Some considerations along that path.
Oh, I'm not aiming to be complete or unbiased ;)
1/ PATCH
+ separate semantics and clear indication of partial update behaviour
- method itself is a surprise (not in the standard set of methods)
- dev-tool pragmatics: it is also harder to find test and framework
goodies to help you with it
- you might argue that there are enough other solutions not to resort
to these exotics (or be backed by mighty Google in doing it still)
2/ PUT
+ obviously communicates the idempotent properties and the fact that
you are updating
+ clear resource addressing (assuming we agree there are no subresources)
+ optionally reuse conditionals
+ obvious reuse of what is common for clients that (for whatever
reason) only GET partial representations in the first place
- requires some representation description
* explaining the sparseness usage
* providing some mechanism to actively delete properties/members
+/- might push your users into learning about etag/conditionals
- gets you into long winding debates like this one
3/ POST
+ clear warning that you are in 'make no assumptions zone': read the
docs carefully
- not clear from a mile away that it really is about a simple idempotent
PUT though
- surprise for clients that don't know about 'other resource
properties than the ones they received': why no PUT in this case?
+/- safe bet, avoids the debate
- survive the mockery of the REST zealots for not supporting PUT :)
just my 2c,
-marc
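Marc's option 2 above — a "normal PUT holding only a partial representation", guarded by etags and conditionals so that it only applies if the resource hasn't changed — could look like this on the server side. This is a sketch under stated assumptions: the etag scheme (a hash of the canonical state), the function names, and the flat-dict merge are all invented for illustration.

```python
import hashlib
import json

def etag_of(state):
    """Illustrative strong etag: hash of the canonical serialization."""
    canonical = json.dumps(state, sort_keys=True).encode()
    return hashlib.sha1(canonical).hexdigest()

def conditional_partial_put(state, sparse_body, if_match):
    """Apply a sparse representation only if the If-Match etag still
    matches the current state. Returns (status_code, new_state)."""
    if if_match != etag_of(state):
        return 412, state            # 412 Precondition Failed
    merged = dict(state)
    merged.update(sparse_body)       # unmentioned members are untouched
    return 200, merged

doc = {"title": "draft", "body": "..."}
tag = etag_of(doc)
status, doc2 = conditional_partial_put(doc, {"title": "final"}, tag)
status2, _ = conditional_partial_put(doc, {"title": "other"}, "stale-tag")
```

The conditional is what "completes" the sparseness: the client is effectively saying "IF you're still in the state I GOT, then changing only these fields brings you to the state I expect."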
On Wed, Nov 10, 2010 at 2:23 PM, Nathan <nathan@...> wrote:
> Mark Baker wrote:
>>
>> On Wed, Nov 10, 2010 at 4:08 AM, Julian Reschke <julian.reschke@...> wrote:
>>>
>>> Right. The important part is that the state of the resource after the
>>> PUT does only depend on the payload, not the previous state.
>>
>> The server is free to use the state represented in the PUT request *as
>> well* as the current state if it wants to. For example, the resource
>> may include a counter that tracks how many PUTs it received. The
>> important part from a REST perspective is that the meaning of the PUT
>> message is independent of the state of the resource.
>
> Hmm, is that not independent of the state of the resource though?
> Surely that's just another resource which tracks the state of "this" one?

You could do it that way too, but then the resource to which you did the
PUT would still need to maintain the link to this other resource, and that
wouldn't be overwritten by the PUT either.

> Similarly when it comes to defining additional resources, for instance one
> might PUT to http:// which also exposes on https://, or one might PUT to
> /latest whereupon the server also defines an additional archived version at
> another URI.

Ditto.

Mark.
On Thu, Nov 11, 2010 at 4:45 AM, Mike Kelly <mike@...> wrote:
> On Wed, Nov 10, 2010 at 9:21 PM, Jan Algermissen <algermissen1971@...> wrote:
> >
> > On Nov 10, 2010, at 8:30 PM, Mike Kelly wrote:
> >
> >> This stuff about self-descriptiveness and visibility is fine, but not
> >> really convincing unless there's some practical examples of how the
> >> greater visibility of complete PUT can actually be used for layering.
> >
> > Caches can invalidate upon successful responses to PATCH and PUT, not so for POST. (POST's visibility is zero).
>
> That's actually not true
>
> http://tools.ietf.org/html/draft-ietf-httpbis-p6-cache-12#section-2.5
>
> Anyway; the same invalidation behavior would occur for successful
> partial PUT requests, so how is this relevant to the partial vs.
> complete PUT discussion we're having?
>
> >> e.g. an example of a cache that uses the body of a successful PUT
> >> request to respond to subsequent GET requests.
> >
> > NO, that is not allowed by the definition of PUT. Caches can invalidate for the request URI though.
> >
> > (You can use a Content-Location header to enable the PUT response to be cacheable, though. IIRC)
>
> I was proposing that as a potential example of how greater visibility
> of a 'complete PUT' _request_ body might be used for layering, and
> would support your inital point about self-descriptiveness. The fact
> it is not allowed by definition supports my position that preventing
> partial PUT actually achieves nothing, since the extra visibility of
> enforcing complete PUT across the web is not useful in practice.
>
> Cheers,
> Mike

I think the trick is that it's not just a "partial PUT" -- a non-idempotent
partial PUT is not allowed, right? (Just want to make sure we are not
debating that.) So that leaves "idempotent partial PUT", correct? I think
it is worth considering what "idempotent partial PUTs" are -- i.e. it is a
subset of all partial PUTs.
I think that subset might have some specific properties. Unless I'm missing
something, a PUT of this nature always operates on a specific reference-able
subdocument (there may be better terminology for this out there -- if so,
please point it out!) -- by this I mean that the request body of the PUT
provides new contents for a specific subset of resource state that can
always be identified as the resource's state changes. This subset must be
commonly understood by both the client and server or requests couldn't be
properly processed.

How is the subset identified? As this is "partial", it is not identified by
the URI -- the subset must be identified by the request body (explicitly or
implicitly), right? I think this really means you have a resource that has
no URI. Does REST disallow this? No. Does the world fall apart? No. But you
can't take advantage of some good features of REST/HTTP -- you can't GET
the resource (just the parent document), you can't use etags on that
resource (just the parent document), etc. If you don't care about these
things, then you don't, and I suppose this doesn't disallow the idempotent
partial PUT. These are just advantages.

I really think that the guarantees provided by PUT change dramatically with
a partial PUT though. As stated before, the state of the resource is no
longer determined by the last successful PUT. What does this break? Hard to
say -- I'm not aware of any intermediary infrastructure out there that
really leverages the properties of PUT/DELETE at all, so it's hard for me
to point out how changing the properties of PUT breaks existing
infrastructure. (This is also why I tend not to bother with PUT/DELETE too
much -- I think they're a bit over-emphasized in REST circles.) But I can
make something up: with "complete PUT" you could implement a generic HTTP
gateway that caches modification requests (PUT/DELETE) to origin servers
when they are down, returning a 202 to the client.
If a server was down you just need to hold on to the last PUT/DELETE to any
URI and then re-issue the last PUT when the server comes back up. With
partial PUT you'd have to hold on to the full series of requests -- but of
course, because of the lack of visibility, the gateway doesn't know that
these are partial PUTs and only holds on to the last one -- things then
break. Maybe this gateway is a dumb idea -- dunno, I haven't thought about
it enough. But perhaps some variation of this example makes sense.

The problem is that if you violate the guarantees of the protocol, things
that work today might break tomorrow when some new infrastructure is added
to your system. The fact that you aren't sure how the properties guaranteed
by the protocol are or will be used in practice should make you more, not
less, worried about violating them.

Regards,
Andrew
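Andrew's hypothetical store-and-forward gateway can be sketched in a few lines. Everything here is invented for illustration (the class, the 202 convention, the toy origin): the point is that with *complete* PUT semantics, keeping only the last write per URI and replaying it is correct, whereas partial PUTs would be silently lost by the same logic.

```python
class ReplayGateway:
    """Toy gateway from Andrew's example: while the origin is down,
    remember only the *last* PUT/DELETE per URI and replay it when the
    origin recovers. Safe only because complete PUT state depends
    solely on the last message."""

    def __init__(self):
        self.pending = {}  # uri -> (method, body)

    def accept(self, method, uri, body=None):
        self.pending[uri] = (method, body)  # later writes overwrite earlier
        return 202                          # 202 Accepted

    def replay(self, origin):
        for uri, (method, body) in self.pending.items():
            origin(method, uri, body)
        self.pending.clear()

# A trivial stand-in for the recovered origin server.
applied = {}
def origin(method, uri, body):
    if method == "PUT":
        applied[uri] = body
    elif method == "DELETE":
        applied.pop(uri, None)

gw = ReplayGateway()
gw.accept("PUT", "/doc", {"a": 1})
gw.accept("PUT", "/doc", {"a": 2, "b": 3})  # complete PUT: last one suffices
gw.replay(origin)
```

Had those two requests been partial PUTs ("set a", then "set b"), replaying only the last one would lose the update to "a" -- which is the breakage Andrew describes.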
--- In rest-discuss@yahoogroups.com, Marc Portier <mpo@...> wrote:
>
> On 11/10/2010 07:34 PM, Andrew Wahbe wrote:
> >
> > Ok, I understand your question a bit better now.
> > To me a key issue is that you seem to be addressing a specific
> > sub-resource (the part of the resource updated by the PUT) but this
> > addressing is not done in the URI. I'd have to assume you are using a
> > portion of the body (implicitly or explicitly) to address the piece of
> > the resource that gets updated.
>
> I would think not a portion of the body but the entire body: the
> contract that this server takes in supporting partial update is that it
> offers the client the opportunity to only transfer that portion of the
> resource-state it wants to change (or very pragmatically only the
> portion it cares about)
>
> - or even "knows" about in case of security filtering
>
> - or is 'capable' of updating in case it chooses to use some
> format/media-type that is not capable of transferring all subtleties of the
> resource's members
>
> So really: I don't see a hidden 'address', only the agreement that the
> representation in the body can be 'sparse' (after all: the message-body
> for any method just holds a representation, never the resource itself :) )

Ok, but earlier in the thread I made a distinction between the server
filling in the rest of the details using the previous resource state and
doing so without. Mark Baker also extended things a bit, saying:
"The important part from a REST perspective is that the meaning of the PUT
message is independent of the state of the resource."

I *think* the examples you give fall into the stuff that's "ok" based on
this. In which case we're in agreement on the above.

The sticky part is the "but please *don't change* the other part(s)" that
is implicit in the partial PUTs I think Mike is referring to. I think that
leads you into trouble, as I've described elsewhere in the thread.
> > If you take the example to its extreme you could just execute all PUT
> > operations on the "/" resource and identify the sub-path in the body.
>
> very extreme, and nobody is suggesting that, right?

Taking something to the extreme sometimes helps point out what is wrong,
especially when there is no well-defined line where "extreme" starts.

> > I see a few negatives (in addition to Jan's points):
> > - you are hurting visibility by moving addressing out of the URI
>
> don't agree: there is no sub-resource IMHO, just a sparse representation
> of the resource state
>
> > - you can't use etags and conditional requests to control updates to these
> > specific portions you are updating, just to the "parent" resource
>
> on the contrary: in cases where this is important, the usage of etags
> and conditionals would allow one to guarantee that the partial PUT is only
> applied if the resource wasn't changed yet.
>
> so in fact the conditional part in this case completes the sparse-ness
> of the passed sparse-representation

I think you are implying that it is ok for PUT to be non-idempotent or
partial only if accompanied by etags & conditional requests. I disagree.
The properties must hold without them. I touched on the advantages of this
earlier here:
http://tech.groups.yahoo.com/group/rest-discuss/message/16857

> > - while things seem fine from the perspective of repeating a single
> > operation multiple times, you don't have the same properties for
> > non-conditional interleaved PUTs from multiple writers. Here, the state
>
> as mentioned above: etags and conditionals can make it do what you want
> here I think, in the end you get some semantics about the parts not in
> the put-representation saying: "IF you're still in the same state I just
> GOT, then I'm sure that changing only the fields I'm passing you will
> bring the resource in the expected state"
>
> and for apps that don't need these checks the parts not mentioned in the
> put-body can be considered as "don't care"
>
> (update-counters, update-timestamps would be typical examples here, but
> on the correct level of abstraction system-data, meta-data all just is
> data, no?)
>
> All in all: the server still has the right to interpret, partially
> ignore, additionally decorate whatever you've PUT to update the
> 'resource state' (it might even have an effect on resources not
> addressed in this put)

Again, you are preventing the "normal" (non-conditional) PUT semantics of
"please update the state to this". Again, I'm fine if the parts you don't
specify are "don't care" parts -- the real problem, I think, arises when
you do care.

> > of the resource is not the state specified by the last successful PUT
> > (as is usually the case). That is quite a significant difference in some
> > apps I would think.
>
> I agree. However you correctly mention apps, not user-agents (or
> intermediaries): those should not make any assumptions on the result of
> GET just because they just completed a successful PUT, right?
>
> Looking at it from that angle I don't think the partial put violates the
> HTTP contracts, and also keeps playing nice with the REST principles.

I gave an example of potential problems with intermediaries here:
http://tech.groups.yahoo.com/group/rest-discuss/message/16880

The key problem is that idempotent & "not depending on previous state"
imply that the state is fully determined by the last PUT/DELETE.
(Here I'm assuming that the "don't care" bits "don't matter" in the same
way they don't for "safe" GETs -- i.e. the statement "The important
distinction here is that the user did not request the side-effects, so
therefore cannot be held accountable for them." has a corollary of "... the
user did not specify how to change the portion of state set by the server
based on prior state and therefore cannot be held accountable for it." Or
something like that. This allows the intermediary I described in the linked
message to work correctly.)

Again, the key distinction is between the client "not caring" about the
unspecified portion vs. an implicit request to leave it unchanged.

This discussion has raised the following question for me though: can the
use of an etag in a conditional request be considered as an implicit
description (by reference) of the portion of resource state not explicitly
described in the body? I.e. the message means: set the state as it was in
the version indicated by the etag, but also apply the changes described in
the body. If so, then perhaps we fall into alignment... I'm a little
concerned about the use of an etag for this though, as it isn't necessarily
clearly understood by all parties (not fully self-descriptive, but maybe
"self-descriptive enough"?). As the request fails if the current state
doesn't match the one referred to by the etag, it gets a bit fuzzy for
sure... I'm also not too fond of a request that is valid only if done
conditionally...

Andrew
I thought this might be of interest to the REST community. Here's the
abstract for Memento:

"The HTTP-based Memento framework bridges the present and past Web by
interlinking current resources with resources that encapsulate their past.
It facilitates obtaining representations of prior states of a resource,
available from archival resources in Web archives or version resources in
content management systems, by leveraging the resource's URI and a
preferred datetime. To this end, the framework introduces datetime
negotiation (a variation on content negotiation), and new Relation Types
for the HTTP Link header aimed at interlinking resources with their
archival/version resources. It also introduces an approach to discover and
serialize a list of resources known to a server, each of which provides
access to a representation of a prior state of a same resource."

I for one would certainly like to hear how Memento could be made more
RESTful. I'm curious in particular as to whether the draft is using the
term "resource" correctly (vs. "representation of a resource"). I'm also
curious as to how Memento deals with the issue of applying the hypermedia
constraint to a returned "historical" or "archival" representation. For
example, if I get back a historical representation of a Wikipedia web page,
should the representation contain links to other historical pages or to
the current pages? That is, if I retrieve the Wikipedia page for
mathematics from five years ago, should the link for "abstraction" in the
returned page link to the five-year-old resource for abstraction or the
current one?
-- Nick

Nick Gall
Phone: +1.781.608.5871
Twitter: ironick
AOL IM: Nicholas Gall
Yahoo IM: nick_gall_1117
MSN IM: (same as email)
Google Talk: (same as email)
Email: nick.gall AT-SIGN gmail DOT com
Weblog: http://ironick.typepad.com/ironick/

---------- Forwarded message ----------
From: Herbert van de Sompel <hvdsomp@...>
Date: Fri, Nov 12, 2010 at 12:51 PM
Subject: Memento Internet Draft
To: www-tag@...

Hi all,

I would like to announce the first version of the Memento (Time Travel for
the Web) Internet Draft:

(*) TXT version: http://www.ietf.org/id/draft-vandesompel-memento-00.txt
(*) HTML version: http://mementoweb.org/guide/rfc/ID/

Looking forward to feedback.

Greetings

Herbert Van de Sompel
--
Herbert Van de Sompel
Digital Library Research & Prototyping
Los Alamos National Laboratory, Research Library
http://public.lanl.gov/herbertv/
On 11/11/2010 06:26 PM, wahbedahbe wrote:
>
> --- In rest-discuss@yahoogroups.com, Marc Portier <mpo@...> wrote:
>>
>> On 11/10/2010 07:34 PM, Andrew Wahbe wrote:
>>>
>>> Ok, I understand your question a bit better now.
>>> To me a key issue is that you seem to be addressing a specific
>>> sub-resource (the part of the resource updated by the PUT) but this
>>> addressing is not done in the URI. I'd have to assume you are using a
>>> portion of the body (implicitly or explicitly) to address the piece of
>>> the resource that gets updated.
>>
>> I would think not a portion of the body but the entire body: the
>> contract that this server takes in supporting partial update is that it
>> offers the client the opportunity to only transfer that portion of the
>> resource-state it wants to change (or very pragmatically only the
>> portion it cares about)
>>
>> - or even "knows" about in case of security filtering
>>
>> - or is 'capable' of updating in case it chooses to use some
>> format/media-type that is not capable of transferring all subtleties of
>> the resource's members
>>
>> So really: I don't see a hidden 'address', only the agreement that the
>> representation in the body can be 'sparse' (after all: the message-body
>> for any method just holds a representation, never the resource itself :) )
>
> Ok but earlier in the thread I made a distinction between the server
> filling in the rest of the details using the previous resource state and
> doing so without. Mark Baker also extended things a bit saying:
> "The important part from a REST perspective is that the meaning of the
> PUT message is independent of the state of the resource."
>
> I *think* the examples you give fall into the stuff that's "ok" based on
> this. In which case we're in agreement on the above.
>
> The sticky part is the "but please *don't change* the other part(s)" that
> is implicit in the Partial PUTs I think Mike is referring to.
I think that leads you into trouble as I've described elsewhere in the thread. > >>> If you take the example to its extreme you could just execute all PUT >>> operations on the "/" resource and identify the sub-path in the body. >>> >> >> very extreme, and nobody is suggesting that, right? > > Taking something to the extreme sometimes helps point out what is wrong especially when there is no well-defined line where "extreme" starts. > Ok, I'm with you: there's bound to be a line we both don't want to cross :) >> >>> I see a few negatives (in addition to Jan's points): >>> - you are hurting visibility by moving addressing out of the URI >> >> don't agree: there is no sub-resource IMHO, just a sparse representation >> of the resource state >> >>> - you can't use etags and conditional requests to control updates these >>> specific portions you are updating, just to the "parent" resource >> >> on the contrary: in cases where this is important, the usage of etags >> and conditionals would allow to guarantee that the partial PUT is only >> applied if the resource wasn't changed yet. >> >> so in fact the conditional part in this case completes the sparse-ness >> of the passed sparse-representation > > I think you are implying that it is ok for PUT to be non-idempotent or partial only-if etags& conditional requests. I disagree. The properties must hold without them. Receiving the same partial-put twice (or more) doesn't change the outcome, so I'd say that is idempotent to me. Note the partial put I envision is not about 'incrementing' or 'operating' on state, it really is about setting state, only not explicitly all aspects of it. My main argument really is that it's not so much a partial put as it is a normal put holding only a partial representation, and that the server is ok with that. (and depending on the case the app will enforce conditionals to guarantee some consistency) It is more about "wise bandwidth consumption" than anything else, really.
I do understand the effects of concurrent independent writes in this story. But when an app "doesn't care" about conditional guarantees then I assume such an attitude extends into being liberal about your interpretation of idempotent, no? (thinking about update-counters, and time-stamps and the like again, but it's really only in a given application context one can really decide) > I touched on the advantages of this earlier here: > http://tech.groups.yahoo.com/group/rest-discuss/message/16857 > Yep, read that, and I'm on the same track I think. We're talking about the same use case and app needs for sure, and we seem to agree that in finding a balance towards 'working properly' an app might find itself needing partial updates in the way we described them. I think I read you acknowledging that the constraints/guarantees/contract needed by such an application to work properly (and fast enough) might lend itself towards using partial-update messages (with or without conditional guarding) On the remaining topic: should those messages use the PUT or POST method? I honestly don't think that choice is going to make the app's behavioural properties any different. And I have the feeling that user-agents and intermediaries should handle things in a similar way too. From there I suggested the actual choice should be influenced by a developer's feeling of 'least surprise', and tried to show my slight preference next to some observation of inevitable surprise in cases like these. Anyway: what precisely we find surprising or not is only to be discussed in the scope of a hands-on application, IMHO. >> >>> - while things seem fine from the perspective of repeating a single >>> operation multiple times, you don't have the same properties for >>> non-conditional interleaved PUTs from multiple writers.
Here, the state >> >> as mentioned above: etags and conditionals can make it do what you want >> here I think, in the end you get some semantics about the parts not in >> the put-representation saying: "IF you're still in the same state I just >> GOT, then I'm sure that changing only the fields I'm passing you will >> bring the resource in the expected state" >> >> and for apps that don't need these checks the parts not mentioned in the >> put-body can be considered as "don't care" >> >> (update-counters, update-timestamps would be typical examples here, but >> on the correct level of abstraction system-data, meta-data all just is >> data, no?) >> >> All in all: the server still has the right to interpret, partially >> ignore, additionally decorate whatever you've PUT to update the >> 'resource state' (it might even have an effect on resources not >> addressed in this put) > > Again, you are preventing the "normal" (non-conditional) PUT semantics of "please update the state to this". Again, I'm fine if the parts you don't specify are "don't care" parts -- the real problem I think arises is when you do care. > I think we agree. If the app cares, it should use conditionals. Sorry if I wasn't clear on that. > >> >>> of the resource is not the state specified by the last successful PUT >>> (as is usually the case). That is quite a significant difference in some >>> apps I would think. >>> >> >> I agree. However you correctly mention apps, not user-agents (or >> intermediaries): those should not make any assumptions on the result of >> GET just because they just completed a successful PUT, right? >> >> Looking at it from that angle I don't think the partial put violates the >> HTTP contracts, and also keeps playing nice with the REST principles. 
>> > I gave an example of potential problems with intermediaries here: > http://tech.groups.yahoo.com/group/rest-discuss/message/16880 > > The key problem is that idempotent& "not depending on previous state" imply that state is fully determined by last PUT/DELETE. (Here I'm assuming that the "don't care" bits "don't matter" in the same way they don't for "safe" GETs -- i.e. the statement "The important distinction here is that the user did not request the side-effects, so therefore cannot be held accountable for them." has a corollary of "... the user did not specify how to change the portion of state set by the server based on prior state and therefore cannot be held accountable for it." Or something like that. This allows the intermediary I described in the linked message to work correctly. > > Again the key distinction is between the client "not caring" about unspecified portion vs. an implicit request to leave it unchanged. > > This discussion has raised the following question for me though: Can the use of an etag in a conditional request be considered as an implicit description (by reference) of the portion of resource state not explicitly described in the body? i.e. the message means set the state as it was in the version indicated by the etag but also apply the changes described in the body? If so then perhaps we fall into alignment... a little concerned on the use of an etag for this though as it isn't necessarily clearly understood by all parties (not fully self-descriptive but maybe "self-descriptive enough"?). As the request fails if the current state doesn't match the one referred to by the etag it gets a bit fuzzy for sure... I'm also not too fond of a request that is valid only-if done conditionally... > Yep. And I have the same feeling about your concern and unease. But as mentioned before: I think we're talking about border-use cases here that force us into some uncommon strategies. 
So surprise and unease there will be, and the awkward thing about it seems to be that YMMV. regards, -marc
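The conditional "sparse" PUT discussed in this thread can be sketched as server-side merge logic. Everything here (the function name, field names, ETag values) is a hypothetical illustration of the semantics Marc and Andrew are debating, not a prescription:

```python
# Sketch of a server handling a "sparse" PUT: the body carries only the
# fields the client cares about, and an optional If-Match precondition
# guards the fields it does not mention.

def apply_sparse_put(current_state, current_etag, sparse_body, if_match=None):
    """Return (status, new_state); 412 if the precondition fails."""
    if if_match is not None and if_match != current_etag:
        # Resource changed since the client's last GET: refuse the update.
        return 412, current_state
    # Fields absent from the body are "don't care": their old values survive.
    new_state = dict(current_state)
    new_state.update(sparse_body)
    return 200, new_state
```

Replaying the same sparse PUT leaves the state unchanged (the idempotence point above), while an If-Match against a stale ETag yields 412 for apps that do care about the unmentioned fields.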
Dear All, I'm a newcomer to REST. In my application before, the session was used for authentication. But I'm told to use HTTP authentication instead. So what is a session for in REST? Thanks. Best regards, Zhi-Qiang Lei zhiqiang.lei@gmail.com
Dear All, It seems converting a query string into matrix URIs would be more readable. But how do HTML forms support this? For instance, I convert /parent/children[]=child1&children[]=child2&children[]=child3 into /parent/child1;child2;child3 For the original URI I can use a form which uses GET and some checkboxes to implement it. But for the matrix one, do I have to use javascript? Thanks. Best regards, Zhi-Qiang Lei zhiqiang.lei@...
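Since plain HTML forms can only serialize checkboxes into a query string, some script (client- or server-side) has to do the conversion to a matrix segment. A hypothetical helper sketching that step:

```python
# Convert a form-style query string into the matrix-URI path segment
# described above. The "children[]" key and the URI shape are taken
# from the example in the question.

from urllib.parse import parse_qs, quote

def query_to_matrix(parent, query_string, key="children[]"):
    """Turn children[]=child1&children[]=child2 into /parent/child1;child2."""
    values = parse_qs(query_string).get(key, [])
    segment = ";".join(quote(v, safe="") for v in values)
    return "/%s/%s" % (parent, segment)
```

On the client this logic would live in a small piece of JavaScript intercepting the form submit; on the server it could be a redirect from the GET-form URI to the matrix URI.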
Short answer: don't use session. On Nov 15, 2010 2:41 AM, "Zhi-Qiang Lei" <zhiqiang.lei@...> wrote: Dear All, I'm a new comer to REST. In my application before, session was for authentication. But I'm told using HTTP authentication instead. So what is session for in REST? Thanks. Best regards, Zhi-Qiang Lei zhiqiang.lei@... <zhiqiang.lei%40gmail.com>
Reply to the list and not me directly. I have never used OpenID in a RESTful application, but that doesn't invalidate its use. You would need to research this further. -- Erlend On Mon, Nov 15, 2010 at 10:31 AM, Zhi-Qiang Lei <zhiqiang.lei@...>wrote: > But seems OpenID relies on session. Does this mean OpenID is not a proper > authentication protocol in REST architecture? Thanks. > > On Nov 15, 2010, at 3:03 PM, Erlend Hamnaberg wrote: > > Short answer: don't use session. > > On Nov 15, 2010 2:41 AM, "Zhi-Qiang Lei" <zhiqiang.lei@...> wrote: > > > > Dear All, > > I'm a new comer to REST. In my application before, session was for > authentication. But I'm told using HTTP authentication instead. So what is > session for in REST? Thanks. > > Best regards, > Zhi-Qiang Lei > zhiqiang.lei@... <zhiqiang.lei%40gmail.com> > > > > > > > Best regards, > Zhi-Qiang Lei > zhiqiang.lei@... > >
Hello! When you google 'REST patterns' you get this ( http://developer.mindtouch.com/REST/REST_Patterns ) as the first useful link. It doesn't have many patterns defined, but it's a start. Are there any client libraries, or server side frameworks, which allow you to deal with RESTful resources on this level? Rather than having to manually deal with the HTTP behind it? For example, we have a pretty good idea of what it means to deal with a 'collection', but do we really have to implement the correct use of the particular HTTP status codes and header values from scratch every time? As an example: I'm thinking about ways to hide the implementation complexities of doing a correct 'edit' (PUT with correct Etags, etc.) either on server or client in a way that will be sufficient in around 80% of all cases where someone wants to implement a RESTful system. Some will have specific requirements, of course, in which case they should probably be able to override some default behavior. Juergen
I need to invoke a process which doesn't require any input from the user, just a trigger. I plan to use POST /uri without a body to trigger the process. I want to know if this is considered bad from either an HTTP or a REST perspective?
Would love feedback on this group on this. http://codebetter.com/blogs/glenn.block/archive/2010/11/15/exploring-resources-a-resource-programming-model-and-code-based-configuration.aspx We've been working diligently at building out our HTTP story for WCF. Recently I invested some cycles into seeing how far we could go with a convention based model for resources. Thanks Glenn
Hello. In REST, the idea is to perform operations that are sessionless. That is, for scalability, the server needs to avoid keeping session information. In other words, the server should not remember clients, and operations should not require past state on the server to work. If the client needs to perform a second operation based on a past state, the client should send the server all that is needed. Usually, a session is used in security to authenticate once and keep a conversation with the server once authenticated. That breaks REST a little, as it may force the server to learn about you and keep session information. If you really need that, you then have an unrestful section in your solution. Is that bad? Not really. Can it be done differently? Yes, in several ways, including sending credentials each time or adding an identifier, be it a token or a digital signature. The REST in Practice book (http://www.amazon.com/gp/product/0596805829/ref=cm_cr_rev_prod_title) has a full chapter on security that may help. William Martinez. --- In rest-discuss@yahoogroups.com, Zhi-Qiang Lei <zhiqiang.lei@...> wrote: > > Dear All, > > I'm a new comer to REST. In my application before, session was for authentication. But I'm told using HTTP authentication instead. So what is session for in REST? Thanks. > > Best regards, > Zhi-Qiang Lei > zhiqiang.lei@... >
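The "send credentials each time" option William mentions can be sketched very simply: every request carries its own Authorization header, so the server keeps no per-client memory. The user name, password, and URL below are made up for illustration:

```python
# Stateless authentication sketch: build a Basic Authorization header
# to attach to each request, instead of logging in once and holding a
# session. (Basic auth is just the simplest example of a per-request
# credential; a signed token would follow the same pattern.)

import base64

def basic_auth_header(user, password):
    token = base64.b64encode(("%s:%s" % (user, password)).encode()).decode()
    return {"Authorization": "Basic " + token}

# Each request is self-describing; nothing is remembered between them.
headers = basic_auth_header("alice", "secret")
```

A client would then pass these headers on every call, e.g. `urllib.request.Request(url, headers=headers)`, with no cookie or server-side session involved.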
Dear All, I have some resources which need one person's authentication, and some resources shared by two people, meaning either one's authentication is OK. Is it possible to make a cross realm? Do browsers support it? Thanks. Best regards, Zhi-Qiang Lei zhiqiang.lei@...
Zhi-Qiang Lei wrote: > Dear All, > > I got some resources which are needed one person's authentication, and some resources shared by two people which means either one's authentication is OK. Is it possible to make cross realm? Does browser support it? Thanks. yes, see also http://dev.w3.org/2006/waf/access-control/ https://datatracker.ietf.org/drafts/draft-abarth-origin/ http://dev.w3.org/2006/waf/UMP/ http://www.w3.org/Security/wiki/Comparison_of_CORS_and_UM http://waterken.sourceforge.net/aclsdont/current.pdf
On Nov 17, 2010, at 11:55 AM, Nathan wrote: > Zhi-Qiang Lei wrote: >> Dear All, >> I got some resources which are needed one person's authentication, and some resources shared by two people which means either one's authentication is OK. Is it possible to make cross realm? Does browser support it? Thanks. > > yes, see also > > http://dev.w3.org/2006/waf/access-control/ > https://datatracker.ietf.org/drafts/draft-abarth-origin/ > http://dev.w3.org/2006/waf/UMP/ > http://www.w3.org/Security/wiki/Comparison_of_CORS_and_UM > http://waterken.sourceforge.net/aclsdont/current.pdf Hi Nathan, That is interesting, but it seems a little different from what I want. In my original design, I gave the resources which belong only to user A a realm like "A@..." during digest authentication, and gave user B "B@example.com" as the realm for those resources which belong only to B. But some resources are shared by A and B, and I cannot make a separate third realm, because A might also share resources with others, so there would be an infinite number of realms for user A. (The browser will ask for authentication for each new realm discovered, right?) What I want is for the realm or challenge for shared resources belonging to A and B to be something like a superset of A and B. I've just learned that multiple WWW-Authenticate headers with different schemes in a challenge response are possible. But how about the same scheme? Will it ask for user credentials twice (for A's and B's)? Or can CORS and UM help me fix this problem in another way? Thanks. Best regards, Zhi-Qiang Lei zhiqiang.lei@...
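For what it's worth, RFC 2617 does allow a 401 response to carry several challenges, as repeated WWW-Authenticate headers or comma-separated in one header. A sketch of emitting two Digest realms for a shared resource (realm names and nonces are hypothetical); note that browsers typically just pick one challenge and prompt once, so this does not merge credential prompts:

```python
# Construct the header list for a 401 response offering a choice of
# two Digest realms, per RFC 2617's multiple-challenge rule.

def digest_challenge(realm, nonce):
    return 'Digest realm="%s", nonce="%s", qop="auth"' % (realm, nonce)

response_headers = [
    ("WWW-Authenticate", digest_challenge("A@example.com", "n1")),
    ("WWW-Authenticate", digest_challenge("B@example.com", "n2")),
]
```

Whether a given browser presents this as one prompt or ignores the second challenge is implementation-defined, which is really the crux of the question above.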
I would assume a POST or PUT would be ok, as they change some sort of state on the server side... or at least I would assume a "trigger" would require some sort of state change.
--- On Tue, 11/16/10, Suresh <sureshkk@gmail.com> wrote:
From: Suresh <sureshkk@...>
Subject: [rest-discuss] Is it considered bad practice to perform HTTP POST without entity body?
To: rest-discuss@yahoogroups.com
Date: Tuesday, November 16, 2010, 12:59 AM
I need invoke a process which doesn't require any input from the user, just a trigger. I plan to use POST /uri without body to trigger the process. I want to know if this is considered bad from both HTTP and REST perspective?
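The bodyless trigger POST from the question can be sketched with Python's standard library; the URI is a placeholder, and nothing is actually sent here, only the request object is constructed to show an explicit zero-length body:

```python
# Build a POST request with an empty entity body, as proposed for the
# "trigger" resource. The URL is hypothetical.

import urllib.request

req = urllib.request.Request(
    "http://example.com/uri",  # placeholder trigger URI
    data=b"",                  # explicit zero-length entity body
    method="POST",
)
req.add_header("Content-Length", "0")
```

Sending it would be `urllib.request.urlopen(req)`; the server sees an ordinary POST whose representation happens to be empty.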
Kevin Duffey wrote: > > I would assume a POST or PUT would be ok, as they change some sort of > state on the server side... or at least I would assume a "trigger" > would require some sort of state change. > Method doesn't matter. The goal in REST is to transfer a representation of application state, could be an entity, could be urlencoded. Triggers on PUT/POST are anti-patterns in REST. The constraint is, "manipulation of resources through representations". Transferring a representation state to an origin server can "trigger" anything, with visible semantics (the choice between methods is external to REST, and comes down to whether you want idempotency or not). Think of it this way, if it helps -- PUT/POST chooses a handler, but you need to give it something to handle. GET may also be used as a trigger (i.e. a hit counter), provided it's safe (the user isn't held accountable, unlike with unsafe PUT/POST). -Eric
The widely-used Twitter "REST" API has tons of bodyless posts. When the payload is <=140chars and is essentially text/plain, this doesn't seem jarring. It does feel weird URL-encoding payload. -Tim On Wed, Nov 17, 2010 at 9:11 AM, Kevin Duffey <andjarnic@...> wrote: > > > I would assume a POST or PUT would be ok, as they change some sort of state > on the server side... or at least I would assume a "trigger" would require > some sort of state change. > > --- On *Tue, 11/16/10, Suresh <sureshkk@...>* wrote: > > > From: Suresh <sureshkk@...> > Subject: [rest-discuss] Is it considered bad practice to perform HTTP POST > without entity body? > To: rest-discuss@yahoogroups.com > Date: Tuesday, November 16, 2010, 12:59 AM > > > > I need invoke a process which doesn't require any input from the user, just > a trigger. I plan to use POST /uri without body to trigger the process. I > want to know if this is considered bad from both HTTP and REST perspective? > > > > >
On Tue, Nov 16, 2010 at 3:59 AM, Suresh <sureshkk@...> wrote: > I need invoke a process which doesn't require any input from the user, just a trigger. I plan to use POST /uri without body to trigger the process. I want to know if this is considered bad from both HTTP and REST perspective? Good reading: http://roy.gbiv.com/untangled/2009/it-is-okay-to-use-post --tim
A similar question came up on the ietf-http list recently. Here's the thread: http://lists.w3.org/Archives/Public/ietf-http-wg/2010JulSep/0272.html FWIW, my usual practice is to write servers that can accept empty bodies on POST, but do not accept empty bodies on PUT. I find nothing in the HTTP spec that requires (or even suggests) this, it's just what I've come to adopt in practice. Finally, I can't recall running into any "live" examples of accepting empty bodies on PUT in my past. Anyone know of an example of this? mca http://amundsen.com/blog/ http://twitter.com@mamund http://mamund.com/foaf.rdf#me #RESTFest 2010 http://rest-fest.googlecode.com On Wed, Nov 17, 2010 at 15:35, Tim Williams <williamstw@...> wrote: > On Tue, Nov 16, 2010 at 3:59 AM, Suresh <sureshkk@...> wrote: >> I need invoke a process which doesn't require any input from the user, just a trigger. I plan to use POST /uri without body to trigger the process. I want to know if this is considered bad from both HTTP and REST perspective? > > Good reading: http://roy.gbiv.com/untangled/2009/it-is-okay-to-use-post > > --tim
Wouldn't this be what you would want if you just want to cause a resource to exist? 1. PUT /foo (empty body) 2. GET /foo -> 204 No Content Jon ........ Jon Moore Comcast Interactive Media From: mike amundsen <mamund@...<mailto:mamund@...>> Date: Wed, 17 Nov 2010 15:58:04 -0500 To: Tim Williams <williamstw@...<mailto:williamstw@...>> Cc: Suresh <sureshkk@...<mailto:sureshkk@...>>, <rest-discuss@yahoogroups.com<mailto:rest-discuss@yahoogroups.com>> Subject: Re: [rest-discuss] Is it considered bad practice to perform HTTP POST without entity body? A similar question came up on the ieft-http list recently. Here's the thread: http://lists.w3.org/Archives/Public/ietf-http-wg/2010JulSep/0272.html FWIW, my usual practice is to write servers that can accept empty bodies on POST, but do not accept empty bodies on PUT. I find nothing in the HTTP spec that requires (or even suggests) this, it's just what I've come to adopt in practice. Finally, I can't recall running into any "live" examples of accepting empty bodies on PUT in my past. Anyone know of an example of this? mca http://amundsen.com/blog/ http://twitter.com@mamund http://mamund.com/foaf.rdf#me #RESTFest 2010 http://rest-fest.googlecode.com On Wed, Nov 17, 2010 at 15:35, Tim Williams <williamstw@...<mailto:williamstw%40gmail.com>> wrote: > On Tue, Nov 16, 2010 at 3:59 AM, Suresh <sureshkk@gmail.com<mailto:sureshkk%40gmail.com>> wrote: >> I need invoke a process which doesn't require any input from the user, just a trigger. I plan to use POST /uri without body to trigger the process. I want to know if this is considered bad from both HTTP and REST perspective? > > Good reading: http://roy.gbiv.com/untangled/2009/it-is-okay-to-use-post > > --tim > > > ------------------------------------ > > Yahoo! Groups Links > > > >
Jon: <snip> Wouldn't this be what you would want if you just want to cause a resource to exist? 1. PUT /foo (empty body) 2. GET /foo -> 204 No Content </snip> I suppose that could be done. I've not had a need to do this in the past; usually some body data is sent by clients when creating a resources on servers I work with. Are you using this pattern now? mca http://amundsen.com/blog/ http://twitter.com@mamund http://mamund.com/foaf.rdf#me #RESTFest 2010 http://rest-fest.googlecode.com On Wed, Nov 17, 2010 at 16:00, Moore, Jonathan <jonathan_moore@...>wrote: > > > Wouldn't this be what you would want if you just want to cause a resource > to exist? > > > 1. PUT /foo (empty body) > 2. GET /foo -> 204 No Content > > Jon > ........ > Jon Moore > Comcast Interactive Media > > > > From: mike amundsen <mamund@...> > Date: Wed, 17 Nov 2010 15:58:04 -0500 > To: Tim Williams <williamstw@...> > Cc: Suresh <sureshkk@...>, <rest-discuss@yahoogroups.com> > Subject: Re: [rest-discuss] Is it considered bad practice to perform HTTP > POST without entity body? > > > > A similar question came up on the ieft-http list recently. Here's the > thread: > http://lists.w3.org/Archives/Public/ietf-http-wg/2010JulSep/0272.html > > FWIW, my usual practice is to write servers that can accept empty > bodies on POST, but do not accept empty bodies on PUT. I find nothing > in the HTTP spec that requires (or even suggests) this, it's just what > I've come to adopt in practice. > > Finally, I can't recall running into any "live" examples of accepting > empty bodies on PUT in my past. Anyone know of an example of this? 
No, I'm not really, just brainstorming a possible use case for an empty PUT. :) Jon ........ Jon Moore Comcast Interactive Media From: mike amundsen <mamund@...<mailto:mamund@...>> Date: Wed, 17 Nov 2010 16:07:47 -0500 To: Jonathan Moore <Jonathan_Moore@...<mailto:Jonathan_Moore@...>> Cc: "rest-discuss@yahoogroups.com<mailto:rest-discuss@yahoogroups.com>" <rest-discuss@yahoogroups.com<mailto:rest-discuss@yahoogroups.com>> Subject: Re: [rest-discuss] Is it considered bad practice to perform HTTP POST without entity body? Jon: <snip> Wouldn't this be what you would want if you just want to cause a resource to exist? 1. PUT /foo (empty body) 2. GET /foo -> 204 No Content </snip> I suppose that could be done. I've not had a need to do this in the past; usually some body data is sent by clients when creating a resources on servers I work with. Are you using this pattern now? mca http://amundsen.com/blog/ http://twitter.com@mamund http://mamund.com/foaf.rdf#me #RESTFest 2010 http://rest-fest.googlecode.com On Wed, Nov 17, 2010 at 16:00, Moore, Jonathan <jonathan_moore@...<mailto:jonathan_moore@...>> wrote: Wouldn't this be what you would want if you just want to cause a resource to exist? 1. PUT /foo (empty body) 2. GET /foo -> 204 No Content Jon ........ Jon Moore Comcast Interactive Media From: mike amundsen <mamund@...<mailto:mamund@...>> Date: Wed, 17 Nov 2010 15:58:04 -0500 To: Tim Williams <williamstw@...<mailto:williamstw@...>> Cc: Suresh <sureshkk@...<mailto:sureshkk@...>>, <rest-discuss@yahoogroups.com<mailto:rest-discuss@yahoogroups.com>> Subject: Re: [rest-discuss] Is it considered bad practice to perform HTTP POST without entity body? A similar question came up on the ieft-http list recently. Here's the thread: http://lists.w3.org/Archives/Public/ietf-http-wg/2010JulSep/0272.html FWIW, my usual practice is to write servers that can accept empty bodies on POST, but do not accept empty bodies on PUT.
mike amundsen wrote: > > A similar question came up on the ieft-http list recently. Here's the > thread: > http://lists.w3.org/Archives/Public/ietf-http-wg/2010JulSep/0272.html > Jan's exactly right. Instead of a uniform interface, the result is that the semantics of POST vary by URI. IOW, the "verb" is in the URI, not the protocol method (tunneling). Mark is also exactly right, not having a body isn't the same as not having a representation. If I wanted to extend Atom Protocol to accept updates to the slug, I'd use an empty-body POST, the Slug header, and Content-Type: application/atom+xml on an Atom Entry resource. Except now I've defined POST semantics which vary by media type (in this case, ;type=feed means 'create' and ;type=entry means 'update'), which would lead me to change from POST to PROPPATCH. The slug header *is* the representation of application state I'm sending to the server, for the purpose of manipulating the resource. -Eric
Jonathan, wouldn't PUT /foo (empty body) just set the state of "/foo" to the empty string ""? So when you GET /foo, you would get back the empty string: 200 OK, Content-Type: text/plain, Content-Length: 0, and no body. Maybe 204 === Content-Length: 0, but maybe not. :-) Anyway, try this: <form action="/foo" method="post" enctype="text/plain"> <input type="submit"> </form> Most browsers will POST an empty body, at least. Try it out here: http://mogsie.com/2010/empty/ Chrome gives (when you click trigger) Content-Length: 0 Content-Type: text/plain; boundary= If a browser can do it, I'd say it's RESTful to do empty-body POST or PUT. I would (as browsers do) of course always add a media type (Content-Type header), preferably one where 0 bytes is a valid document. That would rule out all XML types, I guess, since "" is not a valid XML document. -- -mogsie- On Wed, Nov 17, 2010 at 10:07 PM, mike amundsen <mamund@...> wrote: > > > Jon: > <snip> > Wouldn't this be what you would want if you just want to cause a resource > to exist? > > > 1. PUT /foo (empty body) > 2. GET /foo -> 204 No Content > > </snip> > > I suppose that could be done. I've not had a need to do this in the past; > usually some body data is sent by clients when creating a resources on > servers I work with. > > Are you using this pattern now? > > mca > http://amundsen.com/blog/ > http://twitter.com@mamund > http://mamund.com/foaf.rdf#me > > > #RESTFest 2010 > http://rest-fest.googlecode.com > > > > On Wed, Nov 17, 2010 at 16:00, Moore, Jonathan <jonathan_moore@... > > wrote: > >> >> >> Wouldn't this be what you would want if you just want to cause a >> resource to exist? >> >> >> 1. PUT /foo (empty body) >> 2. GET /foo -> 204 No Content >> >> Jon >> ........ 
Quick note for all of you open space lovers, our next event is on the 24th of November, it's going to be brilliant, and there are still tickets left. http://openspacebeers2.eventbrite.com/ It's open to any technical community, so I highly recommend you come if you're in London. Seb
Thanks everybody. I take the short answer to be "Yes" it is okay to POST with empty content and I wouldn't be violating HTTP or REST. Thanks again for all links and references. It was very helpful. Best regards, Suresh On Thu, Nov 18, 2010 at 4:18 AM, Erik Mogensen <erik@...> wrote: > > > Jonathan, wouldn't PUT /foo (empty body) just set the state of "/foo" to > the empty string ""? > > So when you GET /foo, you would get back the empty string: 200 OK, > Content-Type: text/plain, Content-Length: 0, and no body. Maybe 204 === > Content-Length: 0, but maybe not. :-) > > Anyway, try this: > > <form action="/foo" method="post" enctype="text/plain"> > <input type="submit"> > </form> > > Most browsers will POST an empty body, at least. Try it out here: > > http://mogsie.com/2010/empty/ > > Chrome gives (when you click trigger) > Content-Length: 0 > Content-Type: text/plain; boundary= > > If a browser can do it, I'd say it's RESTful to do empty-body POST or PUT. > I would (as browsers do) of course always add a media type (Content-Type > header), preferably one where 0 bytes is a valid document. That would rule > out all XML types, I guess, since "" is not a valid XML document. > -- > -mogsie- > > On Wed, Nov 17, 2010 at 10:07 PM, mike amundsen <mamund@...> wrote: > >> >> >> Jon: >> <snip> >> Wouldn't this be what you would want if you just want to cause a resource >> to exist? >> >> >> 1. PUT /foo (empty body) >> 2. GET /foo -> 204 No Content >> >> </snip> >> >> I suppose that could be done. I've not had a need to do this in the past; >> usually some body data is sent by clients when creating a resources on >> servers I work with. >> >> Are you using this pattern now? 
>> >> mca >> http://amundsen.com/blog/ >> http://twitter.com@mamund >> http://mamund.com/foaf.rdf#me >> >> >> #RESTFest 2010 >> http://rest-fest.googlecode.com >> >> >> >> On Wed, Nov 17, 2010 at 16:00, Moore, Jonathan < >> jonathan_moore@...> wrote: >> >>> >>> >>> Wouldn't this be what you would want if you just want to cause a >>> resource to exist? >>> >>> >>> 1. PUT /foo (empty body) >>> 2. GET /foo -> 204 No Content >>> >>> Jon >>> ........ >>> Jon Moore >>> Comcast Interactive Media >>> >>> >>> >>> From: mike amundsen <mamund@...> >>> Date: Wed, 17 Nov 2010 15:58:04 -0500 >>> To: Tim Williams <williamstw@...> >>> Cc: Suresh <sureshkk@...>, <rest-discuss@yahoogroups.com> >>> Subject: Re: [rest-discuss] Is it considered bad practice to perform >>> HTTP POST without entity body? >>> >>> >>> >>> A similar question came up on the ieft-http list recently. Here's the >>> thread: >>> http://lists.w3.org/Archives/Public/ietf-http-wg/2010JulSep/0272.html >>> >>> FWIW, my usual practice is to write servers that can accept empty >>> bodies on POST, but do not accept empty bodies on PUT. I find nothing >>> in the HTTP spec that requires (or even suggests) this, it's just what >>> I've come to adopt in practice. >>> >>> Finally, I can't recall running into any "live" examples of accepting >>> empty bodies on PUT in my past. Anyone know of an example of this? >>> >>> mca >>> http://amundsen.com/blog/ >>> http://twitter.com@mamund >>> http://mamund.com/foaf.rdf#me >>> >>> #RESTFest 2010 >>> http://rest-fest.googlecode.com >>> >>> On Wed, Nov 17, 2010 at 15:35, Tim Williams <williamstw@...<williamstw%40gmail.com>> >>> wrote: >>> > On Tue, Nov 16, 2010 at 3:59 AM, Suresh <sureshkk@...<sureshkk%40gmail.com>> >>> wrote: >>> >> I need invoke a process which doesn't require any input from the user, >>> just a trigger. I plan to use POST /uri without body to trigger the process. >>> I want to know if this is considered bad from both HTTP and REST >>> perspective? 
>>> > >>> > Good reading: >>> http://roy.gbiv.com/untangled/2009/it-is-okay-to-use-post >>> > >>> > --tim >>> > >>> > >>> > ------------------------------------ >>> > >>> > Yahoo! Groups Links >>> > >>> > >>> > >>> > >>> >>> >>> >> > > -- When the facts change, I change my mind. What do you do, sir?
Suresh Kumar wrote: > > Thanks everybody. I take the short answer to be "Yes" it is okay to > POST with empty content and I wouldn't be violating HTTP or REST. > Actually, no, the short answer is the REST constraint, "manipulation of resources through representations." Not having an entity doesn't necessarily mean not having a representation, but that's what it usually means. If you don't have a representation, you don't have REST, you have RMI. -Eric
On 18.11.2010 05:13, Eric J. Bowman wrote: > Suresh Kumar wrote: > > > > Thanks everybody. I take the short answer to be "Yes" it is okay to > > POST with empty content and I wouldn't be violating HTTP or REST. > > > > Actually, no, the short answer is the REST constraint, "manipulation of > resources through representations." Not having an entity doesn't > necessarily mean not having a representation, but that's what it > usually means. If you don't have a representation, you don't have > REST, you have RMI. It depends. An empty entity can be a representation. Best regards, Julian
On Thu, Nov 18, 2010 at 12:04 AM, Julian Reschke <julian.reschke@...>wrote: > ... > It depends. > > An empty entity can be a representation. > > As a nitpicky implementation note, I've run into at least a couple of HTTP stacks that don't like a POST with no entity, but I've been able to satisfy them by explicitly including a "Content-Length: 0" header on the POST. Craig
On 18.11.2010 09:24, Craig McClanahan wrote: > > On Thu, Nov 18, 2010 at 12:04 AM, Julian Reschke <julian.reschke@... > <mailto:julian.reschke@...>> wrote: > > ... > It depends. > > An empty entity can be a representation. > > As a nitpicky implementation note, I've run into at least a couple of > HTTP stacks that don't like a POST with no entity, but I've been able to > satisfy them by explicitly including a "Content-Length: 0" header on the > POST. In HTTP, an empty payload is different from no payload. Best regards, Julian
On Thu, Nov 18, 2010 at 12:50 AM, Julian Reschke <julian.reschke@...>wrote: > On 18.11.2010 09:24, Craig McClanahan wrote: > >> >> On Thu, Nov 18, 2010 at 12:04 AM, Julian Reschke <julian.reschke@... >> <mailto:julian.reschke@...>> wrote: >> >> ... >> It depends. >> >> An empty entity can be a representation. >> >> As a nitpicky implementation note, I've run into at least a couple of >> HTTP stacks that don't like a POST with no entity, but I've been able to >> satisfy them by explicitly including a "Content-Length: 0" header on the >> POST. >> > > In HTTP, an empty payload is different from no payload. > > In terms of the pure HTTP specification, I agree with you. In terms of specific implementations of real world web service stacks, you'll sometimes find yourself disagreeably surprised when your server throws an exception on a POST with no body and no Content-Length: 0 header. A robust client that wants to minimize these real world difficulties should pay attention to this, and send the content length header anyway. > Best regards, Julian > Craig
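Craig's workaround can be sketched with Python's standard library alone. The throwaway local server below is purely illustrative (not any particular real-world stack); it echoes back the Content-Length it received, so the client can confirm the explicit header went out with the empty-body POST:

```python
# Sketch: always send an explicit "Content-Length: 0" on empty-body POSTs,
# to satisfy server stacks that reject a POST with no stated length.
import http.client
import http.server
import threading

class Handler(http.server.BaseHTTPRequestHandler):
    def do_POST(self):
        # Echo back the Content-Length header we received.
        seen = self.headers.get("Content-Length", "absent").encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(seen)))
        self.end_headers()
        self.wfile.write(seen)

    def log_message(self, *args):  # silence per-request logging
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
# Empty body, but state the length explicitly for picky server stacks.
conn.request("POST", "/trigger", body=b"", headers={"Content-Length": "0"})
resp = conn.getresponse()
status, echoed = resp.status, resp.read().decode()
print(status, echoed)  # → 200 0
server.shutdown()
```

Note that Python's `http.client` would add `Content-Length: 0` for an empty body anyway; the point of passing it explicitly is that some client libraries do not, which is exactly when picky servers throw.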
If the client knows in advance that the server requires Digest Authentication for a resource, can it include the "Authorization" header with each request to avoid the 401 error? How do the nonce and qop work in this case? What are the pros and cons of this approach? The reason I am asking is that we are planning to host a RESTful WS for our partner, and all the resources are protected and need authentication. Please advise.
You should never send credentials unless you get a 401. Once you've gotten one, you should be able to send them pre-emptively on subsequent requests. Your client needs to adapt if the nonce or qop changes. If you get a redirect to another server, you need to wait until you get a 401 again (as the cached credentials should be per host and per realm). You can look at the authentication part of httpcache4j for an example of how this could be done; AbstractResponseResolver is the class to look at. -- Erlend On Thu, Nov 18, 2010 at 8:07 PM, cyuva_online <cyuva_online@...>wrote: > > > If the client knows in advance that server requires Digest Authentication > for a resource, can it include "Authorization" header with each request to > avoid 401 error? How about nonce and qop in this case. What are the pros and > cons of this approach. > > Why I am asking this question is.. we are planning to host a RESTful WS for > our partner and all the resources are protected and needs authentication. > Please advise. > > >
On 2010-11-18 19:07, cyuva_online wrote: > If the client knows in advance that server requires Digest Authentication for a resource, can it include "Authorization" header with each request to avoid 401 error? How about nonce and qop in this case. What are the pros and cons of this approach. > > Why I am asking this question is.. we are planning to host a RESTful WS for our partner and all the resources are protected and needs authentication. Please advise. You should never send authentication details for ANY scheme until challenged for them. There is sadly some software out there that will pre-emptively send authentication headers for the Basic scheme, (XHR from javascript in Safari used to have such a bug if you set the username and password programmatically, but I haven't checked in a while to see if this is still the case), and the obvious security hole in doing so if the scheme used on the server isn't Basic should suffice to explain why it's a bad idea generally. Also, you should change the nonce used regularly (a matter of seconds, minutes or, at the very outside, hours, depending on how security-sensitive the resources are) to avoid replay attacks, so 401s will never be avoidable (if the reason the server is rejecting a request is that it has changed the nonce, it should include stale=true to indicate that previous details can be retried with the new nonce). For this reason it is worth keeping the entity body of a 401 small, to avoid excess transmission. In a webservice a very small (maybe just single-element) XML document can suffice. In an HTML case, a small document merely stating that one needs to log in will do (linked to "" to trigger a reload on link traversal, perhaps). In some cases the simple XML document described above, with an XSLT to produce simple HTML when viewed in a browser, can cover both cases adequately.
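Jon's challenge/nonce mechanics can be made concrete. Below is a minimal sketch of the Digest `response` computation from RFC 2617 (qop=auth, MD5 only; no auth-int or opaque handling), checked against the worked example in the RFC itself:

```python
# Sketch of the RFC 2617 Digest "response" value a client sends back after
# a 401 challenge. The credential and nonce values below are the RFC's own
# worked example, not real secrets.
import hashlib

def md5_hex(s: str) -> str:
    return hashlib.md5(s.encode()).hexdigest()

def digest_response(user, realm, password, method, uri,
                    nonce, nc, cnonce, qop="auth"):
    ha1 = md5_hex(f"{user}:{realm}:{password}")   # A1 = user:realm:password
    ha2 = md5_hex(f"{method}:{uri}")              # A2 = method:digest-uri
    return md5_hex(f"{ha1}:{nonce}:{nc}:{cnonce}:{qop}:{ha2}")

resp = digest_response(
    "Mufasa", "testrealm@host.com", "Circle Of Life",
    "GET", "/dir/index.html",
    nonce="dcd98b7102dd2f0e8b11d0f600bfb0c093",
    nc="00000001", cnonce="0a4f113b",
)
print(resp)  # → 6629fae49393a05397450978507c4ef1
```

The server performs the same computation with its stored credentials and compares; since the nonce (and the per-request nc counter) feed into the hash, a captured header cannot simply be replayed once the server rotates the nonce, which is the point about stale=true above.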
This is an interesting issue.. I always thought that for my public REST api, I would write up an SDK doc that would explain to consumers that they will need to register at my site for an authentication key of some sort. I think many APIs do this now. You sign up, give some info, a generated token of some sort is sent out via email, you then use this to access that API.
What you're saying is: for every request that comes in that requires authentication, my API would send back a challenge for auth, then the consumer would send another request with auth, and if it is good, the request goes through?
That just seems odd to me. Unless I am misunderstanding and this is a different scenario than what I describe? Is what you are saying needed to be considered RESTful? Or is the ability to sign up, get a token, and supply it with all subsequent requests without ever needing a challenge a viable option to be considered RESTful?
--- On Thu, 11/18/10, Jon Hanna <jon@...> wrote:
From: Jon Hanna <jon@...>
Subject: Re: [rest-discuss] Digest Authentication related
To: rest-discuss@yahoogroups.com
Date: Thursday, November 18, 2010, 5:49 PM
On 2010-11-18 19:07, cyuva_online wrote:
> If the client knows in advance that server requires Digest Authentication for a resource, can it include "Authorization" header with each request to avoid 401 error? How about nonce and qop in this case. What are the pros and cons of this approach.
>
> Why I am asking this question is.. we are planning to host a RESTful WS for our partner and all the resources are protected and needs authentication. Please advise.
You should never send authentication details for ANY scheme until
challenged for them. There is sadly some software out there that will
pre-emptively send authentication headers for the Basic scheme, (XHR from
javascript in Safari used to have such a bug if you set the username and
password programmatically, but I haven't checked in a while to see if
this is still the case), and the obvious security hole in doing so if
the scheme used on the server isn't Basic should suffice to explain why
it's a bad idea generally.
Also, you should change the nonce used regularly (a matter of seconds,
minutes or at the very outside hours, depending on how
security-sensitive the resources are) to avoid replay attacks, so 401s
will never be avoidable (if the reason the server is rejecting a request
is that it has changed the nonce, it should include stale=true to
indicate that previous details can be retried with the new nonce).
For this reason it is worth keeping the entity body of a 401 small, to
avoid excess transmission. In a webservice a very small (maybe just
single-element) XML document can suffice. In an HTML case a small
document merely stating that one needs to log in (linked to "" to
trigger a reload on link traversal perhaps). In some cases the simple
XML document described above with an XSLT to produce a simple HTML case
when viewed in a browser can hit both cases adequately.
Required reading on this issue: http://www.berenddeboer.net/rest/authentication.html -Eric
On 2010-11-19 02:25, Kevin Duffey wrote: > This is an interesting issue.. I always thought that for my public REST > api, I would write up an SDK doc that would explain to consumers that > they will need to register at my site for an authentication key of some > sort. I think many APIs do this now. You sign up, give some info, a > generated token of some sort is sent out via email, you then use this to > access that API. Digest is one way of dealing with the question of how you pass that token back to the server. The simplest method is Basic. With Basic you pass the same thing every time. However, since an eavesdropper could also send that same thing, there is no security built into the system. You can add it by using HTTPS, but unless you are going to use HTTPS anyway (because other data sent is also sensitive to eavesdroppers), then this is wasteful (more processing, more bandwidth, less caching). With Digest you prove you have the token without sending it. Other methods include OAuth, HTTPS with client certificates, and cookie-based sessions. Cookie-based sessions tend to be frowned upon in terms of REST, but it's worth noting that they also involve *something* being sent with each request. It's worth noting that with the use of Digest (as with Basic) the practical restriction on the client is merely to pass a user-pass combination to whatever API they are using, usually at a single point in the application. Meanwhile, on the server it can be as simple as adding their details to a list of allowed users. One only needs to get involved in the mechanisms if you're rolling your own user management. Even then, it's pretty simple and affects only one piece of code.
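To see why Basic "passes the same thing every time", here is what the header literally looks like (the username and password are made up for illustration). Any eavesdropper who captures it can decode or replay it, which is why Basic needs HTTPS while Digest does not send the secret at all:

```python
# Sketch: a Basic Authorization header is just base64(user:password),
# identical on every request -- encoding, not encryption.
import base64

user, password = "alice", "s3cret"   # illustrative credentials only
token = base64.b64encode(f"{user}:{password}".encode()).decode()
header = f"Authorization: Basic {token}"
print(header)  # → Authorization: Basic YWxpY2U6czNjcmV0

# Anyone on the wire can trivially reverse it:
recovered = base64.b64decode(token).decode()
print(recovered)  # → alice:s3cret
```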
Dear All, As far as I know, PUT is to replace resource state with application state, and POST is to append application state or to create a sub-resource. But if the time exposed from my resource is server time (I don't want the client to input the time), should I use PUT or POST? Thanks. Best regards, Zhi-Qiang Lei zhiqiang.lei@...
On 2010-11-19 03:57, Zhi-Qiang Lei wrote: > Dear All, > > As far as I know, PUT is to replace resource state with application state, and POST is to append application state or to create sub-resource. But if the time exposed from my resource is server time (I don't want the client input time), should I use PUT or POST? Thanks. I think this is largely orthogonal. You could PUT a representation that indicates how it should relate to server time (whether by ignoring it in the representation and using the current time, or by sending an offset, or a time in UTC, which server time should maintain anyway - though perhaps rendering to a given local time for UI). You could POST a representation that indicates an action taken, which similarly relates to server time.
Zhi-Qiang Lei wrote: > > PUT is to replace resource state with application state... > Yes, or create a resource... > > POST is to append application state or to create sub-resource. > Better to think of POST as "process this". In Atom Protocol, POST means "create" and PUT means "replace". In some other protocol or API, POST may mean "replace" with "create" assigned to PUT. The meaning assigned to methods should be a function of the media type involved in the request. REST only cares that methods aren't used in undefined ways (i.e. PUT meaning "update"), or in ways defined by other methods (don't use POST to GET or DELETE), because such usage is not uniform. POST can mean anything, so long as that meaning isn't already defined for some other method. I'm torn as to whether Atom Protocol is correct in constraining PUT to only mean "update", or whether it's just as wrong to use POST to "create" as to "retrieve" or "delete", or if it matters. > > But if the time exposed from my resource is server time (I don't want > the client input time), should I use PUT or POST? > I don't understand your question. Practically, any decision between PUT and POST, for me, comes down to whether I want idempotency or not. -Eric
On Nov 19, 2010, at 1:04 PM, Eric J. Bowman wrote: > Zhi-Qiang Lei wrote: >> >> PUT is to replace resource state with application state... >> > > Yes, or create a resource... > >> >> POST is to append application state or to create sub-resource. >> > > Better to think of POST as "process this". > > In Atom Protocol, POST means "create" and PUT means "replace". In some > other protocol or API, POST may mean "replace" with "create" assigned to > PUT. The meaning assigned to methods should be a function of the media > type involved in the request. REST only cares that methods aren't used > in undefined ways (i.e. PUT meaning "update"), or in ways defined by > other methods (don't use POST to GET or DELETE), because such usage is > not uniform. > > POST can mean anything, so long as that meaning isn't already defined > for some other method. I'm torn as to whether Atom Protocol is correct > in constraining PUT to only mean "update", or whether it's just as wrong > to use POST to "create" as to "retrieve" or "delete", or if it matters. > >> >> But if the time exposed from my resource is server time (I don't want >> the client input time), should I use PUT or POST? >> > > I don't understand your question. Practically, any decision between PUT > and POST, for me, comes down to whether I want idempotency or not. > > -Eric Hi Eric, I want my resource to expose its created time or updated time in the representation. But it seems that PUT as an update means the value is input by the client. I want it to be assigned by the server, not the client. Does that mean I have to use POST? Thanks. Best regards, Zhi-Qiang Lei zhiqiang.lei@...
As a side note, using either Basic or Digest without a secure channel is insecure. Basic amounts to sending cleartext credentials; Digest is open to man-in-the-middle attacks. Jim
Zhi-Qiang Lei wrote: > > I want my resource expose its created time or updated time in > representation. But seems to PUT a update means that the value is > inputed by client. I want it to be assigned by server but client. > Does it means that I have to use POST? Thanks. > No. The server can do whatever it wants with the PUT, so long as it's idempotent. The user-agent sends a timestamp, the server ignores it and uses its own. Perfectly fine. -Eric
On Nov 19, 2010, at 4:18 PM, Eric J. Bowman wrote: > Zhi-Qiang Lei wrote: >> >> I want my resource expose its created time or updated time in >> representation. But seems to PUT a update means that the value is >> inputed by client. I want it to be assigned by server but client. >> Does it means that I have to use POST? Thanks. >> > > No. The server can do whatever it wants with the PUT, so long as it's > idempotent. The user-agent sends a timestamp, the server ignores it > and uses its own. Perfectly fine. > > -Eric Thanks. Sounds good. But if a timestamp appears in the representation, is it still idempotent? Best regards, Zhi-Qiang Lei zhiqiang.lei@...
Zhi-Qiang Lei wrote: > > Thanks. Sounds good. But if a timestamp appear in representation, is > it still idempotent? > Yes, we mean the idempotency of the messaging (protocol), not the message (content). -Eric
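Eric's point can be sketched in a few lines of illustrative Python (the store and handler names here are hypothetical, not from any framework): the server discards the client-sent timestamp and assigns its own, and repeating the PUT still converges on the same resource state, so the protocol-level semantics stay idempotent even though the server-owned timestamp in the content changes:

```python
# Sketch: an idempotent PUT handler that ignores a client-supplied
# "updated" timestamp and assigns the server's own instead.
import time

store = {}  # uri -> {"body": ..., "updated": ...}   (toy in-memory store)

def handle_put(uri, representation):
    # Drop any client-sent timestamp; the server is authoritative for it.
    body = {k: v for k, v in representation.items() if k != "updated"}
    store[uri] = {"body": body, "updated": time.time()}
    return store[uri]

# Client sends a bogus timestamp twice; the resulting resource state
# (aside from the server-owned timestamp) is identical both times.
first = handle_put("/foo", {"name": "demo", "updated": "1999-01-01"})
second = handle_put("/foo", {"name": "demo", "updated": "2030-12-31"})
print(first["body"] == second["body"])  # → True
```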
In [1] the Richardson Maturity Model is explained by Fowler. I've already seen this in the recently published book "REST in Practice". I'm missing "media types" between level 2 and 3. I think, before you think about link rels, you should think about proper media types... What do you think? -Jakob [1] http://martinfowler.com/articles/richardsonMaturityModel.html
--- In rest-discuss@yahoogroups.com, "Jakob Strauch" <jakob.strauch@...> wrote: > > In [1] the richardson maturity model is explained by fowler. Ive already seen this in the recently published book "REST in practice". Im missing "media types" between level 2 and 3. I think, before you may think about linkrels you should think about proper media types... > > What do you think? > > -Jakob > > [1] http://martinfowler.com/articles/richardsonMaturityModel.html > I agree that there should be another level and it has something to do with the self-descriptive messages constraint, specifically around the representation format (i.e. "media types"). I've always thought that it was a level 4 (after links) simply because it is the last lesson that most folks learn -- you rarely see the media types done right before the linking is done right. Actually, you just rarely see media types done right, period. I'm not sure if you could get a consensus on this list on how to succinctly describe what "media types done right" is though. Andrew
Hello! On Mon, 2010-11-22 at 16:42 +0000, wahbedahbe wrote: > I've always thought that it was a level 4 (after links) simply because > it is the last lesson that most folks learn -- you rarely see the > media types done right before the linking is done right. Actually, you > just rarely see media types done right, period. I not sure if you > could get a consensus on this list on how to succinctly describe what > "media types done right" is though. It seems, however, that there is no consensus on what it means to get "media types done right". There have been some passionate discussions here on this. In your opinion, what would be some good resources you could recommend to someone who wants to learn how to do media types right? Juergen -- Juergen Brendel MuleSoft
Jakob: Jan Algermissen addresses some of this in his model [1]. Over the last several months, I have been focusing on [Hyper]media Types directly [2]. While that work is far from complete, some of the material there might be of interest. [1] http://nordsc.com/ext/classification_of_http_based_apis.html [2] http://amundsen.com/hypermedia/hfactor/ mca http://amundsen.com/blog/ http://twitter.com@mamund http://mamund.com/foaf.rdf#me #RESTFest 2010 http://rest-fest.googlecode.com On Mon, Nov 22, 2010 at 10:38, Jakob Strauch <jakob.strauch@...> wrote: > In [1] the richardson maturity model is explained by fowler. Ive already seen this in the recently published book "REST in practice". Im missing "media types" between level 2 and 3. I think, before you may think about linkrels you should think about proper media types... > > What do you think? > > -Jakob > > [1] http://martinfowler.com/articles/richardsonMaturityModel.html > > > > ------------------------------------ > > Yahoo! Groups Links > > > >
Hi, I keep hearing this rumor that the reason why Apple takes the Apple Store offline to add new products is not marketing but technology. Technological reasons would mean something like Apple having to wait for all active purchasing applications to terminate (and then block new ones) to change the online shop(-service component). A counter-example is Amazon, which is altering the server all the time but has never called me to ask me to suspend my current purchase application for the duration of their update. (Which is actually a truly amazing showcase of REST if you think about it - millions of concurrent active customers do not prevent the server from changing[1]). Any references on the Apple question? Jan [1] It has always puzzled me why this alone does not cause all the Big-Co IT guys to jump on REST immediately, but that is another issue :-)
Um REST? :). Sure ... if you think just "doing REST" gives you 24x7 continuous availability then that's quite funny! And its not even April 1st. Sanjiva. On Tue, Nov 23, 2010 at 9:05 PM, Jan Algermissen <algermissen1971@mac.com>wrote: > > > Hi, > > I keep hearing this rumor that the reason why Apple takes the Apple Store > offline to add new products is not marketing but technology. > > Technological reasons would mean something like that Apple has to wait for > all active purchasing applications to terminate (end then block new ones) to > change the online shop(-service component). > > A counter example is Amazon which is just altering the server all the time > but actually never called me to ask me to suspend my current purchase > application for the time of their update. (Which is actually a truly amazing > show case of REST if you think about it - millions of concurrent active > customers do not prevent the server from changing[1]). > > Any references on the Apple Question? > > Jan > > [1] It has always wondered me why this alone does not cause all the Big-Co > IT guys to jump on REST immediately but that is another issue :-) > > -- Sanjiva Weerawarana, Ph.D. Founder, Director & Chief Scientist; Lanka Software Foundation; http://www.opensource.lk/ Founder, Chairman & CEO; WSO2; http://wso2.com/ Founder & Director; Thinkcube Systems; http://www.thinkcube.com/ Member; Apache Software Foundation; http://www.apache.org/ Member; Sahana Software Foundation; http://www.sahanafoundation.org/ Visiting Lecturer; University of Moratuwa; http://www.cse.mrt.ac.lk/ Blog: http://sanjiva.weerawarana.org/
It's probably not technological reasons at all. Just a way of building up interest in the community. It's marketing. Henry On 23 Nov 2010, at 16:35, Jan Algermissen wrote: > Hi, > > I keep hearing this rumor that the reason why Apple takes the Apple Store offline to add new products is not marketing but technology. > > Technological reasons would mean something like that Apple has to wait for all active purchasing applications to terminate (end then block new ones) to change the online shop(-service component). > > A counter example is Amazon which is just altering the server all the time but actually never called me to ask me to suspend my current purchase application for the time of their update. (Which is actually a truly amazing show case of REST if you think about it - millions of concurrent active customers do not prevent the server from changing[1]). > > Any references on the Apple Question? > > Jan > > [1] It has always wondered me why this alone does not cause all the Big-Co IT guys to jump on REST immediately but that is another issue :-) > Social Web Architect http://bblfish.net/
On Nov 23, 2010, at 5:51 PM, Sanjiva Weerawarana wrote: > Um REST? :). Sure ... if you think just "doing REST" gives you 24x7 continuous availability then that's quite funny! And its not even April 1st. Well, I did not say that REST guarantees any form of technological availability. But it is a precondition for 24x7, because it is a style that guarantees that you can change[1] the server without even telling any of the clients. That's quite unique. Jan [1] Including redirecting clients that are in the middle of an application to some failover set of servers. > > Sanjiva. > > On Tue, Nov 23, 2010 at 9:05 PM, Jan Algermissen <algermissen1971@...> wrote: > > Hi, > > I keep hearing this rumor that the reason why Apple takes the Apple Store offline to add new products is not marketing but technology. > > Technological reasons would mean something like that Apple has to wait for all active purchasing applications to terminate (end then block new ones) to change the online shop(-service component). > > A counter example is Amazon which is just altering the server all the time but actually never called me to ask me to suspend my current purchase application for the time of their update. (Which is actually a truly amazing show case of REST if you think about it - millions of concurrent active customers do not prevent the server from changing[1]). > > Any references on the Apple Question? > > Jan > > [1] It has always wondered me why this alone does not cause all the Big-Co IT guys to jump on REST immediately but that is another issue :-) > > > > > -- > Sanjiva Weerawarana, Ph.D.
> Founder, Director & Chief Scientist; Lanka Software Foundation; http://www.opensource.lk/ > Founder, Chairman & CEO; WSO2; http://wso2.com/ > Founder & Director; Thinkcube Systems; http://www.thinkcube.com/ > Member; Apache Software Foundation; http://www.apache.org/ > Member; Sahana Software Foundation; http://www.sahanafoundation.org/ > Visiting Lecturer; University of Moratuwa; http://www.cse.mrt.ac.lk/ > > Blog: http://sanjiva.weerawarana.org/ >
Sanjiva Weerawarana wrote: > > Um REST? :). Sure ... if you think just "doing REST" gives you 24x7 > continuous availability then that's quite funny! And its not even > April 1st. > Actually, Jan's referring to the reliability that is a goal of REST, see 2.3.7: "Reliability, within the perspective of application architectures, can be viewed as the degree to which an architecture is susceptible to failure at the system level in the presence of partial failures within components, connectors, or data. Styles can improve reliability by avoiding single points of failure, enabling redundancy, allowing monitoring, or reducing the scope of failure to a recoverable action." Without looking at it, I can assert that Apple's store can't possibly be REST, just from the fact that purchasing needs to be suspended to update the user interface. The constraint being violated is most likely the layered-system constraint, application of which leads to exactly the sort of reliability not exhibited by the Apple store. If it were RESTful, it would exhibit the desired property. -Eric
Henry Story wrote: > > It's probably not technological reasons at all. Just a way of > building up interest in the community. It's marketing. > My market differentiation as a Web consultant, is my strong knowledge of branding (resulting from my 15-year association with one of the top hired guns in the field, y'all haven't heard of him but you have heard of "Where's the beef?" and that silly Taco Bell chihuahua). I don't sell websites, I sell online branding. Apple's branding has nothing to do with inconveniencing the customer; in fact it's the polar opposite. So I suspect this isn't a marketing ploy, because Apple's too good at, and protective of, its brand identity of user-friendliness uber alles. -Eric
Eric J. Bowman wrote: > Henry Story wrote: >> It's probably not technological reasons at all. Just a way of >> building up interest in the community. It's marketing. >> > > My market differentiation as a Web consultant, is my strong knowledge > of branding (resulting from my 15-year association with one of the top > hired guns in the field, y'all haven't heard of him but you have heard > of "Where's the beef?" and that silly Taco Bell chihuahua). I don't > sell websites, I sell online branding. Apple's branding has nothing to > do with inconveniencing the customer; in fact it's the polar opposite. > So I suspect this isn't a marketing ploy, because Apple's too good at, > and protective of, its brand identity of user-friendliness uber alles. Eric, I hate to disagree, but it /has/ to be marketing. If Apple, one of the biggest tech companies in the world, which specialises in sales and marketing, has a store that goes offline before a product launch, then it can only be for marketing reasons. Sure, a small business might let that be determined by a technical detail, but a multi-billion dollar company? It simply /must/ increase sales over the 24-96 hour period, otherwise they just wouldn't have it going offline. Sure, it's a sweet story to try and say that Apple are losing millions because their system isn't RESTful, but come on, this is the real world and a money-hungry company; like they'd accept any geek of any calibre saying "uhmm sorry we have to take the Apple store offline for X hours every time you want to add a product" - nahhhhhhhhhhhhh. Best, Nathan
> > "Reliability, within the perspective of application architectures, can > be viewed as the degree to which an architecture is susceptible to > failure at the system level in the presence of partial failures within > components, connectors, or data. Styles can improve reliability by > avoiding single points of failure, enabling redundancy, allowing > monitoring, or reducing the scope of failure to a recoverable action." > Sometimes, "reducing the scope of failure to a recoverable action" just means letting a 404 error occur. Off-topic, but along the same lines is a failure I discovered today with a website I did for a law firm. There are six types of business entity one may incorporate in my State. Each one is covered by a non-sequential section of the statutes, so it took me a while just to discover how to link to them, which I did using the Lexis-Nexis Michie service. The branding goal is that a potential client of my customer can see, from the language of the relevant statutes, why they need the services of an attorney. The branding goal was not, a year after the links were created, for them to now point to unrelated nonsense. Thanks, Lexis-Nexis, for changing the Michie service's interface in a way that failed to alert me via the link-checker I regularly run. I came across this by chance, but guess what I'll be doing with the rest of my day today? I'd rather potential clients of my customer see errors, which gives me a chance to immediately fix them, than nonsense I have no way of knowing about until the complaints start coming in from my customer. The more I think about it, the more I believe that the broken link (or, the whole concept of user-based error recovery, if you prefer) was truly one of the great leaps forward of the 20th century. I'd rather have no answer than a wrong answer for the sake of having *an* answer (allegedly to keep users from being confused). 
In REST, changing the interface should propagate automatically without breaking anything; Michie obviously isn't REST, because Lexis-Nexis versioned the interface in a way that broke all links relying on it, in a non-obvious fashion. Note to Apple and Lexis-Nexis: coupling bad, decoupling good! -Eric
Nathan wrote: > > I hate to disagree, but it /has/ to be marketing, if Apple, one of > the biggest tech companies in the world, who specialise in sales and > marketing, has a store that goes offline before a product launch, > then it can only be for marketing reasons. Sure a small business > would let that be determined by a technical detail, but a > multi-billion dollar company? > I'll agree to disagree, but without knowledge of sales figures there's no way to know: Does this boost sales, and if so, is it enough to compensate for the revenue lost when the store is closed? To which I add, Branding 101 says you don't tell a customer who's ready to spend money, to come back later; you run the risk that they won't. > > It simply /must/ increase sales over the 24-96 hour period, otherwise > they just wouldn't have it going off line. > Yes, but inconveniencing customers goes against Apple's branding, so if this is a deliberate marketing ploy, it's driving the branding team nuts. Unless they've determined that this somehow adds mystique. Note that Amazon not only stays open, but allows pre-ordering of products before they're available. Why would Apple, instead of taking early orders, close outright -- when closing's got to be felt on the bottom line for *any* Web store? > > Sure it's a sweet story to try and say that Apple are loosing > millions because there system isn't RESTful, but come on this is the > real world a money hungry company, like they'd accept any geek of any > calibre saying "uhmm sorry we have to take the apple store offline > for X hours every tiem you want to add a product" - nahhhhhhhhhhhhh. > What I'm saying is, the marketing person of any caliber saying this is just as mind-boggling; when in the real world, closing your e-commerce store for any period of time comes with the costs of losing revenue and damaging brand identity. 
If Apple is doing this deliberately and they somehow succeed, I wouldn't recommend that anyone follow their lead, or expect good results from the experiment if they do. -Eric
On Tue, Nov 23, 2010 at 2:48 PM, Eric J. Bowman <eric@...>wrote: > > > > > It simply /must/ increase sales over the 24-96 hour period, otherwise > > they just wouldn't have it going off line. > > > > Yes, but inconveniencing customers goes against Apple's branding, so if > this is a deliberate marketing ploy, it's driving the branding team > nuts. Unless they've determined that this somehow adds mystique. Note > that Amazon not only stays open, but allows pre-ordering of products > before they're available. Why would Apple, instead of taking early > orders, close outright -- when closing's got to be felt on the bottom > line for *any* Web store? > > > > > > Sure it's a sweet story to try and say that Apple are loosing > > millions because there system isn't RESTful, but come on this is the > > real world a money hungry company, like they'd accept any geek of any > > calibre saying "uhmm sorry we have to take the apple store offline > > for X hours every tiem you want to add a product" - nahhhhhhhhhhhhh. > > > > What I'm saying is, the marketing person of any caliber saying this is > just as mind-boggling; when in the real world, closing your e-commerce > store for any period of time comes with the costs of losing revenue and > damaging brand identity. If Apple is doing this deliberately and they > somehow succeed, I wouldn't recommend anyone to follow their lead, or > expect good results from the experiment if they do. > My understanding is not that the store is going offline, but rather that they are not accepting new apps into the store. Users are still able to buy stuff during this time. I have also heard that it is not a technical problem at all. Just giving their human reviewers time off. But that's all hearsay and speculation. -- David blog: http://www.traceback.org twitter: http://twitter.com/dstanek
Hi All - I've been digging around to figure out how best to solve a problem, and am hoping to get thoughts from you folks.
The service in question uses WADL to describe its interface. Its resources are represented using XML and/or JSON, so XSD and JSON Schema will be used to describe representations. These requirements are basically things I have to live with, so I need to figure out how to do my best here.
Now, what I need to do is...
1. be able to version the service's resources
2. include a reference to an xml or json schema in my responses
Here's what I'm thinkin'. I'll create custom vendor MIME types for both XML and JSON. Each will carry two media type parameters: "profile", which points to the JSON or XML schema, and "v", which specifies the version of the representation.
Below is a simple example of what the WADL might look like. In short, the example resource supports four different response representations: two XML (versions 1.0 and 2.0) and two JSON (versions 1.0 and 2.0). Each points to a schema via the profile parameter.
FWIW - This approach is based on things I've read around here, Peter Williams's blog, Bill Burke's REST book, and the latest JSON Schema draft (recently draft 03, congrats!).
Any thoughts?
<resource path="example">
<method name="GET" id="get">
<response>
<representation mediaType="application/vnd.custom.app+json;profile=/path/to/exampleV1.0.json;v=1.0"/>
<representation mediaType="application/vnd.custom.app+json;profile=/path/to/exampleV2.0.json;v=2.0"/>
<representation mediaType="application/vnd.custom.app+xml;profile=/path/to/exampleV1.0.xsd;v=1.0"/>
<representation mediaType="application/vnd.custom.app+xml;profile=/path/to/exampleV2.0.xsd;v=2.0"/>
</response>
</method>
</resource>
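For what it's worth, a server dispatching on these media types has to pull the parameters apart. A minimal sketch in Python (the parser is naive and purely illustrative; production code should use a full MIME-aware parser, and the media type name is the hypothetical one from the WADL above):

```python
def parse_media_type(value):
    """Naive split of 'type;key=value;key=value' into (type, params).
    Illustration only -- does not handle quoted-string parameter values."""
    parts = [p.strip() for p in value.split(";")]
    mtype, params = parts[0], {}
    for p in parts[1:]:
        if "=" in p:
            key, _, val = p.partition("=")
            params[key.strip()] = val.strip()
    return mtype, params

# A server could then dispatch on params.get("v") and advertise
# params.get("profile") to schema-aware clients.
mtype, params = parse_media_type(
    "application/vnd.custom.app+json;profile=/path/to/exampleV2.0.json;v=2.0")
```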
Hi, When you navigate anywhere from a page that was generated as the result of a POST, and then attempt to use the browser's Back button to return to the POST response page, all major browsers display a full-screen error page telling you that they can't redisplay the page, as it was generated as the result of a POST.

Why is this? Wouldn't it be better for the browser to redisplay the stale page, and only display the 'are you sure you want to resubmit' warning if the user actually presses the refresh button? This de facto standard seems unnecessary and annoying.

I notice that IE9 is changing its caching behaviour [1] to be more compliant with the RFC's recommendation that history mechanisms are different from caching, but it still doesn't seem to solve the problem. I'd expect no-cache responses to still be redisplayable via the back button, and probably only no-store responses to possibly prevent history caching.

[1] http://blogs.msdn.com/b/ie/archive/2010/07/14/caching-improvements-in-internet-explorer-9.aspx (see the section described as Back/Forward Optimization)

-- Dave
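For reference, the two directives at issue read roughly like this (paraphrasing RFC 2616 section 14.9; whether history mechanisms are bound by them at all is exactly the ambiguity complained about above):

```
Cache-Control: no-cache   the response may be stored, but must be
                          revalidated with the origin before each reuse;
                          arguably says nothing about history redisplay

Cache-Control: no-store   the response must not be written to
                          non-volatile storage at all, which more
                          plausibly covers history buffers as well
```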
Eb wrote: > > I understand the constraint (I think) but curious about > implementation. I'm of the opinion that this would involve some sort > of load balancing setup where a server is not hit while it's being > upgraded. Is there a solution for this with only a single server? > Of course -- I'm the guy who's always on about how you can implement REST on the LAMP stack using cost-effective shared hosting, remember? If your shopping-cart script requires MySQL connectivity to function, just modify it to use a standard disk-caching library: http://pear.php.net/manual/en/package.caching.cache-lite.intro.php Return 503 on a cache-lite miss, if the DB is unavailable. Product pages should be written to static files, using javascript to fetch dynamic data like stock count (but not less-dynamic data like price). Graceful degradation is the goal: if the DB goes down the site shouldn't fail, just lose functionality. After updating the DB, expire cache-lite and regenerate the static-file product pages. This way, it doesn't matter whether or not, or how, the httpd/mysqld are scaled -- section 2.3.6, Portability. What matters is the frontend is decoupled from the DB, a common fault of shopping-cart scripts. It may just be the case that Apple's proprietary protocol couples the user- agent to the DB, instead of using, say, HTML to automatically update the user-agent; there's no way to know, really. -Eric
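A rough sketch of that fallback logic, translated to Python since the idea is language-neutral (Cache_Lite itself is PHP; the cache layout and function names here are mine, not from any library):

```python
# Graceful degradation: serve from a disk cache; fall back to the DB;
# answer 503 only when both the cache misses and the DB is unavailable.
import json
import os
import time

CACHE_DIR = "/tmp/product-cache"  # illustrative location
TTL = 300                         # seconds a cached entry stays fresh

def fetch_product(product_id, db_query):
    """Return (http_status, body). db_query raises if the DB is down."""
    path = os.path.join(CACHE_DIR, f"{product_id}.json")
    # Fresh cache hit: the DB is never touched.
    if os.path.exists(path) and time.time() - os.path.getmtime(path) < TTL:
        with open(path) as f:
            return 200, json.load(f)
    try:
        data = db_query(product_id)
    except Exception:
        # Cache miss and DB down: degrade, don't die.
        return 503, {"error": "temporarily unavailable"}
    # Refresh the cache for the next request.
    os.makedirs(CACHE_DIR, exist_ok=True)
    with open(path, "w") as f:
        json.dump(data, f)
    return 200, data
```

The point is the one Eric makes: the frontend keeps answering (possibly with stale or reduced data) regardless of what mysqld is doing.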
Hey Eric - On 11/26/2010 03:42 AM, Eric J. Bowman wrote: > Eb wrote: > >> I understand the constraint (I think) but curious about >> implementation. I'm of the opinion that this would involve some sort >> of load balancing setup where a server is not hit while it's being >> upgraded. Is there a solution for this with only a single server? >> >> > Of course -- I'm the guy who's always on about how you can implement > REST on the LAMP stack using cost-effective shared hosting, remember? > If your shopping-cart script requires MySQL connectivity to function, > just modify it to use a standard disk-caching library: > > http://pear.php.net/manual/en/package.caching.cache-lite.intro.php > > Return 503 on a cache-lite miss, if the DB is unavailable. Product > pages should be written to static files, using javascript to fetch > dynamic data like stock count (but not less-dynamic data like price). > Graceful degradation is the goal: if the DB goes down the site > shouldn't fail, just lose functionality. After updating the DB, expire > cache-lite and regenerate the static-file product pages. > > This way, it doesn't matter whether or not, or how, the httpd/mysqld > are scaled -- section 2.3.6, Portability. What matters is the frontend > is decoupled from the DB, a common fault of shopping-cart scripts. It > may just be the case that Apple's proprietary protocol couples the user- > agent to the DB, instead of using, say, HTML to automatically update > the user-agent; there's no way to know, really. > > -Eric > Sure, I get this. My question was targeted towards changing the front-end while it was in use, e.g. a page redesign w/o using a failover mechanism. -- blog: http://eikonne.wordpress.com twitter: http://twitter.com/eikonne
Dear All,
I'm designing a REST API which supports HTML and JSON (it might support XML later).
/people/1.json
{"id":1, "name":"Andy"}
/people/2.json
{"id":1, "name":"Kate"}
It is said that a REST API should be hypertext-driven, so shall I put the format in the URL of the collection resource?
/people
[{"id":1, "url":"/people/1"}, {"id":2, "url":"/people/2"}]
or
[{"id":1, "url":"/people/1.json"}, {"id":2, "url":"/people/2.json"}]
Which is better? Thanks.
Best regards,
Zhi-Qiang Lei
zhiqiang.lei@...
What's better is to use a media type that's meant to drive hypertext
application state -- application/json says nothing about "url" being
the same as @href in HTML, or containing relative URIs. The <a> and
<link> elements in HTML are universally understood. When is a
hyperlink not a hyperlink? When the media type says nothing about
hyperlinks. In text/plain, not even <a> and <link> are hyperlinks.
That <a> and <link> are hyperlinks, is self-descriptive when the media
type is either text/html or application/xhtml+xml (or, <link> for the
application/atom+xml media type).
-Eric
Zhi-Qiang Lei wrote:
>
> Dear All,
>
> I'm designing a REST API which support html and JSON. (might support
> XML later)
>
> /people/1.json
>
> {"id":1, "name":"Andy"}
>
> /people/2.json
>
> {"id":1, "name":"Kate"}
>
> It said that REST API should be hypertext-driven, so shall I put the
> format in URL of collection resource?
>
> /people
>
> [{"id":1, "url":"/people/1"}, {"id":2, "url":"/people/2"}]
> or
> [{"id":1, "url":"/people/1.json"}, {"id":2, "url":"/people/2.json"}]
>
> Which is better? Thanks.
>
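To make Eric's contrast concrete, here is the same link expressed in two media types (fragments for illustration only):

```
In text/html, the media type specification itself makes this a hyperlink,
and defines how relative URIs resolve:

    <a href="/people/1">Andy</a>

In application/json, "url" is just a string; nothing in the JSON spec says
it is a link, or even that it may contain a relative URI:

    {"id": 1, "url": "/people/1"}
```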
On Nov 26, 2010, at 5:41 PM, Zhi-Qiang Lei wrote:
> Dear All,
>
> I'm designing a REST API which support html and JSON. (might support XML later)
If you use generic media types such as application/json or application/xml you are *not* designing a REST API (sorry to nitpick).
If your client makes a service-specific assumption about a certain structure of the JSON or the XML, then it is coupled to the service.
(You likely have HTTP Type I[1])
>
> /people/1.json
>
> {"id":1, "name":"Andy"}
>
> /people/2.json
>
> {"id":1, "name":"Kate"}
>
> It said that REST API should be hypertext-driven, so shall I put the format in URL of collection resource?
It does not matter. But it is good to have distinct resources for the variants, hence a .json or .xml is a good thing. Something like /people/1/json would do the same. (As would /people/1/76ygtft76)
>
> /people
>
> [{"id":1, "url":"/people/1"}, {"id":2, "url":"/people/2"}]
> or
> [{"id":1, "url":"/people/1.json"}, {"id":2, "url":"/people/2.json"}]
>
> Which is better? Thanks.
I prefer linking to /people/1 and redirecting to /people/1.json for Accept: application/json. That way you need not put links to all the formats in there.
Jan
[1] http://www.nordsc.com/ext/classification_of_http_based_apis.html#http-type-one
>
> Best regards,
> Zhi-Qiang Lei
> zhiqiang.lei@...
>
>
>
> ------------------------------------
>
> Yahoo! Groups Links
>
>
>
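A sketch of the redirect-based negotiation Jan describes, in Python for illustration (the variant table and URIs are hypothetical, and a real implementation would honour q-values per RFC 2616 section 14.1):

```python
# Map each negotiable media type to its format-specific URI.
# The generic resource (/people/1) answers with a redirect.
VARIANTS = {
    "application/json": "/people/1.json",
    "application/xml":  "/people/1.xml",
}

def negotiate(accept_header, default="/people/1.json"):
    """Return (status, Location) for a GET on the generic URI."""
    for offered in accept_header.split(","):
        mtype = offered.split(";")[0].strip()  # drop q-values for simplicity
        if mtype in VARIANTS:
            return 303, VARIANTS[mtype]
    return 303, default
```

The benefit Jan points at: the representation links can all use the single generic URI, and the format question is settled per-request.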
Can anybody help with some inspiration for a media type for case files, dossiers and documents? This is for a project where we are going to expose those kinds of governmental data. Here is the general setup:

1) There is an Open Search interface for searching for case files, dossiers and so on.
2) The search results contain links to case files etc.
3) A case file has a title, a responsible case manager and such like. It also has links to its related dossiers.
4) A dossier has a title and more. It also has links to its related documents and its containing case.
5) A document has a title and more. It also has a download link where the binary document can be found, plus a link to its containing dossier.
6) Dossiers have associated parties (persons/roles) and as such there will be links to parties.
7) Lots more of the same stuff ...

Now, all of this is represented as XML. We already have official Danish XML schemas for case files and documents (although not for dossiers). But none of the schemas contains the relevant links. What we want to do is to use the official schemas and decorate them with the relevant links (using atom:link). In this way we should be backwards compatible with any applications that use the official schemas already.

I haven't heard of any official media types for this kind of stuff, so I guess we will have to mint our own. But the data is not really for global consumption - some parts are, but a lot of it is only relevant in a Danish environment. Any recommendations for what to do now? My best guess is to create "application/fesd.dk+xml" where the name "fesd" is the Danish acronym for all the schemas (and much more). I probably also have to put our company name into it, since we don't own the "fesd" name, which means it will be "application/vnd.cbrain.fesd.dk+xml". Right?

The next step would then be to write up the media type documentation and do some coding :-) Comments?

Thanks, Jørn
Jørn Wildt wrote: > > Can anybody help with some inspiration for a media type for case > files, dossiers and documents? This is for a project where we are > going to expose those kinds of governmental data. > Sounds like a hypertext application to me; nothing wrong with HTML, with or without RDFa. Instead of needing an application-specific client, open-government data should be accessible by anyone with a Web browser, IMO. -Eric
I see. As far as I know, application/atom+xml, or application/xml with XLink, can do this. I have one more question. My entry doesn't have a title, and its content is a list of data rather than bits of text. What media type shall I use? It seems Atom cannot handle it.
On Nov 27, 2010, at 2:26 AM, Eric J. Bowman wrote:
> What's better is to use a media type that's meant to drive hypertext
> application state -- application/json says nothing about "url" being
> the same as @href in HTML, or containing relative URIs. The <a> and
> <link> elements in HTML are universally understood. When is a
> hyperlink not a hyperlink? When the media type says nothing about
> hyperlinks. In text/plain, not even <a> and <link> are hyperlinks.
> That <a> and <link> are hyperlinks, is self-descriptive when the media
> type is either text/html or application/xhtml+xml (or, <link> for the
> application/atom+xml media type).
>
> -Eric
>
> Zhi-Qiang Lei wrote:
>>
>> Dear All,
>>
>> I'm designing a REST API which support html and JSON. (might support
>> XML later)
>>
>> /people/1.json
>>
>> {"id":1, "name":"Andy"}
>>
>> /people/2.json
>>
>> {"id":1, "name":"Kate"}
>>
>> It said that REST API should be hypertext-driven, so shall I put the
>> format in URL of collection resource?
>>
>> /people
>>
>> [{"id":1, "url":"/people/1"}, {"id":2, "url":"/people/2"}]
>> or
>> [{"id":1, "url":"/people/1.json"}, {"id":2, "url":"/people/2.json"}]
>>
>> Which is better? Thanks.
>>
Best regards,
Zhi-Qiang Lei
zhiqiang.lei@...
> Sounds like a hypertext application to me; nothing wrong with HTML, > with or without RDFa. Instead of needing an application-specific > client, open-government data should be accessible by anyone with a Web > browser, IMO. That's a good point. But this data is primarily meant for another system to consume - such that it can be presented in a portal where multiple case management systems are merged together. It is the job of the portal to make the data publicly available (with proper access control), and there it will be presented as HTML. /Jørn ----- Original Message ----- From: "Eric J. Bowman" <eric@...> To: "Jørn Wildt" <jw@...> Cc: "Rest Discussion List" <rest-discuss@yahoogroups.com> Sent: Saturday, November 27, 2010 12:03 AM Subject: Re: [rest-discuss] A media type for case files, dossiers and documents Jørn Wildt wrote: > > Can anybody help with some inspiration for a media type for case > files, dossiers and documents? This is for a project where we are > going to expose those kinds of governmental data. > Sounds like a hypertext application to me; nothing wrong with HTML, with or without RDFa. Instead of needing an application-specific client, open-government data should be accessible by anyone with a Web browser, IMO. -Eric
One more thing: as I mentioned, we already have an XML vocabulary for describing cases and documents etc. There is (or should be, at least) a shared understanding of this XML schema in Denmark, so turning to (X)HTML doesn't seem like a feasible idea. /Jørn ----- Original Message ----- From: "Eric J. Bowman" <eric@...> To: "Jørn Wildt" <jw@...> Cc: "Rest Discussion List" <rest-discuss@yahoogroups.com> Sent: Saturday, November 27, 2010 12:03 AM Subject: Re: [rest-discuss] A media type for case files, dossiers and documents Jørn Wildt wrote: > > Can anybody help with some inspiration for a media type for case > files, dossiers and documents? This is for a project where we are > going to expose those kinds of governmental data. > Sounds like a hypertext application to me; nothing wrong with HTML, with or without RDFa. Instead of needing an application-specific client, open-government data should be accessible by anyone with a Web browser, IMO. -Eric
Jørn:
If you decide to register a custom media type, here are my suggestions:
- design a _single_ media type to handle all your documents (case
file, dossier, document)
<root>
<case-file />
<dossier />
<document />
</root>
This means your clients only need to "learn" a single media type and
all its features. Creating several media types (one for each
document, etc.) can be a burden for client implementors.
- if you need to return collections, consider _always_ returning
collections of one or more
GET /case-file/1
RESPONSE:
<root>
<case-file-collection>
<case-file />
</case-file-collection>
</root>
This adds a bit of complexity to the media type, but creates a single
pattern for all clients (which might be an advantage)
- always include a root child element called <error /> that can be
used to return extended error information (beyond status and message)
<root>
<case-file />
<dossier />
<document />
<error />
</root>
See Subbu Allamaraju's design for an error element in his "RESTful Web
Services Cookbook"
- when going through the design details, never include a real URI,
just place holders
<root href="{root-uri}">
<case-file-collection href="{case-file-collection-uri}" />
</root>
This makes it harder to fall into bad habits of relying on non-opaque
URIs in your design
- include search/query element definitions within your media type.
<root>
<queries>
<query href="..." rel="first-page" />
<query href="..." rel="added-today" />
</queries>
By implementing a "query" section in your documents, you "teach"
clients to look for and render your queries in a way that allows for
adding/changing queries over time w/o breaking clients
- include support for "send" elements (POST PUT DELETE)
<root>
<send-collection>
<send href="{...}" rel="add-new-case-file">
<data name="title" />
...
</send>
<send href="{...}" rel="delete-case-file" />
...
</send-collection>
Again, train your clients to look for and understand "send" elements
and you have the ability to add/remove these operations based on the
state of the client at the moment, changes in application-flow on the
server, etc.
- design your document as "format-agnostic" as possible. IOW, if you
start by implementing an XML-formatted media type, consider whether
you might also implement a JSON-based media type for the same
operations/elements sometime in the future. If yes, try to create a
design that can move between the two formats relatively easily (e.g.
how will you handle XML attributes when implementing the JSON type?,
etc.)
You might want to check out my http://amundsen.com/hypermedia/ pages, too.
mca
http://amundsen.com/blog/
http://twitter.com@mamund
http://mamund.com/foaf.rdf#me
#RESTFest 2010
http://rest-fest.googlecode.com
On Sat, Nov 27, 2010 at 08:52, Jørn Wildt <jw@...> wrote:
> One more thing: as I mentioned then we already have an XML vocabulary for
> describing cases and documents etc. There is (should be at least) a shared
> understanding of this XML schema in Denmark, so turning to (X)HTML doesn't
> seem like a feasible idea.
>
> /Jørn
>
> ----- Original Message -----
> From: "Eric J. Bowman" <eric@...>
> To: "Jrn Wildt" <jw@...>
> Cc: "Rest Discussion List" <rest-discuss@yahoogroups.com>
> Sent: Saturday, November 27, 2010 12:03 AM
> Subject: Re: [rest-discuss] A media type for case files, dossiers and
> documents
>
>
> Jørn Wildt wrote:
>>
>> Can anybody help with some inspiration for a media type for case
>> files, dossiers and documents? This is for a project where we are
>> going to expose those kinds of govermental data.
>>
>
> Sounds like a hypertext application to me; nothing wrong with HTML,
> with or without RDFa. Instead of needing an application-specific
> client, open-government data should be accessible by anyone with a Web
> browser, IMO.
>
> -Eric
>
>
>
> ------------------------------------
>
> Yahoo! Groups Links
>
>
>
>
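Pulling Mike's suggestions together, a single response in such a media type might look like this (element names, link relations and the URI placeholders are only a sketch assembled from the fragments above, not a finished design):

```
<root href="{root-uri}">
  <error />
  <queries>
    <query href="{search-uri}" rel="first-page" />
    <query href="{search-uri}" rel="added-today" />
  </queries>
  <case-file-collection href="{case-file-collection-uri}">
    <case-file href="{case-file-uri}">
      <dossier href="{dossier-uri}" />
    </case-file>
  </case-file-collection>
  <send-collection>
    <send href="{add-uri}" rel="add-new-case-file">
      <data name="title" />
    </send>
    <send href="{delete-uri}" rel="delete-case-file" />
  </send-collection>
</root>
```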
Thanks for pointing that out. Will it be solved if I drive the hypertext with a JSON schema and just have generic JSON represent generic data?
On Nov 27, 2010, at 2:59 AM, Jan Algermissen wrote:
>
> On Nov 26, 2010, at 5:41 PM, Zhi-Qiang Lei wrote:
>
>> Dear All,
>>
>> I'm designing a REST API which support html and JSON. (might support XML later)
>
> If you use generic media types such as application/json or application/xml you are *not* designing a REST API (sorry, to nit pick).
>
> If your client makes a service specific assumption about a certain structure of the json or the xml then it is coupled to the service.
> (You likely have HTTP Type I[1])
>
>
>>
>> /people/1.json
>>
>> {"id":1, "name":"Andy"}
>>
>> /people/2.json
>>
>> {"id":1, "name":"Kate"}
>>
>> It said that REST API should be hypertext-driven, so shall I put the format in URL of collection resource?
>
> It does not matter. But it is good to have distinct resources for the variants, hence a .json or .xml is a good thing. Something like /people/1/json would do the same. (As would /people/1/76ygtft76)
>>
>> /people
>>
>> [{"id":1, "url":"/people/1"}, {"id":2, "url":"/people/2"}]
>> or
>> [{"id":1, "url":"/people/1.json"}, {"id":2, "url":"/people/2.json"}]
>>
>> Which is better? Thanks.
>
> I prefer linking to /people/1 and redirecting to /people/1.json for Accept: application/json That way you need put all links to all formats in there.
>
>
> Jan
>
> [1] http://www.nordsc.com/ext/classification_of_http_based_apis.html#http-type-one
>
>
>>
>> Best regards,
>> Zhi-Qiang Lei
>> zhiqiang.lei@...
>>
>>
>>
>> ------------------------------------
>>
>> Yahoo! Groups Links
>>
>>
>>
>
Best regards,
Zhi-Qiang Lei
zhiqiang.lei@...
On Nov 28, 2010, at 5:07 AM, Zhi-Qiang Lei wrote:
> Thanks for pointing out. Will it be solved if I drive the hypertext in JSON schema and just have generic JSON represent generic data?
No. What you need is a media type that is specific; otherwise you end up creating coupling between client and service. (This might be perfectly ok in your case, but it is not REST. IOW: choose what suits your needs, but understand your choice :-)
If you have 3 clients in an environment you own, then the coupling should not be too bad. If you have millions of clients that you cannot achieve coordination with, then eliminate all coupling - to allow for your service to evolve if it needs to.
It is, BTW, not difficult to specify a JSON schema, write down the semantics of the link relations you include, and give that stuff a media type name. Whether you later register that thing with IANA is a question you need not answer right now. I think that even *thinking* about your payload semantics in a service-independent, more global way improves the overall design.
Creating a media type is not rocket science, and not just for the big use cases. It is just the way you turn service-specific coupling into globally shared semantics.
Sure it is work, but the benefit is completely decoupled components and no need to maintain service-specific descriptions.
Jan
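As an illustration of how small such a spec can be, a response in a hypothetical "application/vnd.example.people+json" (the name, structure and link relations here are invented, not a registered type) might look like:

```
{
  "people": [
    {"name": "Andy", "links": [{"rel": "self", "href": "/people/1"}]},
    {"name": "Kate", "links": [{"rel": "self", "href": "/people/2"}]}
  ],
  "links": [
    {"rel": "next", "href": "/people?page=2"}
  ]
}
```

The accompanying one-page spec would define the "links" structure and the meaning of each rel value; a client coded against that spec is coupled to the media type, not to any one service.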
>
> On Nov 27, 2010, at 2:59 AM, Jan Algermissen wrote:
>
>>
>> On Nov 26, 2010, at 5:41 PM, Zhi-Qiang Lei wrote:
>>
>>> Dear All,
>>>
>>> I'm designing a REST API which support html and JSON. (might support XML later)
>>
>> If you use generic media types such as application/json or application/xml you are *not* designing a REST API (sorry, to nit pick).
>>
>> If your client makes a service specific assumption about a certain structure of the json or the xml then it is coupled to the service.
>> (You likely have HTTP Type I[1])
>>
>>
>>>
>>> /people/1.json
>>>
>>> {"id":1, "name":"Andy"}
>>>
>>> /people/2.json
>>>
>>> {"id":1, "name":"Kate"}
>>>
>>> It said that REST API should be hypertext-driven, so shall I put the format in URL of collection resource?
>>
>> It does not matter. But it is good to have distinct resources for the variants, hence a .json or .xml is a good thing. Something like /people/1/json would do the same. (As would /people/1/76ygtft76)
>>>
>>> /people
>>>
>>> [{"id":1, "url":"/people/1"}, {"id":2, "url":"/people/2"}]
>>> or
>>> [{"id":1, "url":"/people/1.json"}, {"id":2, "url":"/people/2.json"}]
>>>
>>> Which is better? Thanks.
>>
>> I prefer linking to /people/1 and redirecting to /people/1.json for Accept: application/json That way you need put all links to all formats in there.
>>
>>
>> Jan
>>
>> [1] http://www.nordsc.com/ext/classification_of_http_based_apis.html#http-type-one
>>
>>
>>>
>>> Best regards,
>>> Zhi-Qiang Lei
>>> zhiqiang.lei@...
>>>
>>>
>>>
>>> ------------------------------------
>>>
>>> Yahoo! Groups Links
>>>
>>>
>>>
>>
>
>
> Best regards,
> Zhi-Qiang Lei
> zhiqiang.lei@...
>
>
>
> ------------------------------------
>
> Yahoo! Groups Links
>
>
>
Hi Mike Thanks for some really good input! > - design a _single_ media type to handle all your documents Yes! > - if you need to return collections, consider _always_ returning collections of one or more Okay. But aren't collections always one or more? I guess you mean: also return one-or-more even if it's only a "one item" URL? > - always include a root child element called <error /> Yes. > - when going through the design details, never include a real URI Yes! > - include search/query element definitions within your media type. That's an interesting idea. Much like always having a "Search" entry on every page of a website. > - include support for "send" elements (POST PUT DELETE) That will be later. So far it is querying only. But, yes, certainly. > - design your document as "format-agnostic" as possible Will try. /Jørn
Hi mike!
On Sat, Nov 27, 2010 at 4:32 PM, mike amundsen <mamund@...> wrote:
> - include support for "send" elements (POST PUT DELETE)
> <root>
> <send-collection>
> <send href="{...}" rel="add-new-case-file">
> <data name="title" />
> ...
> </send>
> <send href="{...}" rel="delete-case-file" />
> ...
> </send-collection>
> Again, train your clients to look for and understand "send" elements
> and you have the ability to add/remove these operations based on the
> state of the client at the moment, changes in application-flow on the
> server, etc.
Is it really reasonable to expect your machine clients to react and
comprehend these sort of 'form' changes at run time? I understand that
this works when the client is a human since there is a lot of human
intuition at play, however for machines this just seems an impractical
expectation - which could be costly in terms of adoption of your
application. It seems like requiring developers to build machine
clients that interpret several form changes/configurations is
significantly more trouble than just creating several distinct link
relations that could carry equivalent semantics; adding extra link
relations with the latter approach doesn't seem like it would be any
more disruptive than adding another form configuration in the former.
Cheers,
Mike
MK:
<snip>
> Is it really reasonable to expect your machine clients to react and
> comprehend these sort of 'form' changes at run time? I understand that
> this works when the client is a human since there is a lot of human
> intuition at play, however for machines this just seems an impractical
> expectation - which could be costly in terms of adoption of your
> application.
</snip>
It may be that m2m cases will require different semantics. For example
Atom semantics do not include any dynamic "send" options within the
document itself other than possible <link /> elements w/o added
parameters. Atom handles "send" information by documenting
required/optional data elements for POST(create) and PUT(update) and
expecting clients to have this documentation information "baked" into
the client code.
These M2M semantics, if fundamentally different, can most-likely be
encoded in the response representation in a way that peacefully
co-exists with the examples I gave earlier in this thread. IOW, some
clients might ignore parts of the response they don't understand and
only "re-act" to sections or elements of the document that "make
sense" to that client. It is also possible that M2M clients will need
to negotiate w/ the server for their own media-type.
<snip>
It seems like requiring developers to build machine
> clients that interpret several form changes/configurations is
> significantly more trouble than just creating several distinct link
> relations that could carry equivalent semantics; adding extra link
> relations with the latter approach doesn't seem like it would be any
> more disruptive than adding another form configuration in the former.
</snip>
It's an interesting idea. I, myself have not done what you describe
here. In the clients I've written lately, interpreting the dynamic
arguments for a query is not a challenge. However, the hard work is
matching these "place-holders" w/ the client's existing local state
data ("which piece of data that I have in memory is the one to place
in the "name" field for this query?"). So far, whether the queries are
dynamic (varying details included in the response representation) or
static (rules written into the documentation are hard-coded in the
client), this match continues to be the primary challenge for me.
One way I've tackled this problem is to write documentation that lists
all the possible argument elements that might appear within queries or
"send" operations. This allows client authors to code their own
mapping of local state to the elements within the representations and
execute this "resolution of state" whenever the elements appear in the
representation (including for "send" operations). I've only done this
a couple times and it has been successful. However, I'm not sure how
robust this solution is over time (e.g. I've not had the opportunity
to test this approach against the "Architectural Properties of Key
Interest" items mentioned in Roy's dissertation[1]).
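As a rough sketch of that documented-argument approach (all names here are made up for illustration): the client author writes one mapping from the documented argument elements to local state, and applies it whenever those elements show up in a query or "send" operation.

```python
# Hypothetical local state held by the client.
local_state = {"case_title": "Blah 1", "owner": "John"}

# One mapping, written once against the documented argument names,
# reused for every query/"send" that mentions them.
ARG_MAP = {"title": "case_title", "responsible": "owner"}

def resolve(arg_names, state, mapping):
    """Resolve each documented argument from local state, skipping unknowns."""
    return {name: state[mapping[name]]
            for name in arg_names if name in mapping}

# A dynamic query advertising its arguments resolves like so:
print(resolve(["title", "responsible"], local_state, ARG_MAP))
```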
I'd like to see more about how (specifically) you are using link
relations to handle queries for M2M clients. It'd be interesting to me
to see how your approach affects the actual client coding.
Finally, I gave a talk recently that covers some high-level aspects of
my client-coding work for Hypermedia. One of the aims of the talk is
to show how client-coding changes when the representations include
hypermedia elements. The slides and source code (C#) are available
for download[2]. I'd be interested in any comments on this material.
[1] http://www.ics.uci.edu/~fielding/pubs/dissertation/net_app_arch.htm#sec_2_3
[2] http://amundsen.com/talks/#beyond-web20
mca
http://amundsen.com/blog/
http://twitter.com/mamund
http://mamund.com/foaf.rdf#me
#RESTFest 2010
http://rest-fest.googlecode.com
On Sun, Nov 28, 2010 at 09:34, Mike Kelly <mike@mykanjo.co.uk> wrote:
> Hi mike!
>
> On Sat, Nov 27, 2010 at 4:32 PM, mike amundsen <mamund@...> wrote:
>> - include support for "send" elements (POST PUT DELETE)
>> <root>
>> <send-collection>
>> <send href="{...}" rel="add-new-case-file">
>> <data name="title" />
>> ...
>> </send>
>> <send href="{...}" rel="delete-case-file" />
>> ...
>> </send-collection>
>> Again, train your clients to look for and understand "send" elements
>> and you have the ability to add/remove these operations based on the
>> state of the client at the moment, changes in application-flow on the
>> server, etc.
>
> Is it really reasonable to expect your machine clients to react and
> comprehend these sort of 'form' changes at run time? I understand that
> this works when the client is a human since there is a lot of human
> intuition at play, however for machines this just seems an impractical
> expectation - which could be costly in terms of adoption of your
> application. It seems like requiring developers to build machine
> clients that interpret several form changes/configurations is
> significantly more trouble than just creating several distinct link
> relations that could carry equivalent semantics; adding extra link
> relations with the latter approach doesn't seem like it would be any
> more disruptive than adding another form configuration in the former.
>
> Cheers,
> Mike
>
<snip>
> Wouldn't it be better to use a media type parameters to indicate which type
> of document it is instead of having a single gigantic schema?
</snip>
"gigantic" is a relative term. XHTML can handle just about any
representation and its schema is not at all "gigantic" by my own
measure.
I suspect you are thinking of document designs where there are possibly
hundreds of unique data points (name, customer-name, client-name,
store-name, etc.). It is not required that each of these unique data
points use a unique XML element (to use XML as the example). XHTML
solves this problem by using a single schema element (<input />) with a
number of useful attributes. There are other ways to handle it as well.
<snip>
> Examples would be:
>
> application/vnd.example.com; type=case-file
> application/vnd.example.com; type=dossier
> application/vnd.example.com; type=document
</snip>
My experience is that this information does not belong in the
content-type/accept header space. Intermediaries rarely need to know
this information and clients can use an element within the payload
itself (<root type="case-file" />, etc.) just as easily as pull that
information from the content-type header.
mca
http://amundsen.com/blog/
http://twitter.com/mamund
http://mamund.com/foaf.rdf#me
#RESTFest 2010
http://rest-fest.googlecode.com
On Sun, Nov 28, 2010 at 15:52, Trygve Laugstøl <trygvis@...> wrote:
> On 11/27/10 5:32 PM, mike amundsen wrote:
>>
>> Jørn:
>>
>> If you decide to register a custom media type, here are my suggestions:
>>
>> - design a _single_ media type to handle all your documents (case
>> file, dossier, document)
>> <root>
>> <case-file />
>> <dossier />
>> <document />
>> </root>
>> This means your clients only need to "learn" a single media type and
>> all its features. Creating several media types (one for each
>> document, etc.) can be a burden for client implementors.
> > Wouldn't it be better to use a media type parameters to indicate which type
> > of document it is instead of having a single gigantic schema? Examples would
> > be:
> >
> > application/vnd.example.com; type=case-file
> > application/vnd.example.com; type=dossier
> > application/vnd.example.com; type=document
> >
> > To me that would make it even easier to introduce new schemas. If you want
> > to share XML types between the different "root types" that can still be done
> > with schema name spaces just like you'd reuse atom's link type.
> >
> > [snipping lots of good stuff]
> >
> > --
> > Trygve
> >
Ummmm, anyone else find some of this stuff downright wrong, from an
architectural standpoint?

"HTTPS provides the baseline of safety for web application users, and
there is no performance- or cost-based reason to stick with HTTP. Web
application providers undermine their business models when, by
continuing to use HTTP, they enable a wide range of attackers anywhere
on the internet to compromise users' information."

https://www.eff.org/pages/how-deploy-https-correctly

Citing Gmail as a baseline-performance example is kinda lame, for one
thing, because it's not the sort of application that's meant to be
cached.

There's a golden opportunity being missed here, to advocate for digest
authentication (deployed correctly) as a more scalable solution than
replacing HTTP with HTTPS in order to keep cookie-based authentication
as the status quo. But, switch to HTTPS even for anonymous access?
Isn't that an overreaction?

-Eric
Have to agree with you Eric.. I don't buy that using HTTP for non-secure resources undermines your business. Banks are not all HTTPS all the time, and they get plenty of users.
--- On Sun, 11/28/10, Eric J. Bowman <eric@bisonsystems.net> wrote:
From: Eric J. Bowman <eric@...>
Subject: [rest-discuss] EFF HTTPS HowTo -- HTTP considered harmful?
To: rest-discuss@yahoogroups.com
Date: Sunday, November 28, 2010, 10:29 PM
Ummmm, anyone else find some of this stuff downright wrong, from an
architectural standpoint?
"HTTPS provides the baseline of safety for web application users, and
there is no performance- or cost-based reason to stick with HTTP. Web
application providers undermine their business models when, by
continuing to use HTTP, they enable a wide range of attackers anywhere
on the internet to compromise users' information."
https://www.eff.org/pages/how-deploy-https-correctly
Citing Gmail as a baseline-performance example is kinda lame, for one
thing, because it's not the sort of application that's meant to be
cached.
There's a golden opportunity being missed here, to advocate for digest
authentication (deployed correctly) as a more scalable solution than
replacing HTTP with HTTPS in order to keep cookie-based authentication
as the status quo. But, switch to HTTPS even for anonymous access?
Isn't that an overreaction?
-Eric
On 29 Nov 2010, at 09:40, Kevin Duffey wrote:
> Have to agree with you Eric.. I don't buy that using HTTP for non-secure
> resources undermines your business. Banks are not all HTTPS all the
> time, and they get plenty of users.

Banks are not good guides to security, or anything really. Did you not
notice that the world's citizens had to bail them out recently? Most
banks have useless password-based security. Even though cryptographic
techniques that could help them protect their clients have been out for
30 years, available for an initial sum much smaller than the recent
bailouts per individual, they have never, it seems to me, done anything
serious in that regard.

The problem if you do something in http is that links on those pages can
be changed in transit, to point to, say, the fake bank account. If the
client is a little naive he won't notice the difference.

> --- On Sun, 11/28/10, Eric J. Bowman <eric@...> wrote:
>
> From: Eric J. Bowman <eric@bisonsystems.net>
> Subject: [rest-discuss] EFF HTTPS HowTo -- HTTP considered harmful?
> To: rest-discuss@yahoogroups.com
> Date: Sunday, November 28, 2010, 10:29 PM
>
> Ummmm, anyone else find some of this stuff downright wrong, from an
> architectural standpoint?
>
> "HTTPS provides the baseline of safety for web application users, and
> there is no performance- or cost-based reason to stick with HTTP. Web
> application providers undermine their business models when, by
> continuing to use HTTP, they enable a wide range of attackers anywhere
> on the internet to compromise users' information."
>
> https://www.eff.org/pages/how-deploy-https-correctly
>
> Citing Gmail as a baseline-performance example is kinda lame, for one
> thing, because it's not the sort of application that's meant to be
> cached.
>
> There's a golden opportunity being missed here, to advocate for digest
> authentication (deployed correctly) as a more scalable solution than
> replacing HTTP with HTTPS in order to keep cookie-based authentication
> as the status quo. But, switch to HTTPS even for anonymous access?
> Isn't that an overreaction?
>
> -Eric

Social Web Architect
http://bblfish.net/
On Sat, Nov 27, 2010 at 3:59 AM, Jan Algermissen
<algermissen1971@...> wrote:
>
> On Nov 26, 2010, at 5:41 PM, Zhi-Qiang Lei wrote:
> >
> > I'm designing a REST API which support html and JSON. (might support XML later)
>
> If you use generic media types such as application/json or application/xml you are *not* designing a REST API (sorry, to nit pick).
>
> If your client makes a service specific assumption about a certain structure of the json or the xml then it is coupled to the service.
> (You likely have HTTP Type I[1])
Sorry to ask, but is there a JSON + links generic media type somewhere?
Something where a JSON object containing the likes of
"links": [ { "href": "/people/1", "rel": "prev
http://x.example.com/rels#person" } ]
would, by convention, be interpreted as "hyper"?
I've seen https://datatracker.ietf.org/doc/draft-zyp-json-schema/?include_text=1
but it seems all-encompassing.
Best regards,
--
John Mettraux - http://jmettraux.wordpress.com
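For what it's worth, a client for the "links"-array convention John describes could be as small as the sketch below (the document and rel URI are hypothetical); @rel is treated as space-separated, as in HTML and Atom:

```python
import json

# Hypothetical representation following the "links" array convention.
body = json.loads("""
{
  "name": "Jane",
  "links": [
    {"href": "/people/1", "rel": "prev http://x.example.com/rels#person"},
    {"href": "/people/3", "rel": "next"}
  ]
}
""")

def find_link(doc, rel):
    # @rel may carry several space-separated relation values.
    for link in doc.get("links", []):
        if rel in link["rel"].split():
            return link["href"]
    return None

print(find_link(body, "prev"))
```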
On 11/27/10 5:32 PM, mike amundsen wrote:
> Jørn:
>
> If you decide to register a custom media type, here are my suggestions:
>
> - design a _single_ media type to handle all your documents (case
> file, dossier, document)
> <root>
> <case-file />
> <dossier />
> <document />
> </root>
> This means your clients only need to "learn" a single media type and
> all its features. Creating several media types (one for each
> document, etc.) can be a burden for client implementors.

Wouldn't it be better to use media type parameters to indicate which
type of document it is instead of having a single gigantic schema?
Examples would be:

application/vnd.example.com; type=case-file
application/vnd.example.com; type=dossier
application/vnd.example.com; type=document

To me that would make it even easier to introduce new schemas. If you
want to share XML types between the different "root types" that can
still be done with schema namespaces, just like you'd reuse atom's link
type.

[snipping lots of good stuff]

--
Trygve
<snip>
> One practical advantage of the media type parameter approach is that
> it allows the client to more easily dispatch to an appropriate content
> handler without reading the entity body - making it easier to
> efficiently process the content as a stream vs. memory. Seems like
> the 'profile' attribute could be useful in this case too...
</snip>
While some internal languages might make it easier to handle "dispatch"
operations if the data is publicly exposed in the HTTP header, I think
this is not a solid reason for implementing it this way. As internal
languages can vary over time and location, establishing the public
interface to optimize against one particular internal component feature
(e.g. language, framework, etc.) is not a good long-term decision.
I've used the @profile approach[1][2] a number of times when the public
interface is implemented using XHTML. I have not yet used it in any
custom media types, but it would work the same way.
[1] http://gmpg.org/xmdp/
[2] http://dev.w3.org/html5/profiles/source/
mca
http://amundsen.com/blog/
http://twitter.com/mamund
http://mamund.com/foaf.rdf#me
#RESTFest 2010
http://rest-fest.googlecode.com
On Mon, Nov 29, 2010 at 08:53, Tim Williams <twilliams@...> wrote:
> On Sun, Nov 28, 2010 at 4:04 PM, mike amundsen <mamund@...> wrote:
>> <snip>
>>> Wouldn't it be better to use a media type parameters to indicate which type
>>> of document it is instead of having a single gigantic schema?
>> </snip>
>> "gigantic" is a relative term. XHTML can handle just about any
>> representation and its schema is not at all "gigantic" by my own
>> measure.
>>
>> I suspect you are thinking of document designs where there are
>> possibly hundreds of unique data points (name, customer-name,
>> client-name, store-name, etc.). It is not required that each of these
>> unique data points use a unique XML element (to use XML as the
>> example). XHTML solves this problem by using a single schema element
>> (<input />) with a number of useful attributes. There are other ways
>> to handle it as well.
>>
>> <snip>
>>> Examples would be:
>>>
>>> application/vnd.example.com; type=case-file
>>> application/vnd.example.com; type=dossier
>>> application/vnd.example.com; type=document
>> </snip>
>> My experience is that this information does not belong in the
>> content-type/accept header space. Intermediaries rarely need to know
>> this information and clients can use an element within the payload
>> itself (<root type="case-file" />, etc.) just as easily as pull that
>> information from the content-type header.
>
> One practical advantage of the media type parameter approach is that
> it allows the client to more easily dispatch to an appropriate content
> handler without reading the entity body - making it easier to
> efficiently process the content as a stream vs. memory. Seems like
> the 'profile' attribute could be useful in this case too...
>
> --tim
>
John:
> Sorry to ask, but is there a JSON + links generic media type somewhere?

I think of it more as a convention or design guideline than a generic
media type.

http://blog.kevburnsjr.com/standard-link-formats-in-hypermedia-representations

XML is to HTML as JSON is to _______
(the generic hypermedia type you're describing)

Maybe HMON is a good name?
(Hypermedia Object Notation)

- Kev
fwiw, here's an example representation using a json equivalent to
application/vnd.hal+xml

https://gist.github.com/raw/578120/79e081b73e2e311d1b4f1e4d8493cf4795ac8c7f/halv2.json

Cheers,
Mike

On Mon, Nov 29, 2010 at 3:23 PM, Kev Burns <kevburnsjr@...> wrote:
> John:
>
>> Sorry to ask, but is there a JSON + links generic media type somewhere?
>
> I think of it more as a convention or design guideline than a generic media
> type.
>
> http://blog.kevburnsjr.com/standard-link-formats-in-hypermedia-representations
>
> XML is to HTML as JSON is to _______
> (the generic hypermedia type you're describing)
>
> Maybe HMON is a good name?
> (Hypermedia Object Notation)
>
> - Kev
On Wed, Nov 24, 2010 at 10:03 PM, trollyrogers <trollyrogers@...> wrote:
> Hi All - I've been digging around to figure out how best to solve a problem,
> and am hoping to get thoughts from you folks.
>
> The service in question uses wadl to describe the interface of the service.
> Its resources are represented using xml and or json, so xsd and JSON Schema
> will be used to describe representations. These requirements are basically
> things i have to live with, so I need to figure out how to do my best here.
>
> Now what i need to do is...
>
> 1. be able to version the service's resources
> 2. include a reference to an xml or json schema in my responses
>
> Here's what I'm thinkin'. I'll create custom vendor mime types for both xml
> and json. Each will contain two mime type parameters; "profile" which points
> to the json or xml schema, and "v" which specifies the version of the
> representation.
>
> Below is a simple example of what the wadl might look like. In short, the
> example resource supports four different response representations, two of
> which are xml; versions 1.0 and 2.0, and the other two are json; versions
> 1.0 and 2.0. And each points to a schema via profile parameter.
>
> FWIW - This approach is based on things I've read around here, Peter
> Williams' blog, Bill Burke's REST book, and the latest JSON Schema draft
> (recently 03, congrats!).
>
> Any thoughts?
>
> <resource path="example">
> <method name="GET" id="get">
> <response>
> <representation mediaType="application/vnd.custom.app+json;profile=/path/to/exampleV1.0.json;v=1.0"/>
> <representation mediaType="application/vnd.custom.app+json;profile=/path/to/exampleV2.0.json;v=2.0"/>
> <representation mediaType="application/vnd.custom.app+xml;profile=/path/to/exampleV1.0.xsd;v=1.0"/>
> <representation mediaType="application/vnd.custom.app+xml;profile=/path/to/exampleV2.0.xsd;v=2.0"/>
> </response>
> </method>
> </resource>

I'd advise against using parameters to describe the version.

Assuming you have defined your media type with "must ignore" semantics,
you only need a new version when breaking changes are needed.
(Obviously, breaking changes should be avoided whenever possible, but
they are sometimes necessary.) If you are introducing a breaking change
then it is a different media type by definition. Putting important
information about the semantics of a representation in a parameter is
inappropriate.

Using parameters is also less than ideal from a practical standpoint.
Tool chains tend to provide little or no support for media type
parameters. Putting important information in the parameters will
require more work in both user agents and servers than just minting a
brand new media type id.

Mint a new media type id every time you are forced to introduce an
incompatible media type into your system. They are cheap; you can have
as many as you need.

Peter
barelyenough.org
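A sketch of the dispatching this suggests, keyed on whole media type ids rather than parameters (the type names and handlers below are hypothetical):

```python
# Hypothetical handler table: one entry per minted media type id,
# including the id minted for a breaking v2 change.
HANDLERS = {
    "application/vnd.example.case-file+xml":    lambda body: ("v1", body),
    "application/vnd.example.case-file-v2+xml": lambda body: ("v2", body),
}

def dispatch(content_type, body):
    # Strip parameters (charset etc.) and look up the bare media type id;
    # nothing semantically important is hiding in the parameters.
    media_type = content_type.split(";")[0].strip().lower()
    if media_type not in HANDLERS:
        raise ValueError("unsupported media type: " + media_type)
    return HANDLERS[media_type](body)

print(dispatch("application/vnd.example.case-file+xml; charset=utf-8", "<x/>"))
```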
A question on link relations ... in my application we have, as mentioned
earlier, case files that contain references to dossiers and parties
(persons).
One way to model this could be:
<CaseFile>
<ResponsibleCaseWorker href="{url-to-party}">John</ResponsibleCaseWorker>
<CreatedBy href="{url-to-party}">Lisa</CreatedBy>
<Dossiers>
<Dossier href="{url-to-dossier1}">
<Name>Blah 1</Name>
</Dossier>
<Dossier href="{url-to-dossier2}">
<Name>Blah 2</Name>
</Dossier>
</Dossiers>
</CaseFile>
Another way to do it could be:
<CaseFile>
<ResponsibleCaseWorker>John</ResponsibleCaseWorker>
<atom:link href="{url-to-party}" rel="responsible-case-worker"
title="John" type="application/fesd+xml"/>
<atom:link href="{url-to-party}" rel="created-by" title="Lisa"
type="application/fesd+xml"/>
<Dossiers>
<Dossier>
<Title>Blah 1</Title>
<atom:link href="{url-to-dossier}" rel="fesd-dossier" title="Blah 1"
type="application/fesd+xml"/>
</Dossier>
<Dossier>
<Title>Blah 2</Title>
<atom:link href="{url-to-dossier}" rel="fesd-dossier" title="Blah 2"
type="application/fesd+xml"/>
</Dossier>
</Dossiers>
</CaseFile>
The first format uses custom link formats, the second uses standard Atom
links.
Which should I prefer? I guess the atom:link format is best ... but I could
use some reasons for why it is best?
Thanks, J�rn
Jørn:
I've used both formats <link /> and <{name} href="..." /> in the past
and don't find either one especially compelling over the other.
When I use the <link .. /> style, I *always* use a @rel (or some
similar attribute) to decorate the link. I then "teach" client
applications to look for the @rel in order to understand the
meaning/use of the related @href.
<item>
<link rel="edit" href="..." />
<link rel="attachments" href="..." />
<link rel="history" href="..." />
...
</item>
When I use the <{name} /> style, I don't always use a @rel decorator.
I usually tell clients to search for the name of the element in order
to understand the meaning/use of the related @href.
<item href="...">
<edit href="..." />
<attachments href="..." />
<history href="..." />
</item>
The exception to the second rule (for me) is:
- if there could be multiple instances of the same <{name.. /> element
in the same context (e.g. a collection of queries, etc.).
<queries>
<query href="..." rel="attachments" />
<query href="..." rel="history" />
</queries>
In this case I *always* use a @rel decorator to allow clients to
"find" the link they are interested in.
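A minimal sketch of "teaching" a client to find links by @rel (using Python's stdlib XML parser; the document below is hypothetical):

```python
import xml.etree.ElementTree as ET

doc = ET.fromstring("""
<queries>
  <query href="/cases/1/attachments" rel="attachments" />
  <query href="/cases/1/history" rel="history" />
</queries>
""")

def find_by_rel(root, rel):
    """Locate a link by its @rel value, ignoring element names and order."""
    for el in root.iter():
        if rel in (el.get("rel") or "").split():
            return el.get("href")
    return None

print(find_by_rel(doc, "history"))
```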
Mostly these are _design_ decisions. My solutions vary based on the
clients, coding environment, etc.
mca
http://amundsen.com/blog/
http://twitter.com/mamund
http://mamund.com/foaf.rdf#me
#RESTFest 2010
http://rest-fest.googlecode.com
On Mon, Nov 29, 2010 at 16:19, Jørn Wildt <jw@fjeldgruppen.dk> wrote:
> A question on link relations ... in my application we have, as mentioned
> earlier, case files that contains references to dossiers and parties
> (persons).
>
> One way to model this could be:
>
> <CaseFile>
> <ResponsibleCaseWorker href="{url-to-party}">John</ResponsibleCaseWorker>
> <CreatedBy href="{url-to-party}">Lisa</CreatedBy>
> <Dossiers>
> <Dossier href="{url-to-dossier1}">
> <Name>Blah 1</Name>
> </Dossier>
> <Dossier href="{url-to-dossier2}">
> <Name>Blah 2</Name>
> </Dossier>
> </Dossiers>
> </CaseFile>
>
> Another way to do it could be:
>
> <CaseFile>
> <ResponsibleCaseWorker>John</ResponsibleCaseWorker>
> <atom:link href="{url-to-party}" rel="responsible-case-worker"
> title="John" type="application/fesd+xml"/>
> <atom:link href="{url-to-party}" rel="created-by" title="Lisa"
> type="application/fesd+xml"/>
> <Dossiers>
> <Dossier>
> <Title>Blah 1</Title>
> <atom:link href="{url-to-dossier}" rel="fesd-dossier" title="Blah 1"
> type="application/fesd+xml"/>
> </Dossier>
> <Dossier>
> <Title>Blah 2</Title>
> <atom:link href="{url-to-dossier}" rel="fesd-dossier" title="Blah 2"
> type="application/fesd+xml"/>
> </Dossier>
> </Dossiers>
> </CaseFile>
>
> The first format uses custom link formats, the second uses standard Atom
> links.
>
> Which should I prefer? I guess the atom:link format is best ... but I could
> use some reasons for why it is best?
>
> Thanks, Jørn
>
>
>
> ------------------------------------
>
> Yahoo! Groups Links
>
>
>
>
On 11/29/10 10:19 PM, Jørn Wildt wrote:
> A question on link relations ... in my application we have, as mentioned
> earlier, case files that contains references to dossiers and parties
> (persons).
>
> One way to model this could be:
>
> <CaseFile>
> <ResponsibleCaseWorker href="{url-to-party}">John</ResponsibleCaseWorker>
> <CreatedBy href="{url-to-party}">Lisa</CreatedBy>
> <Dossiers>
> <Dossier href="{url-to-dossier1}">
> <Name>Blah 1</Name>
> </Dossier>
> <Dossier href="{url-to-dossier2}">
> <Name>Blah 2</Name>
> </Dossier>
> </Dossiers>
> </CaseFile>
>
> Another way to do it could be:
>
> <CaseFile>
> <ResponsibleCaseWorker>John</ResponsibleCaseWorker>
> <atom:link href="{url-to-party}" rel="responsible-case-worker"
> title="John" type="application/fesd+xml"/>
> <atom:link href="{url-to-party}" rel="created-by" title="Lisa"
> type="application/fesd+xml"/>
> <Dossiers>
> <Dossier>
> <Title>Blah 1</Title>
> <atom:link href="{url-to-dossier}" rel="fesd-dossier" title="Blah 1"
> type="application/fesd+xml"/>
> </Dossier>
> <Dossier>
> <Title>Blah 2</Title>
> <atom:link href="{url-to-dossier}" rel="fesd-dossier" title="Blah 2"
> type="application/fesd+xml"/>
> </Dossier>
> </Dossiers>
> </CaseFile>
>
> The first format uses custom link formats, the second uses standard Atom
> links.
>
> Which should I prefer? I guess the atom:link format is best ... but I could
> use some reasons for why it is best?
I would go for atom:link as I hope that everyone will converge on using
atom:link [1]. The main advantage as I see it is that they will have
less to learn and that we will converge on a more stable set of
relations which in turn will make it possible to create more generic
code for indexing, crawling etc. Just make sure you create and define
your own rel types according to the atom spec [2].
[1]: As long as atom:link is the appropriate control for your case, but
in this particular case it definitely looks like it.
[2]: http://tools.ietf.org/html/rfc4287#section-4.2.7.2
--
Trygve
Hi all,

Just a quickie; anyone done any work with delivering RESTful SKOS
resources? I'm tossing up embedding SKOS in Atom with extensions or
rolling my own, both for requests and responses. Any thoughts by people
in the know?

Kind regards,

Alex
--
Project Wrangler, SOA, Information Alchemist, UX, RESTafarian, Topic Maps
http://shelter.nu/blog/
http://www.google.com/profiles/alexander.johannesen
2010/11/30 Trygve Laugstøl <trygvis@...>:
> I would go for atom:link as I hope that everyone will converge on using
> atom:link [1]. The main advantage as I see it is that they will have
> less to learn and that we will will converge on a more stable set of
> relations which in turn will make it possible to create more generic
> code for indexing, crawling etc. Just make sure you create and define
> your own rel types according to the atom spec [2].
Personally i don't much like atom:link. It trades clarity for a false
promise of re-use. The idea that a great deal of leverage will be
gained by reusing atom:link in other formats seems not to be borne out
reality. Parsing arbitrary xml documents, looking for link elements
and then doing things with them without understanding the semantics of
the overall representation sounds pretty far fetched to me. (Is that
edit link for the requested resource, or is it for a different
resource whose information is contained in the requested one.)
The `<{name} href=""/>` pattern promotes the meaningful name of the
item to the highest possible level. This increases the clarity of the
representation. As a client writer i care about semantics of the
various parts of the representations far more than i care about their
types. I can guess with little effort that an element named 'edit'
will contain a link. With a link element i must scan to the right
until i find the @rel in order to understand the purpose of the
element. I'll take clarity over vague, unrealistic, promises of
re-use any day.
Besides, we have the link header[1] which is better, in every
imaginable way, than atom:link for expressing common relations. It
does not require the client be able to parse a particular format and
it unambiguously relates the link to the requested resource as a
whole.
[1]: <http://tools.ietf.org/html/rfc5988>
Peter
barelyenough.org
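A simplified sketch of consuming RFC 5988 Link headers; this handles only the common `<uri>; rel="..."` shape (real headers permit more parameters and quoting than this regex covers):

```python
import re

def parse_link_header(value):
    """Map each rel to its target URI for simple Link header values."""
    links = {}
    for part in value.split(","):
        m = re.search(r'<([^>]*)>\s*;\s*rel="?([^";]+)"?', part)
        if m:
            links[m.group(2)] = m.group(1)
    return links

hdr = '</cases?page=2>; rel="next", </cases?page=1>; rel="prev"'
print(parse_link_header(hdr))
```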
On 11/30/10 6:38 PM, Peter Williams wrote:
> 2010/11/30 Trygve Laugstøl <trygvis@...>:
>
>> I would go for atom:link as I hope that everyone will converge on using
>> atom:link [1]. The main advantage as I see it is that they will have
>> less to learn and that we will will converge on a more stable set of
>> relations which in turn will make it possible to create more generic
>> code for indexing, crawling etc. Just make sure you create and define
>> your own rel types according to the atom spec [2].
>
> Personally i don't much like atom:link. It trades clarity for a false
> promise of re-use. The idea that a great deal of leverage will be
> gained by reusing atom:link in other formats seems not be born out in
> reality. Parsing arbitrary xml documents, looking for link elements
> and then doing things with them without understanding the semantics of
> the overall representation sounds pretty far fetched to me. (Is that
> edit link for the requested resource, or is it for a different
> resource whose information is contained in the requested one.)
>
> The `<{name} href=""/>` pattern promotes the meaningful name of the
> item to the highest possible level. This increases the clarity of the
> representation. As a client writer i care about semantics of the
> various parts of the representations far more than i care about their
> types. I can guess with little effort that an element named 'edit'
> will contain a link. With a link element i must scan to the right
> until i find the @rel in order to understand the purpose of the
> element. I'll take clarity over vague, unrealistic, promises of
> re-use any day.
To me <atom:link rel="{name}" href=""/> is equal to <{name} href=""/>.
> Besides, we have the link header[1] which is better, in every
> imaginable way, than atom:link for expressing common relations. It
> does not require the client be able to parse a particular format and
> it unambiguously relates the link to the requested resource as a
> whole.
>
> [1]:<http://tools.ietf.org/html/rfc5988>
Link headers are mostly useful for linking *entire resources* to other
*resources* while atom:link gives you a more fine grained linking
mechanism. I'm not against link headers in any way, just saying it's a
slightly different thing.
--
Trygve
Link headers are a great way of adding hypertext to media types that
already exist.

An example:

GET /image/123
Host: example.com

HTTP/1.1 200 OK
Link: <http://example.com/usages/123>; rel="http://relations.example.com/usage"
I would not use Link headers if I was minting a new media-type, as I believe
the links should be close to the thing i'm linking from.
Regardless if we use atom:link or Link header we should help adding new
useful link relations to the registry so we can have a common understanding
of what they mean.
Best regards
--
Erlend
On Tue, Nov 30, 2010 at 7:07 PM, Trygve Laugstøl <trygvis@...> wrote:
>
>
> On 11/30/10 6:38 PM, Peter Williams wrote:
> > 2010/11/30 Trygve Laugstøl <trygvis@...>:
> >
> >> I would go for atom:link as I hope that everyone will converge on using
> >> atom:link [1]. The main advantage as I see it is that they will have
> >> less to learn and that we will will converge on a more stable set of
> >> relations which in turn will make it possible to create more generic
> >> code for indexing, crawling etc. Just make sure you create and define
> >> your own rel types according to the atom spec [2].
> >
> > Personally i don't much like atom:link. It trades clarity for a false
> > promise of re-use. The idea that a great deal of leverage will be
> > gained by reusing atom:link in other formats seems not be born out in
> > reality. Parsing arbitrary xml documents, looking for link elements
> > and then doing things with them without understanding the semantics of
> > the overall representation sounds pretty far fetched to me. (Is that
> > edit link for the requested resource, or is it for a different
> > resource whose information is contained in the requested one.)
> >
> > The `<{name} href=""/>` pattern promotes the meaningful name of the
> > item to the highest possible level. This increases the clarity of the
> > representation. As a client writer i care about semantics of the
> > various parts of the representations far more than i care about their
> > types. I can guess with little effort that an element named 'edit'
> > will contain a link. With a link element i must scan to the right
> > until i find the @rel in order to understand the purpose of the
> > element. I'll take clarity over vague, unrealistic, promises of
> > re-use any day.
>
> To me <atom:link rel="{name}" href=""/> is equal to <{name} href=""/>.
>
>
> > Besides, we have the link header[1] which is better, in every
> > imaginable way, than atom:link for expressing common relations. It
> > does not require the client be able to parse a particular format and
> > it unambiguously relates the link to the requested resource as a
> > whole.
> >
> > [1]:<http://tools.ietf.org/html/rfc5988>
>
> Link headers are mostly useful for linking *entire resources* to other
> *resources* while atom:link gives you a more fine-grained linking
> mechanism. I'm not against link headers in any way, just saying it's a
> slightly different thing.
>
> --
> Trygve
>
>
>
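Link header values like the one in Erlend's example can be consumed mechanically by a client. A minimal Python sketch of a simplified RFC 5988 parser follows; it is illustrative only, not something from the thread, and it ignores many of the grammar's edge cases:

```python
import re

def parse_link_header(value):
    """Parse a (simplified) RFC 5988 Link header value into a list of
    (target URI, {param: value}) pairs. Handles the common quoted and
    unquoted parameter forms only."""
    links = []
    # Each link-value is a <uri> followed by ;-separated parameters.
    for part in re.split(r',\s*(?=<)', value):
        m = re.match(r'<([^>]*)>\s*(.*)', part)
        if not m:
            continue
        uri, rest = m.group(1), m.group(2)
        params = {}
        for name, val in re.findall(r';\s*(\w+)="?([^";]*)"?', rest):
            params[name] = val
        links.append((uri, params))
    return links

header = '<http://example.com/usages/123>; rel="http://relations.example.com/usage"'
print(parse_link_header(header))
```

Note that because the relation here is an extension relation (a URI) rather than a registered short name, RFC 5988 requires it to be quoted in the header.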
On Tue, Nov 30, 2010 at 11:07 AM, Trygve Laugstøl <trygvis@...> wrote:
> On 11/30/10 6:38 PM, Peter Williams wrote:
>>
>> 2010/11/30 Trygve Laugstøl <trygvis@...>:
>>
>>> I would go for atom:link as I hope that everyone will converge on using
>>> atom:link [1]. The main advantage as I see it is that they will have
>> less to learn and that we will converge on a more stable set of
>>> relations which in turn will make it possible to create more generic
>>> code for indexing, crawling etc. Just make sure you create and define
>>> your own rel types according to the atom spec [2].
>>
>> Personally i don't much like atom:link. It trades clarity for a false
>> promise of re-use. The idea that a great deal of leverage will be
>> gained by reusing atom:link in other formats seems not to be borne out in
>> reality. Parsing arbitrary xml documents, looking for link elements
>> and then doing things with them without understanding the semantics of
>> the overall representation sounds pretty far fetched to me. (Is that
>> edit link for the requested resource, or is it for a different
>> resource whose information is contained in the requested one.)
>>
>> The `<{name} href=""/>` pattern promotes the meaningful name of the
>> item to the highest possible level. This increases the clarity of the
>> representation. As a client writer i care about semantics of the
>> various parts of the representations far more than i care about their
>> types. I can guess with little effort that an element named 'edit'
>> will contain a link. With a link element i must scan to the right
>> until i find the @rel in order to understand the purpose of the
>> element. I'll take clarity over vague, unrealistic, promises of
>> re-use any day.
>
> To me <atom:link rel="{name}" href=""/> is equal to <{name} href=""/>.
I agree they are semantically equivalent. However, the arrangement of
information in an atom:link element increases the effort required to
understand it. Not much of course, but a little. I prefer not to
intentionally increase the complexity of my representations unless it
provides a very real benefit.
>> Besides, we have the link header[1] which is better, in every
>> imaginable way, than atom:link for expressing common relations. It
>> does not require the client be able to parse a particular format and
>> it unambiguously relates the link to the requested resource as a
>> whole.
>
>>
>>
>> [1]:<http://tools.ietf.org/html/rfc5988>
>
> Link headers are mostly useful for linking *entire resources* to other
> *resources* while atom:link gives you a more fine-grained linking mechanism.
> I'm not against link headers in any way, just saying it's a slightly
> different thing.
Exactly. The ambiguity of link elements (due to their fine-grainedness)
works against any real-world re-use of them. Clients have
to understand the whole representation that contains link elements in
order to use those links. Therefore, there is little, or no,
practical benefit to using atom:link elements but there is some cost.
Peter
barelyenough.org
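For what it's worth, the two arrangements being debated here are mechanically interchangeable for a client. A small illustrative Python sketch (the `order`/`edit` element names and URIs are invented for the example):

```python
import xml.etree.ElementTree as ET

ATOM = 'http://www.w3.org/2005/Atom'

# The same relationship expressed in the two styles under discussion.
doc_atom = f'<order xmlns:atom="{ATOM}"><atom:link rel="edit" href="/orders/1"/></order>'
doc_named = '<order><edit href="/orders/1"/></order>'

def edit_uri_atom(xml):
    """Style 1: scan atom:link elements for @rel='edit'."""
    root = ET.fromstring(xml)
    for link in root.findall(f'{{{ATOM}}}link'):
        if link.get('rel') == 'edit':
            return link.get('href')

def edit_uri_named(xml):
    """Style 2: the relation name IS the element name."""
    el = ET.fromstring(xml).find('edit')
    return el.get('href') if el is not None else None

print(edit_uri_atom(doc_atom), edit_uri_named(doc_named))
```

Both lookups are a couple of lines; the argument in the thread is about human readability and serendipitous reuse, not parsing cost.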
<snip>
Regardless if we use atom:link or Link header we should help adding new
useful link relations to the registry so we can have a common understanding
of what they mean.
</snip>
Contributing to the link relation registry has recently become rather easy:
http://paramsr.us/link-relation-types/
mca
http://amundsen.com/blog/
http://twitter.com@mamund
http://mamund.com/foaf.rdf#me
#RESTFest 2010
http://rest-fest.googlecode.com
On Tue, Nov 30, 2010 at 13:21, Erlend Hamnaberg <ngarthl@...> wrote:
I am a little unclear as to why any of these formats is particularly difficult to work with. I am a Java guy, so perhaps Java has much better APIs for handling these things, but it seems to me it's easy enough to parse and check the rel= attribute, the element name, or the atom:link element. Are there languages that make this tedious, and is that why some prefer one way over the other?
I personally prefer the <link rel="" href="" type=""/> format. It's very similar to atom:link and can easily become an atom:link with minor changes. I've not heard of the Link header though; is that something new, or just another HTTP header that can be used?
I am still struggling to understand when to use different media types correctly. If the API I am providing offers a unique solution, but I am returning chunks of XML (or JSON) that represent a specific resource, do all the resources that belong to the overall API use the same media type, or should they use different media types, one for each resource? I mean, if I have /users, /orders, /sellers and /bids, should I be using something like application/vnd.com.mycompany.orders+xml for /orders? I've fallen back to using application/xml and application/json for convenience, but I am not opposed to using application-specific or even resource-specific media types if that is a best practice the REST community at large is leaning towards.
--- On Tue, 11/30/10, mike amundsen <mamund@...> wrote:
From: mike amundsen <mamund@...>
Subject: Re: [rest-discuss] Link relations [was: A media type for case files, dossiers and documents]
To: "Erlend Hamnaberg" <ngarthl@...>
Cc: "Trygve Laugstøl" <trygvis@...>, "Peter Williams" <pezra@...>, "Jørn Wildt" <jw@...>, "Rest Discussion List" <rest-discuss@yahoogroups.com>
Date: Tuesday, November 30, 2010, 10:26 AM
> I mean, if I have /users, /orders, /sellers and /bids, should I be using
> something like application/vnd.com.mycompany.orders+xml for /orders?
Try checking the first mails in the "A media type for case files, dossiers
and documents" discussion (the prequel to this discussion).
/Jørn
----- Original Message -----
From: "Kevin Duffey" <andjarnic@...>
To: "Rest Discussion List" <rest-discuss@yahoogroups.com>
Sent: Wednesday, December 01, 2010 6:48 AM
Subject: Re: [rest-discuss] Link relations [was: A media type for case
files, dossiers and documents]
Kevin Duffey wrote:
>
> I am still struggling to understand when to use different media types
> correctly. If the API I am providing provides a unique solution, but
> I am returning chunks of xml (or json) that represent a specific
> resource, do all these different resources that belong to the overall
> API use the same media type, or should they use different media
> types, one for each resource? I mean, if I
> have /users, /orders, /sellers and /bids, should I be using something
> like application/vnd.com.mycompany.orders+xml for /orders? I've
> resorted back to using application/xml and application/json for the
> convenience, but I am not opposed to using application specific or
> even resource specific media types if that is a best practices that
> the REST community at large is leaning towards.
>
Minting resource-specific media types is a REST anti-pattern, exactly the
sort of coupling REST seeks to avoid. This thread may help:
http://tech.groups.yahoo.com/group/rest-discuss/message/16793
Your /users, /orders, /sellers and /bids resources can all be represented
by HTML: media type != resource type.
-Eric
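Eric's point, that the media type names the format rather than the resource, can be sketched as follows. Everything here (the vendor type, the element and relation names) is hypothetical, invented just to illustrate one generic format carrying several resource kinds:

```python
# One generic, hypothetical media type represents every resource kind;
# what the representation *is* comes from its root element and its
# rel-typed links, not from the Content-Type.
MEDIA_TYPE = 'application/vnd.example+xml'  # hypothetical vendor type

def represent(resource_kind, fields, links):
    """Render any resource (orders, users, bids, ...) as the same format."""
    parts = [f'<{resource_kind}>']
    parts += [f'  <{k}>{v}</{k}>' for k, v in fields.items()]
    parts += [f'  <link rel="{rel}" href="{href}"/>' for rel, href in links]
    parts.append(f'</{resource_kind}>')
    return MEDIA_TYPE, '\n'.join(parts)

ctype, body = represent('order', {'total': '42.00'},
                        [('self', '/orders/1'), ('customer', '/users/7')])
print(ctype)
print(body)
```

A client coded against the generic format can follow the typed links between /orders and /users without a media type per resource, which is the decoupling Eric is pointing at.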
On 11/30/10 7:26 PM, Peter Williams wrote:
> On Tue, Nov 30, 2010 at 11:07 AM, Trygve Laugstøl<trygvis@...> wrote:
>> On 11/30/10 6:38 PM, Peter Williams wrote:
>>>
>>> 2010/11/30 Trygve Laugstøl <trygvis@...>:
>>>
>>>> I would go for atom:link as I hope that everyone will converge on using
>>>> atom:link [1]. The main advantage as I see it is that they will have
>>>> less to learn and that we will converge on a more stable set of
>>>> relations which in turn will make it possible to create more generic
>>>> code for indexing, crawling etc. Just make sure you create and define
>>>> your own rel types according to the atom spec [2].
>>>
>>> Personally i don't much like atom:link. It trades clarity for a false
>>> promise of re-use. The idea that a great deal of leverage will be
>>> gained by reusing atom:link in other formats seems not to be borne out in
>>> reality. Parsing arbitrary xml documents, looking for link elements
>>> and then doing things with them without understanding the semantics of
>>> the overall representation sounds pretty far fetched to me. (Is that
>>> edit link for the requested resource, or is it for a different
>>> resource whose information is contained in the requested one.)
>>>
>>> The `<{name} href=""/>` pattern promotes the meaningful name of the
>>> item to the highest possible level. This increases the clarity of the
>>> representation. As a client writer i care about semantics of the
>>> various parts of the representations far more than i care about their
>>> types. I can guess with little effort that an element named 'edit'
>>> will contain a link. With a link element i must scan to the right
>>> until i find the @rel in order to understand the purpose of the
>>> element. I'll take clarity over vague, unrealistic, promises of
>>> re-use any day.
>>
>> To me<atom:link rel="{name}" href=""/> is equal to<{name} href=""/>.
>
> I agree they are semantically equivalent. However, the arrangement of
> information in an atom:link element increases the effort required to
> understand it. Not much of course, but a little. I prefer not to
> intentionally increase the complexity of my representations unless it
> provides a very real benefit.
I'm not sure I understand what you mean by "arrangement of information".
Do you mean that they have to go and read the Atom specification?
>>> Besides, we have the link header[1] which is better, in every
>>> imaginable way, than atom:link for expressing common relations. It
>>> does not require the client be able to parse a particular format and
>>> it unambiguously relates the link to the requested resource as a
>>> whole.
>>
>>>
>>>
>>> [1]:<http://tools.ietf.org/html/rfc5988>
>>
>> Link headers are mostly useful for linking *entire resources* to other
>> *resources* while atom:link gives you a more fine-grained linking mechanism.
>> I'm not against link headers in any way, just saying it's a slightly
>> different thing.
>
> Exactly. The ambiguity of link elements (due to their fine-grainedness)
> works against any real-world re-use of them. Clients have
> to understand the whole representation that contains link elements in
> order to use those links. Therefore, there is little, or no,
> practical benefit to using atom:link elements but there is some cost.
I'm not sure I understand what you mean here. What I'm saying is that
instead of using your own <FooLink> you should consider using
<atom:link>. Link headers really don't have anything to do with Atom's
link element type.
--
Trygve
My observation, now that I have started to implement this stuff, is that
using atom:link is easier from a (server) programmer's point of view.

I have created an AtomLink class and am able to reuse it every time I need
a link to something. This means I am sure to get the right XML
serialization every time, without having to repeat the XML serialization
attributes all over the code.

It is also easier to add my AtomLink to existing classes (or a list of
those), by inheritance, such that I can develop REST-specific resources
based on code from the domain/query/business/whatever model.

Had I chosen something like <MyElement href="...">...</MyElement> then I
wouldn't know how to add the "href" attribute by inheritance and get the
XML serializer to produce the right output.

/Jørn
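Jørn's pattern (his code is presumably .NET with XML serialization attributes) might look roughly like this in Python; the class and element names are illustrative, not his actual code:

```python
import xml.etree.ElementTree as ET

ATOM = 'http://www.w3.org/2005/Atom'

class AtomLink:
    """One class owns the atom:link serialization; every resource reuses it."""
    def __init__(self, rel, href, type_=None):
        self.rel, self.href, self.type_ = rel, href, type_

    def to_element(self):
        attrs = {'rel': self.rel, 'href': self.href}
        if self.type_:
            attrs['type'] = self.type_
        return ET.Element(f'{{{ATOM}}}link', attrs)

class Resource:
    """Base class: subclasses inherit the links list and the serializer."""
    def __init__(self):
        self.links = []

    def to_xml(self, tag):
        root = ET.Element(tag)
        for link in self.links:
            root.append(link.to_element())
        return ET.tostring(root, encoding='unicode')

class Order(Resource):
    def __init__(self, order_id):
        super().__init__()
        self.links.append(AtomLink('self', f'/orders/{order_id}'))

print(Order(1).to_xml('order'))
```

The design point is the same as Jørn's: the serialization logic lives in one place, so a `<MyElement href="...">` per relation would force each element type to carry its own serialization code, while a uniform link element can be inherited.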
"Mint new a media type id every time you are forced to introduce an incompatible media type into your system. They are cheap, you can have as many as you need." +1 mca http://amundsen.com/blog/ http://twitter.com@mamund http://mamund.com/foaf.rdf#me #RESTFest 2010 http://rest-fest.googlecode.com On Mon, Nov 29, 2010 at 14:25, Peter Williams <pezra@...> wrote: > On Wed, Nov 24, 2010 at 10:03 PM, trollyrogers <trollyrogers@yahoo.com> wrote: >> Hi All - I've been digging around to figure out how best to solve a problem, and am hoping to get thoughts from you folks. >> >> The service in question uses wadl to describe the interface of the service. It's resources are represented using xml and or json, so xsd and JSON Schema will be used to describe representations. These requirements are basically things i have to live with, so I need to figure out how to do my best here. >> >> Now what i need to do is... >> >> 1. be able to version the service's resources >> 2. include a reference to an xml or json schema in my responses >> >> Here's what I'm thinkin'. I'll create custom vendor mime types for both xml and json. Each will contain two mime type parameters; "profile" which points to the json or xml schema, and "v" which specifies the version of the representation. >> >> Below is a simple example of what the wadl might look like. In short, the example resource supports four different response representations, two of which are xml; versions 1.0 and 2.0, and the other two are json; versions 1.0 and 2.0. And each points to a schema via profile parameter. >> >> FWIW - This approach is based on things I've read around here, Peter William's blog, Bill Burk's REST book, and the latest JSON Schema draft (recently 03, congrats!). >> >> Any thoughts? 
>>
>> <resource path="example">
>>   <method name="GET" id="get">
>>     <response>
>>       <representation mediaType="application/vnd.custom.app+json;profile=/path/to/exampleV1.0.json;v=1.0"/>
>>       <representation mediaType="application/vnd.custom.app+json;profile=/path/to/exampleV2.0.json;v=2.0"/>
>>       <representation mediaType="application/vnd.custom.app+xml;profile=/path/to/exampleV1.0.xsd;v=1.0"/>
>>       <representation mediaType="application/vnd.custom.app+xml;profile=/path/to/exampleV2.0.xsd;v=2.0"/>
>>     </response>
>>   </method>
>> </resource>
>
> I'd advise against using parameters to describe the version. Assuming
> you have defined your media type with "must ignore" semantics, you only
> need a new version when breaking changes are needed. (Obviously,
> breaking changes should be avoided whenever possible, but they are
> sometimes necessary.) If you are introducing a breaking change then it
> is a different media type by definition. Putting important information
> about the semantics of a representation in a parameter is inappropriate.
>
> Using parameters is also less than ideal from a practical standpoint.
> Tool chains tend to provide little or no support for media type
> parameters. Putting important information in the parameters will
> require more work in both user agents and servers than just minting a
> brand new media type id.
>
> Mint a new media type id every time you are forced to introduce an
> incompatible media type into your system. They are cheap, you can have
> as many as you need.
>
> Peter
> barelyenough.org
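Peter's advice, dispatching on whole media type ids and minting a new id per breaking change rather than encoding the version in a parameter, could look something like this. The type names, handlers, and naive Accept matching are assumptions for illustration only:

```python
# Each incompatible revision gets a brand-new media type id, so the
# server dispatches on the full id and never parses version parameters.
HANDLERS = {
    'application/vnd.custom.app+xml':    lambda req: '<example/>',                 # original format
    'application/vnd.custom.app.v2+xml': lambda req: '<example revised="true"/>',  # breaking revision
}

def negotiate(accept_header):
    """Pick the first acceptable media type we can serve (naive matching:
    no q-values, first match in the client's listed order wins)."""
    for offered in (t.strip().split(';')[0] for t in accept_header.split(',')):
        if offered in HANDLERS:
            return offered, HANDLERS[offered](None)
    return None, None

print(negotiate('application/vnd.custom.app.v2+xml, application/vnd.custom.app+xml'))
```

Because the ids are distinct tokens, ordinary tool chains (routing tables, Accept matching, client dispatch) handle them with no special parameter support, which is the practical point Peter makes above.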
On Wed, Dec 1, 2010 at 1:18 AM, Trygve Laugstøl <trygvis@...> wrote:
> On 11/30/10 7:26 PM, Peter Williams wrote:
>>
>> On Tue, Nov 30, 2010 at 11:07 AM, Trygve Laugstøl<trygvis@...>
>> wrote:
>>>
>>> On 11/30/10 6:38 PM, Peter Williams wrote:
>>>>
>>>> 2010/11/30 Trygve Laugstøl <trygvis@...>:
>>>>
>>>>> I would go for atom:link as I hope that everyone will converge on using
>>>>> atom:link [1]. The main advantage as I see it is that they will have
>>>>> less to learn and that we will converge on a more stable set of
>>>>> relations which in turn will make it possible to create more generic
>>>>> code for indexing, crawling etc. Just make sure you create and define
>>>>> your own rel types according to the atom spec [2].
>>>>
>>>> Personally i don't much like atom:link. It trades clarity for a false
>>>> promise of re-use. The idea that a great deal of leverage will be
>>>> gained by reusing atom:link in other formats seems not to be borne out in
>>>> reality. Parsing arbitrary xml documents, looking for link elements
>>>> and then doing things with them without understanding the semantics of
>>>> the overall representation sounds pretty far fetched to me. (Is that
>>>> edit link for the requested resource, or is it for a different
>>>> resource whose information is contained in the requested one.)
>>>>
>>>> The `<{name} href=""/>` pattern promotes the meaningful name of the
>>>> item to the highest possible level. This increases the clarity of the
>>>> representation. As a client writer i care about semantics of the
>>>> various parts of the representations far more than i care about their
>>>> types. I can guess with little effort that an element named 'edit'
>>>> will contain a link. With a link element i must scan to the right
>>>> until i find the @rel in order to understand the purpose of the
>>>> element. I'll take clarity over vague, unrealistic, promises of
>>>> re-use any day.
>>>
>>> To me<atom:link rel="{name}" href=""/> is equal to<{name} href=""/>.
>>
>> I agree they are semantically equivalent. However, the arrangement of
>> information in an atom:link element increases the effort required to
>> understand it. Not much of course, but a little. I prefer not to
>> intentionally increase the complexity of my representations unless it
>> provides a very real benefit.
>
> I'm not sure I understand what you mean by "arrangement of information". Do
> you mean that they have to go and read the Atom specification?
When expressing a link in a representation you actually need to
present two pieces of information: the URI and the relationship type.
You can arrange that as `<{meaningful name of relationship type}
href="{uri}"/>` or as `<atom:link href="{uri}" rel="{meaningful name
of relationship type}"/>`. Same information, two different
arrangements.
My point is that one of those has a lot more boilerplate verbiage than
the other. That verbosity obscures the important information in the
element, and you get nothing for that additional complexity.
>>>> Besides, we have the link header[1] which is better, in every
>>>> imaginable way, than atom:link for expressing common relations. It
>>>> does not require the client be able to parse a particular format and
>>>> it unambiguously relates the link to the requested resource as a
>>>> whole.
>>>
>>>>
>>>>
>>>> [1]:<http://tools.ietf.org/html/rfc5988>
>>>
>>> Link headers are mostly useful for linking *entire resources* to other
>>> *resources* while atom:link gives you a more fine-grained linking
>>> mechanism.
>>> I'm not against link headers in any way, just saying it's a slightly
>>> different thing.
>>
>> Exactly. The ambiguity of link elements (due to their fine-grainedness)
>> works against any real-world re-use of them. Clients have
>> to understand the whole representation that contains link elements in
>> order to use those links. Therefore, there is little, or no,
>> practical benefit to using atom:link elements but there is some cost.
>
> I'm not sure I understand what you mean here. What I'm saying is that
> instead of using your own <FooLink> you should consider using <atom:link>.
> Link headers really don't have anything to do with Atom's link element type.
I understand your point (i think). The primary reason most people
propose to use atom:link is to get some sort of serendipitous reuse
from it. My argument is that atom:link provides little room for
serendipitous reuse. If you are looking to support serendipitous
reuse then link headers are a much better choice.
Since I think `atom:link` does not support serendipitous reuse, I see
no benefit to using it. I do see a cost. `atom:link` is less
expressive than a meaningfully named element. Code to parse the two
will not be very different, but human readability is important too.
The ease of being able to explore or debug an API by hand is of huge
importance to me.
Peter
barelyenough.org
PS: Obviously the differences are not huge. You can quite easily
produce a highly usable system by using `atom:link`. I just prefer
less complexity whenever possible.
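The tradeoff Peter describes can be made concrete. Below is a small stdlib-only Python sketch of the same link expressed three ways: as a meaningfully named element, as an atom:link, and as an RFC 5988 Link header. The element name, rel value, and URI are invented for illustration.

```python
from xml.etree.ElementTree import Element, tostring

ATOM_NS = "http://www.w3.org/2005/Atom"
href = "http://example.org/orders/42/payment"

# Style 1: the relation is the element name itself; expressive to a
# human reader, but the client must know this vocabulary.
named = Element("payment", href=href)

# Style 2: generic atom:link; the relation moves into the rel attribute.
atom = Element("{%s}link" % ATOM_NS, href=href, rel="payment")

# Style 3: RFC 5988 Link header; needs no entity-body parsing at all.
link_header = '<%s>; rel="payment"' % href
```

Style 3 is what makes the link visible to a client that cannot parse the representation format at all, which is the serendipitous-reuse argument above.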
On Wed, Dec 1, 2010 at 2:41 AM, Jorn Wildt <jw@...> wrote:
> My observation, now that I have started to implement this stuff, is
> that using atom:link is easier from a (server) programmer's point of view.
>
> I have created an AtomLink class and am able to reuse it every time I
> need a link to something. This means I am sure to get the right XML
> serialization every time, without having to repeat the XML
> serialization attributes all over the code.
>
> It is also easier to add my AtomLink to existing classes (or a list of
> those), by inheritance, such that I can develop REST-specific
> resources based on code from the domain/query/business/whatever model.
>
> Had I chosen something like <MyElement href="...">...</MyElement> then
> I wouldn't know how to add the "href" attribute by inheritance and get
> the XML serializer to produce the right output.
Interesting. Whether this holds for a particular system is going to
depend greatly on the technology on which it is built. The technologies
I usually use support a generalized implementation of arbitrarily named
link elements (i.e., an element with an `href` attribute) quite easily.
This makes the cost of producing the various styles of links basically
identical for me. One should also consider the cost on the client side.
There will usually be more clients written for a service than servers.
Peter
barelyenough.org
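Jorn's reusable-AtomLink approach can be sketched in any language with a serialization story; here is a rough stdlib-only Python analogue (the class, method, and resource names are hypothetical, not taken from his code):

```python
from dataclasses import dataclass
from xml.sax.saxutils import quoteattr

@dataclass
class AtomLink:
    href: str
    rel: str
    type: str = "application/xml"

    def to_xml(self) -> str:
        # One place that knows the serialization; every resource reuses it.
        return "<atom:link href=%s rel=%s type=%s/>" % (
            quoteattr(self.href), quoteattr(self.rel), quoteattr(self.type))

@dataclass
class Order:
    id: int

    def links(self):
        # Hypothetical link set for an order resource.
        base = "http://example.org/orders/%d" % self.id
        return [AtomLink(base, "self"),
                AtomLink(base + "/cancel", "cancel")]
```

The payoff is the one Jorn names: the XML comes out right every time because only `AtomLink.to_xml` knows how a link is serialized.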
Alexander Johannesen wrote: > > Anyone done any work with delivering RESTful SKOS resources? I'm > tossing up embedding SKOS in ATOM with extensions or roll my own, both > for requests and responses. Any thoughts by people in the know? > This seems orthogonal to REST, kinda like choosing between HTML and XHTML -- implementation details. I've only seen SKOS mentioned here once before: http://tech.groups.yahoo.com/group/rest-discuss/message/16404 Maybe you can ask Brian? -Eric
Hi, First, thanks to Alistair, I'll respond better later. Just a quick note; On Thu, Dec 2, 2010 at 7:32 AM, Eric J. Bowman <eric@...> wrote: > This seems orthogonal to REST, kinda like choosing between HTML and > XHTML -- implementation details. Hmm, ok. For me this dips into the deeper realm of making a REST representation of a larger model; how do we best represent a model through REST resources? I'm a HATEOAS believer, but how to break up the SKOS meta model to bring a more concrete model through hyperlinks? I don't want to pass a format around; I want to pass a model around, and in doing so, are there formats I should prefer over others? (A preamble here is that I'm trying to avoid triples and their ilk as best I can, not that it matters much.) Of course this isn't limited to SKOS alone, so my question is a bit more open in that I'm looking for experiences in dealing with the meta model and data model (probably false) dichotomy. People who've done any serious ontology work and made it into a RESTful system should have a few good suggestions. Kind regards, Alex -- Project Wrangler, SOA, Information Alchemist, UX, RESTafarian, Topic Maps --- http://shelter.nu/blog/ ---------------------------------------------- ------------------ http://www.google.com/profiles/alexander.johannesen ---
<snip> > I don't want to pass a format around; I want to pass a model around, </snip> HTTP (and REST) is about passing around varying public representations (via negotiated media-types) of private data|models, not the models (or resources, or tables, or classes, etc.) themselves. Maybe some other "representation-less" app-level protocol is what you want? XMPP? mca http://amundsen.com/blog/ http://twitter.com@mamund http://mamund.com/foaf.rdf#me #RESTFest 2010 http://rest-fest.googlecode.com On Wed, Dec 1, 2010 at 15:54, Alexander Johannesen <alexander.johannesen@...> wrote: > Hi, > > First, thanks to Alistair, I'll respond better later. Just a quick note ; > > On Thu, Dec 2, 2010 at 7:32 AM, Eric J. Bowman <eric@bisonsystems.net> wrote: >> This seems orthogonal to REST, kinda like choosing between HTML and >> XHTML -- implementation details. > > Hmm, ok. For me this dips into the deeper realm of making a REST > representation of a larger model; how do we represent best a model > through REST resources? I'm a HATEOAS believer, but how to break up > the SKOS meta model to bring a more concrete model through hyperlinks? > I don't want to pass a format around; I want to pass a model around, > and in doing so, are there formats I should prefer over others? (A > preamble here is that I'm trying to avoid as best I can triplets and > its ilk, not that it matters much) > > Of course this isn't limited to SKOS alone, so my question is a bit > more open in that I'm looking for experiences in dealing with the meta > model and data model (probably false) dichotomy. People who's done any > serious ontology work and make it into a RESTful system should have a > few good suggestions. > > > Kind regards, > > Alex > > -- > Project Wrangler, SOA, Information Alchemist, UX, RESTafarian, Topic Maps > --- http://shelter.nu/blog/ ---------------------------------------------- > ------------------ http://www.google.com/profiles/alexander.johannesen ---
On Thu, Dec 2, 2010 at 8:09 AM, mike amundsen <mamund@...> wrote: > HTTP (and REST) is about passing around varying public representations > (via negotiated media-types) of private data|models, not the models > (or resources, or tables, or classes, etc.) themselves. > > Maybe some other "representation-less" app-level protocol is what you > want? XMPP? no, no, no. :) A model has a host of entities, relationships included, and they all can have representations. Sometimes they are bundled up, other times they have their very own resource. Finding a meaningful / useful balance between it all is what I'm after. I can dump a complete model in a Topic Maps format (XTM/JTM) at one resource, a TM fragment at another, but still have a full RESTful API into the innards using other representations, all the way down to individual occurrences or properties inside the data model (where I'm using SKOS as a heavy hitter ontology). So, balance. And being useful. Just fishing for experience. Kind regards, Alex -- Project Wrangler, SOA, Information Alchemist, UX, RESTafarian, Topic Maps --- http://shelter.nu/blog/ ---------------------------------------------- ------------------ http://www.google.com/profiles/alexander.johannesen ---
OK. mca http://amundsen.com/blog/ http://twitter.com@mamund http://mamund.com/foaf.rdf#me #RESTFest 2010 http://rest-fest.googlecode.com On Wed, Dec 1, 2010 at 16:15, Alexander Johannesen <alexander.johannesen@...m> wrote: > On Thu, Dec 2, 2010 at 8:09 AM, mike amundsen <mamund@...> wrote: >> HTTP (and REST) is about passing around varying public representations >> (via negotiated media-types) of private data|models, not the models >> (or resources, or tables, or classes, etc.) themselves. >> >> Maybe some other "representation-less" app-level protocol is what you >> want? XMPP? > > no, no, no. :) > > A model has a host of entities, relationships included, and they all > can have representations. Some times they are bundled up, other times > they have their very own resource. It's about finding a meaningful / > useful balance between it all I'm after. I can dump a complete model > in a Topic Maps format (XTM/JTM) at one resource, a TM fragment at > another, but still have a full RESTful API into the innards using > other representations, all the way down to individual occurrences or > properties inside the data model (where I'm using SKOS as a heavy > hitter ontology). > > So, balance. And being useful. Just fishing for experience. > > > Kind regards, > > Alex > -- > Project Wrangler, SOA, Information Alchemist, UX, RESTafarian, Topic Maps > --- http://shelter.nu/blog/ ---------------------------------------------- > ------------------ http://www.google.com/profiles/alexander.johannesen --- >
On Thu, Dec 2, 2010 at 9:15 AM, Dan Brickley <danbri@...> wrote: > One boring detail is the question of what the URI for a SKOS concept > should be; whether it needs to be named with a #blahblah or not. > > This is roughly the topic known on the TAG mailing list as > http-range-14 and I shudder at the thought of revisiting it again... Excellent observation. Luckily I'm a Topic Maps guy, and we don't have this silly problem. :) But this, too, cuts a bit into some of the stuff I'm trying to work out, say different resources for identifiers, locators and identity indicators, and so on, or how to deal with various degrees of identification as hyperlinks, how to best represent identifiers based on content-type, and on and on. I realize it's becoming a bit too specific (even though I personally believe that the concept of proper persistent identification management is paramount to all future software systems), so I'll just leave it at that. Regards, Alex -- Project Wrangler, SOA, Information Alchemist, UX, RESTafarian, Topic Maps --- http://shelter.nu/blog/ ---------------------------------------------- ------------------ http://www.google.com/profiles/alexander.johannesen ---
Alexander, On Dec 1, 2010, at 11:22 PM, Alexander Johannesen wrote: > Luckily I'm a Topic Maps guy, and we don't have > this silly problem. :) Note that simply stating <subjectIndicatorRef xlink:href="http://images/image-of-jim.jpg"> does not provide the ability to point to the image representation you receive upon a GET. IOW, a URI by definition never points to the representation. It always points to the resource (the membership function) that maps to a set of representations or URIs over time. The indirection is a feature of URIs *per design*. Jan
On Thu, Dec 2, 2010 at 9:51 AM, Jan Algermissen <algermissen1971@...> wrote: > Note that simply stating <subjectIndicatorRef xlink:href="http://images/image-of-jim.jpg"> > does not provide the ability to point to the image representation you receive upon a GET. Um, no, of course not, that's not in the realm of identity management, so I seriously don't understand your use of an image for a subject indicator? > IOW, a URI by definition never points to the representation. It always points to > the resource (the membership function) that maps to a set of representations or URIs over time. The concept here is inference about representation before resolving the URI to find out what it represents. In RDF it happens after resolving it, in Topic maps in terms of subject identification it happens before. Kind regards, Alex -- Project Wrangler, SOA, Information Alchemist, UX, RESTafarian, Topic Maps --- http://shelter.nu/blog/ ---------------------------------------------- ------------------ http://www.google.com/profiles/alexander.johannesen ---
Dan Brickley wrote: > > This is roughly the topic known on the TAG mailing list as > http-range-14 and I shudder at the thought of revisiting it again... > Seems inevitable. Don't get me wrong, I agree with the finding, due to my understanding of the architecture. The problem is, if we were to poll Web developers we'd find that an overwhelming majority think it's wrong -- due to a lack of understanding of Web architecture. http://cacm.acm.org/magazines/2008/7/5366-web-science/fulltext TBL raises an interesting point -- Web Design is taught literally everywhere; Web architecture and protocols, not so much. I'm a much better Web developer for having taken the time to learn the protocols and architecture; but it gets exhausting having to explain to folks that they aren't stupid or dumb, just ignorant, and that's why their ideas are untenable. Particularly for me -- I'm just not very good at diplomatic politesse, even on those (rare) occasions when I do try... ;-) My Uncle studied under Feynman at Cal Tech, and later spent decades accrediting University-level Physics courses. Nowadays, the "Feynman Lectures on Physics" is the de-facto standard for how introductory Physics is taught (even when some other textbook is used). What's needed is something along the lines of "Six Easy Pieces" for inclusion in Web Design courses, for a solid grounding in the architectural fundamentals of the Web; in an approachable fashion for non-architects, as the basis for accreditation requirements. Perhaps, if Web developers were taught the proper fundamentals of the architecture, there'd be no reason to revisit -14 (along with any number of other issues I'd thought long-settled). But I doubt it can happen fast enough to head that off. 
As TBL points out, Web evolution outpaces the ability to observe it; which makes the fundamentals all the more imperative to keep that evolution from following a crash-and-burn path, in the aftermath of which we look back in hindsight and see that it was preventable because the fundamentals were right all along. As it stands now, the "throw out HTTP and start over" crowd seems to be gaining the upper hand, primarily due to the unpopularity of things like media types, or -14. This way lies disaster, but what to do about it? -Eric http://www.scientificamerican.com/article.cfm?id=long-live-the-web http://en.wikipedia.org/wiki/Feynman_Lectures_on_Physics
Thanks Eric. I read the entire thread. I get it now about media-type..
at least for the most part. I am still not sure if I should create a
single media-type for our particular application or not.. or how you
determine if you should or not. Partly, the confusion lies with the
number of posts I've seen about "you should register your media-types
with IANA to help build it up".
When you register with IANA, do
you provide your SDK doc that explains your REST api, the resources,
what they require (request info) and what they return (response info)?
If so, do developers like you and I go to some central IANA site and try
to find a media-type via some description, and if found, we find out
how to use that service that provides the API? I am not quite
understanding what the benefit is of centrally registered media-types.
If I find something useful,
do I now have my business depend on this service API hoping it never
goes down, can handle my volume of requests, and so forth? Or do I
contact the owner of the service and find out how I can make use of it
in my own deployments?
This idea of IANA and registering
media-types reminds me of the days when SOAP services with a UDDI lookup
and such were all the rage. I never followed along with SOAP, so not
sure what ever happened to that idea, but I am guessing it's not really
used much and never took off. I loved the idea of making my app more
robust by using other services that were usually freely available, but
as I said above.. how do you depend on another service without knowing
its capabilities.. can it handle my needs while it is also handling
others.. what happens if they just shut it down, etc.
Thanks.
--- On Tue, 11/30/10, Eric J. Bowman <eric@...> wrote:
From: Eric J. Bowman <eric@...>
Subject: Re: [rest-discuss] Link relations [was: A media type for case files, dossiers and documents]
To: "Kevin Duffey" <andjarnic@...>
Cc: "Rest Discussion List" <rest-discuss@yahoogroups.com>
Date: Tuesday, November 30, 2010, 10:44 PM
Kevin Duffey wrote:
>
> I am still struggling to understand when to use different media types
> correctly. If the API I am providing provides a unique solution, but
> I am returning chunks of xml (or json) that represent a specific
> resource, do all these different resources that belong to the overall
> API use the same media type, or should they use different media
> types, one for each resource? I mean, if I
> have /users, /orders, /sellers and /bids, should I be using something
> like application/vnd.com.mycompany.orders+xml for /orders? I've
> resorted back to using application/xml and application/json for the
> convenience, but I am not opposed to using application specific or
> even resource specific media types if that is a best practice that
> the REST community at large is leaning towards.
>
Minting resource-specific media types is a REST anti-pattern, exactly
the sort of coupling REST seeks to avoid. This thread may help:
http://tech.groups.yahoo.com/group/rest-discuss/message/16793
Your /users, /orders, /sellers and /bids resources can all be
represented by HTML: media type != resource type.
-Eric
On Dec 1, 2010, at 11:55 PM, Alexander Johannesen wrote: > On Thu, Dec 2, 2010 at 9:51 AM, Jan Algermissen <algermissen1971@...> wrote: >> Note that simply stating <subjectIndicatorRef xlink:href="http://images/image-of-jim.jpg"> >> does not provide the ability to point to the image representation you receive upon a GET. > > Um, no, of course not, that's not in the realm of identity management, > so I seriously don't understand your use of an image for a subject > indicator? Hmm, dunno, but AFAIR back in 2001 the idea was that the 'document' (the image in this case) referenced by subjectIndicatorRef is sort of 'about' the abstract concept (Jim). TopicMaps use(d) subjectIndicatorRef to distinguish between concept and document (sorry 'bout the fuzzy terms here). > >> IOW, a URI by definition never points to the representation. It always points to >> the resource (the membership function) that maps to a set of representations or URIs over time. > > The concept here is inference about representation before resolving > the URI to find out what it represents. > In RDF it happens after > resolving it, in Topic maps in terms of subject identification it > happens before. Not sure what 'inference about the representation' is. (But the sentence sounds real nice, anyway :-) Jan > > > Kind regards, > > Alex > -- > Project Wrangler, SOA, Information Alchemist, UX, RESTafarian, Topic Maps > --- http://shelter.nu/blog/ ---------------------------------------------- > ------------------ http://www.google.com/profiles/alexander.johannesen ---
Hello! On Wed, 2010-12-01 at 16:05 -0800, Kevin Duffey wrote: > When you register with IANA, do you provide your SDK doc that explains > your REST api, the resources, what they require (request info) and > what they return (response info)? If so, do developers like you and I > go to some central IANA site and try to find a media-type via some > description, and if found, we find out how to use that service that > provides the API? I am not quite understanding what the benefit is of > centrally registered media-types. If I find something useful, do I > now have my business depend on this service API hoping it never goes > down, can handle my volume of requests, and so forth? Or do I contact > the owner of the service and find out how I can make use of it in my > own deployments? As far as I understand, the media type is not about a particular API or (worse) an API provider. Instead, it describes how information is represented and what certain parts of the content actually mean. For example, in the text/html type, it is clearly described what <a href="..."> means. Everyone who has read the specs for text/html now knows this and knows what to do with something like <a>, if it comes as part of a message with the text/html media type. There is no provider or service that's tied to that media type definition. The media type (once documented and ideally registered) lives entirely on its own, independent of any provider or particular API. Juergen -- Juergen Brendel MuleSoft
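Juergen's point about text/html can be shown in code: because the media type pins down what `<a href="...">` means, a completely generic client can find the links in any text/html document, no matter who produced it. A stdlib-only Python sketch (the sample markup is invented):

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Generic text/html client: it knows only what the media type defines."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # text/html says: an <a> element with an href is a followable link.
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

page = '<p>See the <a href="/orders">orders</a> and <a href="/bids">bids</a>.</p>'
collector = LinkCollector()
collector.feed(page)
```

No provider-specific knowledge was needed to extract those links; that is what a shared, documented media type buys you.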
Kevin: A media-type is not a service (think HTML, Atom, etc.). Registering your media type is just that: a registry entry that lists the media type and points to a document w/ some boilerplate particulars[1]. That boilerplate must point to one stable URL where the curious may go to learn more about your media type. [1] http://www.iana.org/cgi-bin/mediatypes.pl mca http://amundsen.com/blog/ http://twitter.com@mamund http://mamund.com/foaf.rdf#me #RESTFest 2010 http://rest-fest.googlecode.com On Wed, Dec 1, 2010 at 19:05, Kevin Duffey <andjarnic@...> wrote: > > > Thanks Eric. I read the entire thread. I get it now about media-type.. at > least for the most part. I am still not sure if I should create a single > media-type for our particular application or not.. or how you determine if > you should or not. Partly, the confusion lies with the number of posts I've > seen about "you should register your media-types with IANA to help build it > up". > > When you register with IANA, do you provide your SDK doc that explains your > REST api, the resources, what they require (request info) and what they > return (response info)? If so, do developers like you and I go to some > central IANA site and try to find a media-type via some description, and if > found, we find out how to use that service that provides the API? I am not > quite understanding what the benefit is of centrally registered > media-types. If I find something useful, do I now have my business depend > on this service API hoping it never goes down, can handle my volume of > requests, and so forth? Or do I contact the owner of the service and find > out how I can make use of it in my own deployments? > > This idea of IANA and registering media-types reminds me of the days when > SOAP services with a UDDI lookup and such were all the rage. I never > followed along with SOAP, so not sure what ever happened to that idea, but I > am guessing it's not really used much and never took off. 
I loved the idea > of making my app more robust by using other services that were usually > freely available, but as I said above.. how do you depend on another service > without knowing it's capabilities.. can it handle my needs while it is also > handling others.. what happens if they just shut it down, etc. > > Thanks. > > --- On *Tue, 11/30/10, Eric J. Bowman <eric@...>* wrote: > > > From: Eric J. Bowman <eric@...> > > Subject: Re: [rest-discuss] Link relations [was: A media type for case > files, dossiers and documents] > To: "Kevin Duffey" <andjarnic@...> > Cc: "Rest Discussion List" <rest-discuss@yahoogroups.com> > Date: Tuesday, November 30, 2010, 10:44 PM > > > Kevin Duffey wrote: > > > > I am still struggling to understand when to use different media types > > correctly. If the API I am providing provides a unique solution, but > > I am returning chunks of xml (or json) that represent a specific > > resource, do all these different resources that belong to the overall > > API use the same media type, or should they use different media > > types, one for each resource? I mean, if I > > have /users, /orders, /sellers and /bids, should I be using something > > like application/vnd.com.mycompany.orders+xml for /orders? I've > > resorted back to using application/xml and application/json for the > > convenience, but I am not opposed to using application specific or > > even resource specific media types if that is a best practices that > > the REST community at large is leaning towards. > > > > Minting resource-specific media types is a REST anti-pattern, exactly > the sort of coupling REST seeks to avoid. This thread may help: > > http://tech.groups.yahoo.com/group/rest-discuss/message/16793 > > Your /users, /orders, /sellers and /bids resources can all be > represented by HTML: media type != resource type. > > -Eric > > > > >
Kevin Duffey wrote: > > Thanks Eric. I read the entire thread. I get it now about > media-type.. at least for the most part. I am still not sure if I > should create a single media-type for our particular application or > not.. or how you determine if you should or not. Partly, the > confusion lies with the number of posts I've seen about "you should > register your media-types with IANA to help build it up". > I've been doing REST development for a dozen years, and have yet to create a media type. If you _must_ create a media type, you _must_ register it. But that's a pretty big "if". Consider the multitude of applications on the real-world Web using text/html -- it is _not_ required for each application, be it banking or travel reservations or e-mail or shopping, to have its own media type. Re-using ubiquitous types has the benefit of scaling out-of-the-box; creating your own, even if it's registered, will only achieve Internet scale if it becomes ubiquitous -- another pretty big "if". My advice on creating your own media types remains unchanged -- don't. -Eric
Hello Eric, I have a question about your statement... On Wed, 2010-12-01 at 17:27 -0700, Eric J. Bowman wrote: > I've been doing REST development for a dozen years, and have yet to > create a media type. If you _must_ create a media type, you _must_ > register it. But that's a pretty big "if". Consider the multitude of > applications on the real-world Web using text/html -- it is _not_ > required for each application, be it banking or travel reservations or > e-mail or shopping, to have its own media type. Re-using ubiquitous > types has the benefit of scaling out-of-the-box; creating your own, > even if it's registered, will only achieve Internet scale if it > becomes > ubiquitous -- another pretty big "if". My advice on creating your own > media types remains unchanged -- don't. So, I think I understand the point you are making about the Internet scale. And it's true, there are lots of types out there, which can do a lot of things for you. Some of them may use XML as underlying encoding, others something else, and all of them will probably have some sort of library available for the language of your choice to help you parse and deal with that sort of content type. But that's exactly where I can also see an issue: When I choose a number of (different) media types, I now also need to equip my clients with all the necessary libraries to read and parse this data, and my server with the right libs to create this sort of content. Doesn't that add undue weight and dependencies to your software? Sure, it depends on your use case, but generally, I like to keep the number of dependencies for my software small. For example, I once dealt with an Atom library, which was just heavy and slow. We finally went to plain JSON, the software was much smaller and things started to fly. If I just need a simple collection of stuff, is it really wise to go all out and use Atom if something (admittedly self-made) smaller would suffice as well? 
I'm not bashing on Atom here, that's just an example. Juergen -- Juergen Brendel MuleSoft
Thank you all for the help in understanding this. Finally..I got it. ;) IANA, media types and what they are about. I understood the context of them with regards to html, such as images, text/html and all that, but never quite understood related to rest when you would want to specify something custom for your own api. Now, I see that you really don't. If your api supports handling of images, or returning images, you would use the appropriate media type in the http headers for those particular types.
For my needs, it seems just using application/xml and application/json fit the bill. Most clients support this out of the box (er.. most languages that I know of) to some degree or another, and really my services are simply returning chunks of xml or json (depending on the Accept header), so there isn't any reason that I can see why I would use anything else.
Again, thank you. Good detail and answers.
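The Accept-header-driven choice Kevin describes might look like the following stdlib-only sketch. (A real server would do full Accept parsing with q-values; this only does a substring check, and the function name and order fields are made up.)

```python
import json
from xml.etree.ElementTree import Element, SubElement, tostring

def represent_order(order, accept):
    """Pick a representation of the same order resource based on Accept."""
    if "application/json" in accept:
        return "application/json", json.dumps(order)
    # Fall back to an XML representation of the same data.
    root = Element("order", id=str(order["id"]))
    SubElement(root, "status").text = order["status"]
    return "application/xml", tostring(root, encoding="unicode")
```

Same resource, two representations; only the serialization changes with the negotiated type.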
--- On Wed, 12/1/10, Juergen Brendel <juergen.brendel@mulesoft.com> wrote:
From: Juergen Brendel <juergen.brendel@...>
Subject: Re: [rest-discuss] Link relations [was: A media type for case files, dossiers and documents]
To: "Rest Discussion List" <rest-discuss@yahoogroups.com>
Date: Wednesday, December 1, 2010, 4:40 PM
Hello Eric,
I have a question about your statement...
On Wed, 2010-12-01 at 17:27 -0700, Eric J. Bowman wrote:
> I've been doing REST development for a dozen years, and have yet to
> create a media type. If you _must_ create a media type, you _must_
> register it. But that's a pretty big "if". Consider the multitude of
> applications on the real-world Web using text/html -- it is _not_
> required for each application, be it banking or travel reservations or
> e-mail or shopping, to have its own media type. Re-using ubiquitous
> types has the benefit of scaling out-of-the-box; creating your own,
> even if it's registered, will only achieve Internet scale if it
> becomes
> ubiquitous -- another pretty big "if". My advice on creating your own
> media types remains unchanged -- don't.
So, I think I understand the point you are making about the Internet
scale. And it's true, there are lots of types out there, which can do a
lot of things for you. Some of them may use XML as underlying encoding,
others something else, and all of them will probably have some sort of
library available for the language of your choice to help you parse and
deal with that sort of content type.
But that's exactly where I can also see an issue: When I choose a number
of (different) media types, I now also need to equip my clients with all
the necessary libraries to read and parse this data, and my server with
the right libs to create this sort of content.
Doesn't that add undue weight and dependencies to your software? Sure,
it depends on your use case, but generally, I like to keep the number of
dependencies for my software small. For example, I once dealt with an
Atom library, which was just heavy and slow. We finally went to plain
JSON, the software was much smaller and things started to fly. If I just
need a simple collection of stuff, is it really wise to go all out and
use Atom if something (admittedly self-made) smaller would suffice as
well?
I'm not bashing on Atom here, that's just an example.
Juergen
--
Juergen Brendel
MuleSoft
--- In rest-discuss@yahoogroups.com, Kevin Duffey <andjarnic@...> wrote: > > Thanks Eric. I read the entire thread. I get it now about media-type.. > at least for the most part. I am still not sure if I should create a > single media-type for our particular application or not.. or how you > determine if you should or not. Partly, the confusion lies with the > number of posts I've seen about "you should register your media-types > with IANA to help build it up". The idea isn't to "build up" the IANA registry. The idea is that there is always a well understood way to know what a message means and where to find the associated spec. The IANA isn't a "service API" either; user agents don't consult it in an automated way as they run. It is better to think of it more as a dictionary for developers who code the support for the formats into their programs. Another way to contrast it is that SOAP+UDDI is about encouraging and supporting the proliferation of formats while REST and the IANA registry is about managing it or even restricting it. The more formats there are, the harder it is for general interoperability. Keep in mind that REST is designed for the scale of the web -- if you are not planning to operate at that scale (or even a reasonable fraction of that scale) then some of the principles will seem a bit strange. Think of it this way: imagine that the services you are writing need to work with many millions of individual machines that you don't have access to because they are located in many different organizations or even in people's homes. Are you going to: (a) cook up your own format and hope that somehow software that supports that format gets onto all of the target machines; or (b) try and use a format that software on those machines already understand? Probably (b). If there is no suitable format for the use case, then you may have to create a new format. 
But you'd likely want to get input and agreement from potential implementors and experts in the domain you are targeting so that the format gets adopted and onto the target machines. You also want to make sure that the spec can easily be found and when machines communicate using the format, that it can be easily identified with a media type. That's why you standardize your format and register a media type for it. Now if you're saying: "Hold on! Millions of machines? Different organizations? I just need to be able to talk to the server that Bob, who sits 2 cubes over, is building" then maybe REST isn't what you need. That doesn't mean you can't use many if not most of its principles and leave others out. Just be careful about calling it REST (well at least on this list ;-). Andrew
Hello!

On Wed, 2010-12-01 at 18:03 -0800, Kevin Duffey wrote:
> For my needs, it seems just using application/xml and application/json
> fit the bill. Most clients support this out of the box (er.. most
> languages that I know of) to some degree or another, and really my
> services are simply returning chunks of xml or json (depending on the
> Accept header), so there isn't any reason that I can see why I would
> use anything else.

The problem with using application/xml or application/json is that they are entirely devoid of meaning. A media type should express a particular meaning for the data. For example, in text/html there is a meaning defined for "<a>": It's a link you can follow!

But in application/xml? Just looking at that media type tells me nothing at all, really. I might have an XML dialect where <foo> means one thing, you might have one where <foo> means something completely different. application/xml alone doesn't tell the client anything about how to interpret the data.

If you use application/xml then you still cannot point to a separate definition of the type's meaning (for example in the IANA registry). Only YOUR clients will know what to do with the data. Other clients cannot just start to use your API, and there is no place they can go to find out how to make sense of the data.

Juergen

--
Juergen Brendel
MuleSoft
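Juergen's point can be made concrete in code: a client is only entitled to treat markup as a followable link when the media type defines that meaning, as text/html does for `<a href>`. A minimal sketch (the regex-based extraction and class name are illustrative only, not a real HTML parser or established API):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class LinkExtractor {

    private static final Pattern HREF =
        Pattern.compile("<a\\s+href=\"([^\"]+)\"");

    // Returns the URIs the client may follow, given the declared media type.
    public static List<String> followableLinks(String mediaType, String body) {
        List<String> links = new ArrayList<>();
        if ("text/html".equals(mediaType)) {
            // text/html defines <a> as a hyperlink, so these are followable.
            Matcher m = HREF.matcher(body);
            while (m.find()) {
                links.add(m.group(1));
            }
        }
        // For application/xml, the type's spec defines no link element,
        // so a generic client has nothing it is entitled to follow.
        return links;
    }
}
```

The same bytes yield links under text/html and nothing under application/xml; the difference lives entirely in the media type's spec, not in the payload.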
Juergen,
Valid point. I am not sure what media type would fit then. I am trying to follow the HATEOAS design: I have an entry point that returns some links based on credentials; from there a client would use those to make calls to any of my resources, and each response would return a relevant chunk of XML or JSON with links for each resource accessible at that point (for example, GET /orders/id would return a specific order along with one or more links that can be used to operate on the order).
So now I'll ask, what media type could I possibly use with my own xml/json structure? It almost sounds like you're saying I shouldn't be returning my own made-up structure, that I should instead use an existing media type, like one with xhtml or something. Is there a media type that allows for any sort of format specific to a domain to be returned? Or does that now fall into a case where I should create my own media type and register it with IANA?
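The response shape described above (order data plus links to the operations available next) might be sketched like this; the element names, link rels, and URIs are made up for illustration, and as the rest of the thread notes, this invented vocabulary carries no standardized meaning:

```java
public class OrderRepresentation {

    // Builds a hypothetical GET /orders/{id} response body: the order's
    // state plus links the client can use to act on it next.
    public static String toXml(String orderId, String status) {
        return "<order id=\"" + orderId + "\">"
             + "<status>" + status + "</status>"
             + "<link rel=\"self\" href=\"/orders/" + orderId + "\"/>"
             + "<link rel=\"cancel\" href=\"/orders/" + orderId + "/cancel\"/>"
             + "</order>";
    }
}
```

A client that understands this format picks its next request out of the `<link>` elements rather than constructing URIs itself.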
Hi,
> Probably (b). If there is no suitable format for the use case, then you
> may have to create a new format. But you'd likely want to get input and
> agreement from potential implementors and experts in the domain you are
> targeting so that the format gets adopted and onto the target machines.
> You also want to make sure that the spec can easily be found and when
> machines communicate using the format, that it can be easily identified
> with a media type. That's why you standardize your format and register a
> media type for it.
Funny, I just replied a second ago to Juergen about this... since application/xml and application/json have no meaning, then what do I use to return my custom domain-specific format of information? While one day it would be great for millions of users to make use of my service, I doubt that, outside of those using our service, our format would ever be adopted for anything else; thus, it doesn't seem like a new media type would be of any use.
So if I use application/xml, my API would not be considered truly RESTful?
> Now if you're saying: "Hold on! Millions of machines? Different
> organizations? I just need to be able to talk to the server that Bob,
> who sits 2 cubes over, is building" then maybe REST isn't what you need.
> That doesn't mean you can't use many if not most of its principles and
> leave others out. Just be careful about calling it REST (well at least
> on this list ;-).
I find that Java + Jersey is so easy, I use it for all my needs. I've replaced my old MVC framework with it, in favor of allowing an HTTP-based API as well as my site to use the API to build up a UI. Not that my test site for my own learning purposes will ever garner any more attention than just myself, but I like how easy it is to develop with.
I am learning that many people on this list and others indicate that most APIs out there that say they are REST based are really HTTP-API based, nothing more than a glorified RPC over HTTP. Fair enough, I guess, but for my own stuff I am not worried about it being 100% REST compliant. However, the reason I am asking all these questions is that I do want to understand how a true REST implementation would be done, just in case I ever get lucky enough to be part of something (again) that may have an impact greater than my garage server. ;)
Juergen Brendel wrote:
>
> I'm not bashing on Atom here, that's just an example.
>
It's a valid example, and a good question.

> So, I think I understand the point you are making about the Internet
> scale. And it's true, there are lots of types out there, which can do
> a lot of things for you. Some of them may use XML as underlying
> encoding, others something else, and all of them will probably have
> some sort of library available for the language of your choice to
> help you parse and deal with that sort of content.
>
Plus, the good ones are all in the IANA standards tree. Which means there's a peer-reviewed trust relationship, at the IP layer, as to the security considerations of the media type (RFC 4288, 4.6). Which is one reason why these types are ubiquitous. For media types in the IANA vendor tree, whoever minted the media type must be trusted to have done a thorough analysis, without the peer review provided by the standardization process. I'd prefer if that someone wasn't hiding anything to protect a corporate image, or outright ignorant about the topic -- which I can't be sure of outside the standards tree.

It's this trust relationship around security considerations (plus the shared understanding of a processing model), at the IP layer, which enables intermediaries to participate in the communication. It is my belief that the anarchic scalability of the Web would not have occurred, and is unlikely to continue, without this essential network-based shared understanding between participants, where participant can literally mean anything unless you're tunneling.

It's also my belief that only if everybody starts tunneling will everybody understand the benefits of caching -- so go, Web Sockets and SPDY! Prove me wrong. The only thing I can promise is that if I turn out to be right, I *will* tell you I told you so...
> But that's exactly where I can also see an issue: When I choose a
> number of (different) media types, I now also need to equip my
> clients with all the necessary libraries to read and parse this data,
> and my server with the right libs to create this sort of content.
>
Implementation details aren't relevant to REST, because the performance bottleneck in REST systems is the network. The solution is caching. The tradeoff may be heavier applications on the client, intermediary and server components. If you're caching dynamically-generated content with a cache connector on the origin server component, then the latency of generating that content only applies to the first hit after that content is created or updated. Same with the CPU cycles and RAM.

If you're creating a hypertext system, the idea is that you don't need to worry about equipping clients with libraries, because you're targeting browsers as clients by using media types browsers already support. So stripping down the required libraries is a false economy; the results may not scale as well, in which case what have you gained by optimizing your server code? Maintainability, perhaps, but at the expense of scaling? Doesn't compute.

> Doesn't that add undue weight and dependencies to your software? Sure,
> it depends on your use case, but generally, I like to keep the number
> of dependencies for my software small. For example, I once dealt with
> an Atom library, which was just heavy and slow. We finally went to
> plain JSON, the software was much smaller and things started to fly.
> If I just need a simple collection of stuff, is it really wise to go
> all out and use Atom if something (admittedly self-made) smaller
> would suffice as well?
>
It depends on the goals of your system. If those goals are congruous with the benefits REST provides, then the tradeoff is worth it.
I'm not trying to reduce the bytes sent over the wire, I'm trying to reduce how often the bytes are sent over the wire -- while exposing my API in a way intermediaries can accelerate. JSON lacks semantics to express how a URI is used, whereas HTML and Atom have explicit semantics for that, which are standardized -- intermediaries know which links to prefetch, or get a head-start on the DNS lookups, vs. identifiers that don't need to be looked up or fetched. None of these benefits accrue to media types that aren't in the standards tree (with very few exceptions). -Eric
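Eric's point about intermediaries can be sketched as follows: because link relations like Atom's have standardized meaning, a cache or proxy can decide which URIs are worth prefetching without knowing anything about the application, whereas a bare string in JSON tells it nothing. The class name, rel values, and policy below are illustrative assumptions, not any real intermediary's logic:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class PrefetchPolicy {

    // Rels whose standardized semantics say "this URI is fetchable content";
    // an intermediary can act on these without application knowledge.
    private static final List<String> FOLLOWABLE = List.of("next", "alternate");

    // Given links keyed by rel, return the hrefs worth prefetching.
    public static List<String> prefetchable(Map<String, String> linksByRel) {
        List<String> out = new ArrayList<>();
        for (Map.Entry<String, String> e : linksByRel.entrySet()) {
            if (FOLLOWABLE.contains(e.getKey())) {
                out.add(e.getValue());
            }
        }
        return out;
    }
}
```

A URI tucked into an unregistered JSON field never reaches this code path, which is the sense in which those benefits don't accrue to ad-hoc formats.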
On 12/01/2010 10:55 PM, Kevin Duffey wrote: > > Hi, > > > >Probably (b). If there is no suitable format for the use case, then > you may have to create >a new format. But you'd likely want to get > input and agreement from potential >implementors and experts in the > domain you are targeting so that the format gets adopted >and onto the > target machines. You also want to make sure that the spec can easily > be >found and when machines communicate using the format, that it can > be easily identified >with a media type. That's why you standardize > your format and register a media type for >it. > > Funny, I just replied a second ago to Jurgen about this... since > application/xml and application/json have no meaning, then what do I > use to return my custom domain specific format of information? While > one day it would be great for millions of users to make use of my > service, I doubt outside of those using our service that our format > would ever be adopted for anything else, thus, it doesn't seem like a > new media type would be of any use. > Whatever you choose that makes sense. > > > So if I use application/xml, my API would not be considered truly > RESTful? > Using this media type in itself would not disqualify it from being RESTful but how the media type is leveraged may. Regardless of the media type, out-of-band information (which is not the media type specification) is still required for a consumer to consume the messages. In browsers, this information is gained when I read the web page but in M2M scenarios, I need to know that following a certain link causes a certain state transition. > > >Now if you're saying: "Hold on! Millions of machines? Different > organizations? I just need >to be able to talk to the server that Bob, > who sits 2 cubes over, is building" then maybe >REST isn't what you > need. That doesn't mean you can't use many if not most of its > >principles and leave others out. 
Just be careful about calling it > REST (well at least on this >list ;-). > > I find that Java + Jersey is so easy, I use it for all my needs. I've > replaced my old MVC framework with it, in favor of allowing a HTTP > based API as well as my site to use the API to build up a UI. Not that > my test site for my own learning purposes will ever garner any more > attention than just myself, but I like how easy it is to develop with. > > I am learning that many people on this list and others indicate that > most APIs out there that say they are Rest based are really Http Api > based, nothing more than a glorified RPC over HTTP. Fair enough I > guess, but for my own stuff, I am not worried about it being 100% Rest > compliant. However, and the reason I am asking all these questions is > I do want to understand how a true rest implementation would be done, > just in case I ever get lucky enough to be part of something (again) > that may have an impact greater than my garage server. ;) > I wouldn't say that they are glorified RPC over HTTP. Quite a few are not but they don't have all the ingredients of REST (and that's ok as long as they are not labeled as such). > > > -- blog: http://eikonne.wordpress.com twitter: http://twitter.com/eikonne
Eric J. Bowman wrote: > I'm not trying to reduce the bytes sent over the wire, > I'm trying to reduce how often the bytes are sent over > the wire... Well said. And also: reduce how often the bytes are assembled by the origin server. Robert Brewer fumanchu@...
Kevin Duffey wrote:
>
> Valid point. I am not sure what media type would fit then. I am
> trying to follow the HATEOAS design, I have an entry point that
> returns some links based on credentials, from there a client would
> use those to make calls to any of my resources, and each response
> would return a relevant chunk of XML or JSON with links for each
> resource accessible at that point (for example GET /orders/id would
> return a specific order along with one or more links that can be used
> to operate on the order).
>
I've never seen an order-processing system that couldn't be modeled as HTML. In fact, I've rarely seen an order-processing system that wasn't HTML. In OOP terminology, the goal is to distribute not your objects, but your object interfaces. REST says, make those object interfaces uniform. Which means participants have a network-based shared understanding of your state transitions (links, forms), IOW, a self-documenting API.

It's perfectly acceptable to model your data as JSON or as XML (bearing in mind that schemas are an orthogonal concern). The trick is to create an HTML interface for either JSON or XML data, which instructs user-agents how to interact with that data. I'd choose either JSON or XML, instead of trying to do both, depending on whether you're more comfortable transforming that data into HTML using Javascript or XSLT.

> So if I use application/xml, my API would not be considered truly
> RESTful?
>
No, not if you're using application/xml as the hypertext engine driving application state. If it's just a snippet of XML which gets read by, say, an HTML front-end driving application state, then it's OK because the processing model (parse as XML, handling XInclude/XLink/rdf:about) is adequate to the task. If that XML snippet contains URIs the user is supposed to click on to transition the application to the next steady-state (which aren't XLinks), well, that's what <a> and atom:link are for; there's no corollary in application/xml (besides XLink).

Also, most order forms are simply tabular data, the semantics of which don't exist in application/xml like they do in application/xhtml+xml or text/html with <table>. Same with lists, same with forms.

> So now I'll ask, what media type I could possibly use with my own
> xml/json structure? It almost sounds like you're saying I shouldn't
> be returning my own made up structure, that I should instead use an
> existing media type, like one with xhtml or something. Is there a
> media type that allows for any sort of specific format to a domain to
> be returned? Or does that now fall into a case where I should create
> my own media type and register it with IANA?
>
It falls into a case where you should refactor. You have tabular data, so you need to choose a data type which expresses such semantics (i.e. HTML, or DocBook). The whole point of media types is that they are _not_ domain-specific, but rather represent a shared understanding of a processing model at the network (IP) layer. This is the fundamental tradeoff of the style:

"[A] uniform interface degrades efficiency, since information is transferred in a standardized form rather than one which is specific to an application's needs."

An order consists of item numbers, descriptions, quantity, unit price and total price. You *could* re-invent the <table> wheel and register it as a new media type, but it's more scalable (maintainable, portable) to re-use HTML even if it isn't a precise fit. If you create a new media type, then you need to distribute a custom user-agent. When you upgrade your API, you must simultaneously update that user-agent.

The success of the Web is due to the common user-agent. What I really don't want is for any system I interact with to require me to install yet another piece of software, and keep it up-to-date. That's coupling. So much easier for everyone concerned to target the browser. That way, I only need to install and maintain one user-agent regardless of how many different systems I interact with. Such decoupling allows clients and servers to evolve independently. So there is a cost associated with the minting of new media types -- coupling -- unless and until the new media type achieves significant uptake.

-Eric
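Eric's suggestion of re-using HTML's `<table>` for tabular order data, rather than minting a new media type, can be sketched minimally. The column set and helper names below are illustrative assumptions, not a prescribed mapping:

```java
public class OrderAsHtml {

    // One line item of an order as an HTML table row.
    public static String row(String item, int qty, String unitPrice) {
        return "<tr><td>" + item + "</td><td>" + qty + "</td><td>"
             + unitPrice + "</td></tr>";
    }

    // Wraps the line items in a <table>, whose semantics any browser
    // (or generic HTML processor) already understands.
    public static String table(String bodyRows) {
        return "<table><tr><th>Item</th><th>Qty</th><th>Unit price</th></tr>"
             + bodyRows + "</table>";
    }
}
```

The payoff is that no custom user-agent needs to be distributed: the processing model for `<table>` already ships in every browser.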
I get everything you are saying.. finally, thanks to a few of you who set me straight on this whole media-type issue.
I am however having a hard time thinking about telling clients that they basically need to parse HTML to use my API. I'd much rather say "for /orders, you get this chunk of xml back with these potential elements.. parse it to get the data you need". Or in JSON. As I use Java/JAX-RS with Jersey, it automatically handles turning my objects into either xml or json, whatever the Accept header specifies.

Anyway, for my own learning, it is good to know what you said, and it does make sense. However, it seems odd to me to return things in HTML as opposed to xml or json when it's just chunks.. that is, a user places 100 orders over 3 months, then comes in and asks to see a history of orders. I return an xml chunk with their 100 orders and related info. That seems perfectly fine to provide in xml or json, allowing any client to parse the response as they see fit. I would obviously have some sort of api doc that would explain the response.
I guess what I am grappling with is that, for the most part, I would suspect most services like the one I am messing around with to learn would be used by specific clients, not anyone and everyone out on the web. More so, I don't see anyone needing to use my particular bits of data for their own use.. that is, if I were to register a media type that represents a generic ordering document, that might make sense, but in my case, if I am building up a REST api for my specific little service, it doesn't seem like returning HTML would make any more sense than returning xml or json. I certainly can see it if I was building my own web site, where I have some javascript make ajax requests and I return a chunk of HTML instead of XML or JSON, so that my own site consuming my API can benefit from having HTML directly, rather than getting xml or json and then having to build up the html on the fly in the browser. But for, say, a mobile app that had a native client that allowed a user to log in and pull up their recent orders, a chunk of XML would fit well.

HTML seems more difficult to have to parse and deal with.. at least the way I think. Again, if I were going to display it in a browser.. maybe it's fine, but if I wanted to do something with the data before displaying it, or maybe it's not a web browser at all, html seems out of place. That's just my opinion tho, from the bits I've learned the past few days.
--- On Thu, 12/2/10, Eric J. Bowman <eric@...> wrote:
From: Eric J. Bowman <eric@...>
Subject: Re: [rest-discuss] Link relations [was: A media type for case files, dossiers and documents]
To: "Kevin Duffey" <andjarnic@...>
Cc: "Rest Discussion List" <rest-discuss@yahoogroups.com>, juergen.brendel@...
Date: Thursday, December 2, 2010, 10:32 PM
Kevin Duffey wrote:
>
> I am however having a hard time thinking about telling clients that
> they basically need to parse html to use my API. I much rather say
> "for /orders, you get this chunk of xml back with these potential
> elements.. parse it to get the data you need". (...) I would
> obviously have some sort of api doc that would explain the response.
>
Right -- that API document is your HTML. Which doesn't mean anyone has to parse that HTML; they can use XML or JSON directly. The drawback is that if you change that API, any user-agent directly accessing the raw data will break; whereas if they're parsing your HTML they'll be updated automatically.

> I guess what I am grappling with is that for the most part, I would
> suspect most services like the one I am messing around with to learn,
> would be used by specific clients, not anyone and everyone out on the
> web.
>
Doesn't matter. Nobody coding a consumer for your API will understand a custom media type without training. Whereas if you express your API as HTML, you don't have this problem; anyone will be able to understand it provided they understand HTML (a safe assumption), and you won't need any custom media types.

-Eric
Kevin:
I've been doing quite a bit of work in the area of making decisions on how
to code clients for Internet apps. Your comments about how XHTML seems
inappropriate remind me of a set of decisions we all make (consciously or
not) about implementing solutions for Internet apps. Here's a peek into one
aspect of my current thinking on this. Hopefully it hits some of the points
to raise and provides some ideas on how you can approach your
decision-making.
NOTE: I cover some of this in a talk and the slides (and C# code) for that
talk are here: http://amundsen.com/talks/#beyond-web20
CONSIDERATIONS
When coding clients for application-level protocols (HTTP) over distributed
networks (i.e. the "Web"), these things (among others) must be taken into
account:
1 - how does the client know all the addresses (URIs) that will be needed to
execute operations?
2 - how does the client know how to properly construct specific requests
(searches, filters, etc.) to send to the addresses?
3 - how does the client "understand" the responses returned from those
requests?
4 - how does the client know the order in which these actions
(requests/responses) must take place (you can't create a new order until you
create a new customer, you can't register more than ten pending orders per
day, etc.).
You have two general approaches:
- code these details into the client (non-hypermedia) and re-code the client
when the details change or;
- code these details into the message (hypermedia) and reformat the message
when the details change.
A NON-HYPERMEDIA APPROACH:
1 - When coding the client application programmers will get a long list of
URIs (from documentation) and hard-code them into the client application or
encode the URI list in some static config file, etc. and make that available
to the client code. It's possible that some URI construction rules can be
used instead of a static list. Then programmers write code that knows how to
execute the construction rules at runtime based on the state of the client,
etc. The client application will also have some rules in code in order to
associate each fixed/constructed URI w/ some "action" (get a user record,
search for users added last week, add a new user, etc.) and the client code
will select the proper address at runtime based on the state of the client,
etc.
2 - When sending requests from the client to the server (the "actions"
mentioned in #1 above), programmers will write code that knows the format
details of the message (XML, JSON, CSV, etc), the layout details (XML
elements named "email", "hat-size", etc.), which elements are required,
optional, etc. Programmers will write code that, at runtime, associates
client state with each of these "fields", populates the structures and sends
them to the proper URI (from #1).
3 - When receiving responses, client applications know, ahead of time, what
format to expect (XML, JSON, etc.), the exact layout of each response
(elements and attributes, etc.), and how to render them visually for humans
(or arrange the data returned in the proper memory "slots" for M2M apps).
4 - The client application will have all the rules for application flow hard
coded. It will "know" that customers cannot have more than ten pending
orders or that order detail lines can't be sent to the server before an
order document is created, etc.
When using this approach, changing any of those items over time (new
addresses for new requests that return new responses in a
new application-flow order) will require re-coding the client and
re-deploying that new code to replace all the existing "old" client code.
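The hard-coded client described in steps 1-4 above might look like the
following minimal sketch. All URIs, element names, and flow rules here are
made up for illustration; they are not from the original post.

```python
# Sketch of a non-hypermedia client: every address, field name, and
# application-flow rule is baked into the client code itself.
import xml.etree.ElementTree as ET

BASE = "http://api.example.com"          # hard-coded address (assumption)
URIS = {
    "get_user":  BASE + "/users/{id}",   # step 1: static URI list
    "add_order": BASE + "/orders",
}

def parse_user(xml_text):
    # step 3: the exact response layout is known ahead of time
    root = ET.fromstring(xml_text)
    return {
        "email":    root.findtext("email"),     # required element
        "hat-size": root.findtext("hat-size"),  # optional element
    }

# step 4: application-flow rules are hard-coded too
MAX_PENDING_ORDERS = 10

user = parse_user("<user><email>a@b.c</email><hat-size>7</hat-size></user>")
print(user["email"])
```

If the server renames an element or moves a URI, nothing in the response
tells this client about it; only a re-coded, re-deployed client recovers.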
A HYPERMEDIA-DRIVEN APPROACH
Using Fielding's REST style as a guide ("hypermedia as the engine of
application state"):
1 - The goal is to reduce the number of addresses to the fewest reasonably
possible. One pre-established address is a nice goal - the "starting"
address. After that, all other addresses are expected to come in the
responses. XHTML has a built-in element for this data - the anchor (<a ...
/>) tag.
2 - The information about what fields to use when crafting a request are
contained in the responses to requests, not hard-coded in the client
application. XHTML has built-in elements for this, too. FORMs w/ INPUT,
SELECT, and TEXTAREA elements. Clients know ahead of time how to handle each
of these elements; they are universal for all types of requests (for users,
customers, stores, orders, etc.). Also, the FORM element has the associated
URI for this action when the client receives the response so there is no
need to hard-code any other URIs in the client, either.
3 - The information about what fields & layouts to expect in responses and
how to "render" them is also included in XHTML. Like the FORM elements,
response elements are generic and of a limited set. Clients do not need
to know a set of specific data elements (<email />, <hat-size />, etc.) and
when to expect them and how to render them. Instead, client code is written
to know how to render the generic set of elements (DL, DT, DD, DIV, SPAN,
TABLE, etc) in a response.
4 - The responses carry the "next possible steps" for the application flow.
XHTML elements such as <a /> and <form /> will appear when it's appropriate
(the response to create order will have links/controls to create order
lines, once ten pending orders are created for a customer, the response will
no longer include a "create pending order" link, etc.).
When using the second approach, new addresses for new requests that return
new responses w/ new app-flow details will not require changing the client
code. Because all that information is included in the responses; the media
type (XHTML) has "affordances" for carrying that application control
information (<a />, <form />, etc.) within the responses. XHTML has an
advantage due to its built-in hypermedia controls. XML and JSON do not have
these.
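The hypermedia-driven steps 1-4 above can be sketched as a client that
discovers its next transitions from the response itself. The element ids,
rel values, and URIs below are hypothetical, not from the post.

```python
# Sketch of a hypermedia-driven client: instead of hard-coding URIs,
# it collects <a> links and <form> controls from the XHTML response
# and selects among them at runtime.
from html.parser import HTMLParser

class TransitionFinder(HTMLParser):
    """Collect <a rel=...> links and <form action=...> controls."""
    def __init__(self):
        super().__init__()
        self.links = {}    # rel -> href
        self.forms = []    # action URIs

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "a" and "rel" in a:
            self.links[a["rel"]] = a.get("href")
        elif tag == "form":
            self.forms.append(a.get("action"))

# Response to "create order": controls appear only while they are valid.
response = """
<div id="order">
  <a rel="order-lines" href="/orders/42/lines">add order lines</a>
  <form action="/orders" method="post"><input name="item"/></form>
</div>
"""
f = TransitionFinder()
f.feed(response)
print(f.links["order-lines"])
```

When the server changes an address or withdraws a transition, this client
needs no re-coding; the affordances in the next response carry the change.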
CHOOSING WHICH APPROACH TO USE
Now, it may turn out that you are creating an application that:
1) has only one address
2) has only one request format
3) has only one response format
4) has only one possible application flow
If that's the case, you don't need the advantages that a hypermedia-driven
implementation affords; all that work may be overkill and waste. Using a
non-hypermedia format (e.g. CSV, XML, JSON, etc.) and hard-coding the
details in the client will work much better with less overall effort.
Or, you may have a small set of addresses, or a small set of request
formats, or a small set of response formats, or a small set of app-flows.
Now you need to think a bit more on whether your varying set of addresses,
request and response formats, and app-flows are numerous enough to make it
worthwhile to adopt a hypermedia-driven implementation or stick to
hard-coding clients.
Or you may have an application where, even w/ a wide range of addresses,
requests, responses, and app-flows, these values hardly change over the life
of the application (days, weeks, years, etc.). Does it make sense to use a
hypermedia-driven implementation if the formats never change?
Or you may be the only one writing the client. Just like it's often more
effort to document a simple app than build it, using a hypermedia-driven
implementation in order to never change the code that you yourself could
write more quickly and efficiently anyway may be too much effort for the
return.
So...
_When_ you choose one approach over the other is entirely up to you based on
your particular constraints (time, money, complexity of the app, variance of
the app over time, etc.). But if you _do_ choose a hypermedia-driven
approach, you'll want to use an existing hypermedia type (XHTML) or design
and implement your own custom hypermedia type.
Hope that ramble helps.
mca
http://amundsen.com/blog/
http://twitter.com@mamund
http://mamund.com/foaf.rdf#me
#RESTFest 2010
http://rest-fest.googlecode.com
On Fri, Dec 3, 2010 at 02:54, Kevin Duffey <andjarnic@...> wrote:
>
>
> I get everything you are saying..finally thanks to a few of you that set me
> clear on this whole media-type issue.
>
> I am however having a hard time thinking about telling clients that they
> basically need to parse html to use my API. I much rather say "for /orders,
> you get this chunk of xml back with these potential elements.. parse it to
> get the data you need". Or in JSON. As I use Java/JAX-RS with Jersey, it
> handles automatically turning my objects into either xml or json, whatever
> the Accept header specifies. Anyway, for my own learning, it is good to know
> what you said, and it does make sense. However, it seems odd to me to return
> things in HTML as opposed to xml or json, when it's just chunks.. that is, a
> user places 100 orders over 3 months, then comes in and asks to see a
> history of orders. I return an xml chunk with their 100 orders and related
> info. That seems perfectly fine to provide in xml or json, allowing any
> client to parse the response as they see fit. I would obviously have some
> sort of api doc that would explain the response.
>
> I guess what I am grappling with is that for the most part, I would suspect
> most services like the one I am messing around with to learn, would be used
> by specific clients, not anyone and everyone out on the web. More so, I
> don't see anyone needing to use my particular bits of data I return for
> their own use.. that is, if I were to register a media type that represents
> a generic ordering document, that might make sense, but in my case, if I am
> building up a REST api for my specific little service, it doesn't seem like
> returning HTML would make any more sense than returning xml or json. I
> certainly can see if I was building my own web site, where I have some
> javascript make ajax requests and I return a chunk of HTML instead of XML or
> JSON, so that my own site consuming my API can benefit from having HTML
> directly, rather than xml or json then have to build up the html on the fly
> in the browser. But for say a mobile app that had a native client that
> allowed a user to log in and pull up their recent orders, a chunk of XML
> would fit well. HTML seems more difficult to have to parse and deal with..
> at least the way I think. Again, if I were going to display it in a
> browser..maybe it's fine, but if I wanted to do something with the data
> before displaying it or maybe it's not a web browser at all, html seems out
> of place. That's just my opinion tho from the bits I've learned the past few
> days.
>
>
>
> --- On *Thu, 12/2/10, Eric J. Bowman <eric@...>* wrote:
>
>
> From: Eric J. Bowman <eric@...>
> Subject: Re: [rest-discuss] Link relations [was: A media type for case
> files, dossiers and documents]
> To: "Kevin Duffey" <andjarnic@...>
> Cc: "Rest Discussion List" <rest-discuss@yahoogroups.com>,
> juergen.brendel@...
> Date: Thursday, December 2, 2010, 10:32 PM
>
>
>
>
> Kevin Duffey wrote:
> >
> > Valid point. I am not sure what media type would fit then. I am
> > trying to follow the HATEOS design, I have an entry point that
> > returns some links based on credentials, from there a client would
> > use those to make calls to any of my resources, and each response
> > would return a relevant chunk of XML or JSON with links for each
> > resource accessible at that point (for example GET /orders/id would
> > return a specific order along with one or more links that can be used
> > to operate on the order).
> >
>
> I've never seen an order-processing system that couldn't be modeled as
> HTML. In fact, I've rarely seen an order-processing system that wasn't
> HTML. In OOP terminology, the goal is to distribute not your objects,
> but your object interfaces. REST says, make those object interfaces
> uniform. Which means participants have a network-based shared
> understanding of your state transitions (links, forms), IOW, a self-
> documenting API.
>
> It's perfectly acceptable to model your data as JSON or as XML (bearing
> in mind that schemas are an orthogonal concern). The trick is to
> create an HTML interface for either JSON or XML data, which instructs
> user-agents how to interact with that data. I'd choose either JSON or
> XML, instead of trying to do both, depending on whether you're more
> comfortable transforming that data into HTML using Javascript or XSLT.
>
> >
> > So if I use application/xml, my API would not be considered truly
> > RESTful?
> >
>
> No, not if you're using application/xml as the hypertext engine driving
> application state. If it's just a snippet of XML which gets read by,
> say, an HTML front-end driving application state, then it's OK because
> the processing model (parse as XML, handling XInclude/XLink/rdf:about)
> is adequate to the task. If that XML snippet contains URIs the user is
> supposed to click on to transition the application to the next steady-
> state (which aren't XLinks), well, that's what <a> and atom:link are
> for, there's no corollary in application/xml (besides XLink).
>
> Also, most order forms are simply tabular data, the semantics of which
> don't exist in application/xml like they do in application/xhtml+xml or
> text/html with <table>. Same with lists, same with forms.
>
> >
> > So now I'll ask, what media type I could possibly use with my own
> > xml/json structure? It almost sounds like you're saying I shouldn't
> > be returning my own made up structure, that I should instead use an
> > existing media type, like one with xhtml or something. Is there a
> > media type that allows for any sort of specific format to a domain to
> > be returned? Or does that now fall into a case where I should create
> > my own media type and register it with IANA?
> >
>
> It falls into a case where you should refactor. You have tabular data,
> so you need to choose a data type which expresses such semantics (i.e.
> HTML, or DocBook). The whole point of media types is that they are
> _not_ domain-specific, but rather, represent a shared understanding of
> a processing model at the network (IP) layer. This is the fundamental
> tradeoff of the style:
>
> "[A] uniform interface degrades efficiency, since information is
> transferred in a standardized form rather than one which is specific to
> an application's needs."
>
> An order consists of item numbers, descriptions, quantity, unit price
> and total price. You *could* re-invent the <table> wheel and register
> it as a new media type, but it's more scalable (maintainable, portable)
> to re-use HTML even if it isn't a precise fit. If you create a new
> media type, then you need to distribute a custom user-agent. When you
> upgrade your API, you must simultaneously update that user-agent.
>
> The success of the Web is due to the common user-agent. What I really
> don't want, is for any system I interact with to require me to install
> yet another piece of software, and keep it up-to-date. That's coupling.
> So much easier for everyone concerned, to target the browser. That way,
> I only need to install and maintain one user-agent regardless of how
> many different systems I interact with. Such decoupling allows clients
> and servers to evolve independently. So there is a cost associated
> with the minting of new media types -- coupling -- unless and until the
> new media type achieves significant uptake.
>
> -Eric
>
>
>
>
>
On 12/03/2010 03:09 AM, Eric J. Bowman wrote:
>
> Kevin Duffey wrote:
> >
> > I am however having a hard time thinking about telling clients that
> > they basically need to parse html to use my API. I much rather say
> > "for /orders, you get this chunk of xml back with these potential
> > elements.. parse it to get the data you need". (...) I would
> > obviously have some sort of api doc that would explain the response.
> >
>
> Right -- that API document is your HTML. Which doesn't mean anyone has
> to parse that HTML, they can use XML or JSON directly. The drawback is
> that if you change that API, any user-agent directly accessing the raw
> data will break; whereas if they're parsing your HTML they'll be
> updated automatically.
>
> >
> > I guess what I am grappling with is that for the most part, I would
> > suspect most services like the one I am messing around with to learn,
> > would be used by specific clients, not anyone and everyone out on the
> > web.
> >
>
> Doesn't matter. Nobody coding a consumer for your API will understand
> a custom media type without training. Whereas if you express your API
> as HTML, you don't have this problem; anyone will be able to understand
> it provided they understand HTML (a safe assumption), and you won't
> need any custom media types.
>

I concur, but even when HTML is used, there is still some training
needed, right? If the returned response consists of multiple links (and
not just one state transition), how does my client know, based on some
other rules, which "link" to take without some training?

blog: http://eikonne.wordpress.com
twitter: http://twitter.com/eikonne
Eb wrote:
>
> >
> > Doesn't matter. Nobody coding a consumer for your API will
> > understand a custom media type without training. Whereas if you
> > express your API as HTML, you don't have this problem; anyone will
> > be able to understand it provided they understand HTML (a safe
> > assumption), and you won't need any custom media types.
> >
>
> I concur but even when HTML is used, there is still some training
> needed, right? If the returned consists of multiple links (and not
> just one state transition), how does my client know based on some
> other rules what "link" to take without some training?
>

If the user is human, the link text imparts this knowledge. If the
user is a machine, the link relation does, i.e. rel='next'.

-Eric
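Eric's rel='next' point can be sketched for a machine client: the client
is trained once on the relation name, never on any particular URI. The
page map below stands in for actual HTTP GETs, and all URIs are made up.

```python
# Sketch of a machine client driven by a link relation, not by URIs:
# it follows rel="next" until the response stops offering one.
from html.parser import HTMLParser

class RelNext(HTMLParser):
    """Find the href of the first <a rel="next"> in a page."""
    def __init__(self):
        super().__init__()
        self.next_href = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "a" and a.get("rel") == "next":
            self.next_href = a.get("href")

pages = {  # stand-in for HTTP requests; URIs are hypothetical
    "/orders?page=1": '<a rel="next" href="/orders?page=2">more</a>',
    "/orders?page=2": "<p>last page</p>",  # no rel="next": stop here
}

uri, visited = "/orders?page=1", []
while uri:
    visited.append(uri)
    p = RelNext()
    p.feed(pages[uri])
    uri = p.next_href
print(visited)
```

The server can renumber or restructure its paging URIs freely; only the
meaning of the relation "next" is part of the shared contract.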
I was wondering about this last night after coming across RMM, while
trying to apply it to the REST API I'm developing for Foswiki. Given
that I want to fully support all resource media types that are present
in the wiki (and you can attach anything - image, word doc, executable,
whatever), adding XML into the payload is pretty completely and utterly
useless.

So, given REST media types that force you not to corrupt the payload
with non-resource-related information, we're left with two other
choices:

1 - toss it in the HTTP header
2 - use a separate verb

Personally, I'm leaning towards using the OPTIONS verb - that way you
could enable all resources to be asked about their API, and it could be
possible to answer in a number of media types: code, text, or a
specification language.

Can you guys please help me learn more by blowing holes in the idea?

Sven

On 23/11/10 02:57, mike amundsen wrote:
> Jakob:
>
> Jan Algermissen addresses some of this in his model [1].
>
> Over the last several months, I have been focusing on [Hyper]media
> Types directly [2]. While that work is far from complete, some of the
> material there might be of interest.
>
> [1] http://nordsc.com/ext/classification_of_http_based_apis.html
> [2] http://amundsen.com/hypermedia/hfactor/
>
> mca
> http://amundsen.com/blog/
> http://twitter.com@mamund
> http://mamund.com/foaf.rdf#me
>
> #RESTFest 2010
> http://rest-fest.googlecode.com
>
> On Mon, Nov 22, 2010 at 10:38, Jakob Strauch <jakob.strauch@...> wrote:
>> In [1] the Richardson maturity model is explained by Fowler. I've
>> already seen this in the recently published book "REST in practice".
>> I'm missing "media types" between level 2 and 3. I think, before you
>> may think about link rels you should think about proper media types...
>>
>> What do you think?
>>
>> -Jakob
>>
>> [1] http://martinfowler.com/articles/richardsonMaturityModel.html
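Sven's OPTIONS idea could be sketched as a handler that answers OPTIONS on
any resource with a description of its API. Everything here is a guess at
one possible shape, not a proposal from the thread: the paths, the JSON
body, and the method list are all hypothetical.

```python
# Sketch: a minimal WSGI app answering OPTIONS with both the standard
# Allow header and a machine-readable API description in the body.
import json

def describe_options(environ, start_response):
    if environ["REQUEST_METHOD"] == "OPTIONS":
        # hypothetical description; a real one might be WADL, text, etc.
        body = json.dumps({"methods": ["GET", "PUT"],
                           "doc": "/api/topic"}).encode()
        start_response("200 OK", [
            ("Allow", "GET, PUT, OPTIONS"),
            ("Content-Type", "application/json"),
        ])
        return [body]
    start_response("405 Method Not Allowed",
                   [("Allow", "GET, PUT, OPTIONS")])
    return [b""]

# Drive the WSGI callable directly, without a server:
captured = {}
def start_response(status, headers):
    captured["status"] = status
    captured["headers"] = dict(headers)

body = describe_options({"REQUEST_METHOD": "OPTIONS"}, start_response)
print(captured["status"], captured["headers"]["Allow"])
```

One caveat worth noting against the idea: HTTP caches generally do not
cache OPTIONS responses, so the API description would be re-fetched often.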
On Fri, Dec 3, 2010 at 6:46 AM, Eric J. Bowman <eric@...> wrote:
>
> Eb wrote:
> >
> > >
> > > Doesn't matter. Nobody coding a consumer for your API will
> > > understand a custom media type without training. Whereas if you
> > > express your API as HTML, you don't have this problem; anyone will
> > > be able to understand it provided they understand HTML (a safe
> > > assumption), and you won't need any custom media types.
> > >
> >
> > I concur but even when HTML is used, there is still some training
> > needed, right? If the returned consists of multiple links (and not
> > just one state transition), how does my client know based on some
> > other rules what "link" to take without some training?
> >
>
> If the user is human, the link text imparts this knowledge. If the
> user is a machine, the link relation does, i.e. rel='next'.
>
> -Eric

This is where my understanding of REST breaks down. For an M2M
scenario, a machine still has to be told what rel='next' means. Whether
you use a custom media type or HTML, if you add a new link type or
change the format of the response, the client will still need to be
changed to incorporate knowledge about these changes. I understand that
a human user can read the text and react intelligently; it's the
autonomous machine client where my understanding fails.

--
Scott Banwart
sbanwart@...
> I guess what I am grappling with is that for the most part, I would
> suspect most services like the one I am messing around with to learn,
> would be used by specific clients, not anyone and everyone out on the
> web.
>
>Doesn't matter. Nobody coding a consumer for your API will understand
>a custom media type without training. Whereas if you express your API
>as HTML, you don't have this problem; anyone will be able to understand
>it provided they understand HTML (a safe assumption), and you won't
>need any custom media types.
Right.. that's a great point, as you've said before. What I am not sure about, even if I did do it like this, is how I would use HTML to wrap all my custom data. Basically HTML is for displaying information, whereas most services that I know of return data for consumption for any number of needs. How do I wrap my orders, bids, items, etc. as html elements? I suppose I could use tables or <li> with ids, and clients could parse those elements looking for ids, and <a> for links on what to do next with the resources? Seems very odd to me that anyone would return raw data in this fashion for clients other than browsers to consume, tho.
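One possible shape for the tables-with-ids idea in the paragraph above
(the markup, class names, and rel value are illustrative only, not any
standard): domain data rides in generic XHTML elements, with class/id
attributes carrying the domain vocabulary and <a> carrying the next steps.

```python
# An order rendered as generic XHTML; a browser can display it, and a
# non-browser client can parse the very same elements.
import xml.etree.ElementTree as ET

order_list = """
<table class="orders">
  <tr id="order-100">
    <td class="item">hat</td><td class="qty">2</td>
    <td><a rel="cancel" href="/orders/100/cancel">cancel</a></td>
  </tr>
</table>
"""

# A machine client extracts the data by class, not by a custom schema:
row = ET.fromstring(order_list).find("tr")
item = row.find('td[@class="item"]').text
print(item)
```

The same representation serves both audiences: a human sees a table of
orders with a "cancel" link, and a machine reads the class attributes and
follows the rel="cancel" control.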
Mike, a most excellent response.. very well said, again I have a better understanding of all this thanks to you and Eric.
Here I thought I was implementing HATEOAS by providing links back in my response and a single entry point for all consumers to initiate their communication with my api.
I totally get the concept of an evolving api breaking clients. Because of that, it was my impression that providing versions of your api prevent that, at least to some degree. Clients using v1, continue to use v1 until they are ready to be written to use v2, or..they just stay on v1. Like most software that adheres to major version changes as major feature changes, a v2 would be something probably quite a bit different.. different response formats, perhaps... but by using links in the responses, I can, from my api, direct clients to new resources if need be. Perhaps the old /orders is no longer valid.. we have a /neworders to handle all kinds of new data. Since all the clients use the links elements to navigate based on the rel="" value, they shouldn't break, that I know of. Well.. to be fair, if the response format changes, then yes, they would break.. but if you keep the response format the same but modify the resources called via the links, then the
clients will "evolve" with the api automatically.
What I am having a hard time swallowing is providing an API that returns data as HTML. I think it's going to be hard to go out there and tell consumers, "here is my api, which returns HTML, but not for visual use.. you have to parse the HTML yourself, even though you're not a browser.. you're going to become a browser, basically, in order to use my api". This just seems hard to get acceptance or buy-in from potential consumers. I would imagine most consumers of apis are expecting xml or json, and to get back html and have to actually parse it.. at least to me, I'd be questioning why html, which as far as I knew was used to display data in a visual manner, is being sent back for me to parse to use the data in it.
--- On Fri, 12/3/10, mike amundsen <mamund@...> wrote:
From: mike amundsen <mamund@...>
Subject: Re: [rest-discuss] Link relations [was: A media type for case files, dossiers and documents]
To: "Kevin Duffey" <andjarnic@...>
Cc: "Eric J. Bowman" <eric@...>, "Rest Discussion List" <rest-discuss@yahoogroups.com>, juergen.brendel@mulesoft.com
Date: Friday, December 3, 2010, 2:08 AM
Kevin:
I've been doing quite a bit of work in the area of making decisions on how to code clients for Internet apps. Your comments about how XHTML seems inappropriate remind me of a set of decisions we all make (consciouslyor not) about implementing solutions for Internet apps. Here's a peek into one aspect of my current thinking on this. Hopefully it hits some of teh points to raise and provides some ideas on how you can approach your decision-making.
NOTE: I cover some of this in a talk and the slides (and C# code) for that talk are here:http://amundsen.com/talks/#beyond-web20
CONSIDERATIONSWhen coding clients for application-level protocols (HTTP) over distributed networks (i.e. the "Web"), these things (among others) must be taken into account:
1 - how does the client know all the addresses (URIs) that will be needed to execute operations? 2 - how does the client know how to properly construct specific requests (searches, filters, etc.) to send to the addresses?
3 - how does the client "understand" the responses returned from those requests?4 - how does the client know the order in which these actions (requests/responses) must take place (you can't create a new order until you create a new customer, you can't register more than ten pending orders per day, etc.).
You have two general approaches:- code these details into the client (non-hypermedia) and re-code the client when the details change or;
- code these details into the message (hypermedia) and reformat the message when the details change.
A NON-HYPERMEDIA APPROACH:1 - When coding the client application programmers will get a long list of URIs (from documentation) and hard-code them into the client application or encode the URI list in some static config file, etc. and make that available to the client code. It's possible that some URI construction rules can be used instead of a static list. Then programmers write code that knows how to execute the construction rules at runtime based on the state of the client, etc. The client application will also have some rules in code in order to associate each fixed/constructed URI w/ some "action" (get a user record, search for users added last week, add a new user, etc.) and the client code will select the proper address at runtime based on the state of the client, etc.
2 - When sending requests from the client to the server (the "actions" mentioned in #1 above), programmers will write code that knows the format details of the message (XML, JSON, CSV, etc), the layout details (XML elements named "email", "hat-size", etc.), which elements are required, optional, etc. Programmers will write code that, at runtime, associates client state with each of these "fields", populates the structures and sends them to the proper URI (from #1).
3 - Whenreceivingresponses, client applications know, ahead of time, what to format expect (XML, JSON, etc.), the exact layout of each response (elements and attributes, etc.), and how to render them visually for humans (or arrange the data returned in the proper memory "slots" for M2M apps).
4 - The client application will have all the rules for application flow hard coded. It will "know" that customers cannot have more than ten pending orders or that order detail lines can't be sent to the server before an order document is created, etc.
When using the this approach, changing any of those items over time (new addresses for new requests that return new responses in a newapplication-flow order) will require re-coding the client and re-deploying that new code to replace all the existing "old" client code.
A HYPERMEDIA-DRIVEN APPROACHUsing Fielding's REST style as a guide ("hypermedia as the engine of application state"):1 - The goal is to reduce the number of addresses to the fewest reasonably possible. One pre-established address is a nice goal - the "starting" address. After that, all other addresses are expected to come in the responses. XHTML has a built-in element for this data - the anchor (<a ... />) tag.
2 - The information about what fields to use when crafting a request are contained in the responses to requests, not hard-coded in the client application. XHTML has built-in elements for this, too. FORMs w/ INPUT, SELECT, and TEXTAREA elements. Clients know ahead of time how to handle each of these elements; they are universal for all types of requests (for users, customers, stores, orders, etc.). Also, the FORM element has the associated URI for this action when the clientreceivesthe response so there is no need to hard-code any other URIs in the client, either.
3 - The information about what fields & layouts to expect in responses and how to "render" them is also included in XHTML. Like the FORM elements, response elements are generic and of a limited set. Clients to do not need to know a set of specific data elements (<email />, <hat-size />, etc.) and when to expect them and how to render them. Instead, client code is written to know how to render the generic set of elements (DL, DT, DD, DIV, SPAN, TABLE, etc) in a response.
4 - The responses carry the "next possible steps" for the application flow. XHTML elements such as <a /> and <form /> will appear when it's appropriate (the response to create order will have links/controls to create order lines, once ten pending orders are created for a customer, the response will no longer in include a "create pending order" link, etc.).
When using the second approach, new addresses for new requests that return new responses w/ new app-flowdetailswill not require changing the client code. Because all that information is included in the responses; the media type (XHTML) has "affordances" for carrying that application control information (<a />, <form />, etc.) within the responses. XHTML has an advantage due to it's built-in hypermedia controls. XML and JSON do not have these.
CHOOSING WHICH APPROACH TO USENow, it may turn out that you are creating anapplicationthat:1) has only one address2) has only one request format3) has only one response format
4) has only one possible application flow
If that's the case, you don't need the advantages that a hypermedia-driven implementation affords; all that work may be overkill and waste. Using a non-hypermedia format (e.g. CSV, XML, JSON, etc.) and hard-coding the details in the client will work much better with less overall effort.
Or, you may have a small set of addresses, or a small set of request formats, or a small set of response formats, or a small set of app-flows. Now you need to think a bit more on whether your varying set of addresses, request and response formats, and app-flows are numerous enough to make it worth while to adopt a hypermedia-driven implementation or stick to hard-coding clients.
Or you may have an application where, even w/ a wide range of address, requests, responses, and app-flows, these values hardly change over the life of the application (days, weeks, years, etc.). Does it make sense to use a hypermedia-driven implementation if the formats never change?
Or you may be the only one writing the client. Just like it's often more effort to document a simple app than build it, using a hypermedia-driven implementation in order to never change the code that you yourself could write more quickly and efficiently anyway may be too much effort for the return.
So,,,_When_ you choose one approach over the other is entirely up to you based on your particular constraints (time, money, complexity of the app, variance of the app over time, etc.). But if you _do_ choose a hypermedia-driven approach,you'll want to use an existing hypermedia type (XHTML) or design and implement your own custom hypermedia type.
Hope that ramble helps.
mcahttp://amundsen.com/blog/
http://twitter.com@mamund
http://mamund.com/foaf.rdf#me
#RESTFest 2010
http://rest-fest.googlecode.com
On Fri, Dec 3, 2010 at 02:54, Kevin Duffey <andjarnic@...> wrote:
I get everything you are saying..finally thanks to a few of you that set me clear on this whole media-type issue.
I am however having a hard time thinking about telling clients that they basically need to parse html to use my API. I'd much rather say "for /orders, you get this chunk of xml back with these potential elements.. parse it to get the data you need". Or in JSON. As I use Java/JAX-RS with Jersey, it handles automatically turning my objects into either xml or json, whatever the Accept header specifies. Anyway, for my own learning, it is good to know what you said, and it does make sense. However, it seems odd to me to return things in HTML as opposed to xml or json, when it's just chunks.. that is, a user places 100 orders over 3 months, then comes in and asks to see a history of orders. I return an xml chunk with their 100 orders and related
info. That seems perfectly fine to provide in xml or json, allowing any client to parse the response as they see fit. I would obviously have some sort of api doc that would explain the response.
I guess what I am grappling with is that for the most part, I would suspect most services like the one I am messing around with to learn, would be used by specific clients, not anyone and everyone out on the web. More so, I don't see anyone needing to use my particular bits of data I return for their own use.. that is, if I were to register a media type that represents a generic ordering document, that might make sense, but in my case, if I am building up a REST api for my specific little service, it doesn't seem like returning HTML would make any more sense than returning xml or json. I certainly can see if I was building my own web site, where I have some javascript make ajax requests and I return a chunk of HTML instead of XML or JSON, so that my own
site consuming my API can benefit from having HTML directly, rather than xml or json then have to build up the html on the fly in the browser. But for say a mobile app that had a native client that allowed a user to log in and pull up their recent orders, a chunk of XML would fit well. HTML seems more difficult to have to parse and deal with.. at least the way I think. Again, if I were going to display it in a browser..maybe it's fine, but if I wanted to do something with the data before displaying it or maybe it's not a web browser at all, html seems out of place. That's just my opinion tho from the bits I've learned the past few days.
--- On Thu, 12/2/10, Eric J. Bowman <eric@...> wrote:
From: Eric J. Bowman <eric@...>
Subject: Re: [rest-discuss] Link relations [was: A media type for
case files, dossiers and documents]
To: "Kevin Duffey" <andjarnic@...>
Cc: "Rest Discussion List" <rest-discuss@yahoogroups.com>, juergen.brendel@...
Date: Thursday, December 2, 2010, 10:32 PM
Kevin Duffey wrote:
>
> Valid point. I am not sure what media type would fit then. I am
> trying to follow the HATEOS design, I have an entry point that
> returns some links based on credentials, from there a client would
> use those to make calls to any of my resources, and each response
> would return a relevant chunk of XML or JSON with links for each
> resource accessible at that point (for example GET /orders/id would
> return a specific order along with one or more links that can be used
> to operate on the order).
>
I've never seen an order-processing system that couldn't be modeled as
HTML. In fact, I've rarely seen an order-processing system that wasn't
HTML. In OOP terminology, the goal is to distribute not your objects,
but your object interfaces. REST says, make those object interfaces
uniform. Which means participants have a network-based shared
understanding of your state transitions (links, forms), IOW, a self-
documenting API.
It's perfectly acceptable to model your data as JSON or as XML (bearing
in mind that schemas are an orthogonal concern). The trick is to
create an HTML interface for either JSON or XML data, which instructs
user-agents how to interact with that data. I'd choose either JSON or
XML, instead of trying to do both, depending on whether you're more
comfortable transforming that data into HTML using Javascript or XSLT.
>
> So if I use application/xml, my API would not be considered truly
> RESTful?
>
No, not if you're using application/xml as the hypertext engine driving
application state. If it's just a snippet of XML which gets read by,
say, an HTML front-end driving application state, then it's OK because
the processing model (parse as XML, handling XInclude/XLink/rdf:about)
is adequate to the task. If that XML snippet contains URIs the user is
supposed to click on to transition the application to the next steady-
state (which aren't XLinks), well, that's what <a> and atom:link are
for, there's no corollary in application/xml (besides XLink).
Also, most order forms are simply tabular data, the semantics of which
don't exist in application/xml like they do in application/xhtml+xml or
text/html with <table>. Same with lists, same with forms.
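Eric's point about <table> semantics can be sketched concretely: a client that understands only the generic HTML table model can recover any order's tabular data, with no domain-specific schema. This is a rough illustration; the column names are hypothetical.

```python
# Generic table reader: the client knows <table>/<tr>/<td> semantics,
# not any application-specific element names like <email> or <hat-size>.
from html.parser import HTMLParser

class TableReader(HTMLParser):
    def __init__(self):
        super().__init__()
        self.rows, self._row, self._cell = [], None, None

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self._row = []
        elif tag in ("td", "th"):
            self._cell = ""

    def handle_data(self, data):
        if self._cell is not None:
            self._cell += data

    def handle_endtag(self, tag):
        if tag in ("td", "th"):
            self._row.append(self._cell.strip())
            self._cell = None
        elif tag == "tr":
            self.rows.append(self._row)
            self._row = None

reader = TableReader()
reader.feed("<table><tr><th>item</th><th>qty</th></tr>"
            "<tr><td>widget</td><td>3</td></tr></table>")
header, *data = reader.rows
print(dict(zip(header, data[0])))   # {'item': 'widget', 'qty': '3'}
```

A bare application/xml payload offers no such shared processing model; every consumer would need out-of-band documentation to know which elements form the rows and columns.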
>
> So now I'll ask, what media type I could possibly use with my own
> xml/json structure? It almost sounds like you're saying I shouldn't
> be returning my own made up structure, that I should instead use an
> existing media type, like one with xhtml or something. Is there a
> media type that allows for any sort of specific format to a domain to
> be returned? Or does that now fall into a case where I should create
> my own media type and register it with IANA?
>
It falls into a case where you should refactor. You have tabular data,
so you need to choose a data type which expresses such semantics (i.e.
HTML, or DocBook). The whole point of media types is that they are
_not_ domain-specific, but rather, represent a shared understanding of
a processing model at the network (IP) layer. This is the fundamental
tradeoff of the style:
"[A] uniform interface degrades efficiency, since information is
transferred in a standardized form rather than one which is specific to
an application's needs."
An order consists of item numbers, descriptions, quantity, unit price
and total price. You *could* re-invent the <table> wheel and register
it as a new media type, but it's more scalable (maintainable, portable)
to re-use HTML even if it isn't a precise fit. If you create a new
media type, then you need to distribute a custom user-agent. When you
upgrade your API, you must simultaneously update that user-agent.
The success of the Web is due to the common user-agent. What I really
don't want, is for any system I interact with to require me to install
yet another piece of software, and keep it up-to-date. That's coupling.
So much easier for everyone concerned, to target the browser. That way,
I only need to install and maintain one user-agent regardless of how
many different systems I interact with. Such decoupling allows clients
and servers to evolve independently. So there is a cost associated
with the minting of new media types -- coupling -- unless and until the
new media type achieves significant uptake.
-Eric
+1 to Scott's response
--- On Fri, 12/3/10, Scott Banwart <sbanwart@gmail.com> wrote:
From: Scott Banwart <sbanwart@...>
Subject: Re: [rest-discuss] Link relations [was: A media type for case files, dossiers and documents]
To: "Rest Discussion List" <rest-discuss@yahoogroups.com>
Date: Friday, December 3, 2010, 5:24 AM
On Fri, Dec 3, 2010 at 6:46 AM, Eric J. Bowman <eric@...> wrote:
Eb wrote:
>
> >
> > Doesn't matter. Nobody coding a consumer for your API will
> > understand a custom media type without training. Whereas if you
> > express your API as HTML, you don't have this problem; anyone will
> > be able to understand it provided they understand HTML (a safe
> > assumption), and you won't need any custom media types.
> >
>
> I concur but even when HTML is used, there is still some training
> needed, right? If the returned consists of multiple links (and not
> just one state transition), how does my client know based on some
> other rules what "link" to take without some training?
>
If the user is human, the link text imparts this knowledge. If the
user is a machine, the link relation does, i.e. rel='next'.
-Eric
This is where my understanding of REST breaks down. For an M2M scenario, a machine still has to be told what rel='next' means. Whether you use a custom media type or HTML, if you add a new link type or change the format of the response, the client will still need to be changed to incorporate knowledge about these changes.
I understand that a human user can read the text and react intelligently, it's the autonomous machine client where my understanding fails.
--
Scott Banwart
sbanwart@...
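Eric's rel='next' answer can be sketched for the M2M case: the client is coded once against the relation name (part of the shared media-type understanding), never against any particular URI, so the server can restructure its URI space freely. Scott's caveat still holds: the relation vocabulary itself is shared knowledge the client must be taught up front. The page contents below are hypothetical stand-ins for fetched responses.

```python
# Machine client that follows rel="next" links; URIs are discovered,
# not hard-coded. The regex is a crude sketch, not a real HTML parser.
import re

# Stand-in for three server responses; in practice these would be
# fetched over HTTP.
pages = {
    "/orders?page=1": '<a rel="next" href="/orders?page=2">more</a>',
    "/orders?page=2": '<a rel="next" href="/orders?page=3">more</a>',
    "/orders?page=3": "<p>no more orders</p>",
}

def next_link(html):
    """Return the href of the rel='next' anchor, if any."""
    m = re.search(r'<a rel="next" href="([^"]+)"', html)
    return m.group(1) if m else None

visited, uri = [], "/orders?page=1"
while uri:
    visited.append(uri)
    uri = next_link(pages[uri])

print(visited)   # the client walked all three pages knowing only one URI
```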
Kevin:
Glad the post helped.
<snip>
What I am having a hard time swallowing is providing an API that returns
data as HTML. I think it's going to be hard to go out there and tell
consumers, "here is my api, which returns HTML, but not for visual use.. you
have to parse the HTML yourself, even though your not a browser.. you're
going to become a browser basically, in order to use my api". This just
seems hard to get acceptance or by in by potential consumers. I would
imagine most consumers of apis are expecting xml or json, and to get back
html, and have to actually parse it at least to me I'd be questioning why
html, which as far as I knew was used to display data in a visual manner, is
being sent back for me to parse to use the data in it.
</snip>
If you don't like [X]HTML, you don't have to use it. You'll note in my
slides (from the previous post), I don't use XHTML; I use a custom media
type built using XML as the format; it could have been JSON, or some other
format, too. The value of XHTML is not in DIV, SPAN, etc.; it's in A, FORM,
etc. Use XForms and XInclude and you get the same basic functionality.
Don't like XForms? Design your own hypermedia controls to handle navigation
links, query templates, and idempotent & non-idempotent "send" operations,
and you have all the application controls you need.
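As a rough sketch of that "design your own hypermedia controls" option: a custom XML format whose link elements carry navigation and whose query element carries a URI template. The element names, attributes, and URIs here are invented for illustration and are not taken from any registered media type.

```python
# Hypothetical custom XML media type with its own hypermedia controls:
# <link> for navigation/send operations, <query> for templated searches.
import xml.etree.ElementTree as ET

doc = ET.fromstring("""
<orders>
  <link rel="self" href="/orders" />
  <link rel="create" href="/orders" method="POST" />
  <query rel="search" template="/orders?since={date}" />
</orders>
""")

# Index the controls by relation name, just as a browser indexes <a>/<form>.
controls = {e.get("rel"): e for e in doc if e.tag in ("link", "query")}

print(controls["create"].get("method"))   # POST
print(controls["search"].get("template").format(date="2010-12-03"))
```

The client code above depends only on the control vocabulary (`link`, `query`, `rel`), which is exactly the part a custom media type specification would have to document.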
Fielding himself stated that hypermedia does not require HTML:
"Hypertext does not need to be HTML on a browser; machines can follow links
when they understand the data format and relationship types"[1]
You see lots of talk about XHTML here because it is well-known/understood,
and has all the "bits" needed in a media type to support hypermedia-driven
applications in the style Roy calls REST. This is the same reason you see
lots of talk here about HTTP: it has all the bits needed in an
app-level protocol to support the REST style. Neither is _required_; they
are just very well-suited for Roy's style.
Hope that clears things up.
[1]http://www.slideshare.net/royfielding/a-little-rest-and-relaxation (see
slide #50)
mca
http://amundsen.com/blog/
http://twitter.com@mamund
http://mamund.com/foaf.rdf#me
#RESTFest 2010
http://rest-fest.googlecode.com
On Fri, Dec 3, 2010 at 10:27, Kevin Duffey <andjarnic@...> wrote:
>
>
> Mike, a most excellent response.. very well said, again I have a better
> understanding of all this thanks to you and Eric.
>
> Here I thought I was implementing HATEOAS by providing links back in my
> response and a single entry point for all consumers to initiate their
> communication with my api.
>
> I totally get the concept of an evolving api breaking clients. Because of
> that, it was my impression that providing versions of your api prevent that,
> at least to some degree. Clients using v1, continue to use v1 until they are
> ready to be written to use v2, or..they just stay on v1. Like most software
> that adheres to major version changes as major feature changes, a v2 would
> be something probably quite a bit different.. different response formats,
> perhaps... but by using links in the responses, I can, from my api, direct
> clients to new resources if need be. Perhaps the old /orders is no longer
> valid.. we have a /neworders to handle all kinds of new data. Since all the
> clients use the links elements to navigate based on the rel="" value, they
> shouldn't break, that I know of. Well.. to be fair, if the response format
> changes, then yes, they would break.. but if you keep the response format
> the same but modify the resources called via the links, then the clients
> will "evolve" with the api automatically.
>
> What I am having a hard time swallowing is providing an API that returns
> data as HTML. I think it's going to be hard to go out there and tell
> consumers, "here is my api, which returns HTML, but not for visual use.. you
> have to parse the HTML yourself, even though your not a browser.. you're
> going to become a browser basically, in order to use my api". This just
> seems hard to get acceptance or by in by potential consumers. I would
> imagine most consumers of apis are expecting xml or json, and to get back
> html, and have to actually parse it at least to me I'd be questioning why
> html, which as far as I knew was used to display data in a visual manner, is
> being sent back for me to parse to use the data in it.
>
>
>
>
> --- On *Fri, 12/3/10, mike amundsen <mamund@...>* wrote:
>
>
> From: mike amundsen <mamund@...>
> Subject: Re: [rest-discuss] Link relations [was: A media type for case
> files, dossiers and documents]
> To: "Kevin Duffey" <andjarnic@...>
> Cc: "Eric J. Bowman" <eric@...>, "Rest Discussion List" <
> rest-discuss@yahoogroups.com>, juergen.brendel@...
> Date: Friday, December 3, 2010, 2:08 AM
>
>
> Kevin:
>
> I've been doing quite a bit of work in the area of making decisions on how
> to code clients for Internet apps. Your comments about how XHTML seems
> inappropriate remind me of a set of decisions we all make (consciously or
> not) about implementing solutions for Internet apps. Here's a peek into one
> aspect of my current thinking on this. Hopefully it hits some of teh points
> to raise and provides some ideas on how you can approach your
> decision-making.
>
> NOTE: I cover some of this in a talk and the slides (and C# code) for that
> talk are here:http://amundsen.com/talks/#beyond-web20
>
> CONSIDERATIONS
> When coding clients for application-level protocols (HTTP) over distributed
> networks (i.e. the "Web"), these things (among others) must be taken into
> account:
> 1 - how does the client know all the addresses (URIs) that will be needed
> to execute operations?
> 2 - how does the client know how to properly construct specific requests
> (searches, filters, etc.) to send to the addresses?
> 3 - how does the client "understand" the responses returned from those
> requests?
> 4 - how does the client know the order in which these actions
> (requests/responses) must take place (you can't create a new order until you
> create a new customer, you can't register more than ten pending orders per
> day, etc.).
>
> You have two general approaches:
> - code these details into the client (non-hypermedia) and re-code the
> client when the details change or;
> - code these details into the message (hypermedia) and reformat the message
> when the details change.
>
> A NON-HYPERMEDIA APPROACH:
> 1 - When coding the client application programmers will get a long list of
> URIs (from documentation) and hard-code them into the client application or
> encode the URI list in some static config file, etc. and make that available
> to the client code. It's possible that some URI construction rules can be
> used instead of a static list. Then programmers write code that knows how to
> execute the construction rules at runtime based on the state of the client,
> etc. The client application will also have some rules in code in order to
> associate each fixed/constructed URI w/ some "action" (get a user record,
> search for users added last week, add a new user, etc.) and the client code
> will select the proper address at runtime based on the state of the client,
> etc.
>
> 2 - When sending requests from the client to the server (the "actions"
> mentioned in #1 above), programmers will write code that knows the format
> details of the message (XML, JSON, CSV, etc), the layout details (XML
> elements named "email", "hat-size", etc.), which elements are required,
> optional, etc. Programmers will write code that, at runtime, associates
> client state with each of these "fields", populates the structures and sends
> them to the proper URI (from #1).
>
> 3 - When receiving responses, client applications know, ahead of time, what
> to format expect (XML, JSON, etc.), the exact layout of each response
> (elements and attributes, etc.), and how to render them visually for humans
> (or arrange the data returned in the proper memory "slots" for M2M apps).
>
> 4 - The client application will have all the rules for application flow
> hard coded. It will "know" that customers cannot have more than ten pending
> orders or that order detail lines can't be sent to the server before an
> order document is created, etc.
>
> When using the this approach, changing any of those items over time (new
> addresses for new requests that return new responses in a
> new application-flow order) will require re-coding the client and
> re-deploying that new code to replace all the existing "old" client code.
>
> A HYPERMEDIA-DRIVEN APPROACH
> Using Fielding's REST style as a guide ("hypermedia as the engine of
> application state"):
> 1 - The goal is to reduce the number of addresses to the fewest reasonably
> possible. One pre-established address is a nice goal - the "starting"
> address. After that, all other addresses are expected to come in the
> responses. XHTML has a built-in element for this data - the anchor (<a ...
> />) tag.
>
> 2 - The information about what fields to use when crafting a request are
> contained in the responses to requests, not hard-coded in the client
> application. XHTML has built-in elements for this, too. FORMs w/ INPUT,
> SELECT, and TEXTAREA elements. Clients know ahead of time how to handle each
> of these elements; they are universal for all types of requests (for users,
> customers, stores, orders, etc.). Also, the FORM element has the associated
> URI for this action when the client receives the response so there is no
> need to hard-code any other URIs in the client, either.
>
> 3 - The information about what fields & layouts to expect in responses and
> how to "render" them is also included in XHTML. Like the FORM elements,
> response elements are generic and of a limited set. Clients to do not need
> to know a set of specific data elements (<email />, <hat-size />, etc.) and
> when to expect them and how to render them. Instead, client code is written
> to know how to render the generic set of elements (DL, DT, DD, DIV, SPAN,
> TABLE, etc) in a response.
>
> 4 - The responses carry the "next possible steps" for the application flow.
> XHTML elements such as <a /> and <form /> will appear when it's appropriate
> (the response to create order will have links/controls to create order
> lines, once ten pending orders are created for a customer, the response will
> no longer in include a "create pending order" link, etc.).
>
> When using the second approach, new addresses for new requests that return
> new responses w/ new app-flow details will not require changing the client
> code. Because all that information is included in the responses; the media
> type (XHTML) has "affordances" for carrying that application control
> information (<a />, <form />, etc.) within the responses. XHTML has an
> advantage due to it's built-in hypermedia controls. XML and JSON do not have
> these.
>
> CHOOSING WHICH APPROACH TO USE
> Now, it may turn out that you are creating an application that:
> 1) has only one address
> 2) has only one request format
> 3) has only one response format
> 4) has only one possible application flow
>
> If that's the case, you don't need the advantages that a hypermedia-driven
> implementation affords; all that work may be overkill and waste. Using a
> non-hypermedia format (e.g. CSV, XML, JSON, etc.) and hard-coding the
> details in the client will work much better with less overall effort.
>
> Or, you may have a small set of addresses, or a small set of request
> formats, or a small set of response formats, or a small set of app-flows.
> Now you need to think a bit more on whether your varying set of addresses,
> request and response formats, and app-flows are numerous enough to make it
> worth while to adopt a hypermedia-driven implementation or stick to
> hard-coding clients.
>
> Or you may have an application where, even w/ a wide range of address,
> requests, responses, and app-flows, these values hardly change over the life
> of the application (days, weeks, years, etc.). Does it make sense to use a
> hypermedia-driven implementation if the formats never change?
>
> Or you may be the only one writing the client. Just like it's often more
> effort to document a simple app than build it, using a hypermedia-driven
> implementation in order to never change the code that you yourself could
> write more quickly and efficiently anyway may be too much effort for the
> return.
>
> So,,,
> _When_ you choose one approach over the other is entirely up to you based
> on your particular constraints (time, money, complexity of the app, variance
> of the app over time, etc.). But if you _do_ choose a hypermedia-driven
> approach,you'll want to use an existing hypermedia type (XHTML) or design
> and implement your own custom hypermedia type.
>
> Hope that ramble helps.
>
> mca
> http://amundsen.com/blog/
> http://twitter.com@mamund
> http://mamund.com/foaf.rdf#me
>
>
> #RESTFest 2010
> http://rest-fest.googlecode.com
Kevin Duffey wrote:
> What I am having a hard time swallowing is providing an
> API that returns data as HTML. I think it's going to be
> hard to go out there and tell consumers, "here is my api,
> which returns HTML, but not for visual use.. you have to
> parse the HTML yourself, even though your not a browser..
> you're going to become a browser basically, in order to
> use my api". This just seems hard to get acceptance or by
> in by potential consumers. I would imagine most consumers
> of apis are expecting xml or json, and to get back html,
> and have to actually parse it at least to me I'd be
> questioning why html, which as far as I knew was used
> to display data in a visual manner, is being sent back
> for me to parse to use the data in it.
Depending on who your consumers are, may I humbly recommend giving Shoji [1] a try. It provides typed link relations in a JSON format just like you're asking for.
Robert Brewer
fumanchu@...
[1] http://www.aminus.org/rbre/shoji/shoji-draft-02.txt
--- In rest-discuss@yahoogroups.com, Sven Dowideit <SvenDowideit@...> wrote:
>
> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA1
>
> I was wondering about this last night after coming across RMM, and in
> the process trying to apply it to the REST API i'm developing for foswiki
>
> And given that I want to fully support all resource media types that are
> present in the wiki (and you can attach anything - image, word doc,
> executable, whatever) that makes adding xml into the payload pretty
> completely and utterly useless.
>
> so, given REST media-types that force you to not corrupt the payload
> with non-resource related information, we're left with 2 other choices:
>
> 1 toss it in the http header
> 2 use a separate verb.
>
> personally, I'm leaning towards using the OPTIONS verb - that way you
> could enable all resources to be asked about their API, and it could be
> possible to answer in a number of media-types, code, text, or
> specification language.
>
> Can you guys please help me learn more by blowing holes in the idea?
>
> Sven
>
Have you considered another resource that tells you things about the non-hypermedia resource? e.g., an atom feed of the resources, where each entry tells you how it is linked. Actually, in this model, the entries are the "key" resources and your "content" resources are linked from the entries. Even if you don't use Atom, you could use a model like this. Arguably, an HTML-based wiki or content repository uses this model as well.
Andrew
--- In rest-discuss@yahoogroups.com, Scott Banwart <sbanwart@...> wrote:
>
> On Fri, Dec 3, 2010 at 6:46 AM, Eric J. Bowman <eric@...>wrote:
>
> >
> >
> > Eb wrote:
> > >
> > > >
> > > > Doesn't matter. Nobody coding a consumer for your API will
> > > > understand a custom media type without training. Whereas if you
> > > > express your API as HTML, you don't have this problem; anyone will
> > > > be able to understand it provided they understand HTML (a safe
> > > > assumption), and you won't need any custom media types.
> > > >
> > >
> > > I concur but even when HTML is used, there is still some training
> > > needed, right? If the returned consists of multiple links (and not
> > > just one state transition), how does my client know based on some
> > > other rules what "link" to take without some training?
> > >
> >
> > If the user is human, the link text imparts this knowledge. If the
> > user is a machine, the link relation does, i.e. rel='next'.
> >
> > -Eric
> >
> >
> >
> This is where my understanding of REST breaks down. For an M2M scenario, a
> machine still has to be told what rel='next' means. Whether you use a custom
> media type or HTML, if you add a new link type or change the format of the
> response, the client will still need to be changed to incorporate knowledge
> about these changes.
>
> I understand that a human user can read the text and react intelligently,
> it's the autonomous machine client where my understanding fails.
>
>
> --
> Scott Banwart
> sbanwart@...
>
You are absolutely correct. So don't change the representation format or add link relations once you have deployed clients. Note that adding format extensions, including new relations, is a change to the media type. Now, it can be done in a backwards-compatible way, so that old clients can ignore the extensions and still use the info they do understand. But there is no magic way to get an old client to understand the new extensions to your format.
You might be saying: "wait a second -- I thought using REST allowed my service to evolve without changing the client by keeping them decoupled?"
The answer is: "it only does this if a representation format is used that allows this."
The best way to do this in my opinion is to design the representation format around the client(s) and not the services.
In my opinion:
HTML is all about browsers and how they see your data. Your social media app's internal data structures have nothing to do with divs, paragraphs, etc. You convert your data into the browser's data schema so it can work with your data to present it to the user.
If you design a format that can represent things in the way the client sees things then when your internal data structures evolve, you just figure out a way to map them to the client format. The client doesn't need to change.
Here's a toy example: Say your client only cares about the size and shape of things. Your client-oriented format represents everything this way:
{
  "name": "apple",
  "size": "small",
  "shape": "sphere"
}
{
  "name": "building",
  "size": "big",
  "shape": "rectangular"
}
Any new data can be appropriately represented for that type of client as it is represented in the terms that the client cares about. Your service can evolve without changing the client.
When you get a new type of client in the mix, you can design a new type of format around that client and use content negotiation to figure out what format should be used for a specific request.
I'm oversimplifying here, but hopefully it gets the point across.
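As a concrete sketch of this idea (entirely hypothetical; every field name below is invented for illustration, not taken from any real service): the server owns a mapping from its internal structures onto the stable client-oriented format, so the internal fields can change freely as long as the mapping is updated.

```python
def to_client_format(thing):
    """Render an internal object in the terms the client cares about:
    name, size, and shape only."""
    return {
        "name": thing["label"],                            # internal field names can change...
        "size": "big" if thing["volume_m3"] > 10 else "small",
        "shape": thing["geometry"],                        # ...as long as this mapping keeps up
    }

apple = {"label": "apple", "volume_m3": 0.0001, "geometry": "sphere"}
building = {"label": "building", "volume_m3": 50000, "geometry": "rectangular"}

# The client never sees volume_m3 or label; it only ever sees the
# format it was designed around.
print(to_client_format(apple))
print(to_client_format(building))
```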
Read section 5.2.1 of Roy's thesis and really think about what the difference is between "Option 3" and REST.
I've discussed this quite a bit on my blog:
http://linkednotbound.net/2010/06/09/hypermedia-is-the-clients-lens/
http://linkednotbound.net/2010/07/19/self-descriptive-hypermedia/
Regards,
Andrew
On Fri, Dec 3, 2010 at 1:24 PM, Scott Banwart <sbanwart@...> wrote:
> [snip]
> I understand that a human user can read the text and react intelligently,
> it's the autonomous machine client where my understanding fails.

This is why I believe a decent m2m type should focus exclusively on the general mechanics of m2m hypertext (i.e. stuff like outbound links, embedded links/state, query URLs, etc.) and not concern itself with a specific hypertext application. Which might explain why Atom isn't the HTML of m2m.

Cheers,
Mike
As Andrew says, one option is to use another resource. This is the common solution, e.g. for Atom. The alternatives you may want to consider are:

(a) the HTTP Link header (RFC 5988), which lets you add links to anything;
(b) most document formats have some way of adding metadata (but alas not all of them), so you can use these where available.

Link headers are, I think, quite compelling for this type of application, as one whole level of indirection is removed.

Justin

On 3 Dec 2010, at 16:30, wahbedahbe wrote:
> [snip]
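For illustration, here is a deliberately simplified sketch of parsing an RFC 5988 Link header in Python. It handles only the target URI and the rel parameter (no q-values, no quoted commas), so it is a sketch rather than a compliant implementation:

```python
import re

def parse_link_header(value):
    """Parse an RFC 5988 Link header value into (target, rel) pairs.
    Simplified: only the <target> and rel parameter are extracted."""
    links = []
    for part in value.split(","):
        m = re.search(r'<([^>]*)>\s*;\s*rel="?([^";]+)"?', part)
        if m:
            links.append((m.group(1), m.group(2)))
    return links

header = '</feed?page=2>; rel="next", </feed?page=1>; rel="prev"'
print(parse_link_header(header))
# [('/feed?page=2', 'next'), ('/feed?page=1', 'prev')]
```

Because the links travel in the header, they can be attached to any representation at all, including binary attachments like images or word docs.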
Hello Scott,
On Fri, 2010-12-03 at 08:24 -0500, Scott Banwart wrote:
> This is where my understanding of REST breaks down. For an M2M
> scenario, a machine still has to be told what rel='next' means.
> Whether you use a custom media type or HTML, if you add a new link
> type or change the format of the response, the client will still need
> to be changed to incorporate knowledge about these changes.
>
> I understand that a human user can read the text and react
> intelligently, it's the autonomous machine client where my
> understanding fails.
I was wondering about the same, until I found out that the link
relationship is actually also standardized. There's a place where you
can suggest new ones even. I found a list here:
http://microformats.org/wiki/existing-rel-values
I think a link to another site was posted here on this list a few days
ago.
With that in mind, you can use the 'rel' of a link in your m2m client to
figure out which link to use.
Juergen
--
Juergen Brendel
MuleSoft
On Fri, Dec 3, 2010 at 11:15 AM, Juergen Brendel <juergen.brendel@...> wrote:
> With that in mind, you can use the 'rel' of a link in your m2m client to
> figure out which link to use.

That is true, but only to a limited extent. Consider the first relation in the list you pointed to, "acquaintance". If I am designing a system that has people, I can use this rel in representations of single users. However, that rel cannot be used in a representation of a resource that (conceptually) contains multiple people (e.g., a search result resource). That means an automaton client that wants to find all the acquaintances of people with a particular first name will have a very difficult time doing so. At the very least it will require an extra request per person in the result set.

Link relations are occasionally useful, but they are by no means an m2m silver bullet.

Peter
barelyenough.org
I was having this same discussion about how you provide the api doc for consumers of the api to a friend. Originally my initial thought was something like:
USERS
URI: /users
METHOD: GET
response: 200 ok with body of xml or json (eg:
<users>
  <user>
    <id>100</id>
    <name>joe</name>
    <link rel="delete" href=".../users/100"/>
    <link rel="update" href=".../users/100"/>
  </user>
  <user>
    <id>200</id>
    <name>john</name>
    <link rel="delete" href=".../users/200"/>
    <link rel="update" href=".../users/200"/>
  </user>
</users>
... blah blah ...
Basically, the above would indicate the URIs to call to get users, but would indicate the links would return what you can do on each user next.
After the dialog in this thread, and from some past threads, rather than a doc that indicates the URIs and such, I'd rather list the REL values that my document responds to, and instruct the consumer to use the href value as is. Thus, trust that the links the API returns will always work. Of course things like caching and such would be included as appropriate. But rel values alone wouldn't provide very much info, so I would still structure it in some way: a section for Users, a section for Orders, a section for Sellers, a section for Bids, and so on. In each section I could list the outcome of a GET so there is an example in the doc that shows the xml snippet that would be returned for parsing to pluck data out of, how to format the xml to post a new item, or put an update, and so forth.
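A sketch of what this "trust the links" approach looks like on the client side, using Python's ElementTree against a document shaped like the example above (the paths here are illustrative, standing in for whatever hrefs the server actually returns):

```python
import xml.etree.ElementTree as ET

doc = """\
<users>
  <user>
    <id>100</id>
    <name>joe</name>
    <link rel="delete" href="/users/100"/>
    <link rel="update" href="/users/100"/>
  </user>
</users>"""

def link_for(user_elem, rel):
    """Find the href to follow for a given link relation, instead of
    hardcoding the URI structure into the client."""
    for link in user_elem.findall("link"):
        if link.get("rel") == rel:
            return link.get("href")
    return None

root = ET.fromstring(doc)
joe = root.find("user")
print(link_for(joe, "update"))  # /users/100
```

The client knows only the rel vocabulary; the server is free to restructure its URI space without breaking anyone.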
After a few of the posts in response to my questions, I am almost inclined to use the xhtml media type... but I am still stuck on how I explain to, and perhaps entice, potential consumers of my API why it returns HTML tags instead of direct xml or json.. that it allows my api to evolve and not break them and all that. The problem with this, though, as some others have said, is that there still has to be some sort of client-side knowledge of the actual representation coming back. Maybe not the links and what can be done next to a given resource... but most consumers won't be rendering a web page; they'll want to use that data in some way. Perhaps they are a front end to an old mainframe and have to take the data that comes back and transform it to the old mainframe format. I am stuck on how one explains that they are basically writing a web-browser parsing engine in order to use data, even if they have no care to display it in a browser. It seems to me most client developers would think "why am I going to write a parser to handle html.. browsers already do this, I just want specific data, not have to look through various html elements".

That to me is the hardest sell on this.. and I realize a few of you have responded and said you don't have to use xhtml, it's just well understood and handles HATEOAS.. that I could create my own media type, register it and all that. I am just voicing my thought on the use of xhtml as the way a Restful api would return data content that most likely is not going to be rendered in a browser for display, but used in some other manner. To me that is what XML is for.. and perhaps that is because it's ingrained in so many of us that XML is for that purpose that it's hard to get around that for me.
Peter,
I think this is what some of the other responses have been about.. if you use a media-type like xhtml, then any client will know what all elements returned mean. However, that still leads to the question another poster had made.. it's not guaranteed that a consumer of the api will always work. If anything it will ignore rel="" values that it was not written to handle. So if the API evolves to include new rel="" types, then only new consumers that know about those ahead of time will know what they mean and whether or not they should follow the href link or not. That is what I think is confusing to many. More so, I can provide my own non-known custom XML chunk back, complete with atom:link or my own link format, with rel/href/type attributes just as well as xhtml can with an <a> link. So even if an API uses XHTML as the media type, the <a> links (anchors) are not going to guarantee any client consuming that api that they will evolve with additional rel=""
types. Probably more important is that they should not break. If the API keeps all the existing links, but adds new ones, then only new consumers that are written with the knowledge of the new API version/links, will make use of them.
Hence why, still, I am having a hard time with the idea of using xhtml as a response type for my data elements over just my own custom xml format with links embedded in it.
Here is my underlying concern. A while ago, on this forum or maybe it was the Jersey forum.. can't remember, a few people were asking whether their API would be considered a true RESTful API. Roy and some others said "nope" because it didn't follow the HATEOAS principle to some degree. My concern is, not just for me but anyone that is working on some sort of API that they hope will catch on and provide business to them, that someone like Roy or another well-known figure comes along and writes a blog about their service and says it's not RESTful. Somehow, that's like Steve Jobs saying Java is bad and now a crapload of Apple people think Java is bad. Hopefully not many with our technical knowledge will read something like that and just assume a given API being blogged about is no good now.. because it's not 100% RESTful according to Roy or some others. But as much as I can, I want the knowledgeable folks replying to this thread, and others like Roy and such, to look at any API I produce and say "that is truly RESTful." So, at least for me, that is why I am messing around with this on the side, making a toy bid site, trying to see if I can provide a truly RESTful api that clients can consume, that evolves without breaking clients, and follows HATEOAS. Thus my struggle with the notion of using XHTML instead of my own XML. Ugh!
Kevin Duffey wrote:
> [snip]
> Hence why, still, I am having a hard time with the idea of using xhtml as
> a response type for my data elements over just my own custom xml format
> with links embedded in it.

A good media type like XHTML constrains the format of the message (the representation of it) without constraining the actual message which is encoded.

Often the problem is in tying down the definition of "client" or "user agent". When you have a media type like XHTML there are many different layers of client and user agent: you have one which handles the protocol, then another which handles the media type, and then an unbounded set of agents which consult the message encoded within the serialization and act on different parts of it. Sometimes they need to understand the whole message, but most of the time they only need to understand part of the message, the part which answers the question they have been asked.

Link relations are not exclusively part of the media type. Sure, the media type can depend on a core set of relations like "edit" and "delete" to handle certain media type semantics, but the set of link relations can be unbounded, as the rest apply to the message which is encoded, and not to the media type. This allows agents which handle the display of a message to look for rel="stylesheet", whilst custom agents/scripts may be looking for rel="acquaintance" in one scenario or rel="mother" in another. Similarly, the rels themselves aren't dumb tokens; they are mapped to full URIs which are universal identifiers for the relationship type.

So the clients don't need to be upgraded (depending on your definition of client, that is), because you have one client which understands the media type and its semantics, then others which understand different kinds and parts of the messages encoded.

Make sense?

Best,

Nathan
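One of those upper-layer agents can be sketched very simply. The following hypothetical Python example parses a whole XHTML message but acts only on the single link relation it was asked about, ignoring everything else (the page content and rel names are invented for illustration):

```python
from html.parser import HTMLParser

class RelScanner(HTMLParser):
    """A layered-agent sketch: walk the whole document, but collect only
    the links carrying the one relation this agent cares about."""
    def __init__(self, wanted_rel):
        super().__init__()
        self.wanted_rel = wanted_rel
        self.hrefs = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag in ("a", "link") and a.get("rel") == self.wanted_rel and "href" in a:
            self.hrefs.append(a["href"])

page = ('<html><head><link rel="stylesheet" href="/s.css"/></head>'
        '<body><a rel="acquaintance" href="/people/42">Bob</a>'
        '<a rel="next" href="/page/2">more</a></body></html>')

scanner = RelScanner("acquaintance")
scanner.feed(page)
print(scanner.hrefs)  # ['/people/42']
```

A display agent fed the same page would instead pick out rel="stylesheet"; neither needs to know the other's vocabulary.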
One major advantage of using XHTML as your media type is the fact that programmatic clients and humans with browsers can make use of the API. Your QA and Ops folks can do everything manually that the programmatic clients can do, without your having to add a separate management interface. This is also useful for adoption: developers can "surf the API" (as my colleague termed it) to learn how it works.
Those are pretty powerful arguments to make to the business stakeholders of your API ("if we do it this way, we don't have to do this other stuff" and "if we do it this way it will make adoption of the product easier").
Jon
........
Jon Moore
Comcast Interactive Media
Kevin Duffey wrote:
> [snip]
> I am just voicing my thought on the
> use of xhtml as the way a Restful api would return data content that
> most likely is not going to be rendered in a browser for display, but
> used in some other manner. To me that is what XML is for..

Code on Demand is used, from HTML, to read in your JSON or XML data. It is not required that the HTML be consumed. Stop thinking about HTML as something only meant to render documents in browsers; think of it instead as a standardized set of semantics for describing hypertext APIs. It doesn't necessarily have anything to do with browsers, but its presence allows the API to be learned/debugged using a browser, and doesn't prevent anyone from interacting exclusively with an underlying data type.

-Eric
On Fri, Dec 3, 2010 at 12:03 PM, Kevin Duffey <andjarnic@...> wrote:
> [snip]
> I think this is what some of the other responses have been about.. if you
> use a media-type like xhtml, then any client will know what all elements
> returned mean.

Do you mean, for example, that the server returns an `application/xhtml+xml` response to a client expecting a search-results response, and the client knows that in that type of response each div with a class of "person" constitutes a person that can have acquaintances? If you do mean that, then you have a real problem. Clients should not need out-of-band information to interpret the messages they receive. The `content-type` header is designed to convey all the information needed to interpret the message, if you use `content-type` correctly.

I like using xhtml. However, most of the m2m situations I encounter require the introduction of a new media type. That media type will often be xhtml but with some additional rules applied (e.g., that every div with a class of "person" can be interpreted as a person). Making the latent semantics explicit with a specific media type id allows for more graceful evolution of the clients and servers.

Using xhtml as the basis for your m2m interactions reduces duplication. The `application/xhtml+xml` and `application/vnd.myapp` representations are usually identical. The vendor media type merely informs automaton clients that there are some application-specific semantics to the document they are receiving. However, if you think selling xhtml to your consumers as a service format will be hard, just use xml or json. There is nothing wrong with that as long as you register an id for your new media type.

Peter
barelyenough.org
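A rough sketch of the server-side choice this implies, assuming a made-up vendor type `application/vnd.myapp+xml` (real Accept parsing with q-values per the HTTP spec is more involved than this):

```python
def pick_media_type(accept_header):
    """Choose the response media type from the client's Accept header.
    Prefer the vendor type for automaton clients that ask for it;
    fall back to XHTML for browsers and unknown clients."""
    offered = ["application/vnd.myapp+xml", "application/xhtml+xml"]
    for media_type in offered:
        if media_type in accept_header:
            return media_type
    return "application/xhtml+xml"

print(pick_media_type("application/vnd.myapp+xml"))            # automaton client
print(pick_media_type("text/html,application/xhtml+xml,*/*"))  # browser
```

Both responses can carry the same bytes; only the label (and thus the extra semantics the client may assume) differs.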
Where does an autonomous client get its knowledge of the data semantics in
an XHTML resource representation? In order to do anything useful with the
data, the client needs to know how to parse and attach semantic meaning to
the data. In the same vein, the client would need to understand the semantic
meaning of form elements in order to correctly place data in the proper
elements when POSTing a request. I don't really see how an autonomous client
can be built without some sort of out-of-band knowledge of those semantics.
Am I missing something?
On Fri, Dec 3, 2010 at 2:27 PM, Moore, Jonathan
<jonathan_moore@...>wrote:
>
>
> One major advantage of using XHTML as your media type is the fact that
> programmatic clients and humans with browsers can make use of the API.
> Your QA and Ops folks can do everything manually that the programmatic
> clients can do, without your having to add a separate management interface.
> This is also useful for adoption: developers can "surf the API" (as my
> colleague termed it) to learn how it works.
>
> Those are pretty powerful arguments to make to the business stakeholders
> of your API ("if we do it this way, we don't have to do this other stuff"
> and "if we do it this way it will make adoption of the product easier").
>
> Jon
> ........
> Jon Moore
> Comcast Interactive Media
>
>
>
> From: Kevin Duffey <andjarnic@...>
> Date: Fri, 3 Dec 2010 11:03:47 -0800
> To: <juergen.brendel@...>, Peter Williams <pezra@barelyenough.org
> >
>
> Cc: Rest Discussion List <rest-discuss@...>
> Subject: Re: [rest-discuss] Link relations [was: A media type for case
> files, dossiers and documents]
>
>
>
>
> Peter,
>
> I think this is what some of the other responses have been about.. if you
> use a media-type like xhtml, then any client will know what all elements
> returned mean. However, that still leads to the question another poster had
> made.. it's not guaranteed that a consumer of the api will always work. If
> anything it will ignore rel="" values that it was not written to handle. So
> if the API evolves to include new rel="" types, then only new consumers that
> know about those ahead of time will know what they mean and whether or not
> they should follow the href link or not. That is what I think is confusing
> to many. More so, I can provide my own non-known custom XML chunk back,
> complete with atom:link or my own link format, with rel/href/type attributes
> just as well as xhtml can with an <a> link. So even if an API uses XHTML as
> the media type, the <a> links (anchors) are not going to guarantee any
> client consuming that api that they will evolve with additional rel=""
> types. Probably more important is that they should not break. If the API
> keeps all the existing links, but adds new ones, then only new consumers
> that are written with the knowledge of the new API version/links, will make
> use of them.
>
> Hence why, still, I am having a hard time with the idea of using xhtml as a
> response type for my data elements over just my own custom xml format with
> links embedded in it.
>
> Here is my underlying concern. A while ago, on this forum or maybe it was
> the Jersey forum..can't remember, a few people were asking something about
> if their API would be considered a true RESTful API. Roy and some others
> said "nope" because it didn't follow the HATEOAS principle to some degree.
> My concern is, not just for me but anyone that is working on some sort of
> API that they hope will catch on and provide business to them, that someone
> like Roy or other well known figure comes along and writes a blog about
> their service and says it's not RESTful. Somehow, that's like Steve Jobs
> saying Java is bad and now a crap load of apple people think Java is bad.
> Hopefully not many with our technical knowledge will read something like
> that and just assume a given API being blogged about is no good now..
> because it's not 100% RESTful according to Roy or some others. But as much
> as I can, I want to provide what the knowledgeable folks replying to this
> thread, and others like Roy and such, to look at any API I produce and say
> "that is truly RESTful.". So, at least for me, that is why I am messing
> around with this on the side, making a toy bid site, trying to see if I can
> provide a truly RESTful api that clients can consume, evolve without
> breaking clients, and follows HATEOAS. Thus my struggle with the notion of
> using XHTML instead of my own XML. Ugh!
>
>
>
> --- On *Fri, 12/3/10, Peter Williams <pezra@...>* wrote:
>
>
> From: Peter Williams <pezra@barelyenough.org>
> Subject: Re: [rest-discuss] Link relations [was: A media type for case
> files, dossiers and documents]
> To: juergen.brendel@mulesoft.com
> Cc: "Rest Discussion List" <rest-discuss@yahoogroups.com>
> Date: Friday, December 3, 2010, 10:51 AM
>
>
>
> On Fri, Dec 3, 2010 at 11:15 AM, Juergen Brendel
> <juergen.brendel@...>
> wrote:
> > With that in mind, you can use the 'rel' of a link in your m2m client to
> > figure out, which link to use.
>
> That is true but only to a limited extent. Consider the first
> relation in the list you pointed to, "acquaintance". If i am
> designing a system that has people i can use this rel in
> representations of single users. However, that rel cannot be used in
> a representation of a resource that (conceptually) contains multiple
> people (eg, a search result resource). That means an automaton client
> that wants to find all the acquaintances of people with a particular
> first name will have a very difficult time doing so. At the very
> least it will require an extra request per person in the result set.
> Link relations are occasionally useful but they are by no means an m2m
> silver bullet.
>
> Peter
> barelyenough.org
>
>
>
>
--
Scott Banwart
Integration Strategy Partner
800-718-4990 (Toll Free)
440-399-3311 (Office)
330-671-9286 (Mobile)
http://www.stonedonut.com/
Peter Williams wrote: > > However, if you think selling xhtml to your consumers as service > format will be hard just use xml or json. There is nothing wrong > with that as long as you register an id for your new media type. > Just a terminology nitpick: nothing that isn't in the registry is a media type. Register a media type for your new data type. Rather, register a media type for your family of data types; IOW it should be both forwards and backwards compatible, so you don't wind up versioning the media type when you version the data type. -Eric
On Fri, Dec 3, 2010 at 8:04 PM, Scott Banwart <sbanwart@...> wrote: > > > Where does an autonomous client get its knowledge of the data semantics in > an XHTML resource representation? In order to do anything useful with the > data, the client needs to know how to parse and attach semantic meaning to > the data. In the same vein, the client would need to understand the semantic > meaning of form elements in order to correctly place data in the proper > elements when POSTing a request. I don't really see how an autonomous client > can be built without some sort of out-of-band knowledge of those semantics. > > Am I missing something? > > no Cheers, Mike
On Fri, Dec 3, 2010 at 1:13 PM, Eric J. Bowman <eric@...> wrote: > Peter Williams wrote: >> >> However, if you think selling xhtml to your consumers as service >> format will be hard just use xml or json. There is nothing wrong >> with that as long as you register an id for your new media type. >> > > Just a terminology nitpick: nothing that isn't in the registry is a > media type. I respectfully disagree. I phrased it the way i did on purpose. > Rather, > register a media type for your family of data types; IOW it should be > both forwards and backwards compatible, so you don't wind up versioning > the media type when you version the data type. I think i agree. A media type should be a way to represent resources of multiple related types. And a media type can evolve (in some ways) without requiring a new identifier. Peter barelyenough.org
Every time I read these posts I always end up feeling like I get it ... almost. You know; it's that feeling of "almost, but not quite entirely unlike tea" - you understand the words, but don't really get the complete meaning. Anyway ... talking about representing resources in xhtml - here is one point I find very interesting: >> I don't really see how an autonomous client >> can be built without some sort of out-of-band knowledge of those >> semantics. >> Am I missing something? > no I absolutely agree with this - the client must be told how to understand the semantics. This is what programmers are for - we read documentation and write code (among other things). But who says documentation only exists in a separate PDF-document? Why not put it right there in the API using xhtml? Someone mentioned "browsing the API" in a previous mail. That's a concept I find truly interesting. If the API *is* the documentation then it might be that the total time spent on writing the API plus the documentation is less than the extra time we spend on writing a slightly different style of API - namely an xhtml API instead of an application/vnd.xxx+xml API. With an xhtml API we just mail the root URL to the consumers - and that's all they need. But, "hey", you say, "the consumers still need to know the semantics?". Yes they do - but all that information can be exposed in human-readable text, formatted using xhtml, right inside the API. /Jørn
That's a nice side-effect of using XHTML, but I don't see how I would gain much by using REST vs. something else for an m2m scenario. The client still ends up being tightly coupled to the representation regardless whether it's XHTML, XML or JSON. If the structure or semantics of the representation change, an autonomous client will need to be changed as well. I don't see how this is any different from the types of coupling seen with SOAP and other RPC mechanisms. On Fri, Dec 3, 2010 at 4:14 PM, Jørn Wildt <jw@fjeldgruppen.dk> wrote: > Everytime I read these posts I always end up feeling like I get it ... > almost. You know; it's that feeling of "almost, but not quite entirely > unlike tea" - you understand the words, but don't really get the complete > meaning. > > Anyway ... talking about representing resources in xhtml - here is one > point I find very interesting: > > > I don't really see how an autonomous client >>> can be built without some sort of out-of-band knowledge of those >>> semantics. >>> Am I missing something? >>> >> no >> > > I absolutely agree with this - the client must be told how to understand > the semantics. This is what programmers are for - we read documentation and > write code (among other things). But who says documentation only exists in a > separate PDF-document? Why not put it right there in the API using xhtml? > Someone mentioned "browsing the API" in a previous mail. That's a concept I > find truely interesting. > > If the API *is* the documention then it might be that the total time spent > on writing the API plus the documentation is less than the extra time we > spend on writing a slightly different style of API - namely an xhtml API > instead of a application/vnd.xxx+xml API. With an xhtml API we just mail the > root URL to the consumers - and that's all they need. > > But, "hey", you say, "the consumers still need to know the sematics?". 
Yes > they do - but all that information can be exposed in humand readable text, > formated using xhtml, right inside the API. > > /Jørn > > -- Scott Banwart sbanwart@...
What I dislike about the xhtml solution though, is the lack of explicitly stating what kind of resource we are working with. If I was creating a service that mixed banking and farming then I am quite sure my consumers would want to know when they received a representation of a bank account versus a tractor. But maybe this is what content negotiation would be good for? If I pointed my browser at the resource I would get xhtml - if I pointed my farmbank client at it I would get application/farmbank+xml. But now we are missing the documentation for the application/farmbank+xml media type which must be shipped as a separate PDF. That kind of nullifies the benefits of automatic documentation we got from the xhtml API. Now we could add a standard XSL transformation from application/farmbank+xml to xhtml: - If application/farmbank+xml used atom:link then these could be converted to <a>-elements. - All unknown XML elements could be converted to <div class="X">...</div> - XForms elements could be served as-is assuming the browser could handle them. A reason for using XForms can be found here: http://iansrobinson.com/2010/09/02/using-typed-links-to-forms/ - And so on ... In this way we can help the API consumers a lot, with little effort, by documenting our XML API using automatically generated xhtml. That way we get the best of both worlds ... or what? /Jørn ----- Original Message ----- From: "Jørn Wildt" <jw@...> To: "Scott Banwart" <sbanwart@...>; "Mike Kelly" <mike@...> Cc: "Rest Discussion List" <rest-discuss@yahoogroups.com> Sent: Friday, December 03, 2010 10:14 PM Subject: Re: [rest-discuss] Link relations [was: A media type for case files, dossiers and documents] > Everytime I read these posts I always end up feeling like I get it ... > almost. You know; it's that feeling of "almost, but not quite entirely > unlike tea" - you understand the words, but don't really get the complete > meaning. > > Anyway ... 
talking about representing resources in xhtml - here is one > point > I find very interesting: > >>> I don't really see how an autonomous client >>> can be built without some sort of out-of-band knowledge of those >>> semantics. >>> Am I missing something? >> no > > I absolutely agree with this - the client must be told how to understand > the > semantics. This is what programmers are for - we read documentation and > write code (among other things). But who says documentation only exists in > a > separate PDF-document? Why not put it right there in the API using xhtml? > Someone mentioned "browsing the API" in a previous mail. That's a concept > I > find truely interesting. > > If the API *is* the documention then it might be that the total time spent > on writing the API plus the documentation is less than the extra time we > spend on writing a slightly different style of API - namely an xhtml API > instead of a application/vnd.xxx+xml API. With an xhtml API we just mail > the > root URL to the consumers - and that's all they need. > > But, "hey", you say, "the consumers still need to know the sematics?". Yes > they do - but all that information can be exposed in humand readable text, > formated using xhtml, right inside the API. > > /Jørn > >
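Jørn's proposed mapping (atom:link becomes an anchor, every other element becomes a div classed by its element name) can be sketched without XSL, e.g. with ElementTree. The farmbank-style vocabulary below is made up for illustration; only the transformation rules come from the post.

```python
# Sketch of the proposed XML-to-xhtml mapping:
#   atom:link -> <a rel href>, any other element -> <div class="localname">.
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"

def to_xhtml(elem):
    if elem.tag == ATOM + "link":
        out = ET.Element("a", {
            "rel": elem.get("rel", ""),
            "href": elem.get("href", ""),
        })
        out.text = elem.get("rel", "")   # anchor text: the relation name
    else:
        # local name without namespace becomes the class token
        local = elem.tag.rsplit("}", 1)[-1]
        out = ET.Element("div", {"class": local})
        out.text = elem.text
        for child in elem:
            out.append(to_xhtml(child))
    return out

src = ET.fromstring(
    '<account xmlns:atom="http://www.w3.org/2005/Atom">'
    '<balance>100</balance>'
    '<atom:link rel="owner" href="/customers/7"/>'
    '</account>'
)
xhtml = ET.tostring(to_xhtml(src), encoding="unicode")
print(xhtml)
```

A browser pointed at the service would then see navigable anchors and classed divs, while the farmbank client keeps consuming the raw XML.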
> That's a nice side-effect of using XHTML, but I don't see how I would gain > much by using REST vs. something else for an m2m scenario. The client > still > ends up being tightly coupled to the representation regardless whether > it's > XHTML, XML or JSON. If the structure or semantics of the representation > change, an autonomous client will need to be changed as well. I don't see > how this is any different from the types of coupling seen with SOAP and > other RPC mechanisms. Oh, well, there certainly are some benefits in REST that you don't get in SOAP: 1) Hypermedia In SOAP you need to agree on all possible URLs and you cannot change them afterwards. In REST you can require your client to follow links instead of assuming URLs. 2) Identifiable resources Have you ever tried to mail a link to a SOAP service saying, "hey, look at this cool thing"? You can do that in REST, not in SOAP. 3) Standard ways of caching/scaling ... and, hmmm, well, all the other good stuff in Roy's thesis ;-) /Jørn
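The hypermedia point can be made concrete with a toy client: it hardcodes only the entry point and the link relations it understands, and discovers every other URL at runtime. The "server" below is just a dict of documents and the rel names are invented for the example.

```python
# Toy hypermedia client: hardcode rels, discover URLs by following links.
import re

SERVER = {
    "/": '<a rel="orders" href="/orders">orders</a>',
    "/orders": '<a rel="latest" href="/orders/42">latest</a>',
    "/orders/42": "order 42: 3 tractors",
}

def links(doc):
    """Extract a rel -> href map from the anchors in a representation."""
    return dict(re.findall(r'rel="([^"]+)" href="([^"]+)"', doc))

def follow(rels, start="/"):
    doc = SERVER[start]
    for rel in rels:                 # walk the rel chain, ignoring URLs
        doc = SERVER[links(doc)[rel]]
    return doc

print(follow(["orders", "latest"]))  # -> order 42: 3 tractors
```

If the server later moves `/orders/42` elsewhere, only the href in the `/orders` representation changes; the client code does not.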
On Fri, Dec 3, 2010 at 9:36 PM, Jørn Wildt <jw@...> wrote: > What I dislike about the xhtml solution though, is the lack of explicitly > stating what kind of resource we are working with. If I was creating a > service that mixed banking and farming then I am quite sure my consumers > would want to know when they received a representation of a bank account > versus a tractor. Unless it's an entry point they will be following a link there, which will indicate how the representation should be interpreted. Cheers, Mike
On Fri, Dec 3, 2010 at 1:49 PM, Mike Kelly <mike@...> wrote: > On Fri, Dec 3, 2010 at 8:04 PM, Scott Banwart <sbanwart@...> wrote: > >> Where does an autonomous client get its knowledge of the data semantics in >> an XHTML resource representation? In order to do anything useful with the >> data, the client needs to know how to parse and attach semantic meaning to >> the data. In the same vein, the client would need to understand the semantic >> meaning of form elements in order to correctly place data in the proper >> elements when POSTing a request. I don't really see how an autonomous client >> can be built without some sort of out-of-band knowledge of those semantics. >> >> Am I missing something? >> >> > no > Clients, whether autonomous or not, should be informed of the semantics (or media type) of the message by the server. This is done using the content-type header. In an m2m situation xhtml, by itself, is rarely a useful media type. Xhtml provides hypermedia application description semantics (<a>, <link> and <form>) and document structuring semantics. The document structuring semantics are of little direct use in most m2m scenarios. However, one can define new media types that extend xhtml. These extensions would inherit the application description semantics and add domain specific semantics to certain elements in the document. You could serve the same octet sequence as `application/xhtml` to web browsers or as `application/vnd.myapp` to specialized autonomous clients. The specialized media type would inform the autonomous client how to extract the information it cares about (either simple properties of the resource, simple relationships to other resources or complex instructions for further requests) from the representation. Peter barelyenough.org
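The server side of "same octet sequence, two media types" could look like the following sketch. The `application/vnd.myapp+xml` name is a placeholder, not a registered type, and the negotiation deliberately ignores q-values to stay short.

```python
# Simplistic content negotiation: identical bytes, Content-Type chosen
# from the client's Accept header. q-values are ignored for brevity.
BODY = b'<html><body><div class="person">Alice</div></body></html>'
KNOWN = ("application/vnd.myapp+xml", "application/xhtml+xml")

def negotiate(accept_header):
    """Return (content_type, body) for a simplistic Accept header."""
    accepted = [p.split(";")[0].strip() for p in accept_header.split(",")]
    for media_type in KNOWN:         # server-side preference order
        if media_type in accepted or "*/*" in accepted:
            return media_type, BODY
    return None, None                # would be a 406 Not Acceptable

print(negotiate("application/vnd.myapp+xml")[0])
print(negotiate("application/xhtml+xml, text/html")[0])
```

The autonomous client asks for the vendor type and is told, via Content-Type, that the vendor spec's extra rules apply; the browser gets the very same bytes labelled as plain XHTML.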
On Fri, Dec 3, 2010 at 9:54 PM, Mike Kelly <mike@...> wrote: > On Fri, Dec 3, 2010 at 9:36 PM, Jørn Wildt <jw@...> wrote: >> What I dislike about the xhtml solution though, is the lack of explicitly >> stating what kind of resource we are working with. If I was creating a >> service that mixed banking and farming then I am quite sure my consumers >> would want to know when they received a representation of a bank account >> versus a tractor. > > Unless it's an entry point they will be following a link there, which > will indicate how the representation should be interpreted. > Stringing link relations out from an entry point like this is a way to write hypertext applications that don't necessitate typed resources or representations. Cheers, Mike
On 12/03/2010 04:55 PM, Peter Williams wrote: > > On Fri, Dec 3, 2010 at 1:49 PM, Mike Kelly <mike@... > <mailto:mike@...>> wrote: > > On Fri, Dec 3, 2010 at 8:04 PM, Scott Banwart <sbanwart@... > <mailto:sbanwart@...>> wrote: > > Where does an autonomous client get its knowledge of the data > semantics in an XHTML resource representation? In order to do > anything useful with the data, the client needs to know how to > parse and attach semantic meaning to the data. In the same > vein, the client would need to understand the semantic meaning > of form elements in order to correctly place data in the > proper elements when POSTing a request. I don't really see how > an autonomous client can be built without some sort of > out-of-band knowledge of those semantics. > > Am I missing something? > > > no > > > Clients, whether autonomous or not, should be informed of the > semantics (or media type) of the message by the server. This is done > using the content-type header. > > In an m2m situation xhtml, by itself, is rarely a useful media type. > Xhtml provides hypermedia application description semantics (<a>, > <link> and <form>) and document structuring semantics. The document > structuring semantics are of little direct use in most m2m scenarios. > However, one can define new media types that extend xhtml. These > extensions would inherit the application description semantics and add > domain specific semantics to certain elements in the document. You > could serve the same octet sequence as `application/xhtml` to web > browsers or as `application/vnd.myapp` to specialized autonomous > clients. The specialized media type would inform the autonomous > client how to extract the information it cares about (either simple > properties of the resource, simple relationships to other resources or > complex instructions for further requests) from the representation. > > I assume this is determined at runtime, right? 
How does the client know what properties, relationships or complex instructions provide information on how to extract information? -- blog: http://eikonne.wordpress.com twitter: http://twitter.com/eikonne
On 12/03/2010 04:34 PM, Scott Banwart wrote: > > That's a nice side-effect of using XHTML, but I don't see how I would > gain much by using REST vs. something else for an m2m scenario. The > client still ends up being tightly coupled to the representation > regardless whether it's XHTML, XML or JSON. If the structure or > semantics of the representation change, an autonomous client will need > to be changed as well. I don't see how this is any different from the > types of coupling seen with SOAP and other RPC mechanisms. > > Really? Depends on what you're trying to do. It you're just making API style calls, then maybe there are no advantages. But if you're dealing with many clients, evolving servers, workflow etc etc, you may gain much. There will always be coupling, nothing that I'm aware of completely removes that, but there are degrees of coupling and the REST style strives to minimize it to a few things. Eb -- blog: http://eikonne.wordpress.com twitter: http://twitter.com/eikonne
On Fri, Dec 3, 2010 at 4:50 PM, Eb <amaeze@...> wrote: > > > On 12/03/2010 04:55 PM, Peter Williams wrote: > > > > On Fri, Dec 3, 2010 at 1:49 PM, Mike Kelly <mike@...> wrote: > >> On Fri, Dec 3, 2010 at 8:04 PM, Scott Banwart <sbanwart@...>wrote: >> >>> Where does an autonomous client get its knowledge of the data semantics >>> in an XHTML resource representation? In order to do anything useful with the >>> data, the client needs to know how to parse and attach semantic meaning to >>> the data. In the same vein, the client would need to understand the semantic >>> meaning of form elements in order to correctly place data in the proper >>> elements when POSTing a request. I don't really see how an autonomous client >>> can be built without some sort of out-of-band knowledge of those semantics. >>> >>> Am I missing something? >>> >>> >> no >> > > Clients, whether autonomous or not, should be informed of the semantics (or > media type) of the message by the server. This is done using the > content-type header. > > In an m2m situation xhtml, by itself, is rarely a useful media type. Xhtml > provides hypermedia application description semantics (<a>, <link> and > <form>) and document structuring semantics. The document structuring > semantics are of little direct use in most m2m scenarios. However, one can > define new media types that extend xhtml. These extensions would inherit > the application description semantics and add domain specific semantics to > certain elements in the document. You could serve the same octet sequence > as `application/xhtml` to web browsers or as `application/vnd.myapp` to > specialized autonomous clients. The specialized media type would inform > the autonomous client how to extract the information it cares about (either > simple properties of the resource, simple relationships to other resources > or complex instructions for further requests) from the representation. > > > > I assume this is determined at runtime, right? 
How does the client know > what properties, relationships or complex instructions provide information > on how to extract information? > A client knows what the meaning of a message is because it has been taught (coded, configured or whatever) to understand the media type of that message. When a client makes the request it can inform the server of the media types it is capable of handling using the `accept` request header. An autonomous client would set that to `application/vnd.myapp`. The server would (usually) respond with a representation conforming to the specification of `application/vnd.myapp`. For example, the specification might say something like the representation is an xhtml representation and any element with a @class containing 'person' can be treated as a person entity. Peter barelyenough.org
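The runtime dispatch being described might be sketched like this: the client is "taught" a media type by registering a handler for it, and dispatches on the Content-Type the server actually returns. The media type name, the handlers and the documents are assumptions for the example, not a real HTTP client.

```python
# Dispatch on the returned Content-Type: the client applies the vendor
# spec's rules only when the server labels the message with that type.
def handle(response_content_type, body, handlers, fallback):
    media_type = response_content_type.split(";")[0].strip()
    return handlers.get(media_type, fallback)(body)

handlers = {
    # taught, per the (hypothetical) vnd spec, to pull person entities
    # out of the xhtml by counting class="person" markers
    "application/vnd.myapp+xml": lambda b: ("people", b.count('class="person"')),
}
fallback = lambda b: ("opaque", len(b))   # unknown type: treat as opaque

doc = '<div class="person">Alice</div><div class="person">Bob</div>'
print(handle("application/vnd.myapp+xml; charset=utf-8", doc, handlers, fallback))
print(handle("text/html", doc, handlers, fallback))
```

The same body parsed under `text/html` yields nothing application-specific; the semantics travel with the media type label, not the bytes.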
Peter Williams wrote: > On Fri, Dec 3, 2010 at 4:50 PM, Eb <amaeze@...> wrote: >> On 12/03/2010 04:55 PM, Peter Williams wrote: >> On Fri, Dec 3, 2010 at 1:49 PM, Mike Kelly <mike@...> wrote: >>> On Fri, Dec 3, 2010 at 8:04 PM, Scott Banwart <sbanwart@...>wrote: >>> >>>> Where does an autonomous client get its knowledge of the data semantics >>>> in an XHTML resource representation? In order to do anything useful with the >>>> data, the client needs to know how to parse and attach semantic meaning to >>>> the data. In the same vein, the client would need to understand the semantic >>>> meaning of form elements in order to correctly place data in the proper >>>> elements when POSTing a request. I don't really see how an autonomous client >>>> can be built without some sort of out-of-band knowledge of those semantics. >>>> >>>> Am I missing something? >>>> >>> no >>> >> Clients, whether autonomous or not, should be informed of the semantics (or >> media type) of the message by the server. This is done using the >> content-type header. >> >> In an m2m situation xhtml, by itself, is rarely a useful media type. Xhtml >> provides hypermedia application description semantics (<a>, <link> and >> <form>) and document structuring semantics. The document structuring >> semantics are of little direct use in most m2m scenarios. However, one can >> define new media types that extend xhtml. These extensions would inherit >> the application description semantics and add domain specific semantics to >> certain elements in the document. You could serve the same octet sequence >> as `application/xhtml` to web browsers or as `application/vnd.myapp` to >> specialized autonomous clients. The specialized media type would inform >> the autonomous client how to extract the information it cares about (either >> simple properties of the resource, simple relationships to other resources >> or complex instructions for further requests) from the representation. 
>> >> I assume this is determined at runtime, right? How does the client know >> what properties, relationships or complex instructions provide information >> on how to extract information? >> > > A client knows that the meaning of a message is because it has been taught > (coded, configures or whatever) to understand the media type of that > message. When a client makes the request it can inform the server of the > media types it is capable of handling use the `accept` request header. An > autonomous client would set that to `application/vnd.myapp`. The server > would (usually) respond with a representation conforming to the > specification of `application/vnd.myapp`. For example, the specification > might say something like the representation is an xhtml representation and > any element with a @class containing 'person' can be treated as a person > entity. Is this a joke? Take xhtml, redefine the meaning of @class to use ambiguous string tokens in order to type entities (which are represented as what exactly in the message?), and serve it up as application/vnd.myapp == "RESTful"?? Really?
On 12/03/2010 07:43 PM, Peter Williams wrote: > > > > On Fri, Dec 3, 2010 at 4:50 PM, Eb <amaeze@... > <mailto:amaeze@...>> wrote: > > > > On 12/03/2010 04:55 PM, Peter Williams wrote: >> >> On Fri, Dec 3, 2010 at 1:49 PM, Mike Kelly <mike@... >> <mailto:mike@...>> wrote: >> >> On Fri, Dec 3, 2010 at 8:04 PM, Scott Banwart >> <sbanwart@... <mailto:sbanwart@...>> wrote: >> >> Where does an autonomous client get its knowledge of the >> data semantics in an XHTML resource representation? In >> order to do anything useful with the data, the client >> needs to know how to parse and attach semantic meaning to >> the data. In the same vein, the client would need to >> understand the semantic meaning of form elements in order >> to correctly place data in the proper elements when >> POSTing a request. I don't really see how an autonomous >> client can be built without some sort of out-of-band >> knowledge of those semantics. >> >> Am I missing something? >> >> >> no >> >> >> Clients, whether autonomous or not, should be informed of the >> semantics (or media type) of the message by the server. This is >> done using the content-type header. >> >> In an m2m situation xhtml, by itself, is rarely a useful media >> type. Xhtml provides hypermedia application description >> semantics (<a>, <link> and <form>) and document structuring >> semantics. The document structuring semantics are of little >> direct use in most m2m scenarios. However, one can define new >> media types that extend xhtml. These extensions would inherit >> the application description semantics and add domain specific >> semantics to certain elements in the document. You could serve >> the same octet sequence as `application/xhtml` to web browsers or >> as `application/vnd.myapp` to specialized autonomous clients. 
>> The specialized media type would inform the autonomous client how >> to extract the information it cares about (either simple >> properties of the resource, simple relationships to other >> resources or complex instructions for further requests) from the >> representation. >> >> > > I assume this is determined at runtime, right? How does the > client know what properties, relationships or complex instructions > provide information on how to extract information? > > > A client knows that the meaning of a message is because it has been > taught (coded, configures or whatever) to understand the media type of > that message. When a client makes the request it can inform the > server of the media types it is capable of handling use the `accept` > request header. An autonomous client would set that to > `application/vnd.myapp`. The server would (usually) respond with a > representation conforming to the specification of > `application/vnd.myapp`. For example, the specification might say > something like the representation is an xhtml representation and any > element with a @class containing 'person' can be treated as a person > entity. Same page, we are. :) -- blog: http://eikonne.wordpress.com twitter: http://twitter.com/eikonne
Hello Jørn. Well, 1 and 2 are not totally correct, if we mean SOAP as in Web Services (SOAP was created before the web services standard and was adopted afterward with some adjustments). 1. False. If you use WSDL as it should be used, you may end up with a dynamic client. Implementations now are static, compile time, which gives you what you mention. But the same fate is destined for REST if the client does not follow HATEOAS. HATEOAS is not only using links, but having a way to follow them in a dynamic way. Most clients now have some or all of the links wired in the code. 2. False. Neither SOAP nor Web Services hides the resource ID in any way. Actually, a web service is a resource with an ID. We must be careful not to confuse the business semantics with the transport and envelope semantics. William Martinez Pomares --- In rest-discuss@yahoogroups.com, Jørn Wildt <jw@...> wrote: > > Oh, well, there certainly are some benfits in REST that you don't get in > SOAP: > > 1) Hypermedia > In SOAP you need to agree and all possible URLs and you cannot change them > afterwards. In REST you can require your client to follow links instead of > assuming URLs. > > 2) Identifiable resources > Have you ever tried to mail a link to a SOAP service saying, "hey, look at > this cool thing"? You can do that in REST, not in SOAP. > > 3) Standard ways of caching/scaling > > ... and, hmmm, well, all the other good stuff in Roy's thesis ;-) > > /Jørn >
Peter Williams wrote: > > > > > Just a terminology nitpick: nothing that isn't in the registry is a > > media type. > > I respectfully disagree. I phrased it the way i did on purpose. > The pertinent RFCs only use the term "media type" in reference to values registered in the standards, vendor and personal trees. The term "type" is used in reference to the x. and x- non-trees. The term "access type" is used in reference to things which could become media types. REST uses the term "data type" in reference to the specs targeted by a media type. > > > Rather, register a media type for your family of data types; IOW > > it should be both forwards and backwards compatible, so you don't > > wind up versioning the media type when you version the data type. > > I think i agree. A media type should be a way to represent resources > of multiple related types. And a media type can evolve (in some > ways) without requiring a new identifier. > The data type is what evolves; that evolution is not bound to any identifier (media type). The text/html and application/xhtml+xml media types are not affected by the evolution of the HTML and XHTML families of data types. -Eric
Nathan wrote: > > A good mediatype like XHTML constrains the format of the message, the > representation of it, without constraining the actual message which > is encoded. > Data type. The XHTML family of data types may be served using any of text/plain, application/xhtml+xml or text/html (except for XHTML 1.1 and basic) as the media type. -Eric
Eric J. Bowman wrote: > Nathan wrote: >> A good mediatype like XHTML constrains the format of the message, the >> representation of it, without constraining the actual message which >> is encoded. >> > > Data type. The XHTML family of data types may be served using any of > text/plain, application/xhtml+xml or text/html (except for XHTML 1.1 and > basic) as the media type. Good catch, cheers.
On Sat, Dec 4, 2010 at 10:28 AM, Eric J. Bowman <eric@...> wrote: > Peter Williams wrote: >> >> > >> > Just a terminology nitpick: nothing that isn't in the registry is a >> > media type. >> >> I respectfully disagree. I phrased it the way i did on purpose. >> > > The pertinent RFCs only use the term "media type" in reference to > values registered in the standards, vendor and personal trees. The > term "type" is used in reference to the x. and x- non-trees. The term > "access type" is used in reference to things which could become media > types. REST uses the term "data type" in reference to the specs > targeted by a media type. I use the term in the spirit of the dissertation. "The data format of a representation is known as a media type" (para 5, section 5.2.1.2). A car that is not registered with the local authorities is still a car, it's just not legal to drive it on the public roads. Same with media types. A data format that is not registered with IANA is still a media type, it is just not technically legal to use it via HTTP on the public internet. Peter
On Fri, Dec 3, 2010 at 6:02 PM, Nathan <nathan@...> wrote: > Peter Williams wrote: >> >> On Fri, Dec 3, 2010 at 4:50 PM, Eb <amaeze@...> wrote: >>> >>> On 12/03/2010 04:55 PM, Peter Williams wrote: >>> On Fri, Dec 3, 2010 at 1:49 PM, Mike Kelly <mike@...> wrote: >>>> >>>> On Fri, Dec 3, 2010 at 8:04 PM, Scott Banwart >>>> <sbanwart@...>wrote: >>>> >>>>> Where does an autonomous client get its knowledge of the data semantics >>>>> in an XHTML resource representation? In order to do anything useful >>>>> with the >>>>> data, the client needs to know how to parse and attach semantic meaning >>>>> to >>>>> the data. In the same vein, the client would need to understand the >>>>> semantic >>>>> meaning of form elements in order to correctly place data in the proper >>>>> elements when POSTing a request. I don't really see how an autonomous >>>>> client >>>>> can be built without some sort of out-of-band knowledge of those >>>>> semantics. >>>>> >>>>> Am I missing something? >>>>> >>>> no >>>> >>> Clients, whether autonomous or not, should be informed of the semantics >>> (or >>> media type) of the message by the server. This is done using the >>> content-type header. >>> >>> In an m2m situation xhtml, by itself, is rarely a useful media type. >>> Xhtml >>> provides hypermedia application description semantics (<a>, <link> and >>> <form>) and document structuring semantics. The document structuring >>> semantics are of little direct use in most m2m scenarios. However, one >>> can >>> define new media types that extend xhtml. These extensions would inherit >>> the application description semantics and add domain specific semantics >>> to >>> certain elements in the document. You could serve the same octet >>> sequence >>> as `application/xhtml` to web browsers or as `application/vnd.myapp` to >>> specialized autonomous clients. 
The specialized media type would inform >>> the autonomous client how to extract the information it cares about >>> (either simple properties of the resource, simple relationships to other >>> resources or complex instructions for further requests) from the representation. >>> >>> I assume this is determined at runtime, right? How does the client know >>> what properties, relationships or complex instructions provide information >>> on how to extract information? >>> >> >> A client knows the meaning of a message because it has been taught >> (coded, configured or whatever) to understand the media type of that >> message. When a client makes the request it can inform the server of the >> media types it is capable of handling using the `Accept` request header. An >> autonomous client would set that to `application/vnd.myapp`. The server >> would (usually) respond with a representation conforming to the >> specification of `application/vnd.myapp`. For example, the specification >> might say something like: the representation is an xhtml representation, and >> any element with a @class containing 'person' can be treated as a person >> entity. > > Is this a joke? Take xhtml, redefine the meaning of @class to use ambiguous > string tokens in order to type entities (which are represented as what > exactly in the message?), and serve it up as application/vnd.myapp == > "RESTful"?? I did not propose this as a way to be RESTful. I merely suggested it as a way to design new media types (that could be used to implement RESTful applications) while utilizing the huge amount of machinery that exists to generate, parse and operate on xhtml. It is an implementation detail, but a significant one. My architectural point was that it is impossible to use xhtml as a media type in m2m scenarios while following the REST style. Xhtml simply does not specify the semantics autonomous clients need.
If a goal-seeking agent "just knows" anything other than an entry point URI and how to interpret one or more media types based on their specifications, you have strayed from the REST architectural style. This includes "just knowing" that it can find particular bits of information, or hypermedia controls, at particular places in the document. That type of knowledge must be derived from a combination of the media type specification and interpreting the representation received from the server. Regarding your specific critique of using class to identify important data: how is saying "the account balance can be found in the element identified by the CSS selector `.account .balance`" any different from saying "the account balance can be found in the element identified by the XPath `/account/balance`"? Peter barelyenough.org
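[Editorial note: Peter's `.account .balance` versus `/account/balance` analogy can be sketched in a few lines. The markup, class names, and balance value below are hypothetical illustrations, not taken from any registered media type; the point is only that binding to a media type's declared structural rules (class tokens or element names) is the same kind of contract either way.]

```python
# Extract an account balance two ways: via @class tokens (the CSS-selector
# style '.account .balance') and via element names (the XPath style
# '/account/balance'). Both bind the client to the media type spec's
# structural rules, not to the document's visual layout.
import xml.etree.ElementTree as ET

CLASSY = """<div>
  <div class="account">
    <span class="balance">42.50</span>
  </div>
</div>"""

NAMED = "<account><balance>42.50</balance></account>"

def by_class(elem, token):
    """Yield descendants (including elem) whose @class contains the token."""
    for e in elem.iter():
        if token in e.get("class", "").split():
            yield e

# CSS-selector style: '.account .balance'
root = ET.fromstring(CLASSY)
balances = [b.text for a in by_class(root, "account")
                   for b in by_class(a, "balance")]

# XPath style: 'balance' relative to the <account> document element
same = ET.fromstring(NAMED).findtext("balance")
```

Either way, the client's only out-of-band knowledge is the media type specification's rule for where the balance lives.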
Peter Williams wrote: > > I use the term in the spirit of the dissertation. "The data format of > a representation is known as a media type" (para 5, section 5.2.1.2). > You're leaving out the key to that sentence, which is "[48]", which is a link to RFC 2048 to provide a definition of the term "media type". The spirit of the dissertation is that media types are only found in the IANA registry, because they're part of the IP layer re-used in RESTful protocols like HTTP, or other protocols like SPDY. Nothing else is a "media type", by definition, and it causes endless confusion to use the term otherwise, IMO. -Eric
Also, note the title of this document: http://www.iana.org/cgi-bin/mediatypes.pl Implies that a "media type" is something which must be applied for, in order to be recognized as such. -Eric
I don't understand the problem expressed in the last thread, at all. Here's a real-world example of about as simple a REST API as you'll see: http://www.iana.org/cgi-bin/mediatypes.pl It's obviously m2m otherwise you couldn't google for it, and it wouldn't need CAPTCHA, despite being tag soup. Let's call that v1. I've given it the Web Standards treatment. Let's call this v2, and worry about styling it for visual rendering (h2m) later (or never): http://charger.bisonsystems.net/mediatypes.htm The version has to increment, because @name='encoding' and @name='name' won't interoperate with javascript's forms object. I've gone with new name-value pairs more than I've kept the old ones. There's no reason the Perl script couldn't be altered to work with the new, without deprecating the old. This has nothing to do with media type. I'm representing the name-value pairs and constraining the values in the same format used before. Or, I'm describing (or in this case, versioning) an API -- what does that have to do with creating a new data type for describing APIs? IANA wants the widest-possible interoperability with browsers (new and old), as well as assistive devices of all stripes. The DOM and accessibility APIs for HTML 4.01 are highly polished at this point, so I code to them, even if I never care about that when coding to an m2m user-agent. My markup works just fine when parsed with libxslt, which allows HTML input. Granted, HTML 4.01's not for everyone, but you just can't beat it for Internet scale. Besides, forms language is just a bike-shed color. The v2 API is self-documenting, but relies on your understanding the comment. Xforms can increment that counter and dynamically create as many instances of required/optional parameters needed (as table rows in the model), but these aren't API differences, they're annotation differences. The name-value pairs don't change, they're just represented differently. 
Xforms binds to a model, which I'd represent like so: http://charger.bisonsystems.net/model.xht Notice how the default text (a required accessibility checkpoint) comes from the model. Xforms directly manipulates the model, so when the application-form is complete, the model may be submitted back to the server as application/xhtml+xml using PUT or POST. When coding an m2m client, I don't really care about DOMs or accessibility APIs; but I've found that accessibility is all about m2m -- so coding to such APIs simplifies my work from one project to the next, rather than inventing new and unproven languages with no thought to things like accessibility APIs, DOMs or progressive rendering. There are just too many out-of-the-box benefits to using HTML for me to stop recommending it for m2m, unless someone can explain just exactly what it is I allegedly can't do with the v2 API when the user is a machine. Remember, REST doesn't eliminate the need for a clue, I'm not providing anything I expect you can use for "code generation" or anything else enterprisey/Javaish. But especially in light of Xforms' ability to turn any data structure I can imagine as XML into a form, and submit an instance to the server as a payload instead of using absurdly-long query URIs, I just don't get what the problem is, unless someone can explain it to me in terms of the example I've given here. All I'm really trying to do is move some name-value pairs around, and there are plenty of ways to do that in REST using ubiquitous standards. More complex data structures can also be easily modeled; lists embedded in tables nested in <dd> and suchlike all work together to achieve the goal of extending limited, standardized semantics across network boundaries. I'd rather stick to this example, since it's real-world- based and limited to name-value pairs (<dl>s, forms and simple tables). -Eric
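[Editorial note: the submission step Eric describes, sending the completed model document back as the request payload, can be sketched as below. This is a rough illustration, not his actual setup: the path and model markup are hypothetical placeholders, and the network call itself is left commented out.]

```python
# Prepare a PUT of an XHTML model instance, per Eric's description: the
# model document is the payload, sent as application/xhtml+xml.
import http.client

MODEL = """<html xmlns="http://www.w3.org/1999/xhtml"><head><title>model</title></head>
<body><dl><dt>name</dt><dd>example</dd><dt>encoding</dt><dd>binary</dd></dl></body></html>"""

def prepare_put(path, body):
    """Assemble the request pieces; the actual send is commented out below."""
    payload = body.encode("utf-8")
    headers = {
        "Content-Type": "application/xhtml+xml",
        "Content-Length": str(len(payload)),
    }
    return "PUT", path, headers, payload

method, path, headers, payload = prepare_put("/model.xht", MODEL)
# conn = http.client.HTTPConnection("charger.bisonsystems.net")
# conn.request(method, path, body=payload, headers=headers)
```

The payload replaces the "absurdly-long query URIs" Eric mentions: the name-value pairs travel in the entity body, not the URI.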
On Wed, Dec 1, 2010 at 11:26 PM, Eric J. Bowman <eric@...> wrote: > Dan Brickley wrote: >> >> This is roughly the topic known on the TAG mailing list as >> http-range-14 and I shudder at the thought of revisiting it again... >> > > Seems inevitable. Don't get me wrong, I agree with the finding, due to > my understanding of the architecture. The problem is, if we were to > poll Web developers we'd find that an overwhelming majority think it's > wrong -- due to a lack of understanding of Web architecture. Appreciate it's hard for you tell from way up there in your ivory tower, but an overwhelming majority would not even be aware of its existence. Not all that surprising really, given that it's a solution to a problem that doesn't actually exist in the Muggle world. Cheers, Mike
On Saturday, December 4, 2010, Eric J. Bowman <eric@...> wrote: > Peter Williams wrote: >> >> I use the term in the spirit of the dissertation. "The data format of >> a representation is known as a media type" (para 5, section 5.2.1.2). >> > > You're leaving out the key to that sentence, which is "[48]", which is > a link to RFC 2048 to provide a definition of the term "media type". > The spirit of the dissertation is that media types are only found in the > IANA registry, because they're part of the IP layer re-used in RESTful > protocols like HTTP, or other protocols like SPDY. Nothing else is a > "media type", by definition, and it causes endless confusion to use the > term otherwise, IMO. So if a RESTful protocol existed that identified message format and semantics in a way other than using MIME media type strings, would it have to use different terminology than that used in the dissertation? I think that link is there because IANA is the most complete list of media types in existence. Peter barelyenough.org
On Sun, Dec 5, 2010 at 7:18 AM, Eric J. Bowman <eric@...> wrote: > > > I don't understand the problem expressed in the last thread, at all. > Here's a real-world example of about as simple a REST API as you'll see: > > http://www.iana.org/cgi-bin/mediatypes.pl > > It's obviously m2m otherwise you couldn't google for it, and it > wouldn't need CAPTCHA, despite being tag soup. Let's call that v1. > I've given it the Web Standards treatment. Let's call this v2, and > worry about styling it for visual rendering (h2m) later (or never): > > While technically a web spider would count as m2m interaction, it makes a lousy example. A web spider blindly follows the links it finds as it is concerned only with following links and cataloging the response. A web spider doesn't need to understand the semantic meaning of the response, and it doesn't need to use that information to choose the proper link to follow. An autonomous client will have to have knowledge of those semantics in order to correctly navigate the links. Unless you have some sort of super-AI that is capable of inferring those semantics on-the-fly, that knowledge will have to be programmed into the client ahead of time using some out-of-band description. That client is now tightly coupled to a particular representation. -- Scott Banwart
On 12/05/2010 12:19 PM, Scott Banwart wrote: > > > > On Sun, Dec 5, 2010 at 7:18 AM, Eric J. Bowman <eric@...> wrote: > > I don't understand the problem expressed in the last thread, at all. > Here's a real-world example of about as simple a REST API as > you'll see: > > http://www.iana.org/cgi-bin/mediatypes.pl > > It's obviously m2m otherwise you couldn't google for it, and it > wouldn't need CAPTCHA, despite being tag soup. Let's call that v1. > I've given it the Web Standards treatment. Let's call this v2, and > worry about styling it for visual rendering (h2m) later (or never): > > > > While technically a web spider would count as m2m interaction, it > makes a lousy example. A web spider blindly follows the links it finds > as it is concerned only with following links and cataloging the > response. A web spider doesn't need to understand the semantic meaning of > the response and it doesn't need to use that information to choose the > proper link to follow. > > An autonomous client will have to have knowledge of those semantics in > order to correctly navigate the links. Unless you have some sort of > super-AI that is capable of inferring those semantics on-the-fly, that > knowledge will have to be programmed into the client ahead of time > using some out-of-band description. That client is now tightly coupled > to a particular representation. > > -- > Scott Banwart > Is it coupled to the representation or to link relations? There is a huge difference IMO. I can change the representation and as long as the relations stay the same, my client doesn't have to break. If I'm fortunate, I can even leverage link relations that already exist. -- blog: http://eikonne.wordpress.com twitter: http://twitter.com/eikonne
On Sun, Dec 5, 2010 at 12:42 PM, Eb <amaeze@...> wrote: > > > On 12/05/2010 12:19 PM, Scott Banwart wrote: > > > > > > On Sun, Dec 5, 2010 at 7:18 AM, Eric J. Bowman <eric@...> wrote: > >> >> >> I don't understand the problem expressed in the last thread, at all. >> Here's a real-world example of about as simple a REST API as you'll see: >> >> http://www.iana.org/cgi-bin/mediatypes.pl >> >> It's obviously m2m otherwise you couldn't google for it, and it >> wouldn't need CAPTCHA, despite being tag soup. Let's call that v1. >> I've given it the Web Standards treatment. Let's call this v2, and >> worry about styling it for visual rendering (h2m) later (or never): >> >> > While technically a web spider would count as m2m interaction, it makes a > lousy example. A web spider blindly follows the links it finds as it is > concerned only with following links and cataloging the response. A web > spider doesn't need to understand the semantic meaning of the response and it > doesn't need to use that information to choose the proper link to follow. > > An autonomous client will have to have knowledge of those semantics in > order to correctly navigate the links. Unless you have some sort of super-AI > that is capable of inferring those semantics on-the-fly, that knowledge will > have to be programmed into the client ahead of time using some out-of-band > description. That client is now tightly coupled to a particular > representation. > > -- > Scott Banwart > > > Is it coupled to the representation or to link relations? There is a huge > difference IMO. I can change the representation and as long as the relations > stay the same, my client doesn't have to break. If I'm fortunate, I can > even leverage link relations that already exist. > > -- > blog: http://eikonne.wordpress.com > twitter: http://twitter.com/eikonne > > If the autonomous client needs to make decisions based on the content of the representation, then it will be tightly coupled to it.
For example, if the decision on which link to follow next is based on a status value in the representation, and the structure of that status value changes, the client will break. -- Scott Banwart
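[Editorial note: Eb's distinction, coupling to link relations rather than to a particular representation, can be sketched as below. The rel names, URIs, and markup are illustrative assumptions, not from any registered media type.]

```python
# A client that selects its next request by link relation survives changes
# to the document's structure, as long as the @rel vocabulary is stable.
import xml.etree.ElementTree as ET

XHTML_NS = "{http://www.w3.org/1999/xhtml}"

DOC = """<html xmlns="http://www.w3.org/1999/xhtml"><body>
  <a rel="next" href="/orders?page=2">more</a>
  <a rel="payment" href="/orders/7/payment">pay</a>
</body></html>"""

def href_for(doc, rel):
    """Return the target of the first hyperlink carrying the given relation."""
    root = ET.fromstring(doc)
    for a in root.iter(XHTML_NS + "a"):
        if rel in a.get("rel", "").split():
            return a.get("href")
    return None
```

The server can move, re-nest, or restyle the anchors freely; the client only breaks if a relation it depends on disappears, which is exactly the coupling Eb describes.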
In my experience, the way to write "autonomous" clients is to always write them to understand only the _structure_ of the message, and never attempt to get M2M clients to understand the _content_ or _meaning_ of the message. Therefore, the only "binding" that needs to be done is on the message's structural elements. This means programmers and media type designers may need to change their own understanding of what constitutes structure in a message (the whole message, BTW). For example, while it seems obvious that XML-style element names and attribute names are structure, it is also possible to view some of the _values_ of these elements as structure. More than one existing media type definition limits the possible _values_ of select attributes and elements; therefore converting these values into structural elements. Most of the reasons I have for authoring my own media types are focused on this one issue; making sure the message has a structure that my client applications can use to accomplish their tasks. It would be nice if structural information would be _layered_ onto existing media types (XHTML for example), too. I am quite sure this is possible, but I've not found many examples of this and have not had time to explore this area myself. If anyone has ideas/pointers on this, please feel free to pass them along. mca http://amundsen.com/blog/ http://twitter.com@mamund http://mamund.com/foaf.rdf#me #RESTFest 2010 http://rest-fest.googlecode.com On Sun, Dec 5, 2010 at 14:02, Scott Banwart <sbanwart@...> wrote: > > > > > On Sun, Dec 5, 2010 at 12:42 PM, Eb <amaeze@...> wrote: > >> >> >> On 12/05/2010 12:19 PM, Scott Banwart wrote: >> >> >> >> >> >> On Sun, Dec 5, 2010 at 7:18 AM, Eric J. Bowman <eric@...>wrote: >> >>> >>> >>> I don't understand the problem expressed in the last thread, at all. 
>>> Here's a real-world example of about as simple a REST API as you'll see: >>> >>> http://www.iana.org/cgi-bin/mediatypes.pl >>> >>> It's obviously m2m otherwise you couldn't google for it, and it >>> wouldn't need CAPTCHA, despite being tag soup. Let's call that v1. >>> I've given it the Web Standards treatment. Let's call this v2, and >>> worry about styling it for visual rendering (h2m) later (or never): >>> >>> >> While technically a web spider would count as m2m interaction, it makes a >> lousy example. A web spider blindly follows the links it finds as it is >> concerned only with following links and cataloging the response. A web >> spider doesn't need understand the semantic meaning of the response and it >> doesn't need to use that information to choose the proper link to follow. >> >> An autonomous client will have to have knowledge of those semantics in >> order correctly navigate the links. Unless you have some sort of super-AI >> that is capable of inferring those semantics on-the-fly, that knowledge will >> have to be programmed into the client ahead of time using some out-of-band >> description. That client is now tightly coupled to a particular >> representation. >> >> -- >> Scott Banwart >> >> >> Is it coupled to the representation or to link relations? There is a huge >> difference IMO. I can change representation and as long as the relations >> stay the same, my client doesn't have to break. If I'm fortunate, I can >> even leverage link relations that already exist. >> >> -- >> blog: http://eikonne.wordpress.com >> twitter: http://twitter.com/eikonne >> >> ._,_.__ >> > > If the autonomous client needs to make decisions based on the content of > the representation, then it will be tightly coupled to it. For example, if > the decision on what link to follow next is based on a status value in the > representation, if the structure of the status value changes, the client > will break. > > -- > Scott Banwart > > > >
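[Editorial note: Mike's point about treating constrained attribute *values* as structure can be sketched as below. The status vocabulary, class names, and markup are a hypothetical illustration of his approach, not an existing media type: the client binds only to the declared structural tokens and never interprets the free-text content.]

```python
# Per an imaginary media type spec: <li class="order"> items carry a child
# whose @class is one of a closed set of status tokens. Those tokens are
# structure; the human-readable text is unconstrained and ignored.
import xml.etree.ElementTree as ET

STATUS_TOKENS = {"pending", "shipped", "cancelled"}

DOC = """<ul>
  <li class="order"><span class="shipped">Left the warehouse Tuesday!</span></li>
  <li class="order"><span class="pending">We will get to it, honest.</span></li>
</ul>"""

def order_statuses(doc):
    """Read each order's status from its structural token only."""
    statuses = []
    for li in ET.fromstring(doc).iter("li"):
        if "order" in li.get("class", "").split():
            for child in li:
                token = set(child.get("class", "").split()) & STATUS_TOKENS
                if token:
                    statuses.append(token.pop())
    return statuses
```

The prose inside each span can change at will without breaking the client, because the spec never promised anything about it; only a change to the token vocabulary itself is a breaking change.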
On Sun, Dec 5, 2010 at 3:06 PM, mike amundsen <mamund@...> wrote: > In my experience, the way to write "autonomous" clients is to always write > them to understand only the _structure_ of the message, and never attempt to > get M2M clients to understand the _content_ or _meaning_ of the > message. Therefore, the only "binding" that needs to be done is on the > message's structural elements. > > This means programmers and media type designers may need to change their > own understanding of what constitutes structure in a message (the whole > message, BTW). For example, while it seems obvious that XML-style element > names and attribute names are structure, it is also possible to view some of > the _values_ of these elements as structure. More than one existing media > type definition limits the possible _values_ of select attributes and > elements; therefore converting these values into structural elements. > > Most of the reasons I have for authoring my own media types are focused on > this one issue; making sure the message has a structure that my client > applications can use to accomplish their tasks. > > It would be nice if structural information would be _layered_ onto existing > media types (XHTML for example), too. I am quite sure this is possible, but > I've not found many examples of this and have not had time to explore this > area myself. If anyone has ideas/pointers on this, please feel free to pass > them along. > > mca > http://amundsen.com/blog/ > http://twitter.com@mamund > http://mamund.com/foaf.rdf#me > > > #RESTFest 2010 > http://rest-fest.googlecode.com > > How then is a media type different from a WSDL contract? In either case a change to the message structure can break a client. Am I missing something? I too would be very interested to see message structure layered onto XHTML. I'm guessing the tricky part there would be providing a way to identify the message structure since the media type would be generic. 
-- Scott Banwart http://rogue-technology.com/blog/ http://twitter.com/sbanwart http://identi.ca/sbanwart
<snip> > How then is a media type different from a WSDL contract? </snip> Since WSDL _is_ a registered media type, i suspect you mean something else by your question. <snip> In either case a change to the message structure can break a client. </snip> yep. we can break all media types by making changes. it's really easy to screw up image formats (PNG, JPEG, etc.). lucky for most, HTML was designed to make it pretty hard to "break" clients, but i've done it<g>. again, i suspect i'm not getting the specifics implied in your comment here. <snip> > I too would be very interested to see message structure layered onto XHTML. </snip> the @profile provides a _way_ to do this, but implementation details are left undefined. i am currently experimenting with a number of possibilities in this area. <snip> > I'm guessing the tricky part there would be providing a way to identify the message structure since the media type would be generic. </snip> yes, this is the tricky part. the base media type need not be "generic" in order to make this tricky, BTW. mca http://amundsen.com/blog/ http://twitter.com@mamund http://mamund.com/foaf.rdf#me #RESTFest 2010 http://rest-fest.googlecode.com On Sun, Dec 5, 2010 at 17:20, Scott Banwart <sbanwart@...> wrote: > On Sun, Dec 5, 2010 at 3:06 PM, mike amundsen <mamund@...> wrote: >> >> In my experience, the way to write "autonomous" clients is to always write >> them to understand only the _structure_ of the message, and never attempt to >> get M2M clients to understand the _content_ or _meaning_ of the >> message. Therefore, the only "binding" that needs to be done is on the >> message's structural elements. >> This means programmers and media type designers may need to change their >> own understanding of what constitutes structure in a message (the whole >> message, BTW). 
For example, while it seems obvious that XML-style element >> names and attribute names are structure, it is also possible to view some of >> the _values_ of these elements as structure. More than one existing media >> type definition limits the possible _values_ of select attributes and >> elements; therefore converting these values into structural elements. >> Most of the reasons I have for authoring my own media types are focused on >> this one issue; making sure the message has a structure that my client >> applications can use to accomplish their tasks. >> It would be nice if structural information would be _layered_ onto >> existing media types (XHTML for example), too. I am quite sure this is >> possible, but I've not found many examples of this and have not had time to >> explore this area myself. If anyone has ideas/pointers on this, please feel >> free to pass them along. >> mca >> http://amundsen.com/blog/ >> http://twitter.com@mamund >> http://mamund.com/foaf.rdf#me >> >> >> #RESTFest 2010 >> http://rest-fest.googlecode.com >> > > How then is a media type different from a WSDL contract? In either case a > change to the message structure can break a client. > > Am I missing something? > > I too would be very interested to see message structure layered onto XHTML. > I'm guessing the tricky part there would be providing a way to identify the > message structure since the media type would be generic. > > -- > Scott Banwart > http://rogue-technology.com/blog/ > http://twitter.com/sbanwart > http://identi.ca/sbanwart > >
That would probably have read better as "a contract expressed as WSDL". You've answered my question though. REST doesn't really address the problem of contract evolution from an m2m standpoint. I'm still stuck coordinating changes between media types and autonomous clients to prevent breakage. I just wanted to make sure that was the case lest I build something that isn't RESTful. On Sun, Dec 5, 2010 at 5:52 PM, mike amundsen <mamund@...> wrote: > <snip> > > How then is a media type different from a WSDL contract? > </snip> > Since WSDL _is_ a registered media type, i suspect you mean something > else by your question. > > <snip> > In either case a change to the message structure can break a client. > </snip> > yep. we can break all media types by making changes. it's really easy > to screw up image formats (PNG, JPEG, etc.). lucky for most, HTML was > designed to make it pretty hard to "break" clients, but i've done > it<g>. again, i suspect i'm not getting the specifics implied in your > comment here. > > <snip> > > I too would be very interested to see message structure layered onto > XHTML. > </snip> > the @profile provides a _way_ to do this, but implementation details > are left undefined. i am currently experimenting with a number of > possibilities in this area. > > <snip> > > I'm guessing the tricky part there would be providing a way to identify > the message structure since the media type would be generic. > </snip> > yes, this is the tricky part. the base media type need not be > "generic" in order to make this tricky, BTW. 
> > mca > http://amundsen.com/blog/ > http://twitter.com@mamund > http://mamund.com/foaf.rdf#me > > > #RESTFest 2010 > http://rest-fest.googlecode.com > > > > > On Sun, Dec 5, 2010 at 17:20, Scott Banwart <sbanwart@...> wrote: > > On Sun, Dec 5, 2010 at 3:06 PM, mike amundsen <mamund@...> wrote: > >> > >> In my experience, the way to write "autonomous" clients is to always > write > >> them to understand only the _structure_ of the message, and never > attempt to > >> get M2M clients to understand the _content_ or _meaning_ of the > >> message. Therefore, the only "binding" that needs to be done is on the > >> message's structural elements. > >> This means programmers and media type designers may need to change their > >> own understanding of what constitutes structure in a message (the whole > >> message, BTW). For example, while it seems obvious that XML-style > element > >> names and attribute names are structure, it is also possible to view > some of > >> the _values_ of these elements as structure. More than one existing > media > >> type definition limits the possible _values_ of select attributes and > >> elements; therefore converting these values into structural elements. > >> Most of the reasons I have for authoring my own media types are focused > on > >> this one issue; making sure the message has a structure that my client > >> applications can use to accomplish their tasks. > >> It would be nice if structural information would be _layered_ onto > >> existing media types (XHTML for example), too. I am quite sure this is > >> possible, but I've not found many examples of this and have not had time > to > >> explore this area myself. If anyone has ideas/pointers on this, please > feel > >> free to pass them along. > >> mca > >> http://amundsen.com/blog/ > >> http://twitter.com@mamund > >> http://mamund.com/foaf.rdf#me > >> > >> > >> #RESTFest 2010 > >> http://rest-fest.googlecode.com > >> > > > > How then is a media type different from a WSDL contract? 
In either case a > > change to the message structure can break a client. > > > > Am I missing something? > > > > I too would be very interested to see message structure layered onto > XHTML. > > I'm guessing the tricky part there would be providing a way to identify > the > > message structure since the media type would be generic. > > > > -- > > Scott Banwart > > http://rogue-technology.com/blog/ > > http://twitter.com/sbanwart > > http://identi.ca/sbanwart > > > > > -- Scott Banwart http://rogue-technology.com/blog/ http://twitter.com/sbanwart http://identi.ca/sbanwart
Scott: <snip> REST doesn't really address the problem of contract evolution from an m2m standpoint. </snip> or any other standpoint. Hypermedia-driven message design provides for including state transition descriptions within the message exchange itself rather than through an external "contract" document. The details on what these app controls look like and what information they cover are beyond Fielding's dissertation. It seems to me that many who read and comment on his work are focussed on [X]HTML as the example hypermedia type. The [X]HTML media type has solid state transition elements for human-driven cases but lacks good app controls for M2M work. I think this aspect of XHTML has led many to think using the REST model over HTTP is inappropriate for M2M interactions. So far, I don't share that POV. mca http://amundsen.com/blog/ http://twitter.com@mamund http://mamund.com/foaf.rdf#me #RESTFest 2010 http://rest-fest.googlecode.com On Sun, Dec 5, 2010 at 18:05, Scott Banwart <sbanwart@...> wrote: > That would probably have read better as "a contract expressed as WSDL". > You've answered my question though. REST doesn't really address the problem > of contract evolution from an m2m standpoint. I'm still stuck coordinating > changes between media types and autonomous clients to prevent breakage. I > just wanted to make sure that was the case lest I build something that isn't RESTful. > > On Sun, Dec 5, 2010 at 5:52 PM, mike amundsen <mamund@...> wrote: >> >> <snip> >> > How then is a media type different from a WSDL contract? >> </snip> >> Since WSDL _is_ a registered media type, i suspect you mean something >> else by your question. >> >> <snip> >> In either case a change to the message structure can break a client. >> </snip> >> yep. we can break all media types by making changes. it's really easy >> to screw up image formats (PNG, JPEG, etc.). lucky for most, HTML was >> designed to make it pretty hard to "break" clients, but i've done >> it<g>. 
again, i suspect i'm not getting the specifics implied in your >> comment here. >> >> <snip> >> > I too would be very interested to see message structure layered onto >> > XHTML. >> </snip> >> the @profile provides a _way_ to do this, but implementation details >> are left undefined. i am currently experimenting with a number of >> possibilities in this area. >> >> <snip> >> > I'm guessing the tricky part there would be providing a way to identify >> > the message structure since the media type would be generic. >> </snip> >> yes, this is the tricky part. the base media type need not be >> "generic" in order to make this tricky, BTW. >> >> mca >> http://amundsen.com/blog/ >> http://twitter.com@mamund >> http://mamund.com/foaf.rdf#me >> >> >> #RESTFest 2010 >> http://rest-fest.googlecode.com >> >> >> >> >> On Sun, Dec 5, 2010 at 17:20, Scott Banwart <sbanwart@...> wrote: >> > On Sun, Dec 5, 2010 at 3:06 PM, mike amundsen <mamund@...> wrote: >> >> >> >> In my experience, the way to write "autonomous" clients is to always >> >> write >> >> them to understand only the _structure_ of the message, and never >> >> attempt to >> >> get M2M clients to understand the _content_ or _meaning_ of the >> >> message. Therefore, the only "binding" that needs to be done is on the >> >> message's structural elements. >> >> This means programmers and media type designers may need to change >> >> their >> >> own understanding of what constitutes structure in a message (the whole >> >> message, BTW). For example, while it seems obvious that XML-style >> >> element >> >> names and attribute names are structure, it is also possible to view >> >> some of >> >> the _values_ of these elements as structure. More than one existing >> >> media >> >> type definition limits the possible _values_ of select attributes and >> >> elements, thereby converting these values into structural elements. 
>> >> Most of the reasons I have for authoring my own media types are focused >> >> on >> >> this one issue; making sure the message has a structure that my client >> >> applications can use to accomplish their tasks. >> >> It would be nice if structural information would be _layered_ onto >> >> existing media types (XHTML for example), too. I am quite sure this is >> >> possible, but I've not found many examples of this and have not had >> >> time to >> >> explore this area myself. If anyone has ideas/pointers on this, please >> >> feel >> >> free to pass them along. >> >> mca >> >> http://amundsen.com/blog/ >> >> http://twitter.com@mamund >> >> http://mamund.com/foaf.rdf#me >> >> >> >> >> >> #RESTFest 2010 >> >> http://rest-fest.googlecode.com >> >> >> > >> > How then is a media type different from a WSDL contract? In either case >> > a >> > change to the message structure can break a client. >> > >> > Am I missing something? >> > >> > I too would be very interested to see message structure layered onto >> > XHTML. >> > I'm guessing the tricky part there would be providing a way to identify >> > the >> > message structure since the media type would be generic. >> > >> > -- >> > Scott Banwart >> > http://rogue-technology.com/blog/ >> > http://twitter.com/sbanwart >> > http://identi.ca/sbanwart >> > >> > > > > > -- > Scott Banwart > http://rogue-technology.com/blog/ > http://twitter.com/sbanwart > http://identi.ca/sbanwart > >
Scott Banwart wrote: > > You've answered my question though. REST doesn't really address the > problem of contract evolution from an m2m standpoint. > Of course not. Contracts govern the interaction between components, which is a separate concern from messaging between connectors. REST covers messaging between connectors, in a uniform manner independent of any contract governing the behavior of components. -Eric
Scott Banwart wrote: > > An autonomous client will have to have knowledge of those semantics > in order to correctly navigate the links. Unless you have some sort of > super-AI that is capable of inferring those semantics on-the-fly, > that knowledge will have to be programmed into the client ahead of > time using some out-of-band description. That client is now tightly > coupled to a particular representation. > I'm still not seeing the problem. Any developer who coded a client against v1 of my example will still have a functional client when v2 is rolled out -- they've coded an automaton which knows what data to plug into what variables, formatted as an urlencoded query URI. Any human can manipulate v2 as easily as v1, because the user-agent automatically updated. I wouldn't expect an m2m client to auto-update. But any developer can update the code of such an m2m client by following the annotated template the HTML of v2 provides, at their leisure. The client is coupled to the API, not the representation; how is this avoided outside of REST? Those are the super-AI clients I'm failing to understand... What out-of-band knowledge is required to interpret the v2 API, aside from the media type, which lays out explicitly that <dt> and <th> are metadata, while <dd> and <td> are data, etc., allowing the name-value pairs in the representation to be easily discerned? -Eric
Something else to note about the example I coded -- it only took a few hours, using CSE Validator Pro v10 as the entire toolchain. The result is a document which validates to its schema, with QC assurance of its interoperability based on adherence to best current practices. When you mint a new data type, what toolchain exists to assure accessibility and validation to promote interoperability? How many hours would it take someone who's never looked at your representation before, to figure it out let alone version it? When folks start talking alternatives like WSDL, XSD and code generation, I shudder to think of all the added complexity brought to bear to solve a bunch of problems long accounted for in HTML, preferring to actually *complete* projects. -Eric
Peter Williams wrote: > > Eric J. Bowman wrote: > > Peter Williams wrote: > >> > >> I use the term in the spirit of the dissertation. "The data > >> format of a representation is known as a media type" (para 5, > >> section 5.2.1.2). > >> > > > > You're leaving out the key to that sentence, which is "[48]", which > > is a link to RFC 2048 to provide a definition of the term "media > > type". The spirit of the dissertation is that media types are only > > found in the IANA registry, because they're part of the IP layer > > re-used in RESTful protocols like HTTP, or other protocols like > > SPDY. Nothing else is a "media type", by definition, and it causes > > endless confusion to use the term otherwise, IMO. > > So if a restful protocol existed that identified message format and > semantics in a way other than using mime media type strings, it would > have to use different terminology than that used in the dissertation? I > think that link is there because iana is the most complete list of > media types in existence. > The IANA registry is the *only* list of media types in existence, because nothing else qualifies as a media type. Waka and HTTP 2 won't have headers named "Content-Type" and will most likely introduce a layer of indirection between the value of whatever the header is called, and the full strings we're used to seeing now, but will still be referring to media types. Anyway, it was previously discussed that the sentence in question is a bit buggy, which is in part to blame for the confusion. REST can be instantiated over IP by any protocol using media types, and meeting the other constraints like caching, not just by HTTP. But the media type is the means by which resource and representation are decoupled. REST doesn't preclude defining some other registry of media types, which follows the definition of media type, yet follows different procedures and uses different trees -- but this must happen at the IP layer, not within HTTP APIs. -Eric
mike amundsen wrote: > > the @profile provides a _way_ to do this, but implementation details > are left undefined. i am currently experimenting with a number of > possibilities in this area. > Or RDFa. If a domain-specific vocabulary existed for the problem of submitting media type applications, then I'd be able to further annotate my markup. The result is that an m2m client would be able to interpret the API based on the standardized vocabulary, regardless of what I name my variables. I'd be able to change the API *and* auto-update m2m clients. -Eric
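The decoupling being described here can be sketched roughly as follows. Instead of binding a client to an element name or @id, the client selects the field by a standardized vocabulary term; note that the `data-term` attribute and the `ex:registrantName` term below are made-up stand-ins for illustration, where real RDFa would use attributes like @property with a published ontology:

```python
from html.parser import HTMLParser

class VocabFinder(HTMLParser):
    """Find the form field bound to a given vocabulary term,
    regardless of what the field itself is named."""
    def __init__(self, term):
        super().__init__()
        self.term = term
        self.field_name = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        # 'data-term' is a hypothetical stand-in for a real RDFa
        # annotation referencing a published ontology.
        if tag == 'input' and a.get('data-term') == self.term:
            self.field_name = a.get('name')

def field_for(page_html, term):
    """Return the @name of the input annotated with the given term."""
    finder = VocabFinder(term)
    finder.feed(page_html)
    return finder.field_name

# v1 calls the field 'name'; v2 renames it 'registrant-name'.
V1 = "<form><input data-term='ex:registrantName' name='name'/></form>"
V2 = "<form><input data-term='ex:registrantName' name='registrant-name'/></form>"
```

A client that resolves `field_for(page, 'ex:registrantName')` when building its request keeps working across the v1-to-v2 rename without a code change, which is the "auto-update" property being claimed.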
> > Or RDFa. If a domain-specific vocabulary existed for the problem of > submitting media type applications, then I'd be able to further > annotate my markup. The result is that an m2m client would be able > to interpret the API based on the standardized vocabulary, regardless > of what I name my variables. I'd be able to change the API *and* > auto-update m2m clients. > I've also mentioned using GRDDL to transform RDFa into a standalone RDF document which uses typed relations to describe hypertext controls. The takeaway is that REST is step one -- the solid foundation upon which these outside-the-scope-of-REST m2m implementations may be built, using ubiquitous media types to describe the API. -Eric
On Sun, Dec 5, 2010 at 5:12 PM, Eric J. Bowman <eric@...> wrote: <snip/> > Anyway, it was previously discussed that the sentence in question is a > bit buggy, which is in part to blame for the confusion. REST can be > instantiated over IP by any protocol using media types, and meeting the > other constraints like caching, not just by HTTP. But the media type > is the means by which resource and representation are decoupled. I see in table 5-1[1] "media type" is listed in the modern web examples column. That does seem to bolster your argument. However, the dissertation uses the term while describing the style rather than http specifically. Your interpretation would seem to imply that a stack (networking system and restful application protocol) that eschewed iana registered media types would have to use a different term for "the flavor of representations". Perhaps it is a bug that the dissertation uses "media type" in the way it does. However, that cannot be undone. Given that the dissertation does use the term the way it does, i stand by my assertion that there are two valid uses of "media type". One which means a data format that is registered with iana, and another that means a type, defined by a specification, of representation used in a restful system. Regardless, it seems a point of little importance. [1]: <http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm#tab_5_1> Peter barelyenough.org
On Sun, Dec 5, 2010 at 4:49 PM, Eric J. Bowman <eric@...> wrote: > Scott Banwart wrote: >> >> An autonomous client will have to have knowledge of those semantics >> in order to correctly navigate the links. Unless you have some sort of >> super-AI that is capable of inferring those semantics on-the-fly, >> that knowledge will have to be programmed into the client ahead of >> time using some out-of-band description. That client is now tightly >> coupled to a particular representation. >> > > I'm still not seeing the problem. Any developer who coded a client > against v1 of my example will still have a functional client when v2 > is rolled out -- they've coded an automaton which knows what data to > plug into what variables, formatted as an urlencoded query URI. > > Any human can manipulate v2 as easily as v1, because the user-agent > automatically updated. I wouldn't expect an m2m client to auto- > update. But any developer can update the code of such an m2m client by > following the annotated template the HTML of v2 provides, at their > leisure. The client is coupled to the API, not the representation; how > is this avoided outside of REST? Those are the super-AI clients I'm > failing to understand... > > What out-of-band knowledge is required to interpret the v2 API, aside > from the media type, which lays out explicitly that <dt> and <th> are > metadata, while <dd> and <td> are data, etc. allowing the name-value > pairs in the representation to be easily discerned? Let me try to continue your example. Say i write a script to crawl my wiki and apply for media types based on the documentation there. Of course, i point my script to the iana page. Scenario (a): Iana adds a "donate" form. My script breaks because it expected only one form on the page. The out-of-band knowledge was that there is only one form on the page. Adding a second form to a page is perfectly legal in (x)html. 
There is no way for my script to tell the server it can only work with the single-form version of the page. Scenario (b): I point my script to your page. It does not work, of course, because there is no input named "name". Your page is an incompatible derivative of the iana page. However, that is not surfaced explicitly anywhere. Worse yet, what the script can/should expect is not documented anywhere. In both cases the fundamental issue is that my script is coupled to a specific service implementation rather than a media type. Any change in the specific implementation is likely to cause issues. (Browsers, on the other hand, are coupled to the html media type. They can take any valid html document and render it successfully.) Peter barelyenough.org
Peter Williams wrote: > > Let me try to continue your example. Say i write a script to crawl my > wiki and apply for media types based on the documentation there. Of > course, i point my script to the iana page. > > Scenario (a): Iana adds a "donate" form. My script breaks because it > expected only one form on the page. The out-of-band knowledge was > that there is only one form on the page. Adding a second form to a > page is perfectly legal in (x)html. There is no way for my script to > tell the server it can only work with the single-form version of the > page. > I still don't follow. If you coded up a client according to the self-documenting API, the presence of a new form would not change the behavior of that client -- it doesn't change how URLs are constructed or the target of the POST. > > Scenario (b): I point my script to your page. It does not work, of > course, because there is no input named "name". > Why would your script be looking for that input? Why would you expect to not have to re-code the script if the API changes? Why not just treat the HTML as a recipe? > > Your page is an incompatible derivative of the iana page. However, > that is not surfaced explicitly anywhere. Worse yet, what the script > can/should expect is not documented anywhere. > The v2 API is self-documenting. It's obvious that sending a registrant name is now the variable 'registrant-name', so change the script to send that instead of the variable 'name', etc. The assumption is that the page I made would replace the previous one, at the same URI -- pretty explicit that this is the new version. I fail to see what isn't documented. > > In both cases the fundamental issue is that my script is coupled to a > specific service implementation rather than a media type. Any change > in the specific implementation is likely to cause issues. (Browsers, > on the other hand, are coupled to the html media type. They can take > any valid html document and render it successfully.) 
> No, the changes are not going to break any client whose developer used the HTML as a guideline to coding the behavior of the client, by treating it as API documentation. The HTML can always be modified, and an ontology created such that looking for a specific RDFa attribute with a specific value returns the current variable name. I don't see how any of these problems are solved by creating a custom media type, or disappear when a custom media type is used (unless it's being treated as an object key instead of as a processing model declaration). The HTML I wrote clearly specifies the API, in a way which can be understood by anyone on the planet who speaks English; any misunderstandings are cleared up by actually driving the API using that active documentation. I'm still not seeing the alleged drawbacks here. -Eric
I see that now. I was trying to conflate links-as-API with REST. After further review, I now see that the API/contract is an orthogonal concern. On Sun, Dec 5, 2010 at 6:38 PM, Eric J. Bowman <eric@...>wrote: > Scott Banwart wrote: > > > > You've answered my question though. REST doesn't really address the > > problem of contract evolution from an m2m standpoint. > > > > Of course not. Contracts govern the interaction between components, > which is a separate concern from messaging between connectors. REST > covers messaging between connectors, in a uniform manner independent of > any contract governing the behavior of components. > > -Eric > -- Scott Banwart http://rogue-technology.com/blog/ http://twitter.com/sbanwart http://identi.ca/sbanwart
--- In rest-discuss@yahoogroups.com, "Eric J. Bowman" <eric@...> wrote: > I'm still not seeing the alleged drawbacks here. > > -Eric > Well, I think that any client coded up to the format specification (in this case HTML) should be able to interact with any RESTful service that claims to use that format for its representations. And sure -- you are using XHTML and any browser can interact with your service. Ok. But these M2M clients that are keying off of the id values etc. (which one can argue is "out of band knowledge" -- but it is questionable) can't interact with ANY service that is similarly designed using the same media type. The issue is this: the meaning of the id values is described in the HTML you are serving, but is really only readable by a human being who is doing the coding. It is an odd sort of "out of band" -- it is sort of like a signal embedded within a signal. But the client itself is not picking up the extra signal -- the human being developing (not using) the client is. I still consider it out of band but see how this is debatable. To me your example, while hard to argue against, still doesn't yield the benefits I'd like to see from a RESTful m2m system. With your methodology, I still can't write a single client that a wide range of services can be targeted against. I write a client to your v1 system with the benefit that it will still work when you upgrade to v2 -- it is a nice benefit but still not the end goal that at least I am shooting for. I keep hearing that what I'm targeting is not possible -- thing is I know that it is, because I've done it for specific use cases. The problem is boiling it down to the key essential, generic methodologies that apply across domains. I'm continuing to plug away at it when I have time though... Andrew
On Sun, Dec 5, 2010 at 10:24 PM, Eric J. Bowman <eric@...> wrote: > Peter Williams wrote: >> >> Let me try to continue your example. Say i write a script to crawl my >> wiki and apply for media types based on the documentation there. Of >> course, i point my script to the iana page. >> >> Scenario (a): Iana adds a "donate" form. My script breaks because it >> expected only one form on the page. The out-of-band knowledge was >> that there is only one form on the page. Adding a second form to a >> page is perfectly legal in (x)html. There is no way for my script to >> tell the server it can only work with the single-form version of the >> page. >> > > I still don't follow. If you coded up a client according to the self- > documenting API, the presence of a new form would not change the > behavior of that client -- it doesn't change how URLs are constructed > or the target of the POST. What "self-documenting API" are you speaking of? In this example, my thought was that the script would GET <http://www.iana.org/cgi-bin/mediatypes.pl> and follow any redirects. It will then inspect the representation received for a form (based on its out-of-band knowledge that the page it is looking for contains a form). It would then construct a representation to send to the server by iterating over the input elements of the form, setting the values to ones extracted from my wiki (based on its out-of-band knowledge about what the various fields in the form mean). It would then make a request to the resource specified in the form's @action with the method specified in the form's @method and the body constructed in the previous step. This would fail, or work only by coincidence, if a second form were added to the page. If it was the sort of system that was ok to break occasionally I would probably just live with the coupling. Decoupling has real costs and you should make sure you need it. 
If, on the other hand, it was a high value system, i would create a media type that specified the form and its inputs. That would bring the knowledge needed by the script in-band, and allow the script to inform the server of its requirements. It would also allow clients to work reliably with more than one provider of the service. I think this may be in line with Mike's approach of converting content into structure by specifying it in the media type. (Correct me if i misunderstood you, Mike.) How would you design such a system to avoid this coupling? Peter
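The crawling script described above can be sketched in a few lines; the page markup and wiki field names below are hypothetical stand-ins, and the parsing is deliberately naive about multiple forms, which is exactly the fragile out-of-band assumption under discussion:

```python
from html.parser import HTMLParser
from urllib.parse import urlencode

class FormExtractor(HTMLParser):
    """Collect every <form> on a page: its @action, @method, and input names."""
    def __init__(self):
        super().__init__()
        self.forms = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == 'form':
            self.forms.append({'action': a.get('action'),
                               'method': a.get('method', 'get').lower(),
                               'fields': []})
        elif tag == 'input' and self.forms:
            self.forms[-1]['fields'].append(a.get('name'))

def build_request(page_html, wiki_values):
    """Fill the page's single form from the wiki's data, honoring the
    form's @action and @method as the hypermedia controls dictate."""
    parser = FormExtractor()
    parser.feed(page_html)
    if len(parser.forms) != 1:
        # The out-of-band assumption surfaces right here: the script
        # only knows what to do when exactly one form is present.
        raise ValueError('expected exactly one form on the page')
    form = parser.forms[0]
    body = urlencode({f: wiki_values[f] for f in form['fields'] if f in wiki_values})
    return form['method'], form['action'], body
```

Adding a second, perfectly legal form to the page (scenario (a)'s "donate" form) makes `build_request` fail, even though nothing about the original form changed.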
Peter Williams wrote: > > > > > I still don't follow. If you coded up a client according to the > > self- documenting API, the presence of a new form would not change > > the behavior of that client -- it doesn't change how URLs are > > constructed or the target of the POST. > > What "self-documenting API" are you speaking of? > http://charger.bisonsystems.net/mediatypes.htm Every form field has a label attached, and links to the documentation describing the requirements for the field. Every set of name-value pairs is clearly indicated; any client which groks HTML can generate a list. It is clearly described that these name-value pairs are POSTed to the target URI as an urlencoded query string. Out-of-band information like how to generate name-value pairs from the form code, is a commonly shared understanding defined by the media type. What other information is needed to use this API, that isn't in-band? > > In this example, my thought was that the script would GET > <http://www.iana.org/cgi-bin/mediatypes.pl> and follow any redirects. > It will then inspect the representation received for a form (based on > it's out-of-band knowledge that the page it is looking for contains a > form). > It's going to take more out-of-band knowledge than that. I don't understand the expectation that a form be self-describing such that it's generally usable by random, automated clients, as an argument in favor of attempting such functionality through the minting of custom media types. Domain-specific vocabulary, i.e. RDF ontologies, seems to be the future direction; which has no bearing on media type beyond which ubiquitous ones to choose to represent the API as an active document. > > It would then construct a representation to send to the server > by iterating over the input elements of the form, seting the values to > ones extract from my wiki (based on it's out-of-band knowledge about > what the various fields in the form mean). 
It would then make a > request to the resource specified in the form's @action with the method > specified in the form's @method and the body constructed in the previous > step. > Why would it do that? I'd write such a client by first determining the name-value pairs the API is expecting, tell it which associated data from the wiki goes where, and POST to the target as an urlencoded query. > > This would fail, or work only by coincidence, if a second form were > added to the page. > My way doesn't fail or break, but continues to work so long as the server supports the name-value pairs as POSTed query strings. I don't understand the requirement that I shouldn't have to hard-code anything. > > It would also allow clients to work reliably with more than one > provider of the service. I think this may be in line with Mike's > approach of converting content into structure by specifying it in the > media type. (Correct me if i misunderstood you, Mike.) > > How would you design such a system to avoid such coupling? > I already have? Any service provider can support receiving POSTs with the specified name-value pairs as urlencoded queries, using whatever markup/forms language they want. I understand what you guys are saying, what I don't understand is why this is a problem if a domain-specific ontology (like GoodRelations) exists. With an ontology, the markup doesn't matter, because clients are bound to, for example, @instance-of='foo' rather than any markup language or structure, allowing variation between sites with less hard-coding. -Eric
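The hard-coded alternative argued for here might look like this minimal sketch; the endpoint URL and the field names are hypothetical stand-ins for whatever the self-documenting HTML page actually specifies:

```python
from urllib.parse import urlencode
from urllib.request import Request

# Transcribed once by a developer reading the HTML page, the way one
# reads API documentation. Both values below are hypothetical.
TARGET = 'http://example.org/cgi-bin/mediatypes.pl'
FIELDS = ('registrant-name', 'registrant-email', 'type', 'subtype-name')

def make_application(record):
    """POST the record's name-value pairs as an urlencoded query to the
    documented target. The page itself is never fetched or parsed at
    request time, so a second form appearing on it cannot change this
    client's behavior."""
    body = urlencode({f: record[f] for f in FIELDS})
    return Request(TARGET, data=body.encode('ascii'), method='POST')
```

The trade-off debated in the thread is visible in the sketch: this client cannot be broken by markup changes, but a server-side rename of any field (say, 'name' to 'registrant-name') requires a developer to re-read the page and edit `FIELDS` by hand.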
"wahbedahbe" wrote: > > And sure -- you are using XHTML and any browser can interact with > your service. Ok. But these M2M clients that are keying off of the id > values etc. (which one can argue is "out of band knowledge" -- but it > is questionable) can't interact with ANY service that is similarly > designed using the same media type. > Those id values are neither here nor there in terms of the API. The markup is how it is because of WCAG checkpoints, which means every field has a label bound to its @id using @for. I don't understand why an m2m client keys off of anything -- I've distributed an API in a standardized format which makes the name-value pairs obvious, as well as the format and method the server expects. Anyone can code an m2m client to make those POST requests with the specified name-value pairs. > > The issue is this: the meaning of the id values is described in the > HTML you are serving, but is really only readable by a human being > who is doing the coding. It is an odd sort of "out of band" -- it is > sort of like a signal embedded within a signal. But the client itself > is not picking up the extra signal -- the human being developing (not > using) the client is. I still consider it out of band but see how > this is debatable. > The id values aren't intended for human consumption, either. They are there for machine-readability by assistive devices, because that's the most-widely-interoperable manner of binding a label to a field. > > To me your example, while hard to argue against, still doesn't yield > the benefits I'd like to see from a RESTful m2m system. With your > methodology, I still can't write a single client that a wide range of > services can be targeted against. > It's possible to write such a client for shopping carts which re-use the GoodRelations ontology. But I still don't see why the desire to write such clients has anything to do with the media type. 
If you're creating media types which attempt to lead the user-agent around by the nose, it isn't REST -- in REST, the media type instructs the user-agent how to determine what the state transitions are, so the user can choose between them. Don't change this basic paradigm and tell me it's REST, or a proper use case for custom media types. > > I keep hearing that what I'm targeting is not possible -- thing is I > know that it is, because I've done it for specific use cases. The > problem is boiling it down to the key essential, generic > methodologies that apply across domains. I'm continuing to plug away > at it when I have time though... > What is possible, and what is RESTful, are not always the same thing. While it's possible to reach your goals by defining media types as object keys, the result is a library-based API instead of a network-based API -- the opposite of a uniform interface. So I say, use media types properly and look for solutions to this problem where it's properly solved -- domain-specific vocabulary embedded within ubiquitous media types. -Eric
Scott Banwart wrote: > > While technically a web spider would count as m2m interaction, it > makes a lousy example. > Disagree. Serendipitous re-use is the goal; if you've created a custom media type, how does Google know how to index it? How will Web or DNS accelerators interact with it? That is, unless it resembles HTML closely enough for sniffing to decide to treat it as HTML. There's a whole world of infrastructure out there which is coded to a limited number of ubiquitous media types -- don't throw that out for the sake of an optimization (using media types as object keys). > > A web spider blindly follows the links it finds as it is concerned > only with following links and cataloging the response. > Disagree. Spiders are increasingly sophisticated, googlebot has been interacting with GET forms for years now. Google understands the GoodRelations ontology, with increased quality of search results as the benefit for any site that implements it. None of which applies when a custom media type is used. This only works because the semantics are expressed using a standardized ontology for domain-specific vocabulary, which has no bearing on the media type (works with any markup language that supports RDFa); this is the component layer in REST. > > A web spider doesn't need understand the semantic meaning of the > response and it doesn't need to use that information to choose the > proper link to follow. > Like a web spider, an m2m client needs to understand what URIs in the content constitute links, first. The choice is made by comparing the link relations available, against the machine-user goal. If the goal is to use my v2 API to submit a media type application, an m2m client should first check to see if it's already registered (which assumes some future, machine-readable IANA registry); it knows where to look by searching for <link rel='index'/> (which I just added) by its Xpath. That's part of the paragraph of out-of-band documentation my API requires. 
If the m2m client just came from there, to find the application form, there's no need to dereference it. Either way, the m2m client knows the links for each top-level type, so this operation matches the link text to the text in the form's dropdown box for the selected type, and the subtype-name search continues. If the question is, how does the m2m client know how to find the application form from the index page, I'd stick role='form' on the link. A bit of a hack, I'm sure there are better ways, but I believe mechanisms do exist within (X)HTML to annotate links in such a way that machines can follow them. If the question is, how does the m2m client *infer* all of this vs. having to be coded against this interface, I'm confused because I don't see where this capability exists in any architectural style, or how defining custom media types would be a solution even if it were RESTful to do so for such purpose. -Eric
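Following a typed link the way this message describes, by searching the markup for a relation name rather than a hard-coded URI, can be sketched as follows; the page fragment in the test is hypothetical, standing in for a machine-readable registry index:

```python
from html.parser import HTMLParser

class RelFinder(HTMLParser):
    """Return the @href of the first <link> or <a> carrying a given @rel."""
    def __init__(self, rel):
        super().__init__()
        self.rel = rel
        self.href = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag in ('link', 'a') and self.href is None:
            # @rel holds a space-separated list of relation names.
            if self.rel in (a.get('rel') or '').split():
                self.href = a.get('href')

def href_for(page_html, rel):
    """Resolve a link relation to a URI reference, or None if absent."""
    finder = RelFinder(rel)
    finder.feed(page_html)
    return finder.href
```

Because the client binds to the relation name ('index') instead of the URI, the server remains free to move the index resource without breaking the client, which is the hypermedia property the thread keeps returning to.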
On Mon, Dec 6, 2010 at 2:45 PM, Eric J. Bowman <eric@...> wrote: > Peter Williams wrote: >> >> > >> > I still don't follow. If you coded up a client according to the >> > self-documenting API, the presence of a new form would not change >> > the behavior of that client -- it doesn't change how URLs are >> > constructed or the target of the POST. >> >> What "self-documenting API" are you speaking of? >> > > http://charger.bisonsystems.net/mediatypes.htm > > Every form field has a label attached, and links to the documentation > describing the requirements for the field. Every set of name-value > pairs is clearly indicated; any client which groks HTML can generate a > list. It is clearly described that these name-value pairs are POSTed > to the target URI as an urlencoded query string. Out-of-band > information like how to generate name-value pairs from the form code > is a commonly shared understanding defined by the media type. What > other information is needed to use this API, that isn't in-band? >> >> In this example, my thought was that the script would GET >> <http://www.iana.org/cgi-bin/mediatypes.pl> and follow any redirects. >> It will then inspect the representation received for a form (based on >> its out-of-band knowledge that the page it is looking for contains a >> form). >> > > It's going to take more out-of-band knowledge than that. I don't > understand the expectation that a form be self-describing such that > it's generally usable by random, automated clients, as an argument in > favor of attempting such functionality through the minting of custom > media types. Domain-specific vocabulary, i.e. RDF ontologies, seems to > be the future direction, which has no bearing on media type beyond > which ubiquitous ones to choose to represent the API as an active > document. Rdfa is an interesting technology. It might help in significant ways in this area. However, i don't think it is going to be a panacea. 
> >> >> It would then construct a representation to send to the server >> by iterating over the input elements of the form, setting the values to >> ones extracted from my wiki (based on its out-of-band knowledge about >> what the various fields in the form mean). It would then make a >> request to the resource specified in form's @action with the method >> specified in form's @method and the body constructed in the previous >> step. >> > > Why would it do that? I'd write such a client by first determining the > name-value pairs the API is expecting, tell it which associated data > from the wiki goes where, and POST to the target as an urlencoded query. Why would you ignore the hypermedia controls on the page? Why would you honor the target of the form and not the specification of the expected representation? What if the form is updated with a new drop-down? If you use the request representation specified in the form, the script would continue to work. If you don't, it would only work if the server is nice enough to support both the representation it specifies in the page and some other arbitrary inputs that poorly designed scripts might send. >> >> This would fail, or work only by coincidence, if a second form were >> added to the page. >> > > My way doesn't fail or break, but continues to work so long as the > server supports the name-value pairs as POSTed query strings. I don't > understand the requirement that I shouldn't have to hard-code anything. It doesn't break as long as the server doesn't change, or changes only in very limited ways. The client has an even harder time, because it is stuck parsing HTML when what it wants is something much more specific. >> >> It would also allow clients to work reliably with more than one >> provider of the service. I think this may be in line with Mike's >> approach of converting content into structure by specifying it in the >> media type. (Correct me if I misunderstood you, Mike.) 
>> >> How would you design such a system to avoid such coupling? >> > > I already have? Any service provider can support receiving POSTs with > the specified name-value pairs as urlencoded queries, using whatever > markup/forms language they want. I understand what you guys are saying; > what I don't understand is why this is a problem if a domain-specific > ontology (like GoodRelations) exists. With an ontology, the markup > doesn't matter, because clients are bound to, for example, > @instance-of='foo' rather than any markup language or structure, allowing > variation between sites with less hard-coding. Yes, anyone can write an autonomous client that is tightly bound to your particular service. Often that is good enough. RDFa may help in the future. However, I think it suffers too, because the expectations/requirements of the client are not surfaced explicitly in the request. Having multiple representations and using content negotiation can reduce the coupling and associated maintenance cost quite significantly. It allows servers to continue to support older autonomous clients, while continuing to evolve and provide significant new features. It also allows autonomous clients to interact with more than one implementation of a service by making the requirements and expectations explicit in a specification. Peter barelyenough.org
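Peter's hypermedia-driven approach can be sketched with nothing but the standard library: take the form's own @action and @method as authoritative, and build the request body only from the input names the form actually advertises. This is a rough sketch; the page markup and field names below are invented for illustration, not taken from Eric's actual registration form:

```python
from html.parser import HTMLParser
from urllib.parse import urlencode

class FormParser(HTMLParser):
    """Collect the first form's @action, @method, and its input names."""
    def __init__(self):
        super().__init__()
        self.action = None
        self.method = "GET"   # HTML's default when @method is absent
        self.fields = {}
        self._in_form = False

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "form" and not self._in_form:
            self._in_form = True
            self.action = a.get("action")
            self.method = (a.get("method") or "GET").upper()
        elif self._in_form and tag == "input" and a.get("name"):
            self.fields[a["name"]] = a.get("value") or ""

    def handle_endtag(self, tag):
        if tag == "form":
            self._in_form = False

# Hypothetical page; a real client would GET this representation first.
page = """
<form action="/cgi-bin/mediatypes.pl" method="post">
  <label for="reg">Registrant</label> <input id="reg" name="registrant-name"/>
  <label for="con">Contact</label> <input id="con" name="contact-email"/>
</form>
"""

p = FormParser()
p.feed(page)
p.close()

# Fill in values from the out-of-band source (Peter's wiki), then build
# exactly the body the form advertises, aimed at the form's own target.
p.fields.update({"registrant-name": "Example Org",
                 "contact-email": "admin@example.org"})
body = urlencode(p.fields)
print(p.method, p.action)   # POST /cgi-bin/mediatypes.pl
print(body)
```

If the server adds an input with a sensible default @value, a client built this way picks it up automatically; a client that hard-codes the name-value pair list would silently omit it, which is the failure mode Peter is describing.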
On Mon, Dec 6, 2010 at 4:56 PM, Eric J. Bowman <eric@...> wrote: > "wahbedahbe" wrote: >> >> And sure -- you are using XHTML and any browser can interact with >> your service. Ok. But these M2M clients that are keying off of the id >> values etc. (which one can argue is "out of band knowledge" -- but it >> is questionable) can't interact with ANY service that is similarly >> designed using the same media type. >> > > Those id values are neither here nor there in terms of the API. The > markup is how it is because of WCAG checkpoints, which means every > field has a label bound to its @id using @for. I don't understand why > an m2m client keys off of anything -- I've distributed an API in a > standardized format which makes the name-value pairs obvious, as well > as the format and method the server expects. Anyone can code an m2m > client to make those POST requests with the specified name-value pairs. > I think we have a different view on what benefits REST can provide. If I understand your POV then you are saying that REST makes it easy for a developer to understand the request body format (and I suppose parameters if applicable, for example, for form-url-encoded name-value pairs) and methods that are acceptable at the URIs exposed by the server. While this is true for your example, I am saying that REST can do better than that. I am saying that a client coded only to a media type (a STANDARD, REGISTERED one -- I am not debating that point) can adapt to a service that uses that type. The name-value pairs and methods at the service's URIs are not coded into the client at all. So in my ideal world, it is possible for the developer of the RESTful client to never see any of the representations of the service(s) the client will interact with. This is possible for browser development is it not? >> >> The issue is this: the meaning of the id values is described in the >> HTML you are serving, but is really only readable by a human being >> who is doing the coding. 
It is an odd sort of "out of band" -- it is >> sort of like a signal embedded within a signal. But the client itself >> is not picking up the extra signal -- the human being developing (not >> using) the client is. I still consider it out of band but see how >> this is debatable. >> > > The id values aren't intended for human consumption, either. They are > there for machine-readability by assistive devices, because that's the > most-widely-interoperable manner of binding a label to a field. > Fair enough. So how does the client in your model know "Ah the form that accepts variables foo and bar can be used now"? I had assumed the ids, is this not so? >> >> To me your example, while hard to argue against, still doesn't yield >> the benefits I'd like to see from a RESTful m2m system. With your >> methodology, I still can't write a single client that a wide range of >> services can be targeted against. >> > > It's possible to write such a client for shopping carts which re-use > the GoodRelations ontology. But I still don't see why the desire to > write such clients has anything to do with the media type. If you're > creating media types which attempt to lead the user-agent around by the > nose, it isn't REST -- in REST, the media type instructs the user-agent > how to determine what the state transitions are, so the user can choose > between them. Don't change this basic paradigm and tell me it's REST, > or a proper use case for custom media types. > It's not clear to me what the difference is between "leading the user-agent around by the nose" and "instructing the user-agent how to determine what the state transitions are". Can you clarify? >> >> I keep hearing that what I'm targeting is not possible -- thing is I >> know that it is, because I've done it for specific use cases. The >> problem is boiling it down to the key essential, generic >> methodologies that apply across domains. I'm continuing to plug away >> at it when I have time though... 
>> > > What is possible, and what is RESTful, are not always the same thing. > While it's possible to reach your goals by defining media types as > object keys, the result is a library-based API instead of a > network-based API -- the opposite of a uniform interface. So I say, use media > types properly and look for solutions to this problem where it's > properly solved -- domain-specific vocabulary embedded within > ubiquitous media types. > > -Eric > I'm definitely not espousing the "media type as object key" approach -- I think you are misinterpreting my statements. I'm trying to generalize the approach that I believe was used in HTML, VoiceXML, CCXML and some efforts of my own (though those did not get standardized/registered and thus perhaps aren't fully RESTful from that perspective). In all of these examples, the client was coded to a media type that was not specific to a service or its internal objects at all. In fact, I'd say that the media type was more "client centric" than "service centric". The representation, when loaded by the client, informed it how to set the internal state of processing resources on the client (the view rendered on the screen for HTML, prompt playback and loading speech-rec grammars for VoiceXML, the state of calls and conferences for CCXML). The representation also informed the client how to map events generated by those resources (mouse/keyboard events for HTML, speech-rec/dtmf events for VoiceXML, call/conference events for CCXML) to application state transitions (following links, submitting forms). I see the user-agent as a mediator between an interface to the user (human or machine) and the HTTP-based interface to the server. The representation customizes the mediation between these interfaces at each application state. I believe that the representation format design should be driven by the interface to the human or machine "user" that is specific to each type of user agent. 
The above was perhaps a lot to digest -- I haven't had much luck explaining this in short forum postings. I've broken down the long form of the argument into articles that I'm writing on my blog (http://linkednotbound.net/). It is slow going (at this rate it will take over a year) but hopefully I'll get there some day... Andrew
Andrew Wahbe wrote: > > I think we have a different view on what benefits REST can provide. If > I understand your POV then you are saying that REST makes it easy for > a developer to understand the request body format (and I suppose > parameters if applicable, for example, for form-url-encoded name-value > pairs) and methods that are acceptable at the URIs exposed by the > server. > We're basically on the same page. The issue is making APIs as machine-accessible as they are human-accessible. The discussion is how well HTML is suited to this task for the general (i.e. non-CCXML) case. My position is that profound benefits arise from having one representation serve both purposes -- accessible, self-documenting APIs reduce costs associated with development, deployment and maintenance; with out-of-the-box Internet scale and serendipitous re-use. If the data needs to be modeled as a hierarchical collection, then ditch the <head> and wrap the markup in Atom. This approach applies to myriad projects, without spending time creating and documenting markup languages, increasing developer productivity. Etc. > > The above was perhaps a lot to digest -- I haven't had much luck > explaining this in short forum postings. > You're making progress, but I'm a hard audience and I really believe your goals are achievable via RDFa in XHTML. > > While this is true for your example, I am saying that REST can do > better than that. I am saying that a client coded only to a media type > (a STANDARD, REGISTERED one -- I am not debating that point) can adapt > to a service that uses that type. The name-value pairs and methods at > the service's URIs are not coded into the client at all. So in my > ideal world, it is possible for the developer of the RESTful client to > never see any of the representations of the service(s) the client will > interact with. This is possible for browser development is it not? > OK, there are two approaches here. 
One is a human-readable API which guides the hard-coding of name-value pairs and their meaning. The other is a machine-readable API which auto-updates user-agents coded to the media type. I'm of the opinion a "polyglot" approach is possible. > > > > > The id values aren't intended for human consumption, either. They > > are there for machine-readability by assistive devices, because > > that's the most-widely-interoperable manner of binding a label to a > > field. > > > > Fair enough. So how does the client in your model know "Ah the form > that accepts variables foo and bar can be used now"? I had assumed the > ids, is this not so? > RDFa. I've updated the content of both links I posted, the DOCTYPE of both uses the XHTML+RDFa DTD. I only bothered annotating the contact information. Each application is associated with an URL, which scopes the markup describing three foaf:Person's. The policy, or contract, is to refer to the registrant as dc:publisher, the maintenance/information contact as dc:mediator and the editor as dc:creator. The same metadata can also be used to annotate the imaginary XForms document. Now there could be a distributed media type registry, with each provider collecting the same information in an interoperable fashion -- the variable names don't have to be registrant-, contact- and editor-; they could just as easily be reggie-, connie- and eddie-. So there's no hard-coding of name-value pairs, without being coupled to any one particular representation of the submission form; client components are as compatible as their understanding of the media type allows them to be. > > > > > It's possible to write such a client for shopping carts which re-use > > the GoodRelations ontology. But I still don't see why the desire to > > write such clients has anything to do with the media type. 
If > > you're creating media types which attempt to lead the user-agent > > around by the nose, it isn't REST -- in REST, the media type > > instructs the user-agent how to determine what the state > > transitions are, so the user can choose between them. Don't change > > this basic paradigm and tell me it's REST, or a proper use case for > > custom media types. > > > > It's not clear to me what the difference is between "leading the > user-agent around by the nose" and "instructing the user-agent how to > determine what the state transitions are". Can you clarify? > You probably aren't falling into this trap, but it does seem quite common to me, that media types are being used to key specific behaviors. Components shouldn't dispatch decision-trees based on media type, they should dispatch based on the hypertext representation. -Eric
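Eric's "no hard-coding of name-value pairs" idea can be made concrete with a deliberately naive pass over RDFa @property annotations, using only the standard library. This is nowhere near a conforming RDFa parser (no prefix mapping, no @about/@resource chaining), and the markup is invented rather than copied from the example pages, but it shows a client binding to the dc: terms rather than to variable names like registrant- or reggie-:

```python
from html.parser import HTMLParser

class PropertyScraper(HTMLParser):
    """Naive RDFa-ish pass: pair each @property with its element's text.
    A real client would use a conforming RDFa parser instead."""
    def __init__(self):
        super().__init__()
        self._prop = None
        self.pairs = []

    def handle_starttag(self, tag, attrs):
        prop = dict(attrs).get("property")
        if prop:
            self._prop = prop

    def handle_data(self, data):
        if self._prop and data.strip():
            self.pairs.append((self._prop, data.strip()))
            self._prop = None

# Invented registration markup following the dc:publisher / dc:mediator /
# dc:creator policy described above; the actual pages may differ.
doc = """
<div about="#registration">
  <span property="dc:publisher">Example Org</span>
  <span property="dc:mediator">Connie Contact</span>
  <span property="dc:creator">Eddie Editor</span>
</div>
"""

s = PropertyScraper()
s.feed(doc)
s.close()
print(dict(s.pairs))
```

The point is that the client above keeps working whether the form fields are named registrant- or reggie-, because it dispatches on the vocabulary, not the markup structure.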
> > Each application is associated with an URL, which scopes the markup... > Erm, each *registration* is associated, etc. -Eric
Peter Williams wrote: > > > > >> > >> In this example, my thought was that the script would GET > >> <http://www.iana.org/cgi-bin/mediatypes.pl> and follow any > >> redirects. It will then inspect the representation received for a > >> form (based on its out-of-band knowledge that the page it is > >> looking for contains a form). > >> > > > > It's going to take more out-of-band knowledge than that. I don't > > understand the expectation that a form be self-describing such that > > it's generally usable by random, automated clients, as an argument > > in favor of attempting such functionality through the minting of > > custom media types. Domain-specific vocabulary, i.e. RDF > > ontologies, seems to be the future direction; which has no bearing > > on media type beyond which ubiquitous ones to choose to represent > > the API as an active document. > > RDFa is an interesting technology. It might help in significant ways > in this area. However, I don't think it is going to be a panacea. > But it sounds like panacea is exactly what everyone wants... ;-) If RDFa isn't it, I believe it at least serves as proof-of-concept that the policy/contract problem is best solved at the component layer, not the connector layer. It at least points a reasonable way forward, providing an integration point for the linked-data world and REST architecture. I'd like to see RDFa support added to existing XML languages; it would be interesting to see the same domain-specific vocabulary used to manipulate both a Web interface and a telephony interface to the same goal. > > > > > Why would it do that? I'd write such a client by first determining > > the name-value pairs the API is expecting, tell it which associated > > data from the wiki goes where, and POST to the target as an > > urlencoded query. > > Why would you ignore the hypermedia controls on the page? Why would > you honor the target of the form and not the specification of the > expected representation? 
> > What if the form is updated with a new drop-down? If you use the > request representation specified in the form, the script would continue to > work. If you don't, it would only work if the server is nice enough to > support both the representation it specifies in the page and some > other arbitrary inputs that poorly designed scripts might send. > OK. So what we're after isn't just the method, target, name-value pairs and format -- IOW, the service API at the connector layer. What we're after is an API for the dereferenced representation. Since each form control can be assigned an @id, each control is a resource with a hash URI. RDF is all about self-describing resources; RDFa embeds this mechanism into the hypertext control itself. The benefit, of course, is that the API hasn't been touched, from the h2m perspective. Plus, there's only one document to maintain, instead of resorting to media-type conneg. RDFa is enough of a juggernaut right now that it's OK to beware of the hype cycle, and I'm by no means an expert, but it looks to be the real deal as far as solving a bulk of the problems which come to this list. My aforementioned toolchain validates to the DOCTYPE just fine, so adding RDFa into the example was quick and painless. Unless I screwed up, there are many parsers to choose from which can extract the graph; or, I could use GRDDL to transform the embedded RDFa into a standalone RDF document. > > Yes, anyone can write an autonomous client that is tightly bound to > your particular service. Often that is good enough. Rdfa may help in > the future. However, i think it suffer too because the > expectations/requirements of the client are not surfaced explicitly in > the request. > Please elaborate, in terms of my updated example. > > Having multiple representations and using content negotiation can > reduce the coupling and associated maintenance cost quite > significantly. 
It allows servers to continue to support older > autonomous clients, while continuing to evolve and provide significant > new features. It also allows autonomous clients to interact with more > than one implementation of a service by making the requirements and > expectations explicit in a specification. > I'm a big fan of conneg, where it makes sense. But the costs of both development and maintenance of both conneg and duplicate-purpose, different-audience representations seem much higher than they do for a polyglot approach. If the human-targeted self-documenting links and annotation text are removed in favor of a machine-targeted document, to save bytes or why-ever, that machine-targeted document is harder to maintain and can't be debugged-by-browser. So I hope RDFa does work out, I think this thread has exposed the limits of HTML without it. -Eric
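Peter's conneg argument can be illustrated with a toy server-side dispatcher: the same URI serves XHTML to browsers and a machine-oriented type to older autonomous clients. This is a minimal sketch of Accept-header handling (no wildcards, no parameters other than q, and an invented set of available types), not an RFC-complete implementation:

```python
def choose_representation(accept, available):
    """Pick the highest-q media type the server can produce.
    Minimal sketch: ignores wildcards and all parameters except q."""
    prefs = []
    for part in accept.split(","):
        mtype, _, params = part.strip().partition(";")
        q = 1.0
        for p in params.split(";"):
            k, _, v = p.strip().partition("=")
            if k == "q":
                q = float(v)
        prefs.append((q, mtype.strip()))
    # Try the client's preferences in descending q order.
    for q, mtype in sorted(prefs, key=lambda t: -t[0]):
        if mtype in available:
            return mtype
    return None

# An older autonomous client keeps asking for the machine-oriented type,
# while browsers get XHTML from the same URI.
available = {"application/xhtml+xml", "application/atom+xml"}
print(choose_representation(
    "application/atom+xml, application/xhtml+xml;q=0.8", available))
```

The maintenance cost Eric objects to shows up on the server side of this sketch: every representation in `available` has to be kept in sync, which is the trade-off against the single polyglot document.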
I can see lots of good stuff in an XHTML API. But I can also see some good stuff in using a specific XML format - so what if we could generate the HTML on the fly from the XML? It should be relatively easy to write an XSLT stylesheet that converts any XML to readable HTML and converts Atom links to HTML <a>-elements. This stylesheet could then be referenced in the XML such that browsers would apply it to the XML (I know at least IE can do that). In this way we could get both a browsable HTML API and a simple machine-readable API? /Jørn
That would be very nice. Start off with custom XML types, and provide an XSL stylesheet that transforms them. The question I have is: as it's been years since I've done any XSLT stuff, can you provide some way, via the custom XML type with links in it, to provide the XSL as a link option without having to return the HTML yourself? Not sure that makes sense. But basically, one of the links would provide the XSL, or an XSLT service to transform the XML into HTML, without the client having to do anything more than follow the link? Almost as if you could include the XSL as part of the XML, and the browser could somehow see that there is XSL and XML and transform it for you.
Here's an example. All you need to do is reference the stylesheet in the
output XML.
/Jørn
Output XML:
<?xml version="1.0" encoding="utf-8"?>
<?xml-stylesheet type="text/xsl" href="xml2html.xsl"?>
Stylesheet (xml2html.xsl):
<?xml version="1.0" encoding="utf-8"?>
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                xmlns:atom="http://www.w3.org/2005/Atom">

  <xsl:template match="/">
    <html>
      <head>
        <title><xsl:value-of select="name(/*)"/></title>
        <style>
          span.label { font-style: italic; }
          div.indent { margin-left: 15px; }
        </style>
        <script src="/CBrain.F2.Rest.Server.Host/xml2html.js"/>
        <!--FIXME-->
      </head>
      <body>
        <h2><xsl:value-of select="name(/*)"/></h2>
        <xsl:apply-templates select="*"/>
      </body>
    </html>
  </xsl:template>

  <xsl:template match="*">
    <div>
      <span class="label">
        <xsl:value-of select="name()"/>:
      </span>
      <xsl:value-of select="text()"/>
      <div class="indent">
        <xsl:apply-templates select="*"/>
      </div>
    </div>
  </xsl:template>

  <!-- Atom link -->
  <xsl:template match="atom:link">
    <div>
      Link (<xsl:value-of select="@rel"/>):
      <a>
        <xsl:attribute name="href">
          <xsl:value-of select="@href"/>
        </xsl:attribute>
        <xsl:value-of select="@title"/>
      </a>
    </div>
  </xsl:template>

</xsl:stylesheet>
"Jørn Wildt" wrote: > > I can see lots of good stuff in an XHTML API. But I can also see some > good stuff in using a specific XML format - so what if we could > generate the HTML on the fly from the XML? > Note that model.xht in my example can be stripped down further by removing the accessibility markup. It can also have a schema (I like RELAX NG + Schematron) which constrains the markup, and documentation for that schema, none of which affects the media type. The concept is that it's a model, used by the XForms client-side MVC architecture. All XForms does is generate a user interface for an underlying XML document, i.e. it's just a transformation. > > It should be relatively easy to write an XSLT stylesheet that converts > any XML to readable HTML and converts Atom links to HTML > <a>-elements. This stylesheet could then be referenced in the XML such > that browsers would apply it to the XML (I know at least IE can > do that). > If you mean XSLT, yes, it works both client- and server-side. Note that the following examples are actually the same files at the same locations, from the origin-server POV: [1] http://charger.bisonsystems.net/xmltest/index.htm [2] http://charger.bisonsystems.net/xmltest/index.xht [3] http://charger.bisonsystems.net/xmltest/index.xml (IE users) (yes, I'm still slacking on the /conneg/ stuff and XForms for my demo) The HTML representation dereferenced at [1] isn't stored anywhere, except caches. Note that [3] relies on IE sniffing the output as text/html and escalating privileges without asking the user. Anyway, they're generated on-the-fly from the same stub file and XSLT stylesheet in the same locations on the HD, applying a style to the Atom representations whose bi-directional transfer my API is based on. When the XSLT runs on the client component, it's the Code on Demand constraint. 
It results in more network activity, but less overall bandwidth due to independent caching of the components making up a "page", particularly the HTML template. So the user-perceived performance of [2] and [3] is greater than that of [1], provided the user-agent supports caching and XSLT; [3] isn't REST, just a pragmatic hack. Bear in mind, this is a PHP-driven demo, the only links expected to work are the 'view' menu and the in-the-loop links to posts and comments; the latency is also expected, my focus was correctness of headers. My toolchain expands from CSE Validator Pro to include Eclipse with phpeclipse and oXygen, plus httpwatch; but commercial tools aren't required to develop to my chosen technologies, and I'm spending less overall than the licensing of apps like Dreamweaver, for best-in-class products. It's the ubiquity of the data types which allows QA/QC on-the-fly at a low cost, for high productivity -- not the case if you go off the reservation with a custom XML format. 
Which isn't a golden-hammer approach, because I don't insist on using this technique where it doesn't apply. But in most cases, it does apply, so the efficiencies gained by using it outweigh the efficiency lost by using data types which aren't an exact fit. Which is exactly the point of REST: "The trade-off, though, is that a uniform interface degrades efficiency, since information is transferred in a standardized form rather than one which is specific to an application's needs." So REST results in economies of scale not only in terms of data being sent over the wire, but also in terms of development costs and time, ongoing maintenance costs and time, plus serendipitous re-use. These benefits of a uniform interface are the result of standardization, so make sure you understand the implications of rolling your own project-specific data types, even if they're transformed into HTML. -Eric
Eric, I would like to ask another question. What is your point of view on OpenSearch "application/opensearchdescription+xml" (http://www.opensearch.org/Specifications/OpenSearch/1.1)? This seems to fall into your category of a media type that might as well have been constructed using plain XHTML. I see one benefit of this media-type: if my browser recognizes the media-type it can offer to save the search engine reference and use it for the browser's search box. But an XHTML microformat could have been used as well. One could even argue that Atom falls into the same category. So ... it seems like any of the accepted *-XML media-types might as well have been represented using XHTML? I would also appreciate some examples of well-applied "special" (less known?) media-types - just to get an understanding of when a new media type makes sense? /Jørn
Jørn Wildt wrote: > > I would like to ask another question. What is your point of view on > OpenSearch "application/opensearchdescription+xml" > (http://www.opensearch.org/Specifications/OpenSearch/1.1)? > > This seems to fall into your category of a media type that might as > well have been constructed using plain XHTML. > Actually, it falls into my category of "things that aren't media types." I don't know why it hasn't been approved, but you're right, I think it reinvents a whole lotta wheels in describing what a search interface should look like, without actually being a search interface... the whole approach cries out for re-purposing as RDFa, such that it can annotate search interfaces across forms languages. Providing a search interface for Web queries just isn't a use case that's crying out for some other solution than HTML -- it's an IDL, not a self-documenting hypertext API. Bear in mind that I'm saying this in 20/20 hindsight; OpenSearch predates RDFa. > > I see one benefit of this media-type: if my browser recognizes the > media-type it can offer to save the search engine reference and use > it for the browser's search box. But an XHTML microformat > could have been used as well. > While at the same time, this approach can be taken too far. But you're right, Opera can magic-wand any form, regardless of purpose, independent of media type (limited only by Opera's understanding of the data types). > > One could even argue that Atom falls into the same category. > Nope -- HTML's shortcomings in describing collection/member semantics require a different markup language and processing model to solve. My demo uses application/x.xbel+xml (yeah, I need to add the 'x.'), a data type which solves the problems of no collection/member semantics in HTML, and no valid "list of annotated links" semantics in Atom. 
The usage is distinct, as well, in that its intent is the exchange of bookmarks between browsers and services, not rendering for display (an m2m data type, testable by humans via importing the document into bookmarks and looking at the results in the browser chrome), so the processing model isn't the same as for HTML or Atom. > > So ... it seems like any of the accepted *-XML media-types might as > well have been represented using XHTML? > I disagree. SVG, for example, does things I'd not want to try in XHTML. With a completely different processing model. The markup and processing model for a forms language shouldn't vary based on the purpose of the form (search); that markup and processing model has nothing to do with representing or rendering vector-based images. Same with VoiceXML/CCXML. These use-cases aren't solved by using application/xml plus a schema; use-cases which are solved that way are most likely better off using well-defined, common-knowledge types like HTML and Atom -- they're data types; media types aren't meant as data-type identifiers, but as processing-model identifiers. I don't see how application/xslt+xml (XSLT 2+) could be done as annotated XHTML, given that its processing model is Turing-complete and its intent is the transformation of text rather than the description of a hypertext API... but note that XSLT may also be embedded within XHTML without changing the media type, because handling embedded XSLT is part of the application/xhtml+xml processing model. RELAX NG should have a +xml media type, given that the processing model of a schema language isn't otherwise described except by application/relax-ng-compact-syntax. MathML is another good example of something where the semantics of the markup lie outside the capabilities of HTML, even if the purpose is rendering the data for human users. See also DocBook, which has many similarities with HTML and indeed some overlapping markup semantics, yet has a completely different processing model. 
> > I would also appreciate some examples of well-applied "special" (less > known?) media-types - just to get an understanding of when a new > media type makes sense? > Unfortunately, many of the best examples, like SVG, lack media types at the moment, but this will eventually change. Others, like application/mathml+xml, represent examples of extending HTML, i.e. may be used with a host language or may be standalone (while XForms requires a host language but without a media type). There aren't any hard-and-fast answers here; it's very much situation-dependent. But, for my rule of thumb, I'd say tell me why it can't be HTML, first. Or Atom, or anything else. REST is based on the principle of generality, meaning your API probably isn't a unique snowflake requiring a new processing model to do something nobody's ever done before in a browser (i.e. an HTML-driven search interface), AFAIC. Or some other user-agent with commodity status in the world-at-large, rather than requiring your own special user-agent to decipher. -Eric
On Tue, Dec 7, 2010 at 1:40 AM, Eric J. Bowman <eric@...> wrote: >> Yes, anyone can write an autonomous client that is tightly bound to >> your particular service. Often that is good enough. Rdfa may help in >> the future. However, i think it suffer too because the >> expectations/requirements of the client are not surfaced explicitly in >> the request. >> > > Please elaborate, in terms of my updated example. My biggest concern is that rdfa is hidden. The processing model for (x)html is basically "render the document so that the user can interpret it based on the latent semantics in the natural language text and let the user decide which other related resources to interact with next". In this processing model rdfa is entirely superfluous, as is a lot of the other markup in most web pages. It is perfectly legal for an intermediate to completely rewrite the html as long as that transformation does not negatively impact the processing model (i.e., it is still easy for the human at the end to understand the page). A use case for such transformations is page simplification for rendering on small-form-factor devices. This ability for intermediates to provide value is a very important feature of REST architectures. Basing an autonomous client architecture entirely on rdfa embedded in xhtml seems risky because the media type provides no protection for the rdfa. An intermediate that chooses to strip the rdfa out would be perfectly legal. Removing the rdfa has zero impact on the html processing model. A system that relies for its basic function on xhtml+rdfa is effectively relying on an implementation detail of the current set of web servers and intermediates. That implementation detail could, legally, change at any moment. Most autonomous clients require a different processing model than the one provided by (x)html. Automata that can use the (x)html processing model -- e.g., search engine spiders -- definitely should. 
However, the clients I usually write lack the requisite natural-language-interpretation AI needed to achieve the necessary level of clarity regarding the various bits of data and relationships in an html document. Machine-readable precision can be layered into xhtml using rdfa, but a client that relies on the rdfa being there has a fundamentally different processing model than a browser. I think an `application/rdfa+xhtml+xml` media type would solve most of these problems, though. That would surface the fact that the rdfa was an important part of the processing model for the representation to both the server and intermediates. Peter barelyenough.org
Peter Williams wrote:
> On Tue, Dec 7, 2010 at 1:40 AM, Eric J. Bowman <eric@...> wrote:
>>> Yes, anyone can write an autonomous client that is tightly bound to
>>> your particular service. Often that is good enough. Rdfa may help in
>>> the future. However, i think it suffer too because the
>>> expectations/requirements of the client are not surfaced explicitly in
>>> the request.
>>>
>> Please elaborate, in terms of my updated example.
>
> My biggest concern is that rdfa is hidden.
>
> The processing model for (x)html is basically "render the document so
> that the user can interpret it base on the latent semantics in the
> natural language text and let the user decide which other related
> resources to interact with next". In this processing model rdfa is
> entirely superfluous, as is a lot of the other markup in most web
> pages. It is perfectly legal for an intermediate to complete rewrite
> the html as long at that transformation does not negatively impact the
> processing model (ie, it is still easy for the human at the end to
> understand the page). A use case for such transformations is page
> simplification for rendering on small form factor devices. This
> ability for intermediates to provide value is a very important feature
> of rest architectures.
AFAICT, that's usually just the view of the DOM that is simplified, not
the actual XHTML itself; however, I would be interested to know if you
can point to a specific example, or whether it's just an edge-case worry.
In any event, surely an autonomous client would not be using one of
these intermediates(?)
> Basing an autonomous client architecture entirely on rdfa embedded in
> xhtml seems risky because the media type provides no protection for
> the rdfa. An intermediate that chooses to strip the rdfa out would be
> perfectly legal. Removing the rdfa has zero impact on the html
> processing model. A system that relies for it's basic function on
> xhtml+rdfa is effectively relying on an implementation detail of the
> current set of web servers and intermediates. That implementation
> detail could, legally, change at any moment.
Not sure if that is the case, but I'll check it out; if it is legally
the case, I'll see what I can do to ensure that it isn't. The 1.1 specs
are all still in LC at the minute, so there's still time to fix it if
indeed it is an issue.
> Most autonomous clients require a different processing model than the
> one provided by (x)html. Automata that can use use (x)html processing
> model -- eg, search engine spiders -- definitely should. However, the
> clients i usually write lack the requisite natural language
> interpretation ai needed to achieve the necessary level of clarity
> regarding the various bits of data and relationships in an html
> document. Machine readable precision can be layered into xhtml using
> rdfa, but a client that relies on the rdfa being there has a
> fundamentally different processing model than a browser.
It's generally a different client design, TBH. Many older/simpler
clients, and in some respects many developers, feel the need to consider
the entire data packet, when in reality you typically only look to see
if what you need is there, and if not, move along. It's more of a
hook-in based design: not only are you looking for the presence of RDFa,
you're typically looking for the presence of certain properties or for
things of a certain type. It's entirely natural, and feasible, to have
several different clients all considering the same data at the same
time, tasked with different jobs. Along the lines of `env.onNewData =
myfunc; function myfunc(data) { if (data.hasType === "Person")
showPersonDetails(data); }` -- if the data the specific client (perhaps
better termed an "agent") is looking for isn't there, then the
client/agent simply does nothing.
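Fleshed out as a runnable sketch -- the `env` bus, `hasType` field, and
`showPersonDetails` are hypothetical names following the inline
pseudocode above, not any real API:

```javascript
// Minimal event bus standing in for the hypothetical "env" object.
const env = {
  handlers: [],
  onNewData(handler) { this.handlers.push(handler); },
  publish(data) { this.handlers.forEach(h => h(data)); }
};

const seen = [];
function showPersonDetails(data) { seen.push(data.name); }

// A "Person" agent: acts only when data of its type appears, and
// silently does nothing otherwise (must-ignore, not must-understand).
env.onNewData(function (data) {
  if (data.hasType === "Person") showPersonDetails(data);
});

env.publish({ hasType: "Person", name: "Alice" });
env.publish({ hasType: "Invoice", total: 42 }); // ignored by this agent
```

Several such agents can subscribe to the same stream at the same time,
each tasked with a different job; any data an agent doesn't recognize is
simply ignored.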
> I think an `application/rdfa+xhtml+xml` media type would solve most of
> these problem, though. That would surface the fact that the rdfa was
> an important part of the processing model for the representation to
> both the server and intermediates.
TBH, people wouldn't set the media type correctly; and if any
intermediate is inspecting the document itself in order to strip it
down, then it's looking at the data anyway, so it can see whether it
contains RDFa or whatever else.
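As a sketch of that introspection (a hypothetical check, not code from
any real proxy), an intermediate deciding whether to strip a document
down can spot RDFa just by scanning for the attributes the syntax
defines:

```javascript
// Attributes defined by the XHTML+RDFa syntax.
const RDFA_ATTRS = ["about", "property", "typeof", "resource",
                    "datatype", "prefix", "vocab"];

// Crude attribute scan over raw markup. A real intermediary would use
// a proper parser; the point is only that the payload itself reveals
// whether RDFa is present, without any help from the media type.
function containsRdfa(markup) {
  return RDFA_ATTRS.some(attr =>
    new RegExp(`\\s${attr}\\s*=`, "i").test(markup)
  );
}

containsRdfa('<div about="#me" property="foaf:name">Eric</div>'); // true
containsRdfa('<div class="name">Eric</div>');                     // false
```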
Best,
Nathan
Peter Williams wrote: > > My biggest concern is that rdfa is hidden. > I empathize with that position. Here's my problem: initially I thought it was surely an error that RDFa and XForms don't have media types. I then decided that the folks who know more about this than I do, probably know what they're doing. So, I based my notion of media type around the reality. But maybe those experts are in error; in which case I've drawn all the wrong conclusions about REST and media types. Using an XForms-specific media type sure would make my life easier, in that conneg could be used to determine user-agent capability. Surely that's the point of it all. I felt really comfortable with my REST knowledge before my pursuit of XForms and RDFa stood my notion of what media types are on its head (plus Roy's REST-APIs-must-be-hypertext- driven post) a couple of years back. Understanding why XForms and RDFa *aren't* media types, let alone explaining it, isn't easy. > > The processing model for (x)html is basically "render the document so > that the user can interpret it base on the latent semantics in the > natural language text and let the user decide which other related > resources to interact with next". > Disagree. Accessibility markup removes human natural-language interpretation from the equation, i.e. <link rel='glossary'/>, like other accessibility markup, isn't meant to be seen by most users. The pattern <hx><ul> with a list of links is interpreted as a menu, while a sighted user could care less about the markup syntax so long as what's rendered *looks* like a menu. These markup semantics are also part of the processing model, which isn't limited to natural-language rendering for human consumption, which is why HTML isn't entirely useless for repurposing as an m2m format (despite assumptions to the contrary). > > In this processing model rdfa is entirely superfluous, as is a lot of > the other markup in most web pages. > REST basically says superfluous markup is a good thing. 
Data formats aren't optimized for any one use case or type of user, or even user- agent. Read what's needed and ignore the rest, it isn't about saving bytes. Take my example markup -- just how many ways are there to set the language of the document, anyway? Well, five in (X)HTML. I'd prefer to pick one, and no user-agent will be looking for more than one, but for the sake of accessibility (this is a WCAG checkpoint) I use them all, so nobody's left out. To me, RDFa is just an extension to existing accessibility markup. What's superfluous in the markup varies based on the capabilities or coding of user-agents, this is an expected byproduct of decoupling. > > It is perfectly legal for an intermediate to complete rewrite the > html as long at that transformation does not negatively impact the > processing model (ie, it is still easy for the human at the end to > understand the page). A use case for such transformations is page > simplification for rendering on small form factor devices. This > ability for intermediates to provide value is a very important > feature of rest architectures. > Transcoding proxies typically care about converting markup to be well- formed, while using other tricks common to Web accelerators to combat typical problems (i.e. a 200x100 image sized to 20x10 may be reduced). But I've never heard of one stripping markup attributes to save bytes. Theoretically, the presence of an XHTML+RDFa DTD tells intermediaries to leave valid attributes alone -- they're obviously part of the data type. > > Basing an autonomous client architecture entirely on rdfa embedded in > xhtml seems risky because the media type provides no protection for > the rdfa. An intermediate that chooses to strip the rdfa out would be > perfectly legal. Removing the rdfa has zero impact on the html > processing model. > This isn't the purpose of the media type; it's a problem which falls in the realm of the data type version. 
Intermediaries which strip out valid markup that's included in the DOCTYPE are just broken. The media type really says nothing about what markup is allowed -- text/html works just fine for HTML 3.2 which has no <div> or <span>, so I can see an intermediary removing that invalid markup -- but not based on the media type, which is only meant to identify a processing model, not the version of the data type. > > A system that relies for it's basic function on xhtml+rdfa is > effectively relying on an implementation detail of the current set of > web servers and intermediates. > I really don't understand your point. My server knows nothing of RDFa, and neither do any intermediaries my data passes through. Markup languages, and their extensions, are not such an implementation detail. Anyway, wouldn't these problems you mention be even greater for custom media/data types? > > Most autonomous clients require a different processing model than the > one provided by (x)html. Automata that can use use (x)html processing > model -- eg, search engine spiders -- definitely should. However, the > clients i usually write lack the requisite natural language > interpretation ai needed to achieve the necessary level of clarity > regarding the various bits of data and relationships in an html > document. Machine readable precision can be layered into xhtml using > rdfa, but a client that relies on the rdfa being there has a > fundamentally different processing model than a browser. > Does it? The semantics of a radio-button control are set in stone, "variable x takes one and only one of this set of y values." All RDFa does is annotate the control with some context, the processing model of the control is the same whether or not that RDFa is parsed; RDFa can't override the semantics of the control to mean "variable x takes one or more of this set of y values" or anything else, nor can RDFa express such semantics in the absence of a hypertext control. 
The alternative is to create a data type which does specifically what you want it to do, and assign it a media type as an identifier. In which case it will only be understood by a new class of client you're creating to consume it, coupling client to server until such time as multiple independent, interoperable implementations exist (the definition of standardization). Whereas with RDFa, any user-agent can manipulate the form properly even if it doesn't know (via RDFa) what the form controls *mean*, which is graceful degradation. > > I think an `application/rdfa+xhtml+xml` media type would solve most of > these problem, though. That would surface the fact that the rdfa was > an important part of the processing model for the representation to > both the server and intermediates. > Any intermediary that cares about the payload will be introspecting it anyway; it should glean the importance of RDFa attributes to the markup from the presence of an RDFa DOCTYPE, which seems to be enough surfacing not only in my opinion, but also in the opinion of those responsible for RDFa. RDFa+XHTML is just a version within the family of data types the media type references; media type isn't bound to version. Assigning media types to every extension of HTML or Atom defeats the purpose by being too finely-grained -- the processing model of form controls isn't changed by adding RDFa, so assigning it a new media type just loses backwards compatibility of the data type with non-RDFa components which could otherwise participate in the communication (i.e. by prefetching or doing DNS lookups because they know what a link is). -Eric
Hi Eric, I haven't read everything you've written here because I don't have time, but it looks like you're starting to repeat yourself. Using a machine oriented media type for a machine oriented application makes perfect sense. Tangling a machine application up in HTML raises the barrier to entry vs. a more succinct machine type that is built for purpose. People in the real world want lower barriers to adoption. If you find in the future that there is a real requirement for an HTML interface to your application, then provide it additionally by leveraging content negotiation. Cheers, Mike On Fri, Dec 10, 2010 at 7:38 AM, Eric J. Bowman <eric@...> wrote: > Peter Williams wrote: >> >> My biggest concern is that rdfa is hidden. >> > > I empathize with that position. Here's my problem: initially I thought > it was surely an error that RDFa and XForms don't have media types. I > then decided that the folks who know more about this than I do, probably > know what they're doing. So, I based my notion of media type around the > reality. But maybe those experts are in error; in which case I've drawn > all the wrong conclusions about REST and media types. > > Using an XForms-specific media type sure would make my life easier, in > that conneg could be used to determine user-agent capability. Surely > that's the point of it all. I felt really comfortable with my REST > knowledge before my pursuit of XForms and RDFa stood my notion of what > media types are on its head (plus Roy's REST-APIs-must-be-hypertext- > driven post) a couple of years back. Understanding why XForms and RDFa > *aren't* media types, let alone explaining it, isn't easy. > >> >> The processing model for (x)html is basically "render the document so >> that the user can interpret it base on the latent semantics in the >> natural language text and let the user decide which other related >> resources to interact with next". >> > > Disagree. 
Accessibility markup removes human natural-language > interpretation from the equation, i.e. <link rel='glossary'/>, like > other accessibility markup, isn't meant to be seen by most users. The > pattern <hx><ul> with a list of links is interpreted as a menu, while a > sighted user could care less about the markup syntax so long as what's > rendered *looks* like a menu. These markup semantics are also part of > the processing model, which isn't limited to natural-language rendering > for human consumption, which is why HTML isn't entirely useless for > repurposing as an m2m format (despite assumptions to the contrary). > >> >> In this processing model rdfa is entirely superfluous, as is a lot of >> the other markup in most web pages. >> > > REST basically says superfluous markup is a good thing. Data formats > aren't optimized for any one use case or type of user, or even user- > agent. Read what's needed and ignore the rest, it isn't about saving > bytes. Take my example markup -- just how many ways are there to set > the language of the document, anyway? Well, five in (X)HTML. I'd > prefer to pick one, and no user-agent will be looking for more than one, > but for the sake of accessibility (this is a WCAG checkpoint) I use them > all, so nobody's left out. To me, RDFa is just an extension to existing > accessibility markup. What's superfluous in the markup varies based on > the capabilities or coding of user-agents, this is an expected byproduct > of decoupling. > >> >> It is perfectly legal for an intermediate to complete rewrite the >> html as long at that transformation does not negatively impact the >> processing model (ie, it is still easy for the human at the end to >> understand the page). A use case for such transformations is page >> simplification for rendering on small form factor devices. This >> ability for intermediates to provide value is a very important >> feature of rest architectures. 
>> > > Transcoding proxies typically care about converting markup to be well- > formed, while using other tricks common to Web accelerators to combat > typical problems (i.e. a 200x100 image sized to 20x10 may be reduced). > But I've never heard of one stripping markup attributes to save bytes. > Theoretically, the presence of an XHTML+RDFa DTD tells intermediaries > to leave valid attributes alone -- they're obviously part of the data > type. > >> >> Basing an autonomous client architecture entirely on rdfa embedded in >> xhtml seems risky because the media type provides no protection for >> the rdfa. An intermediate that chooses to strip the rdfa out would be >> perfectly legal. Removing the rdfa has zero impact on the html >> processing model. >> > > This isn't the purpose of the media type; it's a problem which falls in > the realm of the data type version. Intermediaries which strip out > valid markup that's included in the DOCTYPE are just broken. The media > type really says nothing about what markup is allowed -- text/html > works just fine for HTML 3.2 which has no <div> or <span>, so I can see > an intermediary removing that invalid markup -- but not based on the > media type, which is only meant to identify a processing model, not the > version of the data type. > >> >> A system that relies for it's basic function on xhtml+rdfa is >> effectively relying on an implementation detail of the current set of >> web servers and intermediates. >> > > I really don't understand your point. My server knows nothing of RDFa, > and neither do any intermediaries my data passes through. Markup > languages, and their extensions, are not such an implementation detail. > Anyway, wouldn't these problems you mention be even greater for custom > media/data types? > >> >> Most autonomous clients require a different processing model than the >> one provided by (x)html. Automata that can use use (x)html processing >> model -- eg, search engine spiders -- definitely should. 
However, the >> clients i usually write lack the requisite natural language >> interpretation ai needed to achieve the necessary level of clarity >> regarding the various bits of data and relationships in an html >> document. Machine readable precision can be layered into xhtml using >> rdfa, but a client that relies on the rdfa being there has a >> fundamentally different processing model than a browser. >> > > Does it? The semantics of a radio-button control are set in stone, > "variable x takes one and only one of this set of y values." All RDFa > does is annotate the control with some context, the processing model of > the control is the same whether or not that RDFa is parsed; RDFa can't > override the semantics of the control to mean "variable x takes one or > more of this set of y values" or anything else, nor can RDFa express > such semantics in the absence of a hypertext control. > > The alternative is to create a data type which does specifically what > you want it to do, and assign it a media type as an identifier. In > which case it will only be understood by a new class of client you're > creating to consume it, coupling client to server until such time as > multiple independent, interoperable implementations exist (the > definition of standardization). Whereas with RDFa, any user-agent can > manipulate the form properly even if it doesn't know (via RDFa) what > the form controls *mean*, which is graceful degradation. > >> >> I think an `application/rdfa+xhtml+xml` media type would solve most of >> these problem, though. That would surface the fact that the rdfa was >> an important part of the processing model for the representation to >> both the server and intermediates. 
>> > > Any intermediary that cares about the payload will be introspecting it > anyway; it should glean the importance of RDFa attributes to the markup > from the presence of an RDFa DOCTYPE, which seems to be enough surfacing > not only in my opinion, but also in the opinion of those responsible for > RDFa. RDFa+XHTML is just a version within the family of data types the > media type references; media type isn't bound to version. Assigning > media types to every extension of HTML or Atom defeats the purpose by > being too finely-grained -- the processing model of form controls isn't > changed by adding RDFa, so assigning it a new media type just loses > backwards compatibility of the data type with non-RDFa components which > could otherwise participate in the communication (i.e. by prefetching > or doing DNS lookups because they know what a link is). > > -Eric
Mike Kelly wrote: > > Using a machine oriented media type for a machine oriented application > makes perfect sense. > Except when it doesn't. The debate is about creating media types to trigger specific m2m application behaviors, which is the sort of coupling we're seeking to avoid by using REST in the first place. My XBEL example is m2m, but the application behavior is beyond the scope of what a media type can do -- whether to import it as bookmarks or transform it into HTML for manipulation is dictated by hypertext, not the string appearing in Content-Location. > > Tangling a machine application up in HTML raises the barrier to entry > vs. a more succinct machine type that is built for purpose. People in > the real world want lower barriers to adoption. > Creating new media types for shopping carts and order forms, when each use case has been done a million times in HTML, based on the notion that HTML can't also be machine readable, raises the barrier to adoption and goes against REST. This is pragmatic advice, despite your consistent hurling of ad hominems to the contrary, and is worth repeating given the rise of RDFa. > > If you find in the future that there is a real requirement for an HTML > interface to your application, then provide it additionally by > leveraging content negotiation. > It's ridiculous to suggest that REST can be hacked on later somehow via conneg. The real requirement is to extend a limited set of standardized semantics across network boundaries (to paraphrase the thesis). The goal of REST is not to have 1,000 media types dedicated to order forms and shopping carts, but for those APIs to be described using some sort of *standardized* data type. You don't need an XBEL client to manage bookmarks, you can use a browser's intrinsic capability *or* an HTML interface despite its being machine-oriented. 
I'm not saying it has to be HTML, I'm saying it may as well be -- humans need to be able to understand m2m APIs because it's humans who have to code the m2m consumers. The API needs to be documented somewhere... why not make it a self-documenting API that a human coder of an m2m consumer can drive manually to gain the required understanding for such a project? In which case it needs to be accessible, and of course there's Code on Demand which means you'll need javascript bindings... HTML is the most-capable, fully-fleshed-out data type out there for the common case of describing a hypertext API -- regardless of the nature of the user as human or machine. If you feel I'm being repetitive, then you always have the option of not jumping in with a categorical rejection of using HTML to describe hypertext APIs due to machine orientation. HTML has always been used to describe hypertext APIs, which have always been readable by machines, which is why this thread uses such an old-school example as a starting point -- REST is a study of the phenomenon represented by the humble IANA registration form, which is easily extended to m2m without turning media types into data-format identifiers with magical powers to direct application flow. -Eric
--- In rest-discuss@yahoogroups.com, "Eric J. Bowman" <eric@...> wrote: > > Mike Kelly wrote: > > > > Using a machine oriented media type for a machine oriented application > > makes perfect sense. > > > > Except when it doesn't. The debate is about creating media types to > trigger specific m2m application behaviors, which is the sort of > coupling we're seeking to avoid by using REST in the first place. My > XBEL example is m2m, but the application behavior is beyond the scope > of what a media type can do -- whether to import it as bookmarks or > transform it into HTML for manipulation is dictated by hypertext, not > the string appearing in Content-Location. > Well, is that the debate? You seem to be arguing against using media types specific to the resource types in the application. For example: application/customer+xml, application/account+xml, application/transaction+xml -- say, for a banking application. I think many folks in this thread would likely agree with you. Let's move past that and assume we are not suggesting a media type for each type of resource (or even a subset of resource types). Now let's instead ask: for a given machine-client, is it better to use HTML (or another existing format) with RDFa/microformats/etc. layered on top, or to design a hypermedia format specifically for that type of client? What are the pros and cons of layering a data model (that is not universally understood by all clients supporting the format) on top of a standard format vs. defining a new *standard* (or perhaps *standardizable*) format for a client when existing formats do not seem to be sufficient? For example, in a bank I can use HTML for the tellers' machines, web banking (and likely even Automated Teller Machines) and VoiceXML for phone banking apps, and use conneg (or even just separate URI spaces) to choose between them. 
If I want a RESTful interface for automated check processing machines, would it be better to: a) create a format specifically for check processing machines (not specific to my bank's resource types, e.g. application/checkproc+xml); or b) use HTML with a "check processor" data model layered on top? I prefer (a) and I don't think it violates any REST constraints at all. Standardizing the format would allow a bank to buy check processing machines from any vendor, plug them in, configure a single URI, and go. You could argue that a standard data model layered on top of HTML could do the same thing. (I have no substantial background in the banking industry -- maybe there are business reasons that this makes no sense, but I hope the example conveys the idea.) I think the differences between the two approaches are subtle and worth discussing, but let's get past the is/isn't RESTful arguments. Regards, Andrew
"wahbedahbe" wrote: > > Let's move past that and assume we are not suggesting a media type > for each type of resource (or even a subset of resource types). > Right, but what I'm trying to explain is that once we move past that, we're beyond the scope of REST. While it's fine to discuss the issues here, media type and data type design aren't constrained by REST, which constrains how media and data types are *used* in API design; with the goal of standardization of any new types, with just enough procedural hurdles to limit proliferation of lookalike types (i.e. this is "just like Atom" except for a twist, when the twist is an allowable extension within the existing media type). IOW, it isn't just resource types and their subsets, it also isn't about domain-specific media types; except when that problem domain simply can't be addressed by re-using (extending) existing ubiquitous types (the uniform interface). You can attempt to extend the uniform interface if you want, I prefer to use it as-is whenever possible. > > I think the differences between the two approaches are subtle and > worth discussing, but let's get past the is/isn't RESTful arguments. > JSON works just fine for transferring name-value pairs with a minimal degree of typing. But it doesn't constitute a hypertext API; it needs hypertext controls describing how the JSON is manipulated. One possibility is Code on Demand within HTML, to provide a self-documenting API which describes what the variables are and delimits the values they may contain (and, what they *mean*, for the m2m case). Another possibility is JSON schema, the point is having a self-documenting hypertext API using ubiquitous types. Beyond that, machine readability (self-describing vs. self-documenting or self-descriptiveness) is a component-layer concern, outside the scope of REST. 
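To sketch what "JSON plus hypertext controls" might look like -- the
`links` shape below is invented for illustration, not a registered
format:

```javascript
// A representation carrying both state (name-value pairs) and
// hypertext controls telling the client how to manipulate that state.
const account = {
  balance: 250.0,
  currency: "USD",
  links: [
    { rel: "self",    href: "/accounts/42",          method: "GET" },
    { rel: "deposit", href: "/accounts/42/deposits", method: "POST" }
  ]
};

// The client drives the interaction off the controls in the
// representation, not off out-of-band knowledge of the URI space.
function findControl(doc, rel) {
  return doc.links.find(l => l.rel === rel) || null;
}

findControl(account, "deposit").href; // "/accounts/42/deposits"
```

Without some such control layer (or Code on Demand, or a schema), the
bare name-value pairs describe state but not how to interact with it.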
This debate inherently toes the line between REST/NOT REST; I argue against solutions which implement m2m at the connector layer (making it a REST problem), in favor of handling it at the component layer (moving the problem outside of REST considerations). If the starting point of your API design process is to reject HTML (etc.) out-of-hand in favor of creating new types (even if standardizable), you're working far too hard, because you've started at the wrong layer (this isn't a REST problem, so don't muck about creating new messaging semantics between connectors unless your API really is a unique snowflake you expect to be widely adopted quickly enough to get any ROI from REST implemented that way -- which comes with no guarantee if standardization fails or uptake is insignificant). > > Now let's instead ask: for a given machine-client, is it better to use > HTML (or another existing format) with RDFa/microformats/etc. layered > on top or to design a hypermedia format specifically for that type of > client. What are the pros and cons of layering a data model (that is > not universally understood by all clients supporting the format) on > top of a standard format vs. defining a new *standard* (or perhaps > *standardizable*) format for a client when existing formats do not > seem to be sufficient? > The reason RDFa obsoletes microformats (my aging demo code is an outdated hodgepodge) is the parsing model. I came at this from the standpoint of using RDFa to replace microformats, not as any sort of holy grail, only realizing the value in terms of REST much later -- what kept me away from the RDF world before the advent of RDFa was the unRESTfulness of the systems using it and the awkward-at-best integration with HTML. RDF never would've clicked on that light bulb in my brain if it hadn't been for the microformats effort. 
Embedding m2m RDF in h2m HTML enables a powerful RAD approach to the domain- specific vocabulary problem, without mistaking it for a connector-layer problem requiring a new standardized hypertext language to solve. Thus simplifying REST development, while the developer builds skills which transfer from one REST project to the next (instead of creating or learning a new markup language and processing model for each project). Anyway, the parsing model. Each microformat has its own parsing model. So out-of-band knowledge of the microformat is a prerequisite. RDFa is a generic parsing model based on link relations. We're past self- descriptiveness here, and into self-describing -- we don't want domain- specific vocabularies to need formal standardization, what's required is a distributed mechanism for ad-hoc adoption using namespaces to avoid naming collisions. My example API re-uses FOAF and Dublin Core, but could just as easily re-use a nonstandardized specification like GoodRelations. We're beyond the scope of the uniform interface where standardization matters, since the problem being solved has no impact on messaging between connectors; so there's no need of a registry -- RDFa is a generic parsing model for distributed, ad-hoc adoption of vocabularies without the procedural hurdles required for centralized management of media types, data types and link relations in an orderly, uniform-interfacey manner. So the bar for adoption is much lower, due to the wide availability of HTML, Atom and RDFa parsers. If you define a new standardizable data type and register a proper media type for it, you delay the benefits of REST until not only multiple, independent, interoperable systems make use of it (standardization vs. specification), but they achieve ubiquity as well (wide availability of libraries and tooling). 
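[The generic, attribute-driven parsing model described above, in contrast to per-microformat parsing rules, can be sketched with a deliberately simplified one-pass extractor. Real RDFa also resolves subjects, CURIE prefixes and datatypes; this sketch only collects (property, value) pairs, vocabulary-agnostically:]

```python
from html.parser import HTMLParser

class RdfaSketch(HTMLParser):
    """One generic pass: any element bearing @property yields a pair,
    whatever vocabulary (FOAF, Dublin Core, ...) the term comes from."""

    def __init__(self):
        super().__init__()
        self.pairs = []
        self._pending = None  # property whose value is the element's text

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if "property" in a:
            if "content" in a:                 # value given by @content
                self.pairs.append((a["property"], a["content"]))
            else:                              # value is the element text
                self._pending = a["property"]

    def handle_data(self, data):
        if self._pending and data.strip():
            self.pairs.append((self._pending, data.strip()))
            self._pending = None

html = ('<p><span property="dc:title">My Page</span>'
        '<meta property="foaf:name" content="Eric"/></p>')
p = RdfaSketch()
p.feed(html)
print(p.pairs)  # [('dc:title', 'My Page'), ('foaf:name', 'Eric')]
```

[Note the parser needed no knowledge of dc: or foaf: terms, which is the point: vocabularies can be adopted ad-hoc without touching the parsing model.]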
Working within the *existing* uniform interface is capable of achieving the benefits of REST from the get-go, particularly as RDFa support is added to other markup languages. Lower development costs, and quicker time-to-market result from using off-the-shelf libraries and avoiding standardization process delays and overhead. The API is just the name-value pairs; that's self-descriptive with HTML and in many cases JSON (HTML forms and JSON schema delimit the allowed values). But some mechanism is needed to describe the interaction with the name-value pairs of the connector-layer API, at the component layer, to avoid having to treat this as a problem best solved by minting new media/data types. Layering a self-describing mechanism on top (the component layer) only has any real value if it's a generic parsing model with a wide range of vocabularies available in the public domain. Creating a vocabulary and having it adopted is much simpler than minting data/media types, in that it doesn't require standardization or mass uptake to be effective within the REST style. Particularly since common patterns may be expressed using FOAF and/or Dublin Core, or other standardized vocabularies, reducing development costs and time-to-market even further by allowing a mix-and-match approach -- how would that work if every permutation required a new media type to describe exactly the same processing model? Serendipitous re-use comes not from being useful in browsers, but by not requiring uncommon libraries to build components (client, server or intermediary) which use your API. Also, you shouldn't be setting up a must-understand situation; a user-agent that doesn't grok RDFa being driven by a human works as a fallback because the paradigm is must- ignore. There is no guarantee that a purpose-built type, even if it achieves standardization, will ever have the scalability of HTML out- of-the-box. 
Or take progressive rendering into consideration, or have javascript bindings or accessibility APIs, all of which contribute to the overall goals of REST adoption. > > For example, in a bank I can use HTML for the tellers machines, web > banking (and likely even Automated Teller Machines) and VoiceXML for > phone banking apps and use conneg (or even just separate URI spaces) > to choose between them. > > If I want a RESTful interface for automated check processing machines > would it be better to: a) create an format specifically for check > processing machines (not specific to my banks resource types, e.g. > application/checkproc+xml); or b) use HTML with a "check processor" > data model layered on top. > OK, I'll humor you, but there are already electronic interchange formats for this sort of thing; besides, such systems have requirements which don't make them good candidates for REST. > > I prefer (a) and I don't think it violates any REST constraints at > all. Standardizing the format would allow a bank to buy check > processing machines from any vendor, plug them in, configure a single > URI and go. > Only if the media/data types are not only standardized, but also ubiquitous enough to actually be used in lieu of proprietary solutions widely enough to foster competition. It takes time, whereas ubiquitous types are insta-REST-enabled. You're making an assumption that I require your check format to be HTML. I don't. REST requires a self-documenting hypertext API which describes how to manipulate that format, most likely HTML because that's exactly the sort of thing HTML is designed for and excels at, with RDFa raising the intriguing possibility of polyglot documents which effectively target both human and machine users, via agents built around the same widely-available libraries and vocabularies. Speaking, as usual, pragmatically of Web instantiation of the style, rather than saying REST normatively requires HTML or anything of the sort, of course. 
My view here is that an m2m data type can either be a polyglot based on HTML, or a machine-oriented standalone on which an HTML interface is based; either way, I consider the HTML (or whatever) to be mandatory active online API documentation (the hypertext constraint), whether components are hard-coded to it or follow a standardized parsing model to infer the meaning of the hypertext controls self-describing how the API is manipulated via representation. -Eric
Great post, looking forward to the book. BTW, the correct link is http://linkeddata.org/ . -Eric Brian Sletten wrote: > > > This is a newbie open-ended question. So if this is too generic I > > will read up more and come back. > It is a big, open question, but I'll try to push you in the right > direction. > > > I am trying to understand how REST and metadata initiatives are > > related to each other. Why do you need Dublin Core/XMP etc. ? Are > > these microformats ? > > On the surface, there is no direct connection except the Web > architecture. REST is an architectural style for managing information > resources. It has very little standard metadata associated with it > other than that it inherits from HTTP. > > Dublin Core is a framework for describing publication metadata. It > was produced by a bunch of librarians through the OCLC in Dublin, OH. > It originally started around the Warwick Framework but was recast as > the poster child for RDF along the way. Dublin Core is mostly used to > describe authorship, subject designation, publication dates, etc. It > is actually a more complicated framework that supports > interoperability across metadata profiles, but for your purposes > here, it is an RDF vocabulary for describing resources with standard > metadata terms (dc:title, dc:subject, dc:creator, etc.) > > It would be used either directly as RDF: > > http://bosatsu.net/index.html http://purl.org/dc/terms/creator > http://purl.org/net/bsletten > > This is a simple fact or "triple" connecting a document to an > author, indicated by a 303 non-network-addressable resource through > the Dublin Core creator relationship). RDF statements follow a > subject - predicate - value relationship but can have many different > serializations. In this case, both the subject and the relationship > are global and resolvable: > > http://purl.org/dc/terms/creator > > This can resolve both human-readable and machine-processable versions > of the relationship. 
The data model allows you to use relationships > from other vocabularies so it makes it very easy to accumulate data > from the Web. People are now starting to weave RDF into XHTML, HTML, > SVG, ODF, etc., generating it on the fly, exposing it natively as > part of the Linked Data Project (http://linkedata.org). > > There are technologies that build on RDF such as SKOS and OWL to > allow you to organize the terms and resources in new and interesting > ways. You can then start to do certain types of inference over the > data organized this way. One of the exciting parts is that you can > organize other people's data the way you want to see it relatively > easily. > > RDF and microformats serve similar goals (to describe documents and > resources) but they have much different scopes. RDF has a data model > associated with it and is largely intended to support global > references and relationships. Microformats are intended to be simple, > developer-friendly ways of encoding certain domains (events, people, > reviews, organizations, etc.) > > The good news is that it is easy to convert Microformats into a form > that can be used with RDF so it is all good metadata. > > XMP is based on an older version of RDF and was intended as a way of > allowing Adobe's various partners to contribute tools in a > document-processing framework and allowing them all to annotate a > document, image, etc. with metadata (camera information, filters > applied, etc.) It isn't super-wildly used but I think the adoption of > RDFa by ODF is going to help spur interest here again. > > The excellent "RESTful Web Services Cookbook" and "REST in Practice" > books touch upon the relationship between REST and Semantic Web > technologies like RDF, but I am taking a much deeper dive in a book I > am writing for Addison-Wesley called "Resource-Oriented > Architectures : Building Webs of Data".
> > > > > I think an `application/rdfa+xhtml+xml` media type would solve most > > of these problem, though. That would surface the fact that the > > rdfa was an important part of the processing model for the > > representation to both the server and intermediates. > > > > Any intermediary that cares about the payload will be introspecting it > anyway; it should glean the importance of RDFa attributes to the > markup from the presence of an RDFa DOCTYPE, which seems to be enough > surfacing not only in my opinion, but also in the opinion of those > responsible for RDFa. RDFa+XHTML is just a version within the family > of data types the media type references; media type isn't bound to > version. Assigning media types to every extension of HTML or Atom > defeats the purpose by being too finely-grained -- the processing > model of form controls isn't changed by adding RDFa, so assigning it > a new media type just loses backwards compatibility of the data type > with non-RDFa components which could otherwise participate in the > communication (i.e. by prefetching or doing DNS lookups because they > know what a link is). > What could stand some fleshing-out as part of the effort to revamp the IANA registry, is the profile-parameter mechanism defined in RFC 3236. It's targeted at intermediaries to give them an idea of the conformance level of the payload without introspecting it, but it's behind the times where some folks want RDFa + XForms + MathML + XHTML more than they want XBasic. If the modularity of XHTML were bound to media types (sharing the same basic processing model), we'd be talking about dozens of media types instead of just application/xhtml+xml -- the definition of which states that it applies to all permutations derived from the modular approach, i.e. a family of forward-backward compatible data types. The profile parameter allows dozens of media type strings to be created without losing sight of the conformance to the basic processing model. 
This didn't occur to me earlier because I haven't seen it phrased as "protection" of attributes before, but I think it addresses your concern. -Eric
Hi, I apologise beforehand, as I do not know if this has already been asked and answered or even if it makes sense at all, but looking at the HTTP 1.1 spec for Agent-driven negotiation (http://tools.ietf.org/html/rfc2616#section-12.2) I am unsure if I am interpreting the following sentence right: "Selection is based on a list of the available representations of the response included within the header fields or entity-body of the initial response, with each representation identified by its own URI." My reading of this is that by de-referencing the URI under which Agent-driven negotiation is done, you may be provided with an entity-body containing a list of URIs mapped one-to-one to each of the different representations mapped under the original URI. I understand the only way to do so is by adding the media-type somehow to the URIs, as in: http://server/myresource.xml http://server/myresource.html http://server/myresource?format=json etc. However, if we define a Resource as a non-typed conceptual mapping to named information (as per Roy Fielding's definition http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm#sec_5_2_1_1, further clarified in his post http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven), should not the different available representations be identified using the same URI? Is it not incorrect to "name" differently the original concept the client was looking for under different URIs just because different representations are available? Is it not more appropriate to use server-driven negotiation after the specific agent-driven negotiation stage has been done? For instance, once the user agent receives the list of available/acceptable representations by means of the agent-driven negotiation, it selects one of them by issuing a new request under the same URI, setting the Accept header only with the desired media-type, already known to be accepted by the origin server. 
I would be really glad to hear your comments in order to get a better understanding of the spec. Kind regards, Alejandro Nicolas Mascarell
There is nothing in the spec that details the representation to use for a 300 response. However, it does indicate that the selection can be done automatically or "manually by the user selecting from a generated (possibly hypertext) menu."[1] This leads to an assumption that a set of hypertext links (e.g. HTML anchor tags) would work just fine for human consumption: <a href="...">Pie chart</a> <a href="...">Data Table</a> <a href="...">Text List</a> Since the HTML A tag has an optional "type" attribute[2], the same content can be modified to help client applications make automated choices. <a href="..." type="image/png">Pie chart</a> <a href="..." type="text/html">Data Table</a> <a href="..." type="text/plain">Text List</a> Finally, since the HTTP 1.1 spec also indicates the information could be carried in a header, the new Web Linking spec[3] could be used as a guide for returning the same information as Link Headers: Link: <...>; type="image/png",<...>;type="text/html",<...>;type="text/plain" This last option works well for clients that are not expecting an HTML response body (e.g. image viewers that want to negotiate for a preferred binary format, etc.). 
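[The Link-header alternative above is easy to emit and consume mechanically. A rough sketch (the URIs are placeholders, and the parse is a minimal regex rather than a full RFC 5988 parser):]

```python
import re

# Hypothetical alternatives a server might list in a 300 response.
alternatives = [
    ("/report.png", "image/png"),
    ("/report.html", "text/html"),
    ("/report.txt", "text/plain"),
]

# Serialize in the Link-header style shown above: <uri>; type="media"
link_header = ",".join(
    f'<{uri}>; type="{media}"' for uri, media in alternatives
)
print("Link: " + link_header)

# Minimal round-trip back into (uri, type) pairs.
parsed = re.findall(r'<([^>]+)>;\s*type="([^"]+)"', link_header)
assert parsed == alternatives
```

[A binary-only client (e.g. an image viewer) can work entirely from this header, never touching an HTML body.]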
[1] http://www.w3.org/Protocols/rfc2616/rfc2616-sec12.html#sec12.2 [2] http://www.w3.org/TR/html4/struct/links.html#h-12.2 [3] http://tools.ietf.org/html/rfc5988#section-5 mca http://amundsen.com/blog/ http://twitter.com@mamund http://mamund.com/foaf.rdf#me #RESTFest 2010 http://rest-fest.googlecode.com On Mon, Dec 13, 2010 at 06:44, Alejandro Nicolas <anicola@...> wrote: > Hi, > > I apologise beforehand, as I do not know if this has been already > asked and answered or even if it makes sense at all, but looking at > HTTP 1.1 spec for Agent-driven negotiation > (http://tools.ietf.org/html/rfc2616#section-12.2) I am unsure if I am > interpreting the following sentence right: > > "Selection is based on a list of the available representations of the > response included within the header fields or entity-body of the > initial response, with each representation identified by its own URI." > > What I interpret by its reading is that by de-referencing the URI > under which Agent-driven negotiation is done, you may be provided with > an entity-body containing a list of URIs mapped one-to-one to each of > the different representations mapped under the original URI. I > understand the only way to do so is by adding the media-type somehow > to the URIs, as in: > > http://server/myresource.xml > http://server/myresource.html > http://server/myresource?format=json > etc. > > However, if we define a Resource as a non-typed conceptual mapping to > a named information (as per Roy Fielding's definition > http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm#sec_5_2_1_1, > further clarified in his post > http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven), > should not the different available representations be identified using > the same URI? Is it not incorrect to "name" differently the original > concept the client was looking for under different URIs just because > different representations are available? 
Is it not more appropriate to > use server-driven negotiation after the specific agent-driven > negotiation stage has been done? For instance, once the user agent > receives the list of available/acceptable representations by means of > the agent-driven negotiation, it selects one of them by issuing a new > request under the same URI, setting the Accept header only with the > desired media-type, already known to be accepted by the origin server. > > I would really glad to hear your comments in order to get a better > understanding of the spec. > > Kind regards, > Alejandro Nicolas Mascarell > > > ------------------------------------ > > Yahoo! Groups Links > > > >
--- In rest-discuss@yahoogroups.com, mike amundsen <mamund@...> wrote: > > There is nothing in the spec that details the representation to use > for a 300 response. However, it does indicate that the selection can > be done automatically or "manually by the user selecting from a > generated (possibly hypertext) menu."[1] > > This leads to an assumption that a set of hypertext links (e.g. HTML > anchor tags) would work just fine for human consumption: > <a href="...">Pie char</a> > <a href="...">Data Table</a> > <a href="...">Text List</a> > > Since the HTML A tag has "type" as an optional "typoe" attribute[2], > the same content can be modified to help client applications make > automated choices. > <a href="..." type="image/png">Pie char</a> > <a href="..." type="text/html">Data Table</a> > <a href="..." type="text/plain">Text List</a> > > Finally, since the HTTP 1.1 spec also indicates the information could > be carried in a header, the new Web Linking spec[3] could be used as a > guide for returning the same information as Link Headers: > Link: <...>; type="image/png",<...>;type="text/html",<...>;type="text/plain" > > This last option works well for clients that are not expecting an HTML > response body (e.g. image viewers that want to negotiate for a > preferred binary format, etc.). > > [1] http://www.w3.org/Protocols/rfc2616/rfc2616-sec12.html#sec12.2 > [2] http://www.w3.org/TR/html4/struct/links.html#h-12.2 > [3] http://tools.ietf.org/html/rfc5988#section-5 > > mca > http://amundsen.com/blog/ > http://twitter.com@mamund > http://mamund.com/foaf.rdf#me > > > #RESTFest 2010 > http://rest-fest.googlecode.com > > Great answer! Andrew
On Sat, Dec 11, 2010 at 8:13 AM, Eric J. Bowman <eric@...> wrote: >> >> > >> > I think an `application/rdfa+xhtml+xml` media type would solve most >> > of these problem, though. That would surface the fact that the >> > rdfa was an important part of the processing model for the >> > representation to both the server and intermediates. >> > >> >> Any intermediary that cares about the payload will be introspecting it >> anyway; it should glean the importance of RDFa attributes to the >> markup from the presence of an RDFa DOCTYPE, which seems to be enough >> surfacing not only in my opinion, but also in the opinion of those >> responsible for RDFa. RDFa+XHTML is just a version within the family >> of data types the media type references; media type isn't bound to >> version. Assigining media types to every extension of HTML or Atom >> defeats the purpose by being too finely-grained -- the processing >> model of form controls isn't changed by adding RDFa, so assigning it >> a new media type just loses backwards compatibility of the data type >> with non-RDFa components which could otherwise participate in the >> communication (i.e. by prefetching or doing DNS lookups because they >> know what a link is). >> > > What could stand some fleshing-out as part of the effort to revamp the > IANA registry, is the profile-parameter mechanism defined in RFC 3236. > It's targeted at intermediaries to give them an idea of the conformance > level of the payload without introspecting it, but it's behind the > times where some folks want RDFa + XForms + MathML + XHTML more than > they want XBasic. > > If the modularity of XHTML were bound to media types (sharing the same > basic processing model), we'd be talking about dozens of media types > instead of just application/xhtml+xml -- the definition of which states > that it applies to all permutations derived from the modular approach, > i.e. a family of forward-backward compatible data types. 
> > The profile parameter allows dozens of media type strings to be created > without losing sight of the conformance to the basic processing model. > This didn't occur to me earlier because I haven't seen it phrased as > "protection" of attributes before, but I think it addresses your > concern. That is an interesting thought. I am not super-fond of changing the processing model of a media type so drastically with a parameter. On the other hand it would almost certainly work just fine in practice. Why does this require revamping anything? Couldn't you just use a profile value of <http://www.w3.org/MarkUp/DTD/xhtml-rdfa-1.dtd> today? The downside is that very few, if any, intermediates would actually understand it. This raises the possibility of hard to reproduce issues arising from intermediates mishandling the body. It does give intermediates a chance, though, by surfacing the expectations of the client and server. Peter
On Mon, Dec 13, 2010 at 4:34 PM, wahbedahbe <andrew.wahbe@...> wrote: > > --- In rest-discuss@yahoogroups.com, mike amundsen <mamund@...> wrote: >> >> There is nothing in the spec that details the representation to use >> for a 300 response. However, it does indicate that the selection can >> be done automatically or "manually by the user selecting from a >> generated (possibly hypertext) menu."[1] >> >> This leads to an assumption that a set of hypertext links (e.g. HTML >> anchor tags) would work just fine for human consumption: >> <a href="...">Pie char</a> >> <a href="...">Data Table</a> >> <a href="...">Text List</a> >> >> Since the HTML A tag has "type" as an optional "typoe" attribute[2], >> the same content can be modified to help client applications make >> automated choices. >> <a href="..." type="image/png">Pie char</a> >> <a href="..." type="text/html">Data Table</a> >> <a href="..." type="text/plain">Text List</a> >> >> Finally, since the HTTP 1.1 spec also indicates the information could >> be carried in a header, the new Web Linking spec[3] could be used as a >> guide for returning the same information as Link Headers: >> Link: <...>; type="image/png",<...>;type="text/html",<...>;type="text/plain" >> >> This last option works well for clients that are not expecting an HTML >> response body (e.g. image viewers that want to negotiate for a >> preferred binary format, etc.). >> >> [1] http://www.w3.org/Protocols/rfc2616/rfc2616-sec12.html#sec12.2 >> [2] http://www.w3.org/TR/html4/struct/links.html#h-12.2 >> [3] http://tools.ietf.org/html/rfc5988#section-5 >> >> mca >> http://amundsen.com/blog/ >> http://twitter.com@mamund >> http://mamund.com/foaf.rdf#me >> >> >> #RESTFest 2010 >> http://rest-fest.googlecode.com >> >> > > Great answer! 
> > Andrew > Maybe not: http://lists.w3.org/Archives/Public/public-html/2009Oct/0658.html "The purpose of conneg is to remove such explicit type indications from the distributed content (HTML) so that such coupling of standards would not be embedded in the web for eternity. The rel attribute should be used to state the purpose of a generic link so that the user agent can adjust its acceptance criteria according to that purpose, not according to some specific media format. Existing formats are revised or replaced by alternative formats far more often than the relation semantics (and names) change, and giving the user agent the flexibility to support new media types without changing existing content is critical to enabling deployment of new types." Cheers, Mike
<snip> > Maybe not: > > http://lists.w3.org/Archives/Public/public-html/2009Oct/0658.html </snip> As I recall, that discussion thread focused on using strongly-typed links in 200 responses, not 300 responses. As clients today continue to use the Accept header as a way to inform servers of their preferences (and servers use this information to select a representation for response), it seems appropriate to map out an Agent-driven solution that uses the same parameters. Also, the model that uses <a ... type="..." /> in a 300 response need not be hard-coded and could be expected to change over time (e.g. the "image/png" format is not available as a possible representation of that particular resource from that particular server on this particular day, etc.). However, a more acceptable<g> solution for some clients might be: <a href="..." rel="image">Pie chart</a> <a href="..." rel="markup">Data Table</a> <a href="..." rel="text">Text List</a> As the spec does not detail any single solution, I see these two approaches as implementation details that can be worked out between clients and servers supporting the agent-driven negotiation. I'd very much like to hear about real-world examples of Agent-driven negotiation. Anyone out there care to share? mca http://amundsen.com/blog/ http://twitter.com@mamund http://mamund.com/foaf.rdf#me #RESTFest 2010 http://rest-fest.googlecode.com On Mon, Dec 13, 2010 at 12:08, Mike Kelly <mike@...> wrote: > On Mon, Dec 13, 2010 at 4:34 PM, wahbedahbe <andrew.wahbe@...> wrote: >> >> --- In rest-discuss@yahoogroups.com, mike amundsen <mamund@...> wrote: >>> >>> There is nothing in the spec that details the representation to use >>> for a 300 response. However, it does indicate that the selection can >>> be done automatically or "manually by the user selecting from a >>> generated (possibly hypertext) menu."[1] >>> >>> This leads to an assumption that a set of hypertext links (e.g. 
HTML >>> anchor tags) would work just fine for human consumption: >>> <a href="...">Pie char</a> >>> <a href="...">Data Table</a> >>> <a href="...">Text List</a> >>> >>> Since the HTML A tag has "type" as an optional "typoe" attribute[2], >>> the same content can be modified to help client applications make >>> automated choices. >>> <a href="..." type="image/png">Pie char</a> >>> <a href="..." type="text/html">Data Table</a> >>> <a href="..." type="text/plain">Text List</a> >>> >>> Finally, since the HTTP 1.1 spec also indicates the information could >>> be carried in a header, the new Web Linking spec[3] could be used as a >>> guide for returning the same information as Link Headers: >>> Link: <...>; type="image/png",<...>;type="text/html",<...>;type="text/plain" >>> >>> This last option works well for clients that are not expecting an HTML >>> response body (e.g. image viewers that want to negotiate for a >>> preferred binary format, etc.). >>> >>> [1] http://www.w3.org/Protocols/rfc2616/rfc2616-sec12.html#sec12.2 >>> [2] http://www.w3.org/TR/html4/struct/links.html#h-12.2 >>> [3] http://tools.ietf.org/html/rfc5988#section-5 >>> >>> mca >>> http://amundsen.com/blog/ >>> http://twitter.com@mamund >>> http://mamund.com/foaf.rdf#me >>> >>> >>> #RESTFest 2010 >>> http://rest-fest.googlecode.com >>> >>> >> >> Great answer! >> >> Andrew >> > > Maybe not: > > http://lists.w3.org/Archives/Public/public-html/2009Oct/0658.html > > "The purpose of conneg is to remove such explicit type indications > from the distributed content (HTML) so that such coupling of > standards would not be embedded in the web for eternity. The rel > attribute should be used to state the purpose of a generic link > so that the user agent can adjust its acceptance criteria > according to that purpose, not according to some specific > media format. 
Existing formats are revised or replaced by > alternative formats far more often than the relation semantics > (and names) change, and giving the user agent the flexibility > to support new media types without changing existing content > is critical to enabling deployment of new types." > > > Cheers, > Mike
mike amundsen wrote: > <snip> >> Maybe not: >> >> http://lists.w3.org/Archives/Public/public-html/2009Oct/0658.html > </snip> > As I recall, that discussion thread focused on using strongly-typed > links in 200 responses, not 300 responses. > > As clients today continue to use the Accept header as a way to inform > servers of their preferences (and servers use this information to > select a representation for response), it seems appropriate to map out > an Agent-driven solution that uses the same parameters. Also, the > the model that uses <a ... type="..." /> in an 300 response need not > be hard-coded and could be expected to change over time (e.g. the > "image/png" format is not available as a possible representation of > that particular resource from that particular server on this > particular day, etc.). > > However, a more acceptable<g> solution for some clients might be: > <a href="..." rel="image">Pie char</a> > <a href="..." rel="markup">Data Table</a> > <a href="..." rel="text">Text List</a> > > As the spec does not detail any single solution, I see these two > approaches as implementation details that can be worked out between > clients and servers supporting the agent-driven negotiation. > > I'd very much like to hear about real-world examples of Agent-driven > negotiation. Anyone out there care to share? 
Appears to me that there are two distinct use-cases which could be grouped under the banner of agent-driven negotiation: Providing the @type attribute as an additional indicator as to which resource you may want to request next: <a href="/data/image" type="image/png">Pie chart</a> <a href="/data/table" type="text/html">Data Table</a> <a href="/data/list" type="text/plain">Text List</a> Providing the @type attribute to drive conneg: <a href="/data" type="image/png">Pie chart</a> <a href="/data" type="text/html">Data Table</a> <a href="/data" type="text/plain">Text List</a> The first case is used often, for example blogs publish links to atom feeds and rss feeds with the @type set. The second example isn't defined in HTML or its semantics, and isn't supported by any browser vendors or suchlike; indeed I've never seen any examples of it in the wild - as an aside, I have asked about doing this previously and the HTML WG and HTTP WG both rejected the notion, with differing reasons that both complemented Roy T. Fielding's quote above. Best, Nathan
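[In the first use-case above, a client's selection step is just matching @type against its own preference order. A sketch, using Nathan's distinct-URI links (hrefs and preference list are illustrative):]

```python
# Typed links from a 300 response, distinct URI per representation.
options = [
    ("/data/image", "image/png"),
    ("/data/table", "text/html"),
    ("/data/list", "text/plain"),
]

def choose(options, preferences):
    """Pick the first link whose @type matches the client's
    preference order; None means no acceptable representation."""
    for media in preferences:
        for href, media_type in options:
            if media_type == media:
                return href, media_type
    return None

print(choose(options, ["text/html", "text/plain"]))  # ('/data/table', 'text/html')
```

[The client then simply GETs the chosen URI; no second round of conneg is needed, which is consistent with Andrew's point that a 300 response means conneg already failed.]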
On Mon, Dec 13, 2010 at 12:08 PM, Mike Kelly <mike@...> wrote: > On Mon, Dec 13, 2010 at 4:34 PM, wahbedahbe <andrew.wahbe@...> wrote: >> >> --- In rest-discuss@yahoogroups.com, mike amundsen <mamund@...> wrote: >>> >>> There is nothing in the spec that details the representation to use >>> for a 300 response. However, it does indicate that the selection can >>> be done automatically or "manually by the user selecting from a >>> generated (possibly hypertext) menu."[1] >>> >>> This leads to an assumption that a set of hypertext links (e.g. HTML >>> anchor tags) would work just fine for human consumption: >>> <a href="...">Pie char</a> >>> <a href="...">Data Table</a> >>> <a href="...">Text List</a> >>> >>> Since the HTML A tag has "type" as an optional "typoe" attribute[2], >>> the same content can be modified to help client applications make >>> automated choices. >>> <a href="..." type="image/png">Pie char</a> >>> <a href="..." type="text/html">Data Table</a> >>> <a href="..." type="text/plain">Text List</a> >>> >>> Finally, since the HTTP 1.1 spec also indicates the information could >>> be carried in a header, the new Web Linking spec[3] could be used as a >>> guide for returning the same information as Link Headers: >>> Link: <...>; type="image/png",<...>;type="text/html",<...>;type="text/plain" >>> >>> This last option works well for clients that are not expecting an HTML >>> response body (e.g. image viewers that want to negotiate for a >>> preferred binary format, etc.). >>> >>> [1] http://www.w3.org/Protocols/rfc2616/rfc2616-sec12.html#sec12.2 >>> [2] http://www.w3.org/TR/html4/struct/links.html#h-12.2 >>> [3] http://tools.ietf.org/html/rfc5988#section-5 >>> >>> mca >>> http://amundsen.com/blog/ >>> http://twitter.com@mamund >>> http://mamund.com/foaf.rdf#me >>> >>> >>> #RESTFest 2010 >>> http://rest-fest.googlecode.com >>> >>> >> >> Great answer! 
>>
>> Andrew
>>
>
> Maybe not:
>
> http://lists.w3.org/Archives/Public/public-html/2009Oct/0658.html
>
> "The purpose of conneg is to remove such explicit type indications
> from the distributed content (HTML) so that such coupling of
> standards would not be embedded in the web for eternity. The rel
> attribute should be used to state the purpose of a generic link
> so that the user agent can adjust its acceptance criteria
> according to that purpose, not according to some specific
> media format. Existing formats are revised or replaced by
> alternative formats far more often than the relation semantics
> (and names) change, and giving the user agent the flexibility
> to support new media types without changing existing content
> is critical to enabling deployment of new types."
>
> Cheers,
> Mike

As Mike and Nathan have both said: using the type attribute to drive the values of the Accept header and using it to help guide the selection of links are quite different, and I interpreted Roy's comments to be about the former.

But for me the most important point is that when returning a 300 response, the server is saying that content negotiation can't be done and the client has to choose for itself. This could be because the Accept header values were not specific enough, or some server-related issue. Once you've got to the point where you are returning 300, you have to provide the client with a set of options -- the expected media type for each option is a reasonable thing to specify. You can't expect the next step to rely on content negotiation or it would have worked on the first request. I.e., GET /foo resulting in a 300 response listing only /foo as an option (and expecting conneg to work on the 2nd try) isn't going to go anywhere.

Regards,

Andrew
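Andrew's point — that by the time a server returns 300, conneg has failed and the response must enumerate concrete alternatives, each with its own URI and expected media type — can be sketched server-side. A minimal, hypothetical sketch (the `multiple_choices` helper and the URIs are invented; it combines the HTML menu and the Link-header forms Mike Amundsen described):

```python
# Hedged sketch: assembling a 300 Multiple Choices response that lists
# each alternative twice -- as an HTML anchor menu for humans and as a
# Link header (RFC 5988 style) for clients not expecting an HTML body.
def multiple_choices(variants):
    """Build (status, headers, body) for a 300 response.

    variants: list of (uri, media_type, label) tuples, one per alternative.
    Each alternative gets its own URI, so the follow-up GET needs no conneg.
    """
    link = ", ".join('<%s>; type="%s"' % (uri, mt) for uri, mt, _ in variants)
    body = "\n".join(
        '<a href="%s" type="%s">%s</a>' % (uri, mt, label)
        for uri, mt, label in variants
    )
    headers = {
        "Content-Type": "text/html",
        "Link": link,  # machine-readable copy of the menu
    }
    return 300, headers, body

status, headers, body = multiple_choices([
    ("/data/image", "image/png", "Pie chart"),
    ("/data/table", "text/html", "Data Table"),
])
```

Listing only the original URI here would recreate the dead end Andrew describes: the second request would have no more information to negotiate with than the first.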
Peter Williams wrote: > > That is an interesting thought. I am not super-fond of changing the > processing model of a media type so drastically with a parameter. On > the other hand it would almost certainly work just fine in practice. > I don't see it as changing the processing model. XForms extends the processing model; RDFa doesn't -- parsing and rendering aren't any different, there's just an additional parsing model layered on top. I use a variety of tools which didn't need updating to understand xhtml-rdfa-1.dtd as XHTML, these tools aren't required to understand RDF triples as such, neither are intermediaries (without ruling out the possibility that they could understand RDFa). > > Why does this require revamping anything? Couldn't you just use a > profile value of <http://www.w3.org/MarkUp/DTD/xhtml-rdfa-1.dtd> > today? The downside is that very few, if any, intermediates would > actually understand it. This raises the possibility of hard to > reproduce issues arising from intermediates mishandling the body. It > does give intermediates a chance, though, by surfacing the > expectations of the client and server. > I just meant adding some more examples; the idea is to make media type descriptions easier to maintain, perhaps even wiki-like. Any intermediary that's based on validating to a DTD would be able to use the DTD in a profile parameter; implementing the profile parameter is probably a bigger hindrance than understanding DTDs. If an intermediary doesn't implement ;profile, it should still interoperate with application/xhtml+xml and the given DTD, minus the efficiency of exposing conformance level over-the-wire. -Eric
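The ';profile' idea Eric and Peter discuss hinges on media type parameters: an intermediary that implements the parameter can use the referenced DTD, while one that ignores it still sees plain application/xhtml+xml. A small, hand-rolled sketch of splitting such a Content-Type value (the `parse_media_type` helper is illustrative, not a standard API):

```python
# Illustrative sketch: split a Content-Type value into its media type and
# parameters, so the hypothetical 'profile' parameter can be consulted if
# understood and safely ignored otherwise.
def parse_media_type(value):
    parts = [p.strip() for p in value.split(";")]
    media_type, params = parts[0], {}
    for p in parts[1:]:
        if "=" in p:
            k, _, v = p.partition("=")
            params[k.strip()] = v.strip().strip('"')
    return media_type, params

ct = 'application/xhtml+xml; profile="http://www.w3.org/MarkUp/DTD/xhtml-rdfa-1.dtd"'
mt, params = parse_media_type(ct)
# mt is the base type every intermediary already understands;
# params.get("profile") surfaces the conformance level over the wire.
```

This matches Eric's framing: the base type keeps existing tools interoperating, and the parameter only adds efficiency for intermediaries that choose to implement it.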
I thank you all for your healthy contribution to my question.

In reality my doubt was at a more abstract level. Somehow I felt the statement "with each representation identified by its own URI" was not in full conformance with the REST architectural style.

Regarding the implementation, I can imagine an entity body for a 300 response, with anchors embedding the target media types, as dynamically generated and short-lived, a different case than a normal body for a 200 response. It is probably good to expand the conneg possibilities by leveraging the media types, even if I also think that's not the original intent in the HTML specification. But what if the client doesn't even support HTML? Perhaps it is an automated client not understanding HTML. IMHO conneg HTTP spec should be defined within the HTTP protocol boundaries.

Moreover, IMHO agent-driven conneg is not only intended as a fail-over case for the server-driven one, so the user agent should be able to rely on the normal content negotiation process, i.e. by setting Accept headers when knowing the representations available behind the resource's URI:

"Agent-driven negotiation is advantageous when the response would vary over commonly-used dimensions (such as type, language, or encoding), when the origin server is unable to determine a user agent's capabilities from examining the request, and generally when public caches are used to distribute server load and reduce network usage."

Cheers,
Alejandro
Well, I have to correct myself about my statement "IMHO conneg HTTP spec should be defined within the HTTP protocol boundaries", as that is the case. Following Mike Amundsen's answer, it is us who have been talking about using HTML's anchor type attribute to implement this feature in HTML. But leaving aside the validity of this approach as per http://lists.w3.org/Archives/Public/public-html/2009Oct/0658.html, the HTTP spec only talks about an entity body possibly containing a hypertext menu linking to the different representations' URIs, which was at the core of my doubt ;)
I was reading Ian Robinson's "Using typed links to forms" (again, it's a good
read) at http://iansrobinson.com/2010/09/02/using-typed-links-to-forms/.
Here he introduces the idea of adding XForms to a link relation, such that
instead of having only:
<link rel="" type="" href="" title="">
and documenting out of band how to post data to that link, one could have an
XForm at the end of the link and then use that information as the
online/inline/in-band documentation. I think that's quite a neat idea, but
it strikes me that it's getting awfully close to a WSDL spec.
Both XForms and WSDL define how to POST data adhering to a specific XML
schema. If you then add a list of links to XForms, the list would be much
like the WSDL list of methods/messages. The rel="" types would specify WSDL
message names.
All in all, we could have an HTML 5 list like the one below and call it the WSDL
method list:
<dl>
<dt><a rel="" type="" href="">Title</a></dt>
<dd>Description</dd>
</dl>
Each of the anchors then points to an XForm which in turn points to an XML
schema definition and defines POST urls - just like WSDL does.
Is this good or bad? Is it better or worse than WSDL and WADL? Other
opinions?
Thanks, Jørn
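The run-time discovery Jørn describes — following a typed link to an XForm and reading the POST target and expected data from it, instead of binding to a schema at compile time — can be sketched as follows. The form below is a hand-written illustration assuming an XForms 1.1-style submission element (with a `resource` attribute), not Ian Robinson's actual example:

```python
# Minimal sketch: a client fetches the XForms model behind a typed link and
# discovers, at run time, where to submit, with which method, and which
# instance fields the server expects. All names here are illustrative.
import xml.etree.ElementTree as ET

XF = "{http://www.w3.org/2002/xforms}"

FORM = """
<model xmlns="http://www.w3.org/2002/xforms">
  <instance>
    <order xmlns=""><item/><quantity/></order>
  </instance>
  <submission resource="/orders" method="post"/>
</model>
"""

def describe_form(xml_text):
    """Return (target URI, method, expected field names) from an XForms model."""
    root = ET.fromstring(xml_text)
    submission = root.find(XF + "submission")
    instance = root.find(XF + "instance")
    fields = [child.tag for child in list(instance)[0]]
    return submission.get("resource"), submission.get("method"), fields

resource, method, fields = describe_form(FORM)
# The client now knows to POST an order with 'item' and 'quantity' to
# /orders -- without any design-time stub, which is the contrast with
# WSDL that the thread turns on.
```

Because the server generates the form per request, it can change the target URI or pre-fill values without breaking clients, which is the decoupling the thread credits to this pattern over WSDL/WADL.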
My reading of Robinson's description does not look very much like WSDL/WADL:

- The "lightly typed links" point to dynamic documents (XForms docs), not static schema
- These XForms can contain as much or as little as the server wishes at the moment (including "pre-filled" values)
- XForms supports more than POST

The idea I get from Robinson here is that you can use links to forms (XForms in his example) as a way to reduce the "inline" application control data within any response representation. It seems his notion is that this reduces coupling for request operations and helps clean up these links by making it a bit less likely to use action-type link relation patterns.

I see WSDL/WADL as doing quite the opposite: increasing coupling (clients are expected to "consume" the schema at design-time, not run-time) and favoring action-orientation (WSDL is very much an RPC pattern).

mca
http://amundsen.com/blog/
http://twitter.com@mamund
http://mamund.com/foaf.rdf#me

#RESTFest 2010
http://rest-fest.googlecode.com
Hello.

I still look for someone that explains in clear terms why is WSDL so bad. At the end the answer is usually it supports RPC, which is not the case lately. Actually, it never was, as document-style services should be the ones to use. Oh, well.

Why is WSDL needed? To describe the actual service consumption process. Where do we find the service? Where do we find the document format we need to send or expect to receive? Where do we find the message send and receive pattern? It is there, in the WSDL.

Now, REST allows that flow to be discovered, rather than written in one place. Ian's description of the use of XForms achieves that, so they are similar in the goal, if maybe not so similar in the process. Both will require some out-of-band information. XForms is not forcing the POST method, actually. It is a way to describe the expected data in a known format, reducing coupling. You may later change the XForm when some data in the request changes, and nothing breaks (as a matter of fact, with WSDL the same should happen, but people use compile-time binding, stubs, that break that possibility).

That said, I think the benefit of this form is that it avoids, at least a little, the early binding that WADL or WSDL have. So you are creating totally dynamic clients instead of static ones, pre-built with stubs and things like that. There is still one problem to solve, though: the semantics of the data fields in the form. But that should be managed as a predefined context, with a general, agreed-upon glossary.

Cheers.

William Martinez Pomares.
William:

I did a quick scan and found about 50 discussion threads dating back to mid 2007 on REST-Discuss that at least mentioned WSDL/WADL in some way. We've certainly been talking about it quite a bit <g>.

<snip> I still look for someone that explains in clear terms why is WSDL so bad. </snip>

Personally, I don't find WSDL to be "bad." I do, however, assert that the WSDL pattern is not compatible w/ Fielding's particular arch model. WSDL, as I've experienced it (and continue to experience it), does not employ hypermedia to drive the transfer of application state. The WSDL implementations I see today still assume design-time consumption and binding of a static, strongly-typed schema. I have yet to find WSDL implementations that induce the "Architectural Properties of Key Interest"[1] Fielding outlines, especially Modifiability. For these reasons, when working in an arch model that follows Fielding's REST, I resist adding WSDL/WADL implementations to the architecture.

<snip>At the end the answer is usually it supports RPC, which is not the case lately.</snip>

I find this interesting. I still see WSDL delivered to me by third-party integrators that is in the RPC style. In fact, I cannot recall ever implementing a document-style WSDL integration for production use. I'd very much like to see some examples of this. The closest I've been able to come to any "document-style" WSDL I've been given is to implement a single XML message-handler that treats a range of calls each as a unique "message format" instead of a single document model for a related set of request/response interactions. Maybe I am simply working w/ rather dull folks who are not keeping up on WSDL evolution. Please feel free to pass along any pointers you have on document-style WSDL implementations (here or off-list, if you like).

[1] http://www.ics.uci.edu/~fielding/pubs/dissertation/net_app_arch.htm#sec_2_3

mca
http://amundsen.com/blog/
http://twitter.com@mamund
http://mamund.com/foaf.rdf#me

#RESTFest 2010
http://rest-fest.googlecode.com
> Now, REST allows that flow to be discovered, rather than written in one
> place

Yes, that's certainly one important difference. With SOAP/WSDL you are given everything up front and have to figure out what to use when. With REST you are only served the links that fit the actual state. You cannot do the latter with WSDL. Thanks for the feedback, everybody.

> That said, I think the benefit of this form is that it avoids, at least a
> little, the early binding that WADL or WSDL have.

I guess that depends on how you consume the XForm. I would take the referenced XML schema in the model definition and automatically create classes from it. That would, with my knowledge, help me a lot in getting started consuming the resource. That would be compile-time coupling. It is still, from my point of view, easier to work with a native class representation of the model than with the bare XML itself. But, again, using XForms certainly avoids early binding of the application flow.

/Jørn
> <snip>At the end the answer is usually it supports RPC, which is not > the case lately.</snip> > I find this interesting. I still see WSDL delivered to me by > third-party integrators that is in the RPC style. In fact, I cannot > recall ever implementating a document-style WSDL integration for > production use. That is also my experience. /J�rn ----- Original Message ----- From: "mike amundsen" <mamund@...> To: "William Martinez Pomares" <wmartinez@...> Cc: <rest-discuss@yahoogroups.com> Sent: Thursday, December 23, 2010 4:24 PM Subject: Re: [rest-discuss] Re: XForms vs. WSDL? William: I did a quick scan an found about 50 discussion threads dating back to mid 2007 on REST-Discuss that at least mentioned WSDL/WADL in some way. WE've certainly been talking about it quite a bit<g>. <snip> I still look for someone that explains in clear terms why is WSDL so bad. </snip> Personally, I don't find WSDL to be "bad." I do, however, assert that the WSDL pattern is not compatible w/ Fielding's particular arch model. WSDL, as I've experienced it (and continue to experience it) does not employ hypermedia to drive the transfer of application state. The WSDL implementations I see today still assume design-time consumption and binding of a static, strongly-typed schema. I have yet to find WSDL implementations that induce the "Architectural Properties of Key Interest"[1] Fielding outlines; especially Modifiability. For these reasons, when working in an arch model that follows the Fielding's REST, I resist adding WSDL|WADL implementations to the architecture. <snip>At the end the answer is usually it supports RPC, which is not the case lately.</snip> I find this interesting. I still see WSDL delivered to me by third-party integrators that is in the RPC style. In fact, I cannot recall ever implementating a document-style WSDL integration for production use. I'd very much like to see some examples of this. 
The closest I've been able to come to any "document-style" w/ WSDL I've been given is to implement a single XML message-handler that treats a range of calls each as a unique "message format" instead of a single document model for a related set of request/response interactions. Maybe I am simply working w/ rather dull folks who are not keeping up on WSDL evolution. Please feel free to pass along any pointers you have on document-style WSDL implementations (here or off-list, if you like). [1] http://www.ics.uci.edu/~fielding/pubs/dissertation/net_app_arch.htm#sec_2_3 mca http://amundsen.com/blog/ http://twitter.com@mamund http://mamund.com/foaf.rdf#me #RESTFest 2010 http://rest-fest.googlecode.com On Thu, Dec 23, 2010 at 09:54, William Martinez Pomares <wmartinez@...> wrote: > > Hello. > I still look for someone that explains in clear terms why WSDL is so bad. > At the end the answer is usually it supports RPC, which is not the case > lately. Actually, it never was, as document style services should be the > ones to use. Oh, well. > > Why is WSDL needed? To describe the actual service consumption process. > Where do we find the service? Where do we find the document format we need > to send or expect to receive? Where do we find the message send and receive > pattern? It is there, in the WSDL. > > Now, REST allows that flow to be discovered, rather than written in one > place. Ian's description of the use of XForms achieves that, so they are > similar in the goal, maybe not so similar in the process. Both will > require some out-of-band information. XForms is not forcing the POST > method, actually. It is a way to describe the expected data in a known > format, reducing coupling. You may later change the XFORM when some data > for the request changes, and nothing breaks (as a matter of fact, with > WSDL the same should happen, but people use compile-time binding, stubs, > that break that possibility). 
> > That said, I think the benefit of this form is it avoids, at least a > little, the early binding that WADL or WSDL have. So, you are creating > totally dynamic clients instead of static ones, pre-built with stubs and > things like that. There is still one problem to solve, though, the > semantics of the data fields in the Form. But that should be managed as a > predefined context, with a general, agreed-upon, glossary. > > Cheers. > > William Martinez Pomares. > > --- In rest-discuss@yahoogroups.com, Jørn Wildt <jw@...> wrote: >> >> I was reading Ian Robinson's "Using typed links to forms" (again, it's a >> good >> read) at http://iansrobinson.com/2010/09/02/using-typed-links-to-forms/. >> Here he introduces the idea of adding XForms to a link relation, such >> that >> instead of having only: >> >> <link rel="" type="" href="" title=""> >> >> and documenting out of band how to post data to that link, one could have an >> XForm at the end of the link and then use that information as the >> online/inline/in-band documentation. I think that's quite a neat idea, >> but >> it strikes me that it's getting awfully close to a WSDL spec. >> >> Both XForm and WSDL define how to POST data adhering to a specific XML >> schema. If you then add a list of links to XForms, the list would be much >> like the WSDL list of methods/messages. The rel="" types would specify >> WSDL >> message names. >> >> In all we could have an HTML 5 list like the one below and call it the >> WSDL >> method list: >> >> <dl> >> <dt><a rel="" type="" href="">Title</a></dt> >> <dd>Description</dd> >> </dl> >> >> Each of the anchors then points to an XForm which in turn points to an XML >> schema definition and defines POST URLs - just like WSDL does. >> >> Is this good or bad? Is it better or worse than WSDL and WADL? Other >> opinions? >> >> Thanks, Jørn >>
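The dynamic-consumption idea discussed above can be sketched in a few lines of code. This is a minimal, hypothetical illustration (every element name, link relation, and URL below is invented, and the "form" is a toy stand-in for a real XForm): a client finds a typed link by its rel value, then builds a POST from whatever fields the linked form declares, instead of from compiled-in knowledge.

```python
import xml.etree.ElementTree as ET

# A hypothetical service home page: typed links in a <dl>, as in the post.
HOME = """
<dl>
  <dt><a rel="urn:example:place-order" type="application/xhtml+xml"
         href="http://example.org/forms/order">Place order</a></dt>
  <dd>Submit a new order</dd>
</dl>
"""

# A hypothetical form document found at the end of that link: it names the
# submission target, the method, and the fields the server expects.
ORDER_FORM = """
<form action="http://example.org/orders" method="post">
  <field name="item"/>
  <field name="quantity"/>
</form>
"""

def find_link(home_doc, rel):
    """Pick the anchor carrying the wanted link relation -- no hard-coded URI."""
    for a in ET.fromstring(home_doc).iter("a"):
        if a.get("rel") == rel:
            return a.get("href")
    raise LookupError(rel)

def build_request(form_doc, data):
    """Assemble a request from whatever fields the form declares at runtime."""
    form = ET.fromstring(form_doc)
    fields = [f.get("name") for f in form.iter("field")]
    body = {name: data[name] for name in fields}  # KeyError if a field is missing
    return form.get("method").upper(), form.get("action"), body

form_uri = find_link(HOME, "urn:example:place-order")
method, target, body = build_request(ORDER_FORM, {"item": "coffee", "quantity": 2})
```

If the server later adds or renames a field, only the form changes; a client written this way keeps working as long as it can supply the named data, which is exactly the late-binding benefit claimed for the XForms approach.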
Hi Mike! Actually, I totally agree with you. I want an architectural discussion of why the WSDL definition is wrong and should not be used. In most of the discussions, it is taken for granted that WSDL is simply wrong. In others, as you mention, it is depicted as non-RESTful because actual implementations are static-binding, build-time code generation things. Totally agree, just that WSDL implementation may not be what was intended. See, if we are able to hit a URL looking for data about how to continue my flow, I can have many resources that can help me. One could be an XFORM. The other one can be a WSDL. Both should tell me what to do next. But wait, for that a WSDL should be something I can consume dynamically. There is nothing in the spec that says it should be static. Implementations do not follow that, and thus all implementations of WSDL are un-RESTful. But that is not WSDL's fault, is it? Actually, it may be. You see, prior to WSDL 2.0 we had a very static definition of a WS consumption process. Now, WSDL 2.0 provides some patterns or profiles that are not forcing you to use POST, for instance, and may even work for defining HTTP interactions. That is why I say "lately". Still, not much development on WSDL 2.0 has been done, and not much using it in a RESTful way. I posted an example of a REST service described using WSDL 2.0 some time ago, which I found on IBM's developerWorks. Bad news is you can build your client using WSDL as a static description to generate a stub, or create your client to consume the WSDL as any other resource, as part of a larger RESTful web service. Now, guess how many people actually do either of those things? Yep. At the end, it may be a client implementation problem. Here are some links to similar ideas, using WSDL to define a "RESTful" implementation. If you look closer, you will see we may end up again with static stubs, but again it is a client problem, isn't it? 
http://www.ibm.com/developerworks/webservices/library/ws-rest1 http://www.ibm.com/developerworks/webservices/library/ws-restwsdl/ William Martinez Pomares
Hi again. --- In rest-discuss@yahoogroups.com, Jørn Wildt <jw@...> wrote: > Yes, that's certainly one important difference. With SOAP/WSDL you are given > everything up front and have to figure out what to use when. With REST you > are only served the links that fit the actual state. You cannot do the last > with WSDL. Well, yes, you can. At least with WSDL (let's forget SOAP). WSDL is a web format, registered with IANA, and I can get it dynamically and consume it dynamically. Nobody does it because of tools, granted. The shame is on the tool designers. > I guess that depends on how you consume the XForm. Totally true. Knowing the XFORM up front will make you write a rigid client. Same for WSDL. Consuming the XFORM dynamically will make everybody happy. It is harder with WSDL, but possible too. Here the real difference is the actual implementation, plus, I would say, XFORM is still light, while WSDL we may label as "heavy". Cheers! William Martinez Pomares.
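William's claim that a service description could be fetched and consumed at runtime, like any other resource, can be sketched. The fragment below is invented and drastically simplified (real WSDL 2.0 has namespaces and an interface/binding/service split); the only point being illustrated is that the endpoint and method can be read out of the document when it is retrieved, rather than baked into a generated stub.

```python
import xml.etree.ElementTree as ET

# A hypothetical, heavily simplified WSDL-like description, treated as
# just another resource the client GETs and interprets on the fly.
WSDL = """
<description>
  <operation name="getOrder" method="GET"
             address="http://example.org/orders/{id}"
             accepts="application/xml"/>
  <operation name="addOrder" method="POST"
             address="http://example.org/orders"
             accepts="application/xml"/>
</description>
"""

def describe(wsdl_doc):
    """Index the operations a freshly fetched description advertises."""
    ops = {}
    for op in ET.fromstring(wsdl_doc).iter("operation"):
        ops[op.get("name")] = {
            "method": op.get("method"),
            "address": op.get("address"),
            "accepts": op.get("accepts"),
        }
    return ops

ops = describe(WSDL)
# The client decides what to invoke from the document, not from a stub:
get_order = ops["getOrder"]
uri = get_order["address"].format(id="42")
```

As the thread notes, nothing in this is forbidden by the spec; it is the stub-generating tooling, not the format, that pushes everyone toward design-time binding.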
William: <snip> I want an architectural discussion of why the WSDL definition is wrong and should not be used. In most of the discussions, it is taken for granted that WSDL is simply wrong. In others, as you mention, it is depicted as non RESTful because actual implementations are static binding, build time code generation things. Totally agree, just that WSDL implementation may not be what was intended. </snip> Well, any arch discussion will have a context. In the context of the REST arch model, I find the WSDL pattern (WSDL, WADL, GData Python API, etc.) to be inappropriate. I've already mentioned a handful of reasons. Possibly one I did not emphasize is the lack of hypermedia in this external definition pattern. While WSDL can do a good job of expressing the request and response details, I've yet to see WSDL do a good job expressing the state transition details. IOW, which steps are first, second, third, etc. These "steps", in my experience almost always depend on the state of things at the moment and are not at all easy to properly express in a static document. I find it much easier to express the valid state transitions in the context of the _current_ state transition. Thus, when working in a REST arch implementation, I continue to favor including app control information in the response representation rather than in an external document. I've been toying with pulling more and more details on these transitions out of client code and into the message. I've even been experimenting with putting some of the state transition detail in _external_ documents, but have yet to find the WSDL pattern helpful for that work. <snip> See, if we are able to hit a URL looking for data about how to continue my flow, I can have many resources that can help me. One could be an XFORM. The other one can be a WSDL. Both should tell me what to do next. But wait, for that a WSDL should be something I can consume dynamically. There is nothing in the spec that says it should be static. 
Implementations do not follow that, and thus all implementations of WSDL are un-RESTful. But that is not WSDL's fault, is it? </snip> While I can see that it would be _possible_ to use WSDL at runtime, I've yet to see a viable example. I've toyed with this a bit and found it a non-starter. I'm open to seeing working examples of this. I'm not really interested in reading more technical papers about it, tho. I've read them for years and still see no tangible work in this area. <snip> Here are some links to similar ideas, using WSDL to define a "RESTful" implementation. If you look closer, you will see we may end up again with static stubs, but again it is a client problem, isn't it? http://www.ibm.com/developerworks/webservices/library/ws-rest1 http://www.ibm.com/developerworks/webservices/library/ws-restwsdl/ </snip> I read this material years ago. Nothing there leads me to believe WSDL is appropriate for REST style arch models. From my POV, WSDL 2.0 is overly verbose (w/o the corresponding value in the added material) and lacking in key information needed at runtime, including valid "next steps" in state transitions. I find this last item vital when working in any M2M scenario. Now, to get back to your initial sentence, let's have an architecture discussion about WSDL. To wit: "Under what arch model is WSDL a valid|viable component of the implementation?" IOW, drop the idea of arguing the merits of WSDL + REST for a moment. Instead, how about describing an arch model (any arch model) where WSDL (and its relations) has little or no friction with the desired system properties and the constraints you use to induce those properties. What does _that_ arch model look like? mca http://amundsen.com/blog/ http://twitter.com@mamund http://mamund.com/foaf.rdf#me #RESTFest 2010 http://rest-fest.googlecode.com
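Mike's point that the valid "next steps" depend on the state of things at the moment, and so belong in the current representation rather than in a static external document, can be made concrete with a sketch. The representations and rel names below are invented for illustration:

```python
# The server embeds only the transitions that are valid *now* in each
# response representation; no external contract lists them unconditionally.

def transitions(representation):
    """Return the link relations the current representation offers."""
    return {link["rel"]: link["href"] for link in representation["links"]}

# The same order resource in two different states advertises different
# follow-on actions:
order_open = {
    "status": "open",
    "links": [
        {"rel": "payment", "href": "/orders/7/payment"},
        {"rel": "cancel",  "href": "/orders/7/cancel"},
    ],
}
order_shipped = {
    "status": "shipped",
    "links": [
        {"rel": "tracking", "href": "/orders/7/tracking"},
    ],
}
```

A client written against link relations ("cancel", "tracking") rather than against a fixed operation list simply never sees a transition the server considers invalid, which is hard to express in a design-time document like WSDL.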
Mike. Ok, this is interesting, we can get several valid points in the discussion. But first, let me clarify I'm not in favor of WSDL as a hard believer that it will make the world happier, or that it is the only way to work with REST. It is just that, without a context, running away from it with no reason is such nonsense that it makes me wonder, and I end up like a defender of the lost cause. Now. Let's see: <snip> Well, any arch discussion will have a context. In the context of the REST arch model, I find the WSDL pattern (WSDL, WADL, GData Python API, etc.) to be inappropriate. I've already mentioned a handful of reasons. Possibly one I did not emphasize is the lack of hypermedia in this external definition pattern. </snip> That's an interesting POV. I would not see WSDL as a pattern, linked to external additional things. Just think of it as a document format for a moment. <snip> While WSDL can do a good job of expressing the request and response details, I've yet to see WSDL do a good job expressing the state transition details. IOW, which steps are first, second, third, etc. These "steps", in my experience almost always depend on the state of things at the moment and are not at all easy to properly express in a static document. </snip> Totally agree with you. I would not recommend trying to define the complete state machine in one doc either, unless you have a particular need to have instances of a state machine that may run longer and in parallel with other instances (versioning). And that is beyond REST. For REST, all the transitions may be based on the actual state. I would not use WSDL for a one-place description of those transitions, and it is not made for that. It is made for defining the document formats, bindings and message delivery patterns. I guess what gets confused here with REST transitions is the message pattern idea. Those patterns may go beyond the REST style, as they were not designed for that. 
Actually, the messages are one-way deliveries, while a REST interaction in an HTTP implementation requires a request and a response. And you are right, the articles I posted are just an example of how WSDL is thought to be compatible with REST, but following that philosophy of enclosing all the interactions. That, let's say, is a pattern (or antipattern, so to speak). <snip> I've been toying with pulling more and more details on these transitions out of client code and into the message. I've even been experimenting with putting some of the state transition detail in _external_ documents, but have yet to find the WSDL pattern helpful for that work. </snip> Yes, actually I cheer the state transition info IN the message. Not in the code nor in external docs. If the WSDL is part of it, then the WSDL should be THE message, not an external doc. Last point on WSDL and REST: WSDL is not the whole, but a document you may get in one GET operation; it should tell you the format of the document you need to send next, the URI, and any expected response. Once you get the response, that one should drive you to the next, and so on. Just like an XFORM, but yes, awfully verbose. Now, for what architectural style may WSDL be suitable? Well, some SOA of course. WSDL was meant to be a web services implementation definition, describing the messages in XML format, the message exchange pattern and the underlying protocol binding. If you have a system not totally built on top of HTTP, with complex exchange patterns and a mix of data- and processing-intensive operations, then WSDL may (MAY) help you with little bits of automation for client construction. WSDL was not built for REST, and surely REST was not defined for the system I describe, although one or two of the nodes are on the web. Still, WSDL is not magical and may not fit all implementations; it may work for integrated applications, not necessarily for distributed ones. 
But describing the essential SOA constraints and properties would take longer than this comment allows. And, I hope you now know that I'm not totally comfortable with the idea of REST being the natural SOA solution. It isn't, and maybe that is a discussion we need to start in another thread. Cheers! William Martinez Pomares. BTW even Woden seems abandoned, so that is why you see almost no practical work on this. Frank Cohen, when I told him WSDL 2.0 was official, smiled back and said: "Who cares?". Oh, well.
William: Thanks for the thoughtful post. It will take me some time to take it all in and, since I think our talk is straying from REST, I'll respond at length directly to you (off list). As always,it is great to converse w/ you on topics like this. Thanks. mca http://amundsen.com/blog/ http://twitter.com@mamund http://mamund.com/foaf.rdf#me #RESTFest 2010 http://rest-fest.googlecode.com On Thu, Dec 23, 2010 at 17:25, William Martinez Pomares <wmartinez@...> wrote: > Mike. > Ok, this is interesting, we can get several valid points in the discussion. But first, let me clarify I'm not in favor of WSDL as a hard believer it will make the world happier, or that it is the only way to work on REST. It is just that, without a context, running away from it with no reason is such non-sense that makes me wonder, and I end up like a defender of the lost cause. > > Now. Let's see: > > <snip> > Well, any arch discussion will have a context. In the context of the REST arch model, I find the WSDL pattern (WSDL, WADL, GData Python API, etc.) to be inappropriate. I've already mentioned a handful of reasons. Possibly one I did not emphasize is the lack of hypermedia in this external definition pattern. > </snip> > > That's an interesting POV. I would not to see WSDL as a pattern, linked to external additional things. Just think of it as a document format for a moment. > > <snip> > While WSDL can do a good job of expressing the request and response details, I've yet to see WSDL do a good job expressing the state transition details. IOW, which steps are first, second, third, etc. These "steps", in my experience almost always depend on the state of things at the moment and are not at all easy to properly express in a static document. > </snip> > > Totally agree with you. 
I would not recommend either trying to define the complete state machine in one doc, unless you have particular need to have instances of a state machine that may run longer and in parallel with other instances (versioning). And that is beyond REST. > > For REST, all the transitions may be based on the actual state. I would not use WSDL for a one place description of those transitions, and it is not made for that. It is made for defining the document formats, bindings and message delivery patterns. > > I guess what gets confused here with REST transitions is the message pattern idea. Those patterns may go beyond the REST style, as they were not designed for that. Actually, the messages are one way, delivery, while a REST interaction in HTTP implementation requires a request and a response. > > And you are right, the articles I posted are just an example of how WSDL is thought to be compatible with REST, but following that philosophy of enclosing all the interactions. That, let's say, is a pattern (or antipattern, so to speak). > > <snip> > I've been toying with pulling more and more details on these transitions out of client code and into the message. I've even been experimenting with putting some of the state transition detail in _external_ documents, but have yet to find the WSDL pattern helpful for that work. > </snip> > Yes, actually I cheer the state transition info IN the message. Not in the code nor in external docs. If the WSDL is part of it, then the WSDL should be THE message, not an external doc. > > Last point on WSDL and REST: WSDL is not the whole, but a document you may get in one get operation, that should tell you the format of the document you need to send next, the URI, and any expected response, if any, once you get the response, that one should drive you to the next one and so on. Just like an XFORM, but yes, awfully verbose. > > Now, about what architectural style may WSDL be suitable? Well some SOA of course. 
> WSDL was meant to be a web services implementation definition, describing the messages in XML format, the message exchange pattern and the underlying protocol binding. If you have a system not totally built on top of HTTP, with complex exchange patterns and a mix of data- and processing-intensive operations, then WSDL may (MAY) help you with little bits of automation for client construction. WSDL was not built for REST, and surely REST was not defined for the system I describe, although one or two of the nodes are on the web. Still, WSDL is not magical and may not fit all implementations; it may work for integrated applications, not necessarily for distributed ones.
>
> But describing the essential SOA constraints and properties may take longer than this comment allows.
>
> And, I hope you now know that I'm not totally comfortable with the idea of REST being the natural SOA solution. It isn't, and maybe that is a discussion we need to start in another thread.
>
> Cheers!
>
> William Martinez Pomares.
>
> BTW, even Woden seems abandoned, so that is why you see almost no practical work on this. Frank Cohen, when I told him WSDL 2.0 was official, smiled back and said: "Who cares?". Oh, well.
Jørn Wildt wrote:
>
> I was reading Ian Robinsons "Using typed links to forms" (again, it's
> a good read)
>
I disagree with the premise. REST's uniform interface requires generic
media and data types. The example creates an application-specific media
and data type (ordering one brand of coffee), where even domain-specific
media and data types (ordering coffee) would be too fine-grained, where
network-specific (generic to IP) media and data types are called for
(ordering anything).
Why not just use XForms forwards instead of backwards, using
standardized data types and link relations like the thesis says? I
don't understand how this would be less pragmatic than the example
given, particularly to the extent of using pragmatism to justify
deliberately avoiding the hypertext constraint in favor of increased
coupling.
Link relations aren't coupling when they're standardized, i.e. common
knowledge at the IP layer across numerous protocols and data types. The
difference is the key difference in REST -- the uniform interface is
network-based, extended link relations are library-based (which is fine
for domain-specific vocabulary, not for directing application flow by
baking HTTP methods into them and such). Methods, headers, data types
and link relations nobody's ever heard of before you minted them are
*not* the same thing as the standardized uniform interface Roy's thesis
describes.
>
> and document out of band how to post data to that link
>
Doesn't compute. The whole point of the hypertext constraint is that
hypertext controls are used to document this in-band, relying on the
common-knowledge processing model of ubiquitous media types. Maybe
this other proposed architectural style has merit, I'm not judging,
only stating that it can't be REST if there's no hypertext constraint.
>
> have an XForm at the end of the link and then use that information as
> the online/inline/in-band documentation. I think that's quite a neat
> idea, but it strikes me that it's getting awfully close to a WSDL spec.
>
Exactly. IDLs, and hypertext-driven APIs, aren't the same paradigm.
It hadn't occurred to me that XForms could be repurposed as an IDL, but
my advice is, don't. IDL-driven APIs are NOT REST.
>
> Both XForm and WSDL define how to POST data adhering to a specific
> XML schema.
>
Not exactly. IDLs are not self-documenting hypertext controls -- they
cannot instruct a user-agent as to the default values, validate input
(type=e-mail is one thing, black/whitelists are another) using code-on-
demand; nor do they provide any mechanism to link to documentation (or
render it outright as a pop-up) to inform the decision-making of the
humans or machines (RDFa doesn't make much sense in an IDL) interacting
with the API.
>
> If you then add a list of links to XForms, the list would be much
> like the WSDL list of methods/messages.
>
You say that like it's a good thing? ;-) There are two approaches to
the problem of distributing the object interface instead of distributing
the object. One is to distribute a definition of your custom interface.
The other is to use the distributed uniform interface, which is generic
-- if all objects use the same interface, what's left to describe in
an IDL?
>
> The rel="" types would specify WSDL message names.
>
In HTTP, REST's uniform interface includes, amongst other methods: GET,
PUT, POST, PATCH and DELETE. If your interfaces are object-specific
(hint, hint) then yeah, IDLs make sense, but I don't know how you start
from tunneling custom methods (expressed as link relations) over POST
and arrive at REST.
>
> In all we could have an HTML 5 list like the one below and call it
> the WSDL method list:
>
> <dl>
> <dt><a rel="" type="" href="">Title</a></dt>
> <dd>Description</dd>
> </dl>
>
> Each of the anchors then points to an XForm which in turn points to an
> XML schema definition and defines POST urls - just like WSDL does.
>
> Is this good or bad? Is it better or worse than WSDL and WADL? Other
> opinions?
>
I really have no basis for comparing non-RESTful approaches. REST is a
hypertext-driven architectural style based on standardized media types
and standardized link relations. While extended link relations may be
used to inform m2m decision-making, if the semantics of the interaction
between connectors is that you can GET something and PUT it back then
just use rel='edit' instead of defining an unlimited number of link
relations to describe resource type (a server-side concern) to the
client in lieu of using standardized hypertext controls.
Note that rel='edit' says nothing about HTTP methods, or even HTTP,
though. If you're assuming it means PUT, I have no idea how you'd
reach that conclusion, even if the media type were application/atom+xml,
because only a hypertext control communicates method to me. There's
nothing in the communication between connectors which says "RFC 5023 in
use". Only hypertext controls provide such instructions, which may be
to GET the rel='edit'...
Or, I may PATCH that resource -- but this is communicated through the
hypertext representation, including the Allow and Accept(-*) headers.
One hypertext control indicates that the entire "instance" (to stick
with XForms) may be PUT as application/atom+xml, while another hypertext
control indicates that categories may be PATCHed using an instance of
application/atomcat+xml -- but this has no bearing on the link relation!
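Eric's point above — that the permitted method and media type travel in the response (Allow, Accept-Patch), not in the link relation — can be sketched as a tiny client routine. This is a minimal sketch, not Eric's implementation; the header values are hypothetical examples (Accept-Patch comes from the HTTP PATCH specification):

```python
# Sketch: a client decides how it may edit a resource from response
# headers alone, without baking "rel='edit' means PUT" into its code.
# The header values below are invented for illustration.

def edit_options(headers):
    """Return the edit methods and media types the server advertises."""
    allow = {m.strip() for m in headers.get("Allow", "").split(",") if m.strip()}
    accept_patch = [t.strip() for t in headers.get("Accept-Patch", "").split(",")
                    if t.strip()]
    options = {}
    if "PUT" in allow:
        # Replace the whole "instance" using the representation's own type.
        options["PUT"] = headers.get("Content-Type")
    if "PATCH" in allow and accept_patch:
        # Partial updates, in whatever formats the server accepts.
        options["PATCH"] = accept_patch
    return options

headers = {
    "Allow": "GET, PUT, PATCH",
    "Content-Type": "application/atom+xml",
    "Accept-Patch": "application/atomcat+xml",
}
print(edit_options(headers))
```

Nothing here depends on the link relation that led the client to the resource; the hypertext representation and its headers carry the instructions.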
I have no clue how any IDL- or schema-based approach supports the above
paragraph, except as part of an API being driven by out-of-band
information. When folks start talking about RESTful IDLs, schemae and
overloaded link relations, my reaction is WTF are you people even
*talking* about, as I've mentioned before. My snappy comeback is: why
not just use rel='{HTTP method}' and have done with it? Things that
are not hypertext controls:
IDLs (even with a registered media type)
media type definitions (registration doesn't indicate suitability)
schemas (even with a registered media type)
link relations (even when standardized)
These things have their place, and I've even mentioned how IDLs could
be used as a response to OPTIONS requests, but they are *not*
substitutes for the hypertext constraint. Inferring things out-of-band
bad, self-documenting hypertext APIs good.
-Eric
"William Martinez Pomares" wrote: > > It is just that, without a context, running away from it with no > reason is such non-sense that makes me wonder, and I end up like a > defender of the lost cause. > Nothing wrong with defending lost causes... :-) But, I fail to see how any IDL-based approach can describe my vapor-example (I'll get around to the live XForms example on my demo someday) with sufficient detail to instruct (h or m) users as to the *goals* behind using PUT application/atom+xml vs. PATCH application/atomcat+xml. With IDLs, the categories allowed in the PATCH would be listed in a schema, and that schema would need to be updated when the list of categories changes. Why not just give the current list in-band as part of a hypertext control? IDLs add whole layers of indirection which simply aren't needed when hypertext controls are used instead. REST is about declaring to generic user-agents what state transitions are possible, not what application-specific user-agents may infer from out-of-band understanding of the application flow. IOW, I'm turning your question back on you -- I've never seen an IDL that can describe the hypertext-driven APIs I write. That's my context, but I try to keep an open mind, so all anyone has to do to convince me that I'm being nonsensical or dogmatic in my opposition to IDLs is show me what I'm missing by giving a counter-example of what an API would look like which allows an Atom Entry to be PUT-edited or PATCH-tagged without relying on out-of-band inferences (i.e. rel='edit' means PUT, and rel='some URI' means PATCH) specific to the application or the domain (as opposed to generic). -Eric
"William Martinez Pomares" wrote: > > But first, let me clarify I'm not in favor of WSDL as a hard believer > it will make the world happier, or that it is the only way to work on > REST. > I understand; please don't take my hard-ass response personally (always good advice for everyone). For further reading, please see: http://bitworking.org/news/193/Do-we-need-WADL Or, in lieu of revisiting REST-vs-SOAP, peruse Mark Baker's weblog archives, or these: http://www.25hoursaday.com/weblog/2008/08/17/ExplainingRESTToDamienKatz.aspx http://www.infoq.com/articles/mark-baker-REST http://www.kintespace.com/rasx37.html http://www.prescod.net/rest/rest_vs_soap_overview/ -Eric
> > Both XForm and WSDL defines how to POST data adhering to a specific
> > XML schema.
>
> Not exactly. IDLs are not self-documenting hypertext controls -- they
> cannot instruct a user-agent as to the default values...
>
Yes, I'm aware of @default. However, in the real world, manipulating one control may change the default of another. XForms allows this to be declarative; HTML forms plus JavaScript allow it to be imperative; whereas IDLs provide no such facility in-band (such relationships must be inferred from a schema, coupling consumer to producer, precluding generic consumers -- unless and until I see an example of one).

-Eric
mike amundsen wrote:
>
> I find it much easier to express the valid state transitions in the
> context of the _current_ state transition. Thus, when working in a
> REST arch implementation, I continue to favor including app control
> information in the response representation rather than in an external
> document. I've been toying with pulling more and more details on these
> transitions out of client code and into the message...
>
+1

The problem with treating rel='edit' (*) as a hypertext control is that it provides no facility for doing client-side input validation according to a schema (unless that schema, or a link to it, is baked into the user-agent); whereas XForms can provide a schema-driven input control, and even declare the method to be POST instead of PUT, or change this (or the media type of the PUT or POST request) on the fly (as opposed to baking rel='edit'=PUT and the media type into the user-agent, which requires generic clients to somehow infer this information to use the API).

* or any other link relation, standardized or not -- rel='next' isn't a hypertext control, the <link> or <a> it appears in is the hypertext control

-Eric
Well, Eric, please recall this is not a defense of WSDL, just an academic, rich discussion :D

The "without a context" refers to dooming IDLs just because they are IDLs, in any possible situation. That said, I totally agree with what you say, taking for granted that the WSDL is an IDL used as an out-of-band definition of flow for REST.

As Mike pointed out, removing REST from the picture, and trying to make WSDL a dynamically consumable, lean document, not oriented to hypertext-driven APIs but to messaging ones, not necessarily over HTTP: is WSDL still as bad? Maybe it is good for something, in another context.

So, to answer your question, I guess no, it makes no sense to use an IDL to represent REST hypertext state transitions or flows. I can almost call it an oxymoron. I hate RPC for services implementation; still, RPC may be good in some other situations. Just not for REST anyway.

Cheers!

William Martinez Pomares.

--- In rest-discuss@yahoogroups.com, "Eric J. Bowman" <eric@...> wrote:
>
> "William Martinez Pomares" wrote:
> >
> > It is just that, without a context, running away from it with no
> > reason is such non-sense that makes me wonder, and I end up like a
> > defender of the lost cause.
> >
> Nothing wrong with defending lost causes... :-) But, I fail to see
> how any IDL-based approach can describe my vapor-example (I'll get
> around to the live XForms example on my demo someday) with sufficient
> detail to instruct (h or m) users as to the *goals* behind using PUT
> application/atom+xml vs. PATCH application/atomcat+xml. With IDLs, the
> categories allowed in the PATCH would be listed in a schema, and that
> schema would need to be updated when the list of categories changes.
>
> Why not just give the current list in-band as part of a hypertext
> control? IDLs add whole layers of indirection which simply aren't
> needed when hypertext controls are used instead.
> REST is about
> declaring to generic user-agents what state transitions are possible,
> not what application-specific user-agents may infer from out-of-band
> understanding of the application flow.
>
> IOW, I'm turning your question back on you -- I've never seen an IDL
> that can describe the hypertext-driven APIs I write. That's my
> context, but I try to keep an open mind, so all anyone has to do to
> convince me that I'm being nonsensical or dogmatic in my opposition to
> IDLs is show me what I'm missing by giving a counter-example of what an
> API would look like which allows an Atom Entry to be PUT-edited or
> PATCH-tagged without relying on out-of-band inferences (i.e. rel='edit'
> means PUT, and rel='some URI' means PATCH) specific to the application
> or the domain (as opposed to generic).
>
> -Eric
"William Martinez Pomares" wrote: > > Well, Eric, please recall this is not a defense of WSDL, just an > academic, rich discussion :D > Yup. One which I hope, over time, motivates as much effort going into creating new-n-improved hypertext-control markup languages as currently goes into avoiding the hypertext constraint. I appreciate the debate, as it helps me figure out how to express the answers you're looking for. Specifically, what's wrong with a RESTful system (assuming one exists) based on IDLs? > > The "without a context" refers to dooming the IDLs just because they > are IDLs, in any possible situation. That said, I totally agree with > what you say, taken for granted the WSDL is an IDL used as an > out-of-band definition of flow for REST. > IDLs don't really provide a set of application-state transitions to choose from. REST is all about the API explaining *how* a collection of generic interfaces are coordinated into applications, expressed in-band and decoupled from resource state. I don't understand m2m solutions which start from the premise of moving the "how" out-of-band -- the systems being described are possible with REST and purport to have the same goals, so if that's the way forward all I require is falsification of the hypertext constraint, first. IDLs describe resource interfaces, not goal-driven application interfaces; if m2m is all about goals, then why start by removing their expression from the application states? > > As Mike pointed out, removing REST from the picture, and trying to > make WSDL a dynamically consumable, lean document, not oriented to > hypertext driven APIs but to messaging ones, not necessarily over > HTTP: is still WSDL as bad? Maybe it is good for something, in other > context. > It depends on what's best for the system being designed, sure. 
But, if the system is intended to exhibit the desirable characteristics induced by the application of the hypertext constraint, then why would it be just as good, or better, not to apply the hypertext constraint? A context where WSDL would be better is one where coupling client to server around an out-of-band API has benefit. But, none of the examples I've seen of WSDL/WADL describe systems which wouldn't benefit from the decoupling provided by an in-band hypertext API, unless their scope is restricted to the point where REST isn't needed (or rational to pursue).

>
> So, to answer your question, I guess no, it makes no sense to use a
> IDL to represent REST Hypertext states transitions or flows. I can
> almost call it an oxymoron. I hate RPC for services implementation,
> still RPC may be good in some other situations. Just not for REST
> anyway.
>
I wouldn't hesitate to set up a quickie RPC system for the internal purposes of a small office -- which wouldn't need to exhibit a fraction of the desirable characteristics induced by applying REST's constraints. I'd only apply the minimal set of constraints required to induce the characteristics needed within that context; XML-RPC may very well fit the bill, because I'd be more interested in completing the project than adding complexity to proactively solve problems I wouldn't even have.

Once we're talking "Web Services" though, we're talking about the problems of extending a limited set of ubiquitous semantics across organizational boundaries over a flaky, anarchic network with global scale. Dealing with these problems adds complexity -- XML-RPC lacks such complexity, so it doesn't solve these problems well. CORBA solves a different set of problems while ignoring these. REST solves these problems with minimal added complexity. WS-* failed to solve these problems, by adding way too much complexity.
So my money's still on REST for leveraging the Web's power to the benefit of any system which profits by being on the Web (extended a la ARRESTED or CREST, or not). -Eric
> > I guess that depends on how you consume the XForm.
>
> Totally true. Knowing the XFORM up front will make you write a rigid
> client. Same for WSDL.
> Consuming the XFORM dynamically will make everybody happy.

So let's try to consume the XForm dynamically and see what happens. Don't get me wrong - I am trying hard to figure out the best RESTful API for my current project - not arguing against you, just trying to understand.

The XForm has a model, its submission data, and some user interface. I can easily see how the submission information could be consumed dynamically - I can extract the action URL, the method, and the media type at runtime and use those instead of hard-coded values.

The user interface part seems irrelevant for m2m systems, so let's ignore that (but maybe I am wrong?).

It's the model that I find problematic. It's described by an XML schema, and it seems to me that my client must have this schema baked into the code in order to work with it - how would it otherwise know how to mark up the data needed to describe, for instance, a purchase order? This is where my understanding of a dynamic client breaks down. The client may have multiple known schemas to choose between - for instance, Danish and German variations of purchase order schemas. But that's only semi-dynamic.

Happy Xmas everybody :-)

/Jørn

----- Original Message -----
From: "William Martinez Pomares" <wmartinez@...>
To: <rest-discuss@yahoogroups.com>
Sent: Thursday, December 23, 2010 8:01 PM
Subject: [rest-discuss] Re: XForms vs. WSDL?

Hi again.

--- In rest-discuss@yahoogroups.com, Jørn Wildt <jw@...> wrote:
> Yes, that's certainly one important difference. With SOAP/WSDL you are
> given everything up front and have to figure out what to use when. With
> REST you are only served the links that fit the actual state. You cannot
> do the last with WSDL.

Well, yes, you can. At least with WSDL (let's forget SOAP).
WSDL is a web format, registered with IANA, and I can get it dynamically and consume it dynamically. Nobody does it because of tools, granted. The shame is on the tool designers.

> I guess that depends on how you consume the XForm.

Totally true. Knowing the XForm up front will make you write a rigid client. Same for WSDL. Consuming the XForm dynamically will make everybody happy. It is harder with WSDL, but possible too. Here the real difference is the actual implementation; plus, I would say, XForms is still light, while WSDL we may label as "heavy".

Cheers!

William Martinez Pomares.
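Jørn's idea above — extracting the action URL, the method, and the media type from the XForm at runtime instead of hard-coding them — can be sketched with stdlib XML parsing. A minimal sketch; the form below is an invented example, not one from the thread:

```python
# Sketch of consuming an XForms submission element dynamically: the
# client reads the target URI, HTTP method and media type from the form
# at runtime rather than baking them into its code.
# The form markup below is invented for illustration.
import xml.etree.ElementTree as ET

XF = "http://www.w3.org/2002/xforms"

form = """<model xmlns="http://www.w3.org/2002/xforms">
  <submission id="save" resource="http://example.com/orders"
              method="post" mediatype="application/xml"/>
</model>"""

def submission_info(xml_text):
    """Extract the pieces a dynamic client needs from an XForms submission."""
    root = ET.fromstring(xml_text)
    sub = root.find(f"{{{XF}}}submission")
    return {
        "uri": sub.get("resource"),
        "method": sub.get("method").upper(),
        "mediatype": sub.get("mediatype"),
    }

print(submission_info(form))
```

The model (schema) problem Jørn raises is untouched by this sketch — it only shows that the submission part really is consumable at runtime.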
> >
> > If you then add a list of links to XForms, the list would be much
> > like the WSDL list of methods/messages.
> >
> You say that like it's a good thing? ;-)
No, I was not saying it was a good thing (and I do see your smiley, thanks).
I was trying to compare the two and understand why one is better than the
other.
My point is - if I can show that a WSDL and "something + XForm" are
equivalent, then why is XForm better than WSDL? I think the discussion so far
has shown the differences. But it also seems that they are, to some degree,
equivalent - and much of the difference lies in how they are consumed and
the mindset of those implementing the client and server.
Thanks for the feedback.
/Jørn
----- Original Message -----
From: "Eric J. Bowman" <eric@...>
To: "Jørn Wildt" <jw@...>
Cc: "Rest Discussion List" <rest-discuss@yahoogroups.com>
Sent: Friday, December 24, 2010 1:13 AM
Subject: Re: [rest-discuss] XForms vs. WSDL?
Here come some ramblings on forms and XForms, trying to understand what has been said on this list.

> Methods, headers, data types
> and link relations nobody's ever heard of before you minted them are
> *not* the same thing as the standardized uniform interface Roy's thesis
> describes.

Are you then saying that it is impossible to work with domain-specific models in REST? No one is ever going to standardize, say, a data format for yoga positions or similar obscure domains. Are we then disallowed to use the internet and REST for such domains? The same applies to link relations - what if there is no relation for a link to "the third normative yoga position of the current position" (which probably doesn't exist, but hopefully you get my point)?

In the following I'll try to give my impression of what I think you think ... let's see if I get your point. We have had such discussions before, and your proposal was, as I remember it, to use HTML forms. See http://tech.groups.yahoo.com/group/rest-discuss/message/17057

I guess the same goes here again: use XHTML and HTML forms to describe how data is posted instead of minting your own media types. Then annotate the <input> elements with a "property" attribute that expresses the semantics of the element. Like, for instance, <input property="foaf:name" name="some-irrelevant-serverside-variable-name"/>. So at runtime my client can discover the HTML form and look for the element with property="foaf:name" in order to transmit the "name" of the data. Which is not the same as looking for the element with name="some-irrelevant-serverside-variable-name" - or rather, knowing ahead of time that "name" should be encoded as "some-irrelevant-serverside-variable-name".

One way to understand this is as a decoupling of format and semantics. The semantics describe "this is a name (foaf:name)" whereas the format describes "the name must be encoded as some-irrelevant-serverside-variable-name".
We could even go one step further and check to see if the "name" input is not there - and, if not, then, well, you are not supposed to update it.

Is this a proper understanding of what you are saying?

But sometimes it makes more sense to post XML data (at least for me). I am not saying "mint a new media type"; all I want is to send some more complex data structures than key/value pairs can handle, without bending over backwards - and send that using application/xml. This is where XForms comes in. With XForms we can describe the input XML format/model, and, using the HTML user controls associated with XForms, we can do as you did with XHTML forms and annotate the input elements with property="foaf:name" (or similar). Right?

/Jørn
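The decoupling described above — locating a form control by its RDFa property rather than by the server-side name attribute — can be sketched like this. A minimal sketch; the form markup, field names and values are invented for illustration:

```python
# Sketch of the format/semantics decoupling: the client finds the
# <input> whose RDFa property is foaf:name and submits the value under
# whatever server-side field name that control happens to carry.
# The form markup and field names below are invented.
import xml.etree.ElementTree as ET
from urllib.parse import urlencode

form_html = """<form action="/people" method="post">
  <input property="foaf:name" name="fld_73" />
  <input property="foaf:mbox" name="fld_74" />
</form>"""

def encode_submission(html, semantics):
    """Map semantic property -> value pairs onto the form's field names."""
    form = ET.fromstring(html)
    by_property = {i.get("property"): i.get("name") for i in form.iter("input")}
    # A property missing from the form is simply not updatable here.
    pairs = {by_property[prop]: value for prop, value in semantics.items()
             if prop in by_property}
    return urlencode(pairs)

body = encode_submission(form_html, {"foaf:name": "Jørn Wildt"})
print(body)  # the payload uses the server's field name, not 'foaf:name'
```

Nothing about "fld_73" is known to the client ahead of time; only the foaf:name semantics are shared knowledge.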
Right now I am in the middle of a REST API implementation, and these two
threads have given me much to think about:
http://tech.groups.yahoo.com/group/rest-discuss/message/17114 (XForms vs.
WSDL) and http://tech.groups.yahoo.com/group/rest-discuss/message/17057
(HTML REST API example [was: Link relations]).
I do like Eric's proposal of using standard (x)html forms for updating
data - with or without RDFa embedded in it. But using html for embedding
machine-readable representations of complex data structures for reading
seems a bit like bending over backwards to do something which is
straightforward with XML. This, I think, is mostly about tooling: it is easy to
serialize any data structure in XML with most development platforms, but
there is no support for easy serialization into and from HTML.
Example: I have a case file with a title, a case number and a myriad of
other properties. This can easily be converted to XML using standard tools:
<case xmlns="http://my-casefile-namespace">
<title>My title</title>
</case>
But what would the machine-readable HTML look like? Maybe:
<div property="case">
<div property="title">My title</div>
</div>
Many people turn to REST for simplicity. For me this also means simple and
commonly available tools. That's not the case with the HTML serialization
and for this reason I shy away from the HTML representation - it's too
difficult to work with.
On the other hand I also understand that an XML representation in itself
contains no hypermedia controls, like for instance links and link relations.
That's something which is easily implemented in HTML using <form> and <a>
elements.
So I feel trapped somewhere between these two formats and I guess that is
why people turn to their own media-types. Now they get easy tooling and
embedded hypermedia controls in XML (which they must invent themselves).
This comes at the expense of using a not-so-ubiquitous-and-non-standard
format.
Can't we combine and get the best of both worlds?
What is needed is something that:
- Makes it easy to serialize/deserialize any data (m2m scenario).
- Is browsable with a normal browser (h2m scenario).
- Has hypermedia controls.
- Can update data through hypermedia controls.
Now I am only dreaming up some ideas: what if there were a standard XML
dialect that included links and schema references? Then we would have all that is
needed to browse the API data with a machine. It would look at the schema
reference to see if it was a known data type, and it could use links and link
relation types to decide where to go next. The schema reference would work
like a media-type identifier, but at another level than the network, and
make it possible to decide what to do with the data.
I have already shown that it's easy to convert any XML into HTML through an
XSL stylesheet, and convert atom link elements to HTML anchors at the same
time. So this would account for the human browsing the API data.
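That conversion can also be sketched without XSLT, using a stock XML library (Python here, purely illustrative; the `to_html` helper and sample document are my own, and real Atom links can of course appear anywhere, not only as direct children):

```python
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"

def to_html(xml_doc: str) -> str:
    """Sketch of the XML-to-HTML conversion described above, done with
    ElementTree instead of XSLT: data elements become <div>s and Atom
    <link> elements become HTML anchors. Illustration only."""
    root = ET.fromstring(xml_doc)
    body = ET.Element("body")
    for child in root:
        if child.tag == ATOM + "link":
            a = ET.SubElement(body, "a", href=child.get("href", ""))
            a.text = child.get("rel", "link")
        else:
            tag = child.tag.split("}")[-1]  # strip any namespace prefix
            div = ET.SubElement(body, "div", {"property": tag})
            div.text = child.text or ""
    return ET.tostring(body, encoding="unicode")

doc = ('<case xmlns:atom="http://www.w3.org/2005/Atom">'
       '<title>My title</title>'
       '<atom:link rel="self" href="http://example.org/cases/1"/></case>')
print(to_html(doc))
```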
The next thing is updating data. Here I think HTML forms are fine. So a link
relation in the XML data would point to an HTML form that describes how to
update the data. From the server side the incoming data is easily deserialized
into data structures using model binders, as in ASP.NET MVC and OpenRasta. I
haven't seen any tools for the client side.
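A model binder of the kind ASP.NET MVC and OpenRasta provide can be sketched in a few lines (a hypothetical `bind` helper, not any framework's actual API): dotted names such as `Case.Title` are split into nested structures.

```python
from urllib.parse import parse_qsl

def bind(body: str) -> dict:
    """Minimal model-binder sketch: turn url-encoded form data with
    dotted names (e.g. Case.Title) into nested dicts."""
    model = {}
    for name, value in parse_qsl(body):
        parts = name.split(".")
        target = model
        for part in parts[:-1]:          # walk/create intermediate dicts
            target = target.setdefault(part, {})
        target[parts[-1]] = value        # leaf gets the submitted value
    return model

print(bind("Case.Title=My+title&Case.Number=42"))
# {'Case': {'Title': 'My title', 'Number': '42'}}
```

Real binders additionally handle lists, type conversion and validation, but the core mapping is this simple.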
The HTML form could have RDFa added to it to lessen the coupling between the
server and the client, like Eric suggested. Something like:
<form action="..." about="#my-casefile-form">
<div>
<label for="case_title">Case title</label>
<input id="case_title" name="Case.Title" property="dc:title"/>
</div>
</form>
Now the client could look for the "#my-casefile-form" and then look for the
input with property="dc:title" and put the case title in that. In this way
the server would be free to name the inputs whatever it likes. But I do not
expect there to be any tools out there that make it easy for the client to
fill out the form. Maybe that's a standard client library that needs to be
written?
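Such a client library might, as a rough sketch, locate inputs by their `property` attribute and build the url-encoded submission from the server-chosen names (`FormFiller` is hypothetical; note also that, as Benjamin points out later in the thread, RDFa processors would not actually interpret `property` on an `input` this way, so here it serves purely as a marker agreed between server and client):

```python
from html.parser import HTMLParser
from urllib.parse import urlencode

class FormFiller(HTMLParser):
    """Sketch of a client that finds inputs by RDFa-style `property`
    attributes instead of hard-coding server-chosen `name`s."""
    def __init__(self, values):
        super().__init__()
        self.values = values   # e.g. {"dc:title": "..."}
        self.fields = {}       # name -> value, for the submission body

    def handle_starttag(self, tag, attrs):
        if tag != "input":
            return
        a = dict(attrs)
        prop = a.get("property")
        if prop in self.values and "name" in a:
            self.fields[a["name"]] = self.values[prop]

form = '''<form action="/cases/1" about="#my-casefile-form">
<input id="case_title" name="Case.Title" property="dc:title"/>
</form>'''

f = FormFiller({"dc:title": "My new title"})
f.feed(form)
print(urlencode(f.fields))  # Case.Title=My+new+title
```

The client never needs to know the name "Case.Title"; only the agreed property vocabulary.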
Does this make sense? Is it possible at all to have this imaginary XML
format/media-type that can contain any XML document and still have links in
it?
I would be happy to use HTML all over the place if it wasn't for the fact
that I haven't seen any useful serialization tools.
/Jørn
Jørn Wildt wrote: > But how would the machine readable HTML look like? Maybe: HTML is machine readable, it's an AST - and RDFa + Microdata augment the basic markup set to allow full mrd.
No no, you are not supposed to know about the model directly, the "interface bit" is what you interact with, these are the hypertext controls. So you do need these for M2M applications of XForms too. The new model is opaque... Justin On Fri, 2010-12-24 at 23:11 +0100, Jørn Wildt wrote: > > It's the model that I find problematic. It's described by an XML > schema and > it seems to me that my client must have this schema baked into the > code in > order to work with it - how would it otherwise know how to mark-up the > data > needed to describe, for instance, a purchase order? This is where my > understanding of a dynamic client breaks down. The client may have > multiple > known schemas to choose between - for instance Danish and German > variations > of purchase order schemas. But that's only semi-dynamic. > > Happy xmass everybody :-) > > /Jørn > > ----- Original Message ----- > From: "William Martinez Pomares" <wmartinez@...> > To: <rest-discuss@yahoogroups.com> > Sent: Thursday, December 23, 2010 8:01 PM > Subject: [rest-discuss] Re: XForms vs. WSDL? > > Hi again. > > --- In rest-discuss@yahoogroups.com, Jørn Wildt <jw@...> wrote: > > Yes, that's certainly one important difference. With SOAP/WSDL you > are > > given > > everything up front and have to figure out what to use when. With > REST you > > are only served the links that fits the actual state. You cannot do > the > > last > > with WSDL. > > Well, yes, you can. At least with WSDL (let's forget SOAP). WSDL is a > web > format, registered in IANA, and I can get it dynamically and consume > it > dynamically. Nobody does it because of tools, granted. The shame is on > the > tool designers. > > > I guess that depends on how you consume the XForm. > > Totally true. Knowing the XFORM up front will make you write a rigid > client. > Same for WSDL. Consuming the XFORM dynamically will make everybody > happy. It > is harder with WSDL, but possible too. 
> Here the real difference is then the actual implementation, plus, I > would > say, XFORM is still light, and WSDL we may label it as "heavy". > > Cheers! > > William Martinez Pomares. > > > > >
> HTML is machine readable, it's an AST - and RDFa + Microdata augment the > basic markup set to allow full mrd. Yes, certainly. My problem is about tooling. I haven't seen anything that makes it easy to go from internal representation to HTML+RDFa and back again. That is why basic XML is so tempting - any development platform can do all the tedious and error-prone transformation work for you. One of the reasons for turning to REST is simplicity. By adding layers of RDFa onto HTML the API becomes more difficult to consume compared to a straightforward XML format. If I present a REST API to my customers that makes it (a lot) more difficult to consume than a SOAP API then I lose some of my selling points. I am probably wrong though - I do not think Fielding's thesis says anything about being simple, but that certainly is part of the perceived goodness of REST amongst most people. Sorry for exposing my ignorance here, I am just trying to find an implementation that is both RESTful and easy to implement with the proper tools. /Jørn ----- Original Message ----- From: "Nathan" <nathan@...> To: "Jørn Wildt" <jw@...> Cc: "Rest Discussion List" <rest-discuss@yahoogroups.com> Sent: Tuesday, December 28, 2010 12:45 PM Subject: Re: [rest-discuss] Combining HTML and XML? > Jørn Wildt wrote: >> But how would the machine readable HTML look like? Maybe: > > HTML is machine readable, it's an AST - and RDFa + Microdata augment the > basic markup set to allow full mrd. >
Jørn Wildt wrote: >> HTML is machine readable, it's an AST - and RDFa + Microdata augment the >> basic markup set to allow full mrd. > > Yes, certainly. My problem is about tooling. I haven't seen anything > that makes it easy to go from internal representation to HTML+RDFa and > back again. That is why basic XML is so tempting - any development > platform can do all the tedious and error-prone transformation work > for you. Indeed, RDFa and Microdata aren't ideal MRD formats; they simply extend the descriptive markup of HTML so that you can describe (in a machine-readable way) the things described within the HTML document. It's only suited to specific use-cases and certainly isn't optimized for pure machine-to-machine-only situations. That said, XML and JSON certainly aren't RESTful because 99.999999% of the components on the network will know precisely zero about your essentially "made-up" media type, and in order to stop a huge media-type explosion (one for every possible way of modelling the representation of every kind of data under the sun) what's really needed is a universal data model which can describe any kind of data unambiguously within the constraints of a single serialization. The closest thing to this at the minute is RDF/XML, but that has its own drawbacks and is quite old now. Essentially what I'm saying is, there is no ideal media type to handle the bulk of the world's data at the minute; somebody really needs to make one. However, the caveat is that for pure machine-to-machine scenarios you only need basic link semantics (something which says dereference this URI), and as soon as you add more, you may as well just use HTML(+RDFa). AFAICT, you're just one in a huge line of people who need a new media type for mrd in m2m scenarios. Best, Nathan > One of the reasons for turning to REST is simplicity. By adding layers > of RDFa onto HTML the API becomes more difficult to consume compared to > a straight forward XML format.
If I present a REST API to my customers > that makes it (a lot) more difficult to consume than a SOAP API then I > lose some of my selling points. > > I am probably wrong though - I do not think Fielding's thesis says > anything about being simple, but that certainly is part of the perceived > goodness of REST amongst most people. > > Sorry for exposing my ignorance here, I am just trying to find an > implementation that is both RESTful and easy to implement with the > proper tools. > > /Jørn > > ----- Original Message ----- From: "Nathan" <nathan@...> > To: "Jørn Wildt" <jw@...> > Cc: "Rest Discussion List" <rest-discuss@yahoogroups.com> > Sent: Tuesday, December 28, 2010 12:45 PM > Subject: Re: [rest-discuss] Combining HTML and XML? > > >> Jørn Wildt wrote: >>> But how would the machine readable HTML look like? Maybe: >> >> HTML is machine readable, it's an AST - and RDFa + Microdata augment the >> basic markup set to allow full mrd. >> > > >
> AFAICT, you're just one in a huge line of people who need a new media type > for mrd in m2m scenarios. Thanks, that makes me feel a lot better and a lot less stupid :-) /Jørn
On Tue, Dec 28, 2010 at 5:52 AM, Nathan <nathan@...> wrote: > > That said, XML and JSON certainly aren't RESTful because 99.999999% of > the components on the network will know precisely zero about your essentially > "made-up" media type, Which constraint is it that states that representations must be understood by more than x% of the components on the network? Using existing media types is usually superior to custom media types/formats for practical reasons. However, using custom (vendor) media types is clearly acceptable in web architecture and the REST architectural style. Peter
Peter Williams wrote: > On Tue, Dec 28, 2010 at 5:52 AM, Nathan <nathan@...> wrote: >> That said, XML and JSON certainly aren't RESTful because 99.999999% of >> the components on the network will know precisely zero about your essentially >> "made-up" media type, > > Which constraint is it that states that representations must be > understood by more than x% of the components on the network? Shall I assume we're forgetting the whole point of the Universal Interface and the core principles of separation of concerns, scalability and independent evolvability here, negating common sense and removing the hugely obvious context of "the web" and "the internet" which applies to almost every mail to rest-discuss and for which REST was actually made? Peter, sorry but the last thing I'm going to do is encourage Jørn, or anybody here, to go and invent a "custom (vendor) media type" for use on the internet without mentioning the massive caveat that only they will understand it - the terms "Universal" and "Custom (Vendor)" are far from complementary. I'll probably black-list myself from the rest community with this next comment, but can this whole culture of running to a custom media type whenever things get tricky and labelling it as RESTful and "a good thing" please just stop? It's a new year ahead, it's painfully obvious that it /doesn't/ work (in reality) and that it ensures the REST community is small and massively misunderstood by almost everybody, because it detaches from even the simplest logic and reality. People put things on the web so they are universally accessible, people share things so they can be seen by others, that's the whole damn point.
The dissertation and the REST style is truly brilliant, and was applied to reality to make something great. That tradition should be the key focus of RESTafarians; there are some brilliant minds here who understand things well and who can really make a big difference to many people, and the web, at a time when it's really needed, and when people are grokking universality and want to clean up the stateful, un-RESTful web 2.0 crap of the last decade. Best intentions, Nathan
Nathan, On Dec 28, 2010, at 7:43 PM, Nathan wrote: > I'll probably black-list myself from the rest community with this next > comment, but can this whole culture of run to a custom media type > whenever things get tricky and label it as RESTful and "a good thing" > culture please just stop, it's a new year ahead, it's painfully obvious > that it /doesn't/ work (in reality) If we want to build user agents that perform automatic requests (such as downloading the image referenced by an HTML <img> tag) we simply *need* hypermedia semantics that express the specific relationship driving the user agent code. There are only two options for this: extending existing types or using link relations *or* defining a specific media type (such as HTML is a specific type for enabling what we want browsers to do). Personally, I think that extensions as well as link relations have serious manageability issues when used to tweak existing types for more or less unrelated sets of use cases. I prefer rolling my own type because it focusses the domain modeling activities in a single (ideally) specification document. It is far easier to distribute, version, QA, ... such a single item of work than a wild bunch of unrelated specs for extensions and link relations. Jan
On Tue, Dec 28, 2010 at 11:43 AM, Nathan <nathan@...> wrote: > Peter Williams wrote: >> >> On Tue, Dec 28, 2010 at 5:52 AM, Nathan <nathan@webr3.org> wrote: >>> >>> That said, XML and JSON certainly aren't RESTful because 99.999999% of >>> the components on the network will know precisely zero about your >>> essentially >>> "made-up" media type, >> >> Which constraint is it that states that representations must be >> understood by more than x% of the components on the network? > > Shall I assume we're forgetting the whole point of the Universal Interface > and the core principals of separation of concerns, scalability and > independent evolvability here, negating common sense and removing the hugely > obvious context of "the web" and "the internet" which applies to almost > every mail to rest-discuss and for which REST was actually made? Of course we are not forgetting uniform interfaces. Http has that under control. Even when used with novel media types the interface is still uniform. Resources are still identified, resources are still manipulated through representations, the messages are still self-descriptive, and (assuming the media-type is well defined) hypermedia is still the engine of application state. The uniform interface section[1] is completely silent on how ubiquitous support of representations needs to be. I fail so see how "separation of concerns, scalability and independent evolvability" come into play regarding this particular design decision. The client and server concerns are still separate; custom media types are just cachable as any other media type; custom media types do not, innately, damage the evolvability of the system. 
In fact, custom media types are just yet another "downloadable feature-engine" that help "provide for a diverse set of functionality".[2] > Peter, sorry but the last thing I'm going to do is encourage Jørn, or > anybody here, to go and invent a "custom (vendor) media type" for use on the > internet without mentioning the massive caveat that only they will > understand it - the terms "Universal" and "Custom (Vendor)" are far from > complementary. Who said anything about encouraging the use of custom media types? I am merely pointing out that the statement "XML and JSON certainly aren't RESTful" is false. It might not be a good idea in Jørn's situation, but in some situations it is. > I'll probably black-list myself from the rest community with this next > comment, but can this whole culture of run to a custom media type whenever > things get tricky and label it as RESTful and "a good thing" culture please > just stop, it's a new year ahead, it's painfully obvious that it /doesn't/ > work (in reality) and that it ensures the REST community is small and > massively misunderstood by almost everybody because it detaches from even > the simplest logic and reality. People put things on the web so they are > universally accessible, people share things so they can be seen by others, > that's the whole damn point. The problem with your argument is that custom media types *do* work (in reality). They work on the corporate network. They work on the public internet. They work in a box with a fox. I know they work from personal experience. Custom media types might be inferior to your preferred approach. If you believe that to be true, please argue that, not that using custom media types is unrestful. [1]: http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm#sec_5_1_5 [2]: http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm#sec_5_2_1 Peter barelyenough.org
Hello!
On Tue, 2010-12-28 at 21:30 +0100, Jan Algermissen wrote:
> I prefer rolling my own type because it focusses the domain modeling
> activities in a single (ideally) specification document. It is far
> easier to distribute, version, QA, ... such a single item of work than
> a wild bunch of unrelated specs for extensions and link relations.
Let's recognize that there are two orthogonal concerns that need to be
considered here: What's good/easy/convenient/effective for the developer
vs. the user ('user' here being any active element on the Internet).
On one hand, I agree that a custom media type often can have advantages
for you as the software developer, for exactly the reasons you describe.
On the other hand, using only standardized media types supposedly can
have scalability (performance and adoption) advantages.
Which concern is more important to you? It may depend on your particular
application and situation.
As others have pointed out: Custom media types do work, the Internet
doesn't come to a screeching halt and most intermediate nodes know how
to deal with them in a sufficient best effort way. In the real world,
they simply aren't as bad as some like to claim. They may also make
server and client development easier and more manageable, which is an
important point.
Just recognize that using them may not be as effective for the Internet
as a whole, or for the performance of intermediate nodes. Consider the
trade-offs and make an informed decision.
Many times on this list it was said that this is all fine, but that you
simply can't call it REST if you don't use standardized types. Firstly,
I'm not convinced of that (but surely, someone can provide the proper
quote from Roy's thesis to support their point). Secondly, if that's the
case then I'm happy to start calling it 'semi-REST', so that we can move
on to talk about the technical benefits of REST constraints, including
this one, without getting side-tracked by this somewhat unnecessary
fight over nomenclature.
Juergen
--
Juergen Brendel
MuleSoft
Jan, Jan Algermissen wrote: >> I'll probably black-list myself from the rest community with this next >> comment, but can this whole culture of run to a custom media type >> whenever things get tricky and label it as RESTful and "a good thing" >> culture please just stop, it's a new year ahead, it's painfully obvious >> that it /doesn't/ work (in reality) > > If we want to build user agents that perform automatic requests (such as downloading the image referenced by an HTML <img> tag) we simply *need* hypermedia semantics that express the specific relationship driving the user agent code. > > There are only two options for this: extending existing types or using link relations *or* defining a specific media type (such as HTML is a specific type for enabling what we want browsers to do). There's a third option: design a new generic data media type which has a core set of hypermedia semantics that can be applied to properties. If there was ever a group of people who could do it (provided you get the correct data model) it's you guys, and you, and we, all need it... Surely that third option would be far more beneficial to concentrate on than all of these old far-from-perfect approaches? I'd certainly contribute to any such effort wherever I could. Best, Nathan
> Those insistent on rolling out domain-specific XML elements, rather than > using > RDFa/microdata/classes to embed the same vocabulary into an HTML user > interface, could at least mix in XHTML, XForms, XLink, and WAI-ARIA with > their domain-specific vocabulary rather than reinvent the wheel. Sure, that's what I was trying to describe earlier on - a generic XML vocabulary that would allow for easy two-way serialization of data, while still building on well-known hypermedia controls. /Jørn
> No no, you are not supposed to know about the model directly, the > "interface bit" is what you interact with, these are the hypertext > controls. So you do need these for M2M applications of XForms too. The > new model is opaque... Doh? But what's the model for then? I can see that I can use the hypertext controls just like in HTML forms. So what is the model used for, except for specifying submission details? Thanks, Jørn ----- Original Message ----- From: "Justin Cormack" <justin@...> To: <rest-discuss@yahoogroups.com> Sent: Tuesday, December 28, 2010 12:50 PM Subject: Re: [rest-discuss] Re: XForms vs. WSDL? > No no, you are not supposed to know about the model directly, the > "interface bit" is what you interact with, these are the hypertext > controls. So you do need these for M2M applications of XForms too. The > new model is opaque... > > Justin > > On Fri, 2010-12-24 at 23:11 +0100, Jørn Wildt wrote: >> >> It's the model that I find problematic. It's described by an XML >> schema and >> it seems to me that my client must have this schema baked into the >> code in >> order to work with it - how would it otherwise know how to mark-up the >> data >> needed to describe, for instance, a purchase order? This is where my >> understanding of a dynamic client breaks down. The client may have >> multiple >> known schemas to choose between - for instance Danish and German >> variations >> of purchase order schemas. But that's only semi-dynamic. >> >> Happy xmass everybody :-) >> >> /Jørn >> >> ----- Original Message ----- >> From: "William Martinez Pomares" <wmartinez@...> >> To: <rest-discuss@yahoogroups.com> >> Sent: Thursday, December 23, 2010 8:01 PM >> Subject: [rest-discuss] Re: XForms vs. WSDL? >> >> Hi again. >> >> --- In rest-discuss@yahoogroups.com, Jørn Wildt <jw@...> wrote: >> > Yes, that's certainly one important difference. With SOAP/WSDL you >> are >> > given >> > everything up front and have to figure out what to use when. 
With >> REST you >> > are only served the links that fits the actual state. You cannot do >> the >> > last >> > with WSDL. >> >> Well, yes, you can. At least with WSDL (let's forget SOAP). WSDL is a >> web >> format, registered in IANA, and I can get it dynamically and consume >> it >> dynamically. Nobody does it because of tools, granted. The shame is on >> the >> tool designers. >> >> > I guess that depends on how you consume the XForm. >> >> Totally true. Knowing the XFORM up front will make you write a rigid >> client. >> Same for WSDL. Consuming the XFORM dynamically will make everybody >> happy. It >> is harder with WSDL, but possible too. >> Here the real difference is then the actual implementation, plus, I >> would >> say, XFORM is still light, and WSDL we may label it as "heavy". >> >> Cheers! >> >> William Martinez Pomares. >> >> >> >> >> > > >
> If by "easy" you mean as easy as serializing name-value pairs into an XML > element tree, then these goals are in conflict, because magically building > a > good user interface for h2m is hard to script - indeed such interfaces are > an > area of outright competition rather than just standardization! Agreed. But I am not striving towards a good h2m interface - only an acceptable one that will allow a developer of a client to interact with my API using a standard browser. Nothing fancy. I wouldn't expect my perfectly polished end-user website to be used as a REST API. But maybe I am wrong here too. It could just be that, assuming I am going to develop that end-user website anyway, it would be faster to incorporate m2m details in a fancy end-user website than to develop both an end-user website and a REST API on a secondary webserver. It would just be so different from my (the?) usual conception of a REST API as a stand-alone API decoupled from the end-user website. One typical difference is the access control, where the end-user website uses cookies whereas the REST API uses some of the HTTP authentication schemes. It would also be way too easy to break the API the first time someone puts a new theme template on the end-user website. Better to have web designers working on the end-user website and coders working on the REST API. /Jørn ----- Original Message ----- From: "Benjamin Hawkes-Lewis" <bhawkeslewis@...> To: "Jørn Wildt" <jw@...> Cc: "Rest Discussion List" <rest-discuss@yahoogroups.com> Sent: Tuesday, December 28, 2010 7:03 PM Subject: Re: [rest-discuss] Combining HTML and XML? On Tue, Dec 28, 2010 at 11:13 AM, Jørn Wildt <jw@...> wrote: > This, I think, is mostly about tooling: it is easy to serialize any data > structure in XML with most development platforms, but there is no support > for > easy serialization into and from HTML. HTML has a document-application vocabulary at its core whereas XML is just a syntax. However, many web frameworks (e.g.
Rails, Django) do include "scaffolding" code that represents models as HTML forms. > Example: I have a case file with a title, a case number and a myriad of > other > properties. This can easily be converted to XML using standard tools: > > <case xmlns="http://my-casefile-namespace"> <title>My title</title> > </case> > > But how would the machine readable HTML look like? Maybe: > > <div property="case"> <div property="title">My title</div> </div> Hopefully, something that makes better use of HTML semantics like: <article><h1 property="case:title">My title</h1> ... </article> > So I feel trapped somewhere between these two formats and I guess that is > why > people turn to their own media-types. Now they get easy tooling and > embedded > hypermedia controls in XML (which they must invent themselves). This sounds like more work for both producers and consumers than just serializing to an HTML form. Those insistent on rolling out domain-specific XML elements, rather than using RDFa/microdata/classes to embed the same vocabulary into an HTML user interface, could at least mix in XHTML, XForms, XLink, and WAI-ARIA with their domain-specific vocabulary rather than reinvent the wheel. > Can't we combine and get the best of both worlds? > > What is needed is something that: > > - Makes it easy to serialize/deserialize any data (m2m scenario). - Is > browsable with a normal browser (h2m scenario). - Has hyper media > controls. > - Can update data through hyper media controls. If by "easy" you mean as easy as serializing name-value pairs into an XML element tree, then these goals are in conflict, because magically building a good user interface for h2m is hard to script - indeed such interfaces are an area of outright competition rather than just standardization! Consequently, developers tend to want to customize cookie-cutter user interfaces, replacing the scaffolding erected by web frameworks. 
> The HTML form could have RDFa added to it to lessen the coupling between > the > server and the client, like Eric suggested. Something like: [snip] > <input id="case_title" name="Case.Title" property="dc:title"/> Note this does not work since RDFa takes the content of the RDFa "content" attribute or, failing that, the concatenated text nodes of the element as the value, *not* the "value" property of the "input", as the value of "dc:title": http://www.w3.org/TR/rdfa-core/#sequence What you want to say is that the .value property may or should be a "dc:title", but HTML5 and friends do not currently offer a way to express that. Adding a way to do this is not easy since "value" might or might not be a provided or valid value. -- Benjamin Hawkes-Lewis
> > But how would the machine readable HTML look like? Maybe: > > > > <div property="case"> <div property="title">My title</div> </div> > Hopefully, something that makes better use of HTML semantics like: > > <article><h1 property="case:title">My title</h1> ... </article> > That would certainly be nice. But it also makes it less possible to have automated tooling for this format. Somehow the HTML serializer would have to know that the "title" property should be marked up as an <h1> element. It might be easier on the client side, since it can ignore the <h1> (or <div> or <p> or whatever it is) and just look for property="case:title". I am so used to tools that serialize and deserialize the same format, but maybe that's the wrong way to look at it. Now that I think of it, I have previously said that HTML forms are fine for submitting data, and url-encoded is certainly a different wire format than HTML. No one expects the client to post back HTML just because it got the data served as HTML. I guess my XML background fails me here. Two-way serialization seems to be in reach, if only it is done in two different formats. As I said previously, ASP.NET MVC and OpenRasta have fine model binders for reading in forms data - and a home-brewed "internal data structure" to RDFa serializer should also be possible; either by using a templating syntax (Razor or ASP.NET or whatever else you favor) - or by marking up the data with RDFa serialization attributes. Personally I dislike the templating approach since it opens up for too many variations in the mark-up. Actually I prefer a third option: leave the internal data structures untouched and pass in serialization attributes in a separate data structure that informs the serializer on how to present the data, assuming it is going to fit into some standard HTML serialization scheme. With this in mind it starts to make more sense to use RDFa for presenting data, and forms + url-encoded to update data.
Server code: - Serialize objects into HTML + RDFa in a standard way for reading. - Serialize objects into HTML + forms + ??? for instructing clients of ways to update data. - Deserialize url-encoded forms data using model binders. Client code: - Deserialize HTML + RDFa into RDF triples using a standard RDFa library and work with that. - Or, use XPath to read out snippets of data from the HTML + RDFa. - Serialize data into form url-encodings using information derived from HTML forms using ??? The only thing left is to figure out how to instruct the client to use the forms; should the forms' variable names be hard coded into the client, or should there be some indirection here too - like using RDFa for describing the inputs (which it apparently isn't designed for). /Jørn ----- Original Message ----- From: "Benjamin Hawkes-Lewis" <bhawkeslewis@...> To: "Jørn Wildt" <jw@...> Cc: "Rest Discussion List" <rest-discuss@yahoogroups.com> Sent: Tuesday, December 28, 2010 7:03 PM Subject: Re: [rest-discuss] Combining HTML and XML? On Tue, Dec 28, 2010 at 11:13 AM, Jørn Wildt <jw@...> wrote: > This, I think, is mostly about tooling: it is easy to serialize any data > structure in XML with most development platforms, but there is no support > for > easy serialization into and from HTML. HTML has a document-application vocabulary at its core whereas XML is just a syntax. However, many web frameworks (e.g. Rails, Django) do include "scaffolding" code that represents models as HTML forms. > Example: I have a case file with a title, a case number and a myriad of > other > properties. This can easily be converted to XML using standard tools: > > <case xmlns="http://my-casefile-namespace"> <title>My title</title> > </case> > > But how would the machine readable HTML look like? Maybe: > > <div property="case"> <div property="title">My title</div> </div> Hopefully, something that makes better use of HTML semantics like: <article><h1 property="case:title">My title</h1> ...
</article> > So I feel trapped somewhere between these two formats and I guess that is > why > people turn to their own media-types. Now they get easy tooling and > embedded > hypermedia controls in XML (which they must invent themselves). This sounds like more work for both producers and consumers than just serializing to an HTML form. Those insistent on rolling out domain-specific XML elements, rather than using RDFa/microdata/classes to embed the same vocabulary into an HTML user interface, could at least mix in XHTML, XForms, XLink, and WAI-ARIA with their domain-specific vocabulary rather than reinvent the wheel. > Can't we combine and get the best of both worlds? > > What is needed is something that: > > - Makes it easy to serialize/deserialize any data (m2m scenario). - Is > browsable with a normal browser (h2m scenario). - Has hyper media > controls. > - Can update data through hyper media controls. If by "easy" you mean as easy as serializing name-value pairs into an XML element tree, then these goals are in conflict, because magically building a good user interface for h2m is hard to script - indeed such interfaces are an area of outright competition rather than just standardization! Consequently, developers tend to want to customize cookie-cutter user interfaces, replacing the scaffolding erected by web frameworks. > The HTML form could have RDFa added to it to lessen the coupling between > the > server and the client, like Eric suggested. Something like: [snip] > <input id="case_title" name="Case.Title" property="dc:title"/> Note this does not work since RDFa takes the content of the RDFa "content" attribute or, failing that, the concatenated text nodes of the element as the value, *not* the "value" property of the "input", as the value of "dc:title": http://www.w3.org/TR/rdfa-core/#sequence What you want to say is that the .value property may or should be a "dc:title", but HTML5 and friends do not currently offer a way to express that. 
Adding a way to do this is not easy since "value" might or might not be a provided or valid value. -- Benjamin Hawkes-Lewis
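The client-side option Jørn mentions, ignoring the surrounding element and just looking for property="case:title", can be sketched with nothing but Python's standard library. This is a hypothetical illustration (the markup and property names come from the thread's examples), not a conforming RDFa processor:

```python
from html.parser import HTMLParser

class PropertyExtractor(HTMLParser):
    """Collect the text content of any element carrying an RDFa-style
    'property' attribute, regardless of whether it is <h1>, <div> or <p>."""
    def __init__(self):
        super().__init__()
        self._stack = []   # 'property' value (or None) per open element
        self.found = {}    # property -> accumulated text content

    def handle_starttag(self, tag, attrs):
        prop = dict(attrs).get("property")
        self._stack.append(prop)
        if prop is not None:
            self.found.setdefault(prop, "")

    def handle_endtag(self, tag):
        if self._stack:
            self._stack.pop()

    def handle_data(self, data):
        # text belongs to every property-bearing ancestor still open
        for prop in self._stack:
            if prop is not None:
                self.found[prop] += data

html = ('<article><h1 property="case:title">My title</h1>'
        '<p property="case:number">C-1234</p></article>')
p = PropertyExtractor()
p.feed(html)
print(p.found)  # {'case:title': 'My title', 'case:number': 'C-1234'}
```

The client never needs to know that the server chose <h1> for the title; only the vocabulary term is part of the contract.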
> <input id="case_title" name="Case.Title" property="dc:title"/> > > Note this does not work since RDFa takes the content of the RDFa "content" > attribute or, failing that, the concatenated text nodes of the element as > the > value, *not* the "value" property of the "input", as the value of > "dc:title": > > What you want to say is that the .value property may or should be a > "dc:title", > but HTML5 and friends do not currently offer a way to express that. > > Adding a way to do this is not easy since "value" might or might not be a > provided or valid value. Actually, what I want is just to inform the m2m client of which input elements, text areas or dropdowns to use for the various bits of data - exactly like the <label> element does for the end-user. Since we are using RDFa in the HTML representation it would be preferable to use the same names to describe which inputs to use. This could also just be HTML5's "data-" attribute, in which case we have: <input id="case_title" name="Case.Title" data-property="dc:title"/> Now my client could look for data-property="dc:title" instead of property="dc:title" and it would still be a valid HTML5 document. /Jørn ----- Original Message ----- From: "Benjamin Hawkes-Lewis" <bhawkeslewis@...> To: "Jørn Wildt" <jw@...> Cc: "Rest Discussion List" <rest-discuss@yahoogroups.com> Sent: Tuesday, December 28, 2010 7:03 PM Subject: Re: [rest-discuss] Combining HTML and XML? On Tue, Dec 28, 2010 at 11:13 AM, Jørn Wildt <jw@...> wrote: > This, I think, is mostly about tooling: it is easy to serialize any data > structure in XML with most development platforms, but there is no support > for > easy serialization into and from HTML. HTML has a document-application vocabulary at its core whereas XML is just a syntax. However, many web frameworks (e.g. Rails, Django) do include "scaffolding" code that represents models as HTML forms. > Example: I have a case file with a title, a case number and a myriad of > other > properties. 
This can easily be converted to XML using standard tools: > > <case xmlns="http://my-casefile-namespace"> <title>My title</title> > </case> > > But how would the machine readable HTML look like? Maybe: > > <div property="case"> <div property="title">My title</div> </div> Hopefully, something that makes better use of HTML semantics like: <article><h1 property="case:title">My title</h1> ... </article> > So I feel trapped somewhere between these two formats and I guess that is > why > people turn to their own media-types. Now they get easy tooling and > embedded > hypermedia controls in XML (which they must invent themselves). This sounds like more work for both producers and consumers than just serializing to an HTML form. Those insistent on rolling out domain-specific XML elements, rather than using RDFa/microdata/classes to embed the same vocabulary into an HTML user interface, could at least mix in XHTML, XForms, XLink, and WAI-ARIA with their domain-specific vocabulary rather than reinvent the wheel. > Can't we combine and get the best of both worlds? > > What is needed is something that: > > - Makes it easy to serialize/deserialize any data (m2m scenario). - Is > browsable with a normal browser (h2m scenario). - Has hyper media > controls. > - Can update data through hyper media controls. If by "easy" you mean as easy as serializing name-value pairs into an XML element tree, then these goals are in conflict, because magically building a good user interface for h2m is hard to script - indeed such interfaces are an area of outright competition rather than just standardization! Consequently, developers tend to want to customize cookie-cutter user interfaces, replacing the scaffolding erected by web frameworks. > The HTML form could have RDFa added to it to lessen the coupling between > the > server and the client, like Eric suggested. 
Something like: [snip] > <input id="case_title" name="Case.Title" property="dc:title"/> Note this does not work since RDFa takes the content of the RDFa "content" attribute or, failing that, the concatenated text nodes of the element as the value, *not* the "value" property of the "input", as the value of "dc:title": http://www.w3.org/TR/rdfa-core/#sequence What you want to say is that the .value property may or should be a "dc:title", but HTML5 and friends do not currently offer a way to express that. Adding a way to do this is not easy since "value" might or might not be a provided or valid value. -- Benjamin Hawkes-Lewis
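Jørn's data-property indirection is easy to consume mechanically: the client scans the form once and learns which field name carries each vocabulary term. A minimal stdlib sketch, assuming the hypothetical data-property convention from the message above:

```python
from html.parser import HTMLParser

class FormFieldMap(HTMLParser):
    """Map vocabulary terms (data-property) to the form field names a
    m2m client should use when submitting - the proposed indirection."""
    def __init__(self):
        super().__init__()
        self.fields = {}  # e.g. 'dc:title' -> 'Case.Title'

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag in ("input", "textarea", "select") and "data-property" in a:
            self.fields[a["data-property"]] = a.get("name")

form = ('<form action="/cases/1" method="post">'
        '<input id="case_title" name="Case.Title" data-property="dc:title"/>'
        '</form>')
m = FormFieldMap()
m.feed(form)
print(m.fields)  # {'dc:title': 'Case.Title'}
```

The client is then coupled only to the Dublin Core term, not to the server's "Case.Title" naming scheme, which can change freely.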
Juergen Brendel wrote: > Many times on this list it was said that this is all fine, but that you > simply can't call it REST if you don't use standardized types. Firstly, > I'm not convinced of that (but surely, someone can provide the proper > quote from Roy's thesis to support their point). "The trade-off, though, is that a uniform interface degrades efficiency, since information is transferred in a standardized form rather than one which is specific to an application's needs." http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm#sec_5_1_5
On Tue, Dec 28, 2010 at 11:13 AM, Jørn Wildt <jw@...> wrote: > This, I think, is mostly about tooling: it is easy to serialize any data > structure in XML with most development platforms, but there is no support for > easy serialization into and from HTML. HTML has a document-application vocabulary at its core whereas XML is just a syntax. However, many web frameworks (e.g. Rails, Django) do include "scaffolding" code that represents models as HTML forms. > Example: I have a case file with a title, a case number and a myriad of other > properties. This can easily be converted to XML using standard tools: > > <case xmlns="http://my-casefile-namespace"> <title>My title</title> </case> > > But how would the machine readable HTML look like? Maybe: > > <div property="case"> <div property="title">My title</div> </div> Hopefully, something that makes better use of HTML semantics like: <article><h1 property="case:title">My title</h1> ... </article> > So I feel trapped somewhere between these two formats and I guess that is why > people turn to their own media-types. Now they get easy tooling and embedded > hypermedia controls in XML (which they must invent themselves). This sounds like more work for both producers and consumers than just serializing to an HTML form. Those insistent on rolling out domain-specific XML elements, rather than using RDFa/microdata/classes to embed the same vocabulary into an HTML user interface, could at least mix in XHTML, XForms, XLink, and WAI-ARIA with their domain-specific vocabulary rather than reinvent the wheel. > Can't we combine and get the best of both worlds? > > What is needed is something that: > > - Makes it easy to serialize/deserialize any data (m2m scenario). - Is > browsable with a normal browser (h2m scenario). - Has hyper media controls. > - Can update data through hyper media controls. 
If by "easy" you mean as easy as serializing name-value pairs into an XML element tree, then these goals are in conflict, because magically building a good user interface for h2m is hard to script - indeed such interfaces are an area of outright competition rather than just standardization! Consequently, developers tend to want to customize cookie-cutter user interfaces, replacing the scaffolding erected by web frameworks. > The HTML form could have RDFa added to it to lessen the coupling between the > server and the client, like Eric suggested. Something like: [snip] > <input id="case_title" name="Case.Title" property="dc:title"/> Note this does not work since RDFa takes the content of the RDFa "content" attribute or, failing that, the concatenated text nodes of the element as the value, *not* the "value" property of the "input", as the value of "dc:title": http://www.w3.org/TR/rdfa-core/#sequence What you want to say is that the .value property may or should be a "dc:title", but HTML5 and friends do not currently offer a way to express that. Adding a way to do this is not easy since "value" might or might not be a provided or valid value. -- Benjamin Hawkes-Lewis
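Benjamin's objection can be demonstrated with a toy processor that follows only the one RDFa rule he cites (object = @content attribute, else the element's text content). This is a deliberately simplified sketch, not a conforming RDFa parser:

```python
from html.parser import HTMLParser

class RdfaValue(HTMLParser):
    """Apply the rule Benjamin quotes: the object of 'property' is the
    @content attribute or, failing that, the element's text content -
    never an <input>'s @value."""
    def __init__(self):
        super().__init__()
        self.triples = []   # (property, value) pairs
        self._open = None   # (property, text so far) for an open element

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if "property" in a:
            if "content" in a:
                self.triples.append((a["property"], a["content"]))
            else:
                self._open = (a["property"], "")

    def handle_data(self, data):
        if self._open:
            self._open = (self._open[0], self._open[1] + data)

    def handle_endtag(self, tag):
        if self._open:
            self.triples.append(self._open)
            self._open = None

r = RdfaValue()
r.feed('<input name="Case.Title" property="dc:title" value="My title"/>')
print(r.triples)  # [('dc:title', '')] - the @value is ignored

r2 = RdfaValue()
r2.feed('<h1 property="dc:title">My title</h1>')
print(r2.triples)  # [('dc:title', 'My title')]
```

The <input> case yields an empty literal because @value is invisible to the processing rule, which is exactly why the property attribute on a form control does not say what one might hope.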
rmbrad wrote: > > --- In rest-discuss@yahoogroups.com, Nathan <nathan@...> wrote: >> Juergen Brendel wrote: >>> Many times on this list it was said that this is all fine, but that you >>> simply can't call it REST if you don't use standardized types. Firstly, >>> I'm not convinced of that (but surely, someone can provide the proper >>> quote from Roy's thesis to support their point). >> >> "The trade-off, though, is that a uniform interface degrades efficiency, >> since information is transferred in a standardized form rather than one >> which is specific to an application's needs." >> >> http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm#sec_5_1_5 > > There's also this in 5.3.1: > "REST enables intermediate processing by constraining messages to be self-descriptive: interaction is stateless between requests, standard methods and media types are used to indicate semantics and exchange information, and responses explicitly indicate cacheability." > > http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm#sec_5_3_1 yup and.. "REST components communicate by transferring a representation of a resource in a format matching one of an evolving set of standard data types," http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm#sec_5_2_1_2
Jørn Wildt wrote: > > But using html for embedding machine-readable representations of > complex data structures for reading seems a bit like bending over > backwards to do something which is straight forward with XML. > Re-inventing <table> may be *easier* (I wouldn't know as I've always avoided platforms and tooling entirely, particularly those geared towards the SOA approach which is fundamentally different from REST -- if the problem here is preserving the investment in SOA toolchains and expecting REST to result, I don't know how I can help), but <table> has *simplicity* going for it. The design of <table> markup predates the Web; there's nothing about it that can't be tooled, even if your tools disregard it as a serialization for tabular data. XSLT is both a great deployment and development tool; your toolchain probably allows you to execute a transformation from whatever it is willing to generate for tabular data, into <table> markup, as a pre-deployment operation. I've never felt like I'm bending over backwards to serialize tabular data using <table> markup, because it's so straightforward to me as to have become second nature. The essence of REST is the re-use of common data structures. > > But how would the machine readable HTML look like? Maybe: > > <div property="case"> > <div property="title">My title</div> > </div> > No, the whole point of using HTML is that its markup expresses very common semantics for any sort of document. I'd put the title inside <head><title>, assuming one case per document. There's no need to re-invent the document title as a microformat, or tell anyone (h or m) what you mean by <head><title> in HTML. Attaching a title to a hypertext document is a problem that's long been solved; why re-visit the issue? > > Many people turns to REST for simplicity. For me this also means > simple and commonly available tools. 
That's not the case with the > HTML serialization and for this reason I shy away from the HTML > representation - it's too difficult to work with. > You're mistaking 'simplicity' for 'ease-of-use', at least in terms of Roy's thesis. In REST, simplicity means (in this case) that instead of every type of document having its own data type with its own definition of the document title, we have a generic document data type such that any type of document has the same definition of title. REST's power lies in the fact that most APIs can be modeled in terms of document orientation, just as UNIX's power lies in the fact that most everything may be modeled in terms of a file descriptor (filesystem and media type serving the same generic role in each). > > Now I am only dreaming up some ideas: what if there were a standard > XML dialect which included links and schema references? > Like HTML? If the system is document-oriented, then this generic dialect will need to express common semantics like lists, headings, title, tabular data, paragraphs and the like. Which is why HTML extensibility is such a big issue -- it seems like less work to me to extend HTML, than it is to start over from scratch with a custom media type. > > I would be happy to use HTML all over the place if it wasn't for the > fact that I haven't seen any useful serialization tools. > Leading you to define your own serialization, requiring out-of-band documentation explaining the network-opaque algorithm behind it. This isn't the REST style -- in REST, you aren't required to document the algorithm behind your HTML <table>, because that's a standardized part of the data types your media type identifies for IP networks. -Eric
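Eric's point, that serializing tabular data as <table> is second nature rather than a burden, can be illustrated with a trivial serializer. A hypothetical helper for illustration (not anything from the thread's toolchains), reusing HTML's generic tabular semantics instead of inventing a per-application format:

```python
def to_html_table(rows, caption=None):
    """Serialize a list of dicts as standard <table> markup, so any
    consumer that understands HTML tables can read the data."""
    headers = list(rows[0])
    out = ["<table>"]
    if caption:
        out.append(f"<caption>{caption}</caption>")
    out.append("<thead><tr>"
               + "".join(f"<th>{h}</th>" for h in headers)
               + "</tr></thead><tbody>")
    for row in rows:
        out.append("<tr>"
                   + "".join(f"<td>{row[h]}</td>" for h in headers)
                   + "</tr>")
    out.append("</tbody></table>")
    return "".join(out)

html = to_html_table([{"case": "C-1", "title": "My title"}],
                     caption="Open cases")
print(html)
```

No out-of-band documentation is needed for the result: <thead>, <th> and <td> already carry the "column header" and "cell" semantics for every HTML client.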
Jørn Wildt wrote: > > One of the reasons for turning to REST is simplicity. By adding > layers of RDFa onto HTML the API becomes more difficult to consume > compared to a straight forward XML format. If I present a REST API to > my customers that makes it (a lot) more difficult to consume than a > SOAP API then I lose some of my selling points. > What I've been trying to explain is that RDFa exists separately from the REST API. Complexity ensues when the solution to m2m is the creation of custom media types bound to the nature of the data -- REST says it's simpler to use a generic media type, and solve m2m at another layer. The point of REST is that the "ease of consumption" isn't tied to the application stack, like SOAP is -- do SOAP solutions work in general, or do they only work for the particular vendor stack they target? I suppose "ease of consumption" is a selling point, but only for coupling. > > I am probably wrong though - I do not think Fielding's thesis says > anything about being simple, but that certainly is part of the > perceived goodness of REST amongst most people. > The thesis defines simplicity here: http://www.ics.uci.edu/~fielding/pubs/dissertation/net_app_arch.htm#sec_2_3_3 -Eric
Peter Williams wrote: > On Tue, Dec 28, 2010 at 11:43 AM, Nathan <nathan@...> wrote: >> Peter Williams wrote: >>> On Tue, Dec 28, 2010 at 5:52 AM, Nathan <nathan@...> wrote: >>>> That said, XML and JSON certainly aren't RESTful because 99.999999% of >>>> the components on the network will know precisely zero about your >>>> essentially >>>> "made-up" media type, >>> Which constraint is it that states that representations must be >>> understood by more than x% of the components on the network? >> Shall I assume we're forgetting the whole point of the Universal Interface >> and the core principles of separation of concerns, scalability and >> independent evolvability here, negating common sense and removing the hugely >> obvious context of "the web" and "the internet" which applies to almost >> every mail to rest-discuss and for which REST was actually made? > > Of course we are not forgetting uniform interfaces. HTTP has that > under control. Even when used with novel media types the interface is > still uniform. Resources are still identified, resources are still > manipulated through representations, the messages are still > self-descriptive, and (assuming the media-type is well defined) > hypermedia is still the engine of application state. The uniform > interface section[1] is completely silent on how ubiquitous support of > representations needs to be. ?? - from the uniform interface section: "...information is transferred in a standardized form rather than one which is specific to an application's needs. The REST interface is designed to be efficient for large-grain hypermedia data transfer, optimizing for the common case of the Web, but resulting in an interface that is not optimal for other forms of architectural interaction..." This seems pretty clear to me, and far from silent: standardized forms, optimized for the common case of the web and not for specific applications' needs. 
> I fail to see how "separation of concerns, scalability and independent > evolvability" come into play regarding this particular design > decision. The client and server concerns are still separate; custom > media types are just as cacheable as any other media type; custom > media types do not, innately, damage the evolvability of the system. In > fact, custom media types are just yet another "downloadable > feature-engine" that help "provide for a diverse set of > functionality".[2] "downloadable feature-engine" and "provide for a diverse set of functionality" relate to the optional code-on-demand constraint, specifically "downloadable feature-engine"s such as Java, Flash Player etc. Again, to quote directly from the section you reference: "a representation that consists of instructions in the standard data format of an encapsulated rendering engine (e.g., Java [45])." *Standard* data format, even when using a downloadable feature engine such as Java. >> Peter, sorry but the last thing I'm going to do is encourage Jørn, or >> anybody here, to go and invent a "custom (vendor) media type" for use on the >> internet without mentioning the massive caveat that only they will >> understand it - the terms "Universal" and "Custom (Vendor)" are far from >> complementary. > > Who said anything about encouraging the use of custom media types? I > am merely pointing out the statement "XML and JSON certainly aren't > RESTful" is false. It might not be a good idea in Jørn's situation, but > in some situations it is. Which situations? Can you show me an example? PS: you may find it beneficial to read Roy's comments (#31 and #32) here: http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven#comment-753 which cover both XML/JSON and feature such quotes as "When representations are provided in hypertext form with typed relations (using microformats of HTML, RDF in N3 or XML, or even SVG), then automated agents can traverse these applications almost as well as any human." 
>> I'll probably black-list myself from the rest community with this next >> comment, but can this whole culture of running to a custom media type whenever >> things get tricky and labelling it as RESTful and "a good thing" please >> just stop; it's a new year ahead, it's painfully obvious that it /doesn't/ >> work (in reality) and that it ensures the REST community is small and >> massively misunderstood by almost everybody, because it detaches from even >> the simplest logic and reality. People put things on the web so they are >> universally accessible, people share things so they can be seen by others, >> that's the whole damn point. > > The problem with your argument is that custom media types *do* work > (in reality). They work on the corporate network. They work on the > public internet. They work in a box with a fox. I know they work > from personal experience. Depends how you define "work", I guess: sure, you can make them work for you, optimized for a specific application, but then as we know that's exactly what the uniform interface is /not/ optimized for, so how that can be classed as RESTful is beyond me! > Custom media types might be inferior to your preferred approach. If > you believe that to be true please argue that. Not that using custom > media types is unrestful. I believe it to be true, and that it is a fact that using custom media types is unrestful - however I also believe that the uniform interface together with the media type registry do allow custom media types to work; as in, custom/vendor precludes universal, universal includes custom/vendor. > [1]: http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm#sec_5_1_5 > [2]: http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm#sec_5_2_1 Same references... Best, Nathan
Eric J. Bowman wrote: > Jørn Wildt wrote: >> One of the reasons for turning to REST is simplicity. By adding >> layers of RDFa onto HTML the API becomes more difficult to consume >> compared to a straight forward XML format. If I present a REST API to >> my customers that makes it (a lot) more difficult to consume than a >> SOAP API then I lose some of my selling points. >> > > What I've been trying to explain, is that RDFa exists separately from > the REST API. Complexity ensues when the solution to m2m is the > creation of custom media types bound to the nature of the data -- REST > says it's simpler to use a generic media type, and solve m2m at another > layer. Indeed, do see http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven#comment-754 for more on this; here's a snippet: "When representations are provided in hypertext form with typed relations (using microformats of HTML, RDF in N3 or XML, or even SVG), then automated agents can traverse these applications almost as well as any human. There are plenty of examples in the linked data communities. More important to me is that the same design reflects good human-Web design, and thus we can design the protocols to support both machine and human-driven applications by following the same architectural style." That list would include RDFa now, and RDF in N3 is quoted specifically over other RDF formats because RDF in N3 has basic hypermedia semantics, in that it's the only (current) RDF representation other than RDFa to specifically mention dereferencing URIs. > The point of REST is that the "ease of consumption" isn't tied to the > application stack, like SOAP is -- do SOAP solutions work in general, > or do they only work for the particular vendor stack they target? I > suppose "ease of consumption" is a selling point, but only for coupling. 
> >> I am probably wrong though - I do not think Fielding's thesis says >> anything about being simple, but that certainly is part of the >> perceived goodness of REST amongst most people. >> > > The thesis defines simplicity here: > > http://www.ics.uci.edu/~fielding/pubs/dissertation/net_app_arch.htm#sec_2_3_3 Not often one sees a quote from the other chapters of the dissertation round these parts ;) good to see one at last! Best, Nathan
Nathan wrote: > Eric J. Bowman wrote: >> Jørn Wildt wrote: >>> One of the reasons for turning to REST is simplicity. By adding >>> layers of RDFa onto HTML the API becomes more difficult to consume >>> compared to a straight forward XML format. If I present a REST API to >>> my customers that makes it (a lot) more difficult to consume than a >>> SOAP API then I lose some of my selling points. >>> >> >> What I've been trying to explain, is that RDFa exists separately from >> the REST API. Complexity ensues when the solution to m2m is the >> creation of custom media types bound to the nature of the data -- REST >> says it's simpler to use a generic media type, and solve m2m at another >> layer. > > Indeed, do see > http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven#comment-754 > for more on this, here's a snippet: > > "When representations are provided in hypertext form with typed > relations (using microformats of HTML, RDF in N3 or XML, or even SVG), > then automated agents can traverse these applications almost as well as > any human. There are plenty of examples in the linked data communities. > More important to me is that the same design reflects good human-Web > design, and thus we can design the protocols to support both machine and > human-driven applications by following the same architectural style." > > That list would include RDFa now, and RDF in N3 is quoted specifically > over other RDF formats because RDF in N3 has basic hypermedia semantics, > in that it's the only (current) RDF representation other than RDFa to > specifically mention dereferencing URIs. whoops, almost forgot: XML and SVG are mentioned because "If they are keying off of something unique within the content (like an XML namespace declaration that extends the semantics of a generic type), then it's okay." 
- xml namespace of course being shorthand for a prefix+term concatenated to define the URI of properties (like you see in link relations) which can have their own (hypermedia) semantics. Best, Nathan
Peter Williams wrote: > > Using existing media types is usually superior to custom media > types/formats for practical reasons. However, using custom (vendor) > media types is clearly acceptable in web architecture and the rest > architectural style. > I'd say it's clearly the other way around. REST is all about standardization. Those few occasions when Roy has discussed creating a new media type, he always qualifies his example by remarking about the importance of a standardization effort. Roy has even blogged about his precise definition of standardization as apart from specification. The existence of multiple, interoperable, independent implementations is required for standardization. This precludes most of the vnd. tree. REST's uniform interface is the opposite of the creation of media types, in any tree, to solve domain- or application-specific problems. REST is all about using generic media types, which, in the absence of any standardization effort, custom media types are the opposite of. There is more value in learning why REST is how it is, than in defining REST down to be inclusive of custom media types, as REST was never meant to include that approach (the only exception being the creation of new, generic, standardizable types). You can't meet the hypertext constraint if your hypertext isn't part of the uniform interface (has a nice, ubiquitous media type everyone's heard of). -Eric
On Tue, Dec 28, 2010 at 9:51 PM, Jørn Wildt <jw@...> wrote: > Agreed. But I am not striving towards a good h2m interface - only an > acceptable one that will allow a developer of a client to interact with my > API using a standard browser. Nothing fancy. > > I wouldn't expect my perfectly polished end-user website to be used as a REST > API. But maybe I am wrong here too. It could just be that, assuming I am > going to develop that end-user website anyway, it would be faster to > incorporate m2m details in a fancy end-user website, than developing both an > end-user website and a REST API on a secondary webserver. Indeed. > It would just be so different than my (the?) usual conception of a REST API > as a stand-alone API decoupled from the end-user website. One typical > difference is the access control where the end-user website uses cookies, > whereas the REST API uses some of the HTTP authentication schemes. Typical websites and APIs (even those marketed as RESTful) differ radically from a REST service as defined by Fielding. This is especially clear when Fielding discusses RESTful ecommerce: "the use of cookies to identify a user-specific "shopping basket" within a server-side database could be more efficiently implemented by defining the semantics of shopping items within the hypermedia data formats, allowing the user agent to select and store those items within their own client-side shopping basket, complete with a URI to be used for check-out when the client is ready to purchase" http://www.ics.uci.edu/~fielding/pubs/dissertation/evaluation.htm#sec_6_3 See also: http://tech.groups.yahoo.com/group/rest-discuss/message/3583 This may be a good time to remind you that non-RESTful services may benefit from adopting RESTful ideas, without becoming a Fielding-certifiable RESTful service … Also, what stops you abstracting authentication to accept either cookies-based auth _or_ HTTP auth for the same service? 
> It would also be way too easy to break the API the first time someone puts a > new theme template on the end-user website. Better to have web designers > working on the end-user website and coders to work on the REST API. You're trying to solve a _social_ problem where you have an absence of quality control over critical parts of your system. People who cannot maintain the HTML for a HATEOAS service cannot maintain the HTML for a quality website, and vice versa. Both of these tasks require close attention to markup semantics and to how clients (browsers, spiders, assistive technology, automated testing software, API clients, etc.), and behind them human beings (surfers, searchers, people with disabilities, test-writers, third-party developers, etc.), are going to interface with those semantics. Therefore building parallel systems is a poor solution to that problem compared to cross-training, code review, and automated testing. Building a single HATEOAS service forces you to: 1. Think about how human beings will want to view, interact with, and add to your data. 2. Think about how to manage changes so that clients (whether human or machine) can adapt. These are challenges, but being forced to think about human interaction with your service is arguably a very good thing, for example for ensuring your domain modelling is good and your pace of change is right. -- Benjamin Hawkes-Lewis
> Also, what stops you abstracting authentication to accept either > cookies-based auth _or_ HTTP auth for the same service? Nothing. Good point. > Therefore building parallel systems is a poor solution to that problem > compared > to cross-training, code review, and automated testing. Now that point about automated testing turned on yet another lightbulb here. One of the issues I have had with the many websites I have been building is the lack of automatic testing. The HTML generated from ASP.NET makes it way too difficult for that. I have always been annoyed by this but accepted it for lack of better options (my job has always been in a .NET shop). With ASP.NET MVC a lot has changed for the better. If I start combining the ideas from the discussions here with the goal of making the website testable, then things suddenly change a lot in favor of one single User + M2M website. Making the website testable is not only better for QA but it is also a near guarantee that the website can function as an M2M REST API in itself. Interesting indeed :-) /Jørn ----- Original Message ----- From: "Benjamin Hawkes-Lewis" <bhawkeslewis@...> To: "Jørn Wildt" <jw@...> Cc: "Rest Discussion List" <rest-discuss@yahoogroups.com> Sent: Wednesday, December 29, 2010 1:38 AM Subject: Re: [rest-discuss] Combining HTML and XML? On Tue, Dec 28, 2010 at 9:51 PM, Jørn Wildt <jw@...> wrote: > Agreed. But I am not striving towards a good h2m interface - only an > acceptable one that will allow a developer of a client to interact with my > API using a standard browser. Nothing fancy. > > I wouldn't expect my perfectly polished end-user website to be used as a > REST > API. But maybe I am wrong here too. It could just be that, assuming I am > going to develop that end-user website anyway, it would be faster to > incorporate m2m details in a fancy end-user website, than developing both > an > end-user website and a REST API on a secondary webserver. Indeed. > It would just be so different than my (the?) 
usual conception of a REST > API > as a stand-alone API decoupled from the end-user website. One typical > difference is the access control where the end-user website uses cookies, > whereas the REST API uses some of the HTTP authentication schemes. Typical websites and APIs (even those marketed as RESTful) differ radically from a REST service as defined by Fielding. This is especially clear when Fielding discusses RESTful ecommerce: "the use of cookies to identify a user-specific "shopping basket" within a server-side database could be more efficiently implemented by defining the semantics of shopping items within the hypermedia data formats, allowing the user agent to select and store those items within their own client-side shopping basket, complete with a URI to be used for check-out when the client is ready to purchase" http://www.ics.uci.edu/~fielding/pubs/dissertation/evaluation.htm#sec_6_3 See also: http://tech.groups.yahoo.com/group/rest-discuss/message/3583 This may be a good time to remind you that non-RESTful services may benefit from adopting RESTful ideas, without becoming a Fielding-certifiable RESTful service … Also, what stops you abstracting authentication to accept either cookie-based auth _or_ HTTP auth for the same service? > It would also be way too easy to break the API the first time someone puts > a > new theme template on the end-user website. Better to have web designers > working on the end-user website and coders to work on the REST API. You're trying to solve a _social_ problem where you have an absence of quality control over critical parts of your system. People who cannot maintain the HTML for a HATEOAS service cannot maintain the HTML for a quality website, and vice versa. 
Both of these tasks require close attention to markup semantics and how clients (browsers, spiders, assistive technology, automated testing software, API clients, etc.), and behind them human beings (surfers, searchers, people with disabilities, test-writers, third-party developers, etc.), are going to interface with those semantics. Therefore building parallel systems is a poor solution to that problem compared to cross-training, code review, and automated testing. Building a single HATEOAS service forces you to: 1. Think about how human beings will want to view, interact with, and add to your data. 2. Think about how to manage changes so that clients (whether human or machine) can adapt. These are challenges, but being forced to think about human interaction with your service is arguably a very good thing, for example for ensuring your domain modelling is good and your pace of change is right. -- Benjamin Hawkes-Lewis
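Benjamin's question above — what stops you accepting either cookie-based auth or HTTP auth for the same service? — can be sketched as a single check that tries both. This is an illustrative sketch only: `authenticate_request`, the `USERS` and `SESSIONS` tables, and the `sessionid` cookie name are all invented for the example, not taken from any framework mentioned in the thread.

```python
import base64

# Illustrative data stores; a real service would back these with a
# database or an identity provider.
USERS = {"jorn": "secret"}      # username -> password
SESSIONS = {"abc123": "jorn"}   # session id -> username

def authenticate_request(headers):
    """Return the authenticated username, trying HTTP Basic auth first
    and falling back to a session cookie; None if both fail."""
    auth = headers.get("Authorization", "")
    if auth.startswith("Basic "):
        try:
            decoded = base64.b64decode(auth[6:]).decode("utf-8")
        except Exception:
            decoded = ""
        user, _, password = decoded.partition(":")
        if user and USERS.get(user) == password:
            return user
    # Fall back to the browser-style session cookie.
    for part in headers.get("Cookie", "").split(";"):
        name, _, value = part.strip().partition("=")
        if name == "sessionid" and value in SESSIONS:
            return SESSIONS[value]
    return None
```

With a check like this in front of the resource handlers, the same site can serve browsers (cookie) and API clients (Authorization header) without maintaining two parallel systems.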
> > Now I am only dreaming up some ideas: what if there were a standard > > XML dialect which included links and schema references? > > > > Like HTML? If the system is document-oriented, then this generic > dialect will need to express common semantics like lists, headings, > title, tabular data, paragraphs and the like. Which is why HTML > extensibility is such a big issue -- it seems like less work to me to > extend HTML, than it is to start over from scratch with a custom media > type. As you may have seen from my other answers, I am getting more and more convinced about the viability of a pure HTML approach. One of my issues with HTML extensions has been that different vendors would use different extension schemes and that would sort of defeat the idea of having one unified way to serialize data back and forth. But now that I start to understand more of RDFa it looks more and more viable to use HTML + RDFa. That would allow the client code to work on generic RDF triples instead of figuring out all the strange ways to encode a case file, or a purchase order, in HTML. All it requires is that the client implements (downloads) an RDFa preprocessor. Voilà - all the difficulties of working with various HTML encodings disappear with one swing of the magic wand :-) All I need now is a way to mark up forms such that they decouple semantics like "this is the case title" from the query variable names. /Jørn ----- Original Message ----- From: "Eric J. Bowman" <eric@...> To: "Jørn Wildt" <jw@...> Cc: "Rest Discussion List" <rest-discuss@yahoogroups.com> Sent: Wednesday, December 29, 2010 12:19 AM Subject: Re: [rest-discuss] Combining HTML and XML? Jørn Wildt wrote: > > But using HTML for embedding machine-readable representations of > complex data structures for reading seems a bit like bending over > backwards to do something which is straightforward with XML. 
> Re-inventing <table> may be *easier* (I wouldn't know as I've always avoided platforms and tooling entirely, particularly those geared towards the SOA approach which is fundamentally different from REST -- if the problem here is preserving the investment in SOA toolchains and expecting REST to result, I don't know how I can help), but <table> has *simplicity* going for it. The design of <table> markup predates the Web, there's nothing about it that can't be tooled, even if your tools disregard it as a serialization for tabular data. XSLT is both a great deployment and development tool; your toolchain probably allows you to execute a transformation from whatever it is willing to generate for tabular data, into <table> markup, as a pre-deployment operation. I've never felt like I'm bending over backwards to serialize tabular data using <table> markup, because it's so straightforward to me as to have become second-nature. The essence of REST is the re-use of common data structures. > > But what would the machine-readable HTML look like? Maybe: > > <div property="case"> > <div property="title">My title</div> > </div> > No, the whole point of using HTML is that its markup expresses very common semantics for any sort of document. I'd put the title inside <head><title>, assuming one case per document. There's no need to re-invent the document title as a microformat, or tell anyone (h or m) what you mean by <head><title> in HTML. Attaching a title to a hypertext document is a problem that's long been solved, why re-visit the issue? > > Many people turn to REST for simplicity. For me this also means > simple and commonly available tools. That's not the case with the > HTML serialization and for this reason I shy away from the HTML > representation - it's too difficult to work with. > You're mistaking 'simplicity' for 'ease-of-use', at least in terms of Roy's thesis. 
In REST, simplicity means (in this case) that instead of every type of document having its own data type with its own definition of the document title, we have a generic document data type such that any type of document has the same definition of title. REST's power lies in the fact that most APIs can be modeled in terms of document orientation, just as UNIX's power lies in the fact that most everything may be modeled in terms of a file descriptor (filesystem and media type serving the same generic role in each). > > Now I am only dreaming up some ideas: what if there were a standard > XML dialect which included links and schema references? > Like HTML? If the system is document-oriented, then this generic dialect will need to express common semantics like lists, headings, title, tabular data, paragraphs and the like. Which is why HTML extensibility is such a big issue -- it seems like less work to me to extend HTML, than it is to start over from scratch with a custom media type. > > I would be happy to use HTML all over the place if it wasn't for the > fact that I haven't seen any useful serialization tools. > Leading you to define your own serialization, requiring out-of-band documentation explaining the network-opaque algorithm behind it. This isn't the REST style -- in REST, you aren't required to document the algorithm behind your HTML <table>, because that's a standardized part of the data types your media type identifies for IP networks. -Eric
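Jørn's idea of a client-side RDFa preprocessor — extracting generic triples from HTML rather than parsing ad-hoc encodings — can be illustrated with a deliberately simplified sketch. This toy only reads `property` attributes and text content; a conforming RDFa processor also resolves subjects, prefixes, @about, @resource, and much more:

```python
from html.parser import HTMLParser

class PropertyExtractor(HTMLParser):
    """Toy sketch of RDFa-style extraction: collect (property, literal)
    pairs from elements carrying a `property` attribute. Not a
    conforming RDFa processor -- it only shows the idea."""
    def __init__(self):
        super().__init__()
        self._open = []     # `property` value (or None) per open element
        self.pairs = []

    def handle_starttag(self, tag, attrs):
        self._open.append(dict(attrs).get("property"))

    def handle_endtag(self, tag):
        if self._open:
            self._open.pop()

    def handle_data(self, data):
        # Attach text to the innermost element that declared a property.
        if self._open and self._open[-1] and data.strip():
            self.pairs.append((self._open[-1], data.strip()))

doc = '<div property="case"><span property="title">My title</span></div>'
p = PropertyExtractor()
p.feed(doc)
# p.pairs -> [("title", "My title")]
```

The client code then works against generic (property, value) data, exactly as Jørn describes, instead of knowing how each application encodes a case file or a purchase order in HTML.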
Peter Williams wrote: > > Of course we are not forgetting uniform interfaces. Http has that > under control. Even when used with novel media types the interface is > still uniform. > No, REST is based on the principle of generality -- this generality is what Roy means by 'uniform'. Application-specific, or domain-specific, data types are the opposite of generic data types which apply to any application. VoiceXML and CCXML may be used to create a uniform phone interface; it would be silly to have separate data types for placing orders by phone vs. handling customer service by phone vs. banking by phone. The whole point of the style is that this specificity is traded for scalability and other benefits which only accrue to generic data types. I have no idea how a generic client like a browser or a phone could gain the benefits of the style, if it had to be extended to grok an entirely new data type for every application, instead of taking a (almost) one-size-fits-all approach. The Web described by REST is the one where banking vs. ordering vs. customer service can all be executed within the same generic user-agent using the same generic media and data types. The Web where my desktop is cluttered with 100 different application-specific clients (or one seriously-bloated browser) doesn't exist, and certainly didn't serve as the basis for REST. It makes no sense to me that Roy set about to formalize the architecture the Web had already realized, yet wound up describing an architecture where each of these applications was based on its own processing model for domain-specific types. Standardization of types is the basis of the Web's success, which is why it's inherent to the REST style derived from that success. 
Had the Web been a free-for-all of media and data types, and Google could still index it, and Web accelerators somehow still worked, then an ex-post-facto description of its style wouldn't have been based on standardization of types as its fundamental concern (what other concern does REST mention more than once, let alone in a dozen places?). > > Resources are still identified, resources are still manipulated > through representations... > How do I know what the representation means? If the answer is that it's common knowledge at the IP layer (standardized), then it may be REST. If I have to consult out-of-band documentation explaining a data type I've never heard of before because it isn't standardized, well, that is exactly the opposite of what Roy's driving at with the style. Removing the requirement for standardization from the thesis results in a different style altogether. One which is neither based on, nor proven by, the Web as it exists in reality. I just don't understand how anyone can read a thesis which hammers home standardization as much as REST does, and come away thinking that custom media types have anything to do with it. Any remaining doubt should be cleared up by the realization that Roy has never given an example of using a custom media type, except to point out that such a solution would need to be standardized as an absolute requirement before calling it REST. Resources must be manipulated via representations using *standardized* types, there's nothing invisible or optional about the standardization requirement; in fact it's repeated ad nauseam (leading me to believe it can't simply be dismissed as irrelevant even if systems can be made to work regardless). > > the messages are still self-descriptive, and (assuming the > media-type is well defined) hypermedia is still the engine of > application state. The uniform interface section[1] is completely > silent on how ubiquitous support of representations needs to be. 
> You must be reading some version of the thesis which doesn't mention standardization as having anything to do with the uniform interface. The version I'm reading is very explicit about uniform=standardized. REST doesn't care how well the media type is defined, REST only cares that this understanding is network-based, not application-specific. If it's application-specific, then the whole point of the hypertext constraint is missed -- links don't do any good unless the markup is understood (like <a href>) at the network layer. > > I fail to see how "separation of concerns, scalability and independent > evolvability" come into play regarding this particular design > decision. > Does your API require a custom user-agent, or does it work with generic user-agents? If the latter, then the concern about processing the media type *is* a separate concern from your API. When data type is bound to resource type, the data type must evolve with the resource, coupling client to server. When generic data types are used, client and server may evolve independently. Scalability, in terms of REST, doesn't mean how many hits per second your server can handle -- it means how many components out there can participate in the communication based on their understanding of the data type. Using <a href> and an HTML media type for links achieves Internet scale. Using <foo fetch> with a custom media type disallows participation by components which grok <a href>, requiring application-specific knowledge to participate -- the opposite of Internet scale. This goes for any markup -- a <table> is the same data structure on billions of existing components (i.e. it scales). Custom object serializations are understood by almost nothing by comparison (i.e. they don't scale). Standardized <table> markup is consistent across a variety of media and data types. The principle of generality is where scalability comes from -- <table> is generic (uniform). 
> > The client and server concerns are still separate; > These concerns are only separate if the media and data types are standardized. Otherwise they're implementation-specific, or domain-specific; REST decouples client from server based on a network-layer understanding of types, stating clearly that nonstandardized types require a library-based understanding which is defined as coupling, and as the opposite of the uniform interface. > > custom media types are just as cacheable as any other media type; > This is debatable, and assumes that cacheability is the only concern. With standardized types, re-use extends beyond cacheability -- as previously discussed, Google does all sorts of stuff with ubiquitous media types, Web accelerators are based on ubiquitous media types, and so on and so forth. None of which is possible without standardized types, which is the enabling factor. REST targets this deployed infrastructure, custom media types bypass it -- a night-and-day difference in scalability. > > The problem with your argument is that custom media types *do* work > (in reality). They work on the corporate network. They work on the > public internet. They work in a box with a fox. I know they work > from personal experience. > What does whether the system works or not have to do with its architectural style? REST isn't a value judgment, it's a set of design constraints. These constraints aren't required for a system to work on the Web or off. They're required to achieve certain desirable characteristics, which may or may not be relevant to a given system. If the goals for a system aren't the same as the goals of REST, then using REST makes no sense. If they are the same, then the goals of REST won't be achieved using custom media types, because that's some other architecture besides REST, and no evidence exists that any other style achieves the benefits of REST. 
Creating custom media and data types for something HTML can do, has about one billionth as much scalability as following REST by re-using standardized types. REST isn't a style for extending any desired semantics across network boundaries, it's a style for extending a limited set of standardized semantics across network boundaries. REST says refactor into generic types; customizing types to fit the application is some *other* architectural style, and the two approaches couldn't be further apart. Where in REST _doesn't_ it say that types are standardized? -Eric
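Eric's point about `<a href>` achieving Internet scale can be made concrete: any component that understands standard HTML can discover the links in a representation with no out-of-band documentation. A minimal sketch (the example URLs are hypothetical):

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect the href of every standard <a> element. No knowledge of
    the application is needed -- only of HTML itself."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

page = '<p>See the <a href="/orders/42">order</a> or get <a href="/help">help</a>.</p>'
c = LinkCollector()
c.feed(page)
# c.links -> ["/orders/42", "/help"]
```

The same collector works unchanged on any site serving standard markup; a hypothetical `<foo fetch="...">` element would instead require every client to be taught the custom vocabulary first.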
Juergen Brendel wrote: > > Many times on this list it was said that this is all fine, but that > you simply can't call it REST if you don't use standardized types. > Firstly, I'm not convinced of that (but surely, someone can provide > the proper quote from Roy's thesis to support their point). Secondly, > if that's the case then I'm happy to start calling it 'semi-REST', so > that we can move on to talk about the technical benefits of REST > constraints, including this one, without getting side-tracked by this > somewhat unnecessary fight over nomenclature. > I don't understand how striking out in the opposite direction of REST can be dismissed as a semantic debate, or called semi-REST, when it's something altogether different from Roy's style. I won't cherry-pick any of the dozen places where standardization is mentioned -- reading the thesis should make it obvious that holistically, this style is document-oriented, and the format of those documents is generalized, not customized per application or even per problem domain. A REST API is one where the data being transferred has been refactored into common types. Problems are solved using hypertext to customize application flow. Media and data type are bike-shed colors the system is not dependent upon (what matters is how the documents are linked together, i.e. how to transition from one state to another, so the decision to use XForms vs. HTML forms is based on any criteria *but* the nature of the data). It is NOT REST to refactor the data type into one that's optimized for the nature of the data being transferred, where problems are solved by customizing data types to guide application flow instead of using hypertext. Media and data type are the foundation and framing of the resulting bike shed, which the system is absolutely dependent upon. Customizing data and media types per resource type isn't REST even if the resulting types are standardized. 
It's anti-REST -- there's nothing in the thesis to support this approach, while everything the thesis does say, supports the approach where data and media types have no relationship to resource type (hypertext is the engine of application state, not media type). So we can't simply move on and start talking about custom media type solutions in terms of REST, because the two approaches are so dissimilar as to not even be related. REST is the opposite approach from optimizing media and data types around resource type, or the concerns of application flow, because in REST media and data type have no relation to what the application states are or how to transition between them. -Eric
Jørn Wildt wrote: >>> Now I am only dreaming up some ideas: what if there were a standard >>> XML dialect which included links and schema references? >>> >> Like HTML? If the system is document-oriented, then this generic >> dialect will need to express common semantics like lists, headings, >> title, tabular data, paragraphs and the like. Which is why HTML >> extensibility is such a big issue -- it seems like less work to me to >> extend HTML, than it is to start over from scratch with a custom media >> type. > > As you may have seen from my other answers, I am getting more and more > convinced about the viability of a pure HTML approach. One of my issues with > HTML extensions has been that different vendors would use different > extension schemes and that would sort of defeat the idea of having one > unified way to serialize data back and forth. Great :) > But now that I start to understand more of RDFa it looks more and more > viable to use HTML + RDFa. That would allow the client code to work on > generic RDF triples instead of figuring out all the strange ways to encode > a case file, or a purchase order, in HTML. All it requires is that the > client implements (downloads) an RDFa preprocessor. Voilà - all the > difficulties of working with various HTML encodings disappear with one swing > of the magic wand :-) RDFa API, RDF API and JS implementations thereof can help you there ;) > All I need now is a way to mark up forms such that they decouple semantics > like "this is the case title" from the query variable names. Perhaps, perhaps not; why not simply use contenteditable to edit the title in place, then PUT the entire HTML document back to the server.. 
all these things can be so simple. If you think about it, what's the point in giving one admin view (an edit-article type page) to a client, then taking things in form input, saving them in a database, pulling them back out, and putting them in a template, then trying to sort out HTTP caching, etc., when you can simply GET an empty page, type right into it, and PUT it back as a static document? (with optional processing, augmentation/annotation on the server side) Best, Nathan
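In the browser, Nathan's GET/edit/PUT round trip would use `contenteditable` and script; the PUT half can be sketched in Python. The URL and document below are hypothetical, and the request is only constructed here, not sent:

```python
import urllib.request

# Hypothetical resource URI and edited document; a real client would
# first GET the page, let the user edit it, then PUT it back.
edited = "<html><head><title>My case</title></head><body>Edited text.</body></html>"
req = urllib.request.Request(
    "http://example.org/cases/42",
    data=edited.encode("utf-8"),
    headers={"Content-Type": "text/html; charset=utf-8"},
    method="PUT",
)
# urllib.request.urlopen(req) would perform the actual PUT.
```

The appeal is that the representation the client edits *is* the resource's state, so there is no template/database round trip to keep consistent.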
Hello! On Wed, 2010-12-29 at 00:40 -0700, Eric J. Bowman wrote: > It is NOT REST to refactor the data type into one that's optimized for > the nature of the data being transferred, where problems are solved by > customizing data types to guide application flow instead of using > hypertext. For some reason you seem to think that 'custom media type' means 'not using hypertext for application flow'. However, nobody is advocating this and nobody has suggested that in this context either. I think more often - when people are talking about a custom media type - they mean something like XML-plus-links or JSON-plus-links. You might say "just use HTML then", but that's exactly where we should be discussing pros and cons on a technical level, hopefully without dismissing the non-HTML approach as non-RESTful, now that we all know that it still keeps application flow in hypertext. Granted, the big issue is that the 'JSON-plus-links' or 'XML-plus-links' type is not standardized. But as far as I can tell, an application using those can be fully RESTful in all other aspects. Here's another thought: When we are talking about 'scalability', I always envision that there are two types of scalability: The technical one (having to do with caches and performance and such) and the scalability that concerns itself with acceptance through clients and users (uptake, adoption, popularity, etc.). Now, I go out on a limb here and claim that if you offer an API to an application, where (depending on the 'Accept' header) stuff is either returned in HTML+RDF (fully standardized) or in JSON+links (not standardized), the majority of users will choose the latter. Why? Because it's just so very easy to deal with this style and many languages require zero additional libraries to do so. The JSON+links style of the API will 'scale' better, since it's actually going to be used. Therefore, I don't believe you can dismiss ease of use. In fact, I think it is a big part of the scalability story of REST. 
Juergen -- Juergen Brendel MuleSoft
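Juergen's scenario — serving HTML+RDF or JSON+links depending on the 'Accept' header — amounts to server-driven content negotiation. A rough sketch, deliberately simplified (q-values and wildcard subtypes like text/* are ignored, and JSON wins whenever both are acceptable; a real implementation should honour the full Accept syntax from RFC 2616):

```python
def choose_representation(accept_header):
    """Pick a representation name from a (simplified) reading of the
    Accept header. The returned labels are illustrative only."""
    accepted = [part.split(";")[0].strip() for part in accept_header.split(",")]
    if "application/json" in accepted:
        return "json+links"
    if "text/html" in accepted or "*/*" in accepted:
        return "html+rdfa"
    return None
```

This is how one service can offer the standardized representation to generic clients and the programmer-friendly one to API consumers, which is exactly the trade-off Juergen describes.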
Juergen Brendel wrote: > Now, I go out on a limb here and claim that if you offer an API to an > application, where (depending on the 'Accept' header) stuff is either > returned in HTML+RDF (fully standardized) or in JSON+links (not > standardized), the majority of users will choose the latter. Why? > Because it's just so very easy to deal with this style and many > languages require zero additional libraries to do so. The JSON+links > style of the API will 'scale' better, since it's actually going to be > used. Therefore, I don't believe you can dismiss ease of use. In fact, I > think it is a big part of the scalability story of REST. Yup, I fully agree, which is why standardization efforts are about to start on JSON+RDF (which is JSON w/ Links) and many different approaches have already been taken in linked-data circles, because JSON is so damn easy to use for programmers. However, it's only useful if the serialization is designed for generality with standardization envisioned; this is the big difference that makes this approach RESTful, versus 10,000 different variants of XML+links or JSON+links for each different application use case. Best, Nathan
Juergen Brendel wrote: > > > It is NOT REST to refactor the data type into one that's optimized > > for the nature of the data being transferred, where problems are > > solved by customizing data types to guide application flow instead > > of using hypertext. > > For some reason you seem to think that 'custom media type' means 'not > using hypertext for application flow'. > Exactly. > > However, nobody is advocating this and nobody has suggested that in > this context either. > Aren't they? -Eric
Hello! On Wed, 2010-12-29 at 13:51 -0700, Eric J. Bowman wrote: > > For some reason you seem to think that 'custom media type' means 'not > > using hypertext for application flow'. > > > > Exactly. > > > > > However, nobody is advocating this and nobody has suggested that in > > this context either. > > > > Aren't they? No, don't think so. Maybe some people do, but I think most here on this list understand the importance of hypertext driven application flow. At least I would think so. :-) At any rate, when I (and probably many others as well) talk about a 'custom media type', we are talking about something based on JSON (nice and easy to process) or maybe XML, but with some link semantic added to it in order to accomplish HATEOAS. Juergen -- Juergen Brendel MuleSoft
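The 'JSON with link semantics added' idea Juergen describes might look like the following. The representation format and the `rel` names are invented for illustration; the point is that the client selects state transitions from the links in the document rather than from hard-coded URIs:

```python
import json

# A hypothetical JSON-plus-links representation: application data is
# accompanied by a "links" array so that transitions still come from
# hypertext in the document.
order = json.loads("""
{
  "order_id": 42,
  "status": "open",
  "links": [
    {"rel": "self",    "href": "/orders/42"},
    {"rel": "payment", "href": "/orders/42/payment"},
    {"rel": "cancel",  "href": "/orders/42/cancel"}
  ]
}
""")

def link_for(representation, rel):
    """Return the href of the first link with the given rel, or None."""
    for link in representation.get("links", []):
        if link.get("rel") == rel:
            return link.get("href")
    return None
```

As Nathan notes elsewhere in the thread, this only pays off if the link vocabulary is designed for generality with standardization in view, rather than reinvented per application.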
On Wed, Dec 29, 2010 at 9:22 PM, Juergen Brendel <juergen.brendel@mulesoft.com> wrote: > > At any rate, when I (and probably many others as well) talk about a > 'custom media type', we are talking about something based on JSON (nice > and easy to process) or maybe XML, but with some link semantic added to > it in order to accomplish HATEOAS. Especially with respect to JSON, this sounds like hyper- without the human-friendly -media. "When I say hypertext, I mean the simultaneous presentation of information and controls such that the information becomes the affordance through which the user (or automaton) obtains choices and selects actions. Hypermedia is just an expansion on what text means to include temporal anchors within a media stream; most researchers have dropped the distinction. "Hypertext does not need to be HTML on a browser. Machines can follow links when they understand the data format and relationship types. … "When representations are provided in hypertext form with typed relations (using microformats of HTML, RDF in N3 or XML, or even SVG), then automated agents can traverse these applications almost as well as any human. There are plenty of examples in the linked data communities. More important to me is that the same design reflects good human-Web design, and thus we can design the protocols to support both machine and human-driven applications by following the same architectural style." - quoth Fielding http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven -- Benjamin Hawkes-Lewis
Juergen Brendel wrote: > > On Wed, 2010-12-29 at 13:51 -0700, Eric J. Bowman wrote: > > > For some reason you seem to think that 'custom media type' means > > > 'not using hypertext for application flow'. > > > > > > > Exactly. > > > > > > > > However, nobody is advocating this and nobody has suggested that > > > in this context either. > > > > > > > Aren't they? > > No, don't think so. Maybe some people do, but I think most here on > this list understand the importance of hypertext driven application > flow. At least I would think so. :-) > I think most folks are still making this too hard on themselves, and others. I'm disappointed by how often what appears to be hypertext-driven really depends on magical out-of-band processing rules being switched on when encountering nonstandardized strings in Content-Type or @rel, which isn't actually what hypertext as the engine of state means in the context of REST's uniform (standardized) interface. > > At any rate, when I (and probably many others as well) talk about a > 'custom media type', we are talking about something based on JSON > (nice and easy to process) or maybe XML, but with some link semantic > added to it in order to accomplish HATEOAS. > Yes, I know exactly what's being referred to, and I'm aware of why it's done. This is the opposite approach from using standardized hypertext semantics. You can document what's a link and what else means what, but that doesn't make it part of the uniform interface unless and until your custom markup semantics and/or processing model are standardized. My various examples illustrate systems where the data and media type selections are bike-shed colors, because the state transitions are all designed against generic processing models. 
When systems are designed against custom processing models they become bound to them, impacting portability: http://www.ics.uci.edu/~fielding/pubs/dissertation/net_app_arch.htm#sec_2_3_6 (one of the dozen or so places standardized types are emphasised, in a thesis which otherwise avoids tautology) Any of my examples may be extended to allow a smartphone to negotiate for a standardized telephony interface. No custom media type or phone necessary, since the existing interface has no customized assumptions driving application state -- if this is possible for systems designed against custom processing models, it's a fluke instead of a given. -Eric
Benjamin Hawkes-Lewis wrote: > "When I say hypertext, I mean the simultaneous presentation of > information and controls such that the information becomes the > affordance through which the user (or automaton) obtains choices and > selects actions. Hypermedia is just an expansion on what text means to > include temporal anchors within a media stream; most researchers have > dropped the distinction. > > "Hypertext does not need to be HTML on a browser. Machines can follow > links when they understand the data format and relationship types. … > > "When representations are provided in hypertext form with typed > relations (using microformats of HTML, RDF in N3 or XML, or even SVG), > then automated agents can traverse these applications almost as well > as any human. There are plenty of examples in the linked data > communities. More important to me is that the same design reflects > good human-Web design, and thus we can design the protocols to support > both machine and human-driven applications by following the same > architectural style." > > - quoth Fielding > > http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven :) +1 (000000)
Welcome to the list, Benjamin. > > "When representations are provided in hypertext form with typed > relations (using microformats of HTML, RDF in N3 or XML, or even SVG), > then automated agents can traverse these applications almost as well > as any human. There are plenty of examples in the linked data > communities. More important to me is that the same design reflects > good human-Web design, and thus we can design the protocols to support > both machine and human-driven applications by following the same > architectural style." > Roy's taking a shot at the RDF world, there, but it applies equally to the matter at hand. Search for 'SPARQL +REST' and you'll come across a couple of good papers explaining why SPARQL Protocol isn't REST (which should be obvious to anyone who's read the thesis) and what can be done about it. My interest is the same as Roy's, which is that REST makes a fine architectural style for supporting both human and machine users, and that RDF is much more useful when used within Web architecture instead of at odds with it (which is my position on all Web technology, like AJAX or XSLT or service orientation). -Eric
Eric J. Bowman wrote: > Welcome to the list, Benjamin. > >> "When representations are provided in hypertext form with typed >> relations (using microformats of HTML, RDF in N3 or XML, or even SVG), >> then automated agents can traverse these applications almost as well >> as any human. There are plenty of examples in the linked data >> communities. More important to me is that the same design reflects >> good human-Web design, and thus we can design the protocols to support >> both machine and human-driven applications by following the same >> architectural style." >> > > Roy's taking a shot at the RDF world, there, but it applies equally to > the matter at hand. Search for 'SPARQL +REST' and you'll come across a > couple of good papers explaining why SPARQL Protocol isn't REST (which > should be obvious to anyone who's read the thesis) and what can be done > about it. Indeed, that's mainly because many have mentally positioned SPARQL on the server side, making it un-RESTful; when you position it on the client side and GET what you need, leveraging HTTP caching, it's a whole different ball game.